00:00:00.000 Started by upstream project "autotest-spdk-v24.09-vs-dpdk-v22.11" build number 199 00:00:00.000 originally caused by: 00:00:00.000 Started by upstream project "nightly-trigger" build number 3701 00:00:00.000 originally caused by: 00:00:00.000 Started by timer 00:00:00.092 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.093 The recommended git tool is: git 00:00:00.094 using credential 00000000-0000-0000-0000-000000000002 00:00:00.096 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.119 Fetching changes from the remote Git repository 00:00:00.122 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.144 Using shallow fetch with depth 1 00:00:00.144 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.144 > git --version # timeout=10 00:00:00.165 > git --version # 'git version 2.39.2' 00:00:00.165 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.190 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.190 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:05.024 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:05.038 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:05.051 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD) 00:00:05.051 > git config core.sparsecheckout # timeout=10 00:00:05.062 > git read-tree -mu HEAD # timeout=10 00:00:05.078 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5 00:00:05.106 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag" 00:00:05.107 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10 00:00:05.196 [Pipeline] Start of Pipeline 00:00:05.210 [Pipeline] library 00:00:05.212 Loading library shm_lib@master 00:00:05.212 Library shm_lib@master is cached. Copying from home. 00:00:05.226 [Pipeline] node 00:00:05.238 Running on VM-host-SM38 in /var/jenkins/workspace/nvme-vg-autotest 00:00:05.240 [Pipeline] { 00:00:05.249 [Pipeline] catchError 00:00:05.251 [Pipeline] { 00:00:05.264 [Pipeline] wrap 00:00:05.273 [Pipeline] { 00:00:05.279 [Pipeline] stage 00:00:05.281 [Pipeline] { (Prologue) 00:00:05.295 [Pipeline] echo 00:00:05.297 Node: VM-host-SM38 00:00:05.301 [Pipeline] cleanWs 00:00:05.312 [WS-CLEANUP] Deleting project workspace... 00:00:05.312 [WS-CLEANUP] Deferred wipeout is used... 
00:00:05.320 [WS-CLEANUP] done 00:00:05.501 [Pipeline] setCustomBuildProperty 00:00:05.602 [Pipeline] httpRequest 00:00:05.943 [Pipeline] echo 00:00:05.945 Sorcerer 10.211.164.20 is alive 00:00:05.956 [Pipeline] retry 00:00:05.958 [Pipeline] { 00:00:05.972 [Pipeline] httpRequest 00:00:05.978 HttpMethod: GET 00:00:05.979 URL: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:05.979 Sending request to url: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:05.990 Response Code: HTTP/1.1 200 OK 00:00:05.990 Success: Status code 200 is in the accepted range: 200,404 00:00:05.991 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:07.250 [Pipeline] } 00:00:07.265 [Pipeline] // retry 00:00:07.271 [Pipeline] sh 00:00:07.553 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:07.569 [Pipeline] httpRequest 00:00:08.152 [Pipeline] echo 00:00:08.154 Sorcerer 10.211.164.20 is alive 00:00:08.161 [Pipeline] retry 00:00:08.163 [Pipeline] { 00:00:08.174 [Pipeline] httpRequest 00:00:08.178 HttpMethod: GET 00:00:08.179 URL: http://10.211.164.20/packages/spdk_b18e1bd6297ec2f89ab275de3193457af1c946df.tar.gz 00:00:08.179 Sending request to url: http://10.211.164.20/packages/spdk_b18e1bd6297ec2f89ab275de3193457af1c946df.tar.gz 00:00:08.195 Response Code: HTTP/1.1 200 OK 00:00:08.196 Success: Status code 200 is in the accepted range: 200,404 00:00:08.197 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/spdk_b18e1bd6297ec2f89ab275de3193457af1c946df.tar.gz 00:01:16.471 [Pipeline] } 00:01:16.490 [Pipeline] // retry 00:01:16.498 [Pipeline] sh 00:01:16.776 + tar --no-same-owner -xf spdk_b18e1bd6297ec2f89ab275de3193457af1c946df.tar.gz 00:01:19.358 [Pipeline] sh 00:01:19.644 + git -C spdk log --oneline -n5 00:01:19.644 b18e1bd62 version: v24.09.1-pre 00:01:19.644 19524ad45 version: v24.09 00:01:19.644 9756b40a3 dpdk: update submodule to include alarm_cancel fix 00:01:19.644 a808500d2 test/nvmf: disable nvmf_shutdown_tc4 on e810 00:01:19.644 3024272c6 bdev/nvme: take nvme_ctrlr.mutex when setting keys 00:01:19.667 [Pipeline] withCredentials 00:01:19.681 > git --version # timeout=10 00:01:19.695 > git --version # 'git version 2.39.2' 00:01:19.716 Masking supported pattern matches of $GIT_PASSWORD or $GIT_ASKPASS 00:01:19.718 [Pipeline] { 00:01:19.728 [Pipeline] retry 00:01:19.730 [Pipeline] { 00:01:19.746 [Pipeline] sh 00:01:20.030 + git ls-remote http://dpdk.org/git/dpdk-stable v22.11.4 00:01:20.307 [Pipeline] } 00:01:20.326 [Pipeline] // retry 00:01:20.331 [Pipeline] } 00:01:20.348 [Pipeline] // withCredentials 00:01:20.359 [Pipeline] httpRequest 00:01:20.819 [Pipeline] echo 00:01:20.821 Sorcerer 10.211.164.20 is alive 00:01:20.831 [Pipeline] retry 00:01:20.833 [Pipeline] { 00:01:20.847 [Pipeline] httpRequest 00:01:20.853 HttpMethod: GET 00:01:20.854 URL: http://10.211.164.20/packages/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz 00:01:20.855 Sending request to url: http://10.211.164.20/packages/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz 00:01:20.861 Response Code: HTTP/1.1 200 OK 00:01:20.862 Success: Status code 200 is in the accepted range: 200,404 00:01:20.862 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz 00:01:43.839 [Pipeline] } 00:01:43.861 [Pipeline] // retry 00:01:43.871 [Pipeline] sh 00:01:44.233 + tar --no-same-owner -xf 
dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz 00:01:45.630 [Pipeline] sh 00:01:45.909 + git -C dpdk log --oneline -n5 00:01:45.909 caf0f5d395 version: 22.11.4 00:01:45.909 7d6f1cc05f Revert "net/iavf: fix abnormal disable HW interrupt" 00:01:45.909 dc9c799c7d vhost: fix missing spinlock unlock 00:01:45.909 4307659a90 net/mlx5: fix LACP redirection in Rx domain 00:01:45.909 6ef77f2a5e net/gve: fix RX buffer size alignment 00:01:45.927 [Pipeline] writeFile 00:01:45.943 [Pipeline] sh 00:01:46.222 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:01:46.233 [Pipeline] sh 00:01:46.510 + cat autorun-spdk.conf 00:01:46.510 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:46.510 SPDK_TEST_NVME=1 00:01:46.510 SPDK_TEST_FTL=1 00:01:46.510 SPDK_TEST_ISAL=1 00:01:46.510 SPDK_RUN_ASAN=1 00:01:46.511 SPDK_RUN_UBSAN=1 00:01:46.511 SPDK_TEST_XNVME=1 00:01:46.511 SPDK_TEST_NVME_FDP=1 00:01:46.511 SPDK_TEST_NATIVE_DPDK=v22.11.4 00:01:46.511 SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:01:46.511 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:46.516 RUN_NIGHTLY=1 00:01:46.518 [Pipeline] } 00:01:46.529 [Pipeline] // stage 00:01:46.544 [Pipeline] stage 00:01:46.546 [Pipeline] { (Run VM) 00:01:46.558 [Pipeline] sh 00:01:46.838 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:01:46.838 + echo 'Start stage prepare_nvme.sh' 00:01:46.838 Start stage prepare_nvme.sh 00:01:46.838 + [[ -n 6 ]] 00:01:46.838 + disk_prefix=ex6 00:01:46.838 + [[ -n /var/jenkins/workspace/nvme-vg-autotest ]] 00:01:46.838 + [[ -e /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf ]] 00:01:46.838 + source /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf 00:01:46.838 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:46.838 ++ SPDK_TEST_NVME=1 00:01:46.838 ++ SPDK_TEST_FTL=1 00:01:46.838 ++ SPDK_TEST_ISAL=1 00:01:46.838 ++ SPDK_RUN_ASAN=1 00:01:46.838 ++ SPDK_RUN_UBSAN=1 00:01:46.838 ++ SPDK_TEST_XNVME=1 00:01:46.838 ++ SPDK_TEST_NVME_FDP=1 00:01:46.838 ++ SPDK_TEST_NATIVE_DPDK=v22.11.4 00:01:46.838 ++ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:01:46.838 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:46.838 ++ RUN_NIGHTLY=1 00:01:46.838 + cd /var/jenkins/workspace/nvme-vg-autotest 00:01:46.838 + nvme_files=() 00:01:46.838 + declare -A nvme_files 00:01:46.838 + backend_dir=/var/lib/libvirt/images/backends 00:01:46.838 + nvme_files['nvme.img']=5G 00:01:46.838 + nvme_files['nvme-cmb.img']=5G 00:01:46.838 + nvme_files['nvme-multi0.img']=4G 00:01:46.838 + nvme_files['nvme-multi1.img']=4G 00:01:46.838 + nvme_files['nvme-multi2.img']=4G 00:01:46.838 + nvme_files['nvme-openstack.img']=8G 00:01:46.838 + nvme_files['nvme-zns.img']=5G 00:01:46.838 + (( SPDK_TEST_NVME_PMR == 1 )) 00:01:46.838 + (( SPDK_TEST_FTL == 1 )) 00:01:46.838 + nvme_files["nvme-ftl.img"]=6G 00:01:46.838 + (( SPDK_TEST_NVME_FDP == 1 )) 00:01:46.838 + nvme_files["nvme-fdp.img"]=1G 00:01:46.838 + [[ ! 
-d /var/lib/libvirt/images/backends ]] 00:01:46.838 + for nvme in "${!nvme_files[@]}" 00:01:46.838 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-multi2.img -s 4G 00:01:46.838 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:01:46.838 + for nvme in "${!nvme_files[@]}" 00:01:46.838 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-ftl.img -s 6G 00:01:46.838 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-ftl.img', fmt=raw size=6442450944 preallocation=falloc 00:01:46.838 + for nvme in "${!nvme_files[@]}" 00:01:46.838 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-cmb.img -s 5G 00:01:46.838 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:01:46.838 + for nvme in "${!nvme_files[@]}" 00:01:46.838 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-openstack.img -s 8G 00:01:46.838 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:01:46.838 + for nvme in "${!nvme_files[@]}" 00:01:46.838 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-zns.img -s 5G 00:01:46.838 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:01:46.838 + for nvme in "${!nvme_files[@]}" 00:01:46.838 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-multi1.img -s 4G 00:01:46.838 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:01:46.838 + for nvme in "${!nvme_files[@]}" 00:01:46.838 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-multi0.img -s 4G 00:01:46.838 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:01:47.099 + for nvme in "${!nvme_files[@]}" 00:01:47.099 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-fdp.img -s 1G 00:01:47.099 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-fdp.img', fmt=raw size=1073741824 preallocation=falloc 00:01:47.099 + for nvme in "${!nvme_files[@]}" 00:01:47.099 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme.img -s 5G 00:01:47.099 Formatting '/var/lib/libvirt/images/backends/ex6-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:01:47.099 ++ sudo grep -rl ex6-nvme.img /etc/libvirt/qemu 00:01:47.099 + echo 'End stage prepare_nvme.sh' 00:01:47.099 End stage prepare_nvme.sh 00:01:47.111 [Pipeline] sh 00:01:47.392 + DISTRO=fedora39 00:01:47.392 + CPUS=10 00:01:47.392 + RAM=12288 00:01:47.392 + jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:01:47.392 Setup: -n 10 -s 12288 -x -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex6-nvme-ftl.img,nvme,,,,,true -b /var/lib/libvirt/images/backends/ex6-nvme.img -b /var/lib/libvirt/images/backends/ex6-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex6-nvme-multi1.img:/var/lib/libvirt/images/backends/ex6-nvme-multi2.img -b /var/lib/libvirt/images/backends/ex6-nvme-fdp.img,nvme,,,,,,on -H -a -v -f fedora39 00:01:47.392 00:01:47.392 
DIR=/var/jenkins/workspace/nvme-vg-autotest/spdk/scripts/vagrant 00:01:47.392 SPDK_DIR=/var/jenkins/workspace/nvme-vg-autotest/spdk 00:01:47.392 VAGRANT_TARGET=/var/jenkins/workspace/nvme-vg-autotest 00:01:47.392 HELP=0 00:01:47.392 DRY_RUN=0 00:01:47.392 NVME_FILE=/var/lib/libvirt/images/backends/ex6-nvme-ftl.img,/var/lib/libvirt/images/backends/ex6-nvme.img,/var/lib/libvirt/images/backends/ex6-nvme-multi0.img,/var/lib/libvirt/images/backends/ex6-nvme-fdp.img, 00:01:47.392 NVME_DISKS_TYPE=nvme,nvme,nvme,nvme, 00:01:47.392 NVME_AUTO_CREATE=0 00:01:47.392 NVME_DISKS_NAMESPACES=,,/var/lib/libvirt/images/backends/ex6-nvme-multi1.img:/var/lib/libvirt/images/backends/ex6-nvme-multi2.img,, 00:01:47.392 NVME_CMB=,,,, 00:01:47.392 NVME_PMR=,,,, 00:01:47.392 NVME_ZNS=,,,, 00:01:47.392 NVME_MS=true,,,, 00:01:47.392 NVME_FDP=,,,on, 00:01:47.392 SPDK_VAGRANT_DISTRO=fedora39 00:01:47.392 SPDK_VAGRANT_VMCPU=10 00:01:47.392 SPDK_VAGRANT_VMRAM=12288 00:01:47.392 SPDK_VAGRANT_PROVIDER=libvirt 00:01:47.392 SPDK_VAGRANT_HTTP_PROXY= 00:01:47.392 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:01:47.392 SPDK_OPENSTACK_NETWORK=0 00:01:47.392 VAGRANT_PACKAGE_BOX=0 00:01:47.392 VAGRANTFILE=/var/jenkins/workspace/nvme-vg-autotest/spdk/scripts/vagrant/Vagrantfile 00:01:47.392 FORCE_DISTRO=true 00:01:47.392 VAGRANT_BOX_VERSION= 00:01:47.392 EXTRA_VAGRANTFILES= 00:01:47.392 NIC_MODEL=e1000 00:01:47.392 00:01:47.392 mkdir: created directory '/var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt' 00:01:47.392 /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt /var/jenkins/workspace/nvme-vg-autotest 00:01:49.937 Bringing machine 'default' up with 'libvirt' provider... 00:01:50.509 ==> default: Creating image (snapshot of base box volume). 00:01:50.509 ==> default: Creating domain with the following settings... 
00:01:50.509 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1733460568_20fabf42fd47d5b0e7be 00:01:50.509 ==> default: -- Domain type: kvm 00:01:50.509 ==> default: -- Cpus: 10 00:01:50.509 ==> default: -- Feature: acpi 00:01:50.509 ==> default: -- Feature: apic 00:01:50.509 ==> default: -- Feature: pae 00:01:50.509 ==> default: -- Memory: 12288M 00:01:50.509 ==> default: -- Memory Backing: hugepages: 00:01:50.509 ==> default: -- Management MAC: 00:01:50.509 ==> default: -- Loader: 00:01:50.509 ==> default: -- Nvram: 00:01:50.509 ==> default: -- Base box: spdk/fedora39 00:01:50.509 ==> default: -- Storage pool: default 00:01:50.509 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1733460568_20fabf42fd47d5b0e7be.img (20G) 00:01:50.509 ==> default: -- Volume Cache: default 00:01:50.509 ==> default: -- Kernel: 00:01:50.509 ==> default: -- Initrd: 00:01:50.509 ==> default: -- Graphics Type: vnc 00:01:50.509 ==> default: -- Graphics Port: -1 00:01:50.509 ==> default: -- Graphics IP: 127.0.0.1 00:01:50.509 ==> default: -- Graphics Password: Not defined 00:01:50.509 ==> default: -- Video Type: cirrus 00:01:50.509 ==> default: -- Video VRAM: 9216 00:01:50.509 ==> default: -- Sound Type: 00:01:50.509 ==> default: -- Keymap: en-us 00:01:50.509 ==> default: -- TPM Path: 00:01:50.509 ==> default: -- INPUT: type=mouse, bus=ps2 00:01:50.509 ==> default: -- Command line args: 00:01:50.509 ==> default: -> value=-device, 00:01:50.509 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10, 00:01:50.509 ==> default: -> value=-drive, 00:01:50.509 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex6-nvme-ftl.img,if=none,id=nvme-0-drive0, 00:01:50.509 ==> default: -> value=-device, 00:01:50.509 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,ms=64, 00:01:50.509 ==> default: -> value=-device, 00:01:50.509 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11, 00:01:50.509 ==> default: -> value=-drive, 00:01:50.509 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex6-nvme.img,if=none,id=nvme-1-drive0, 00:01:50.509 ==> default: -> value=-device, 00:01:50.509 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:50.509 ==> default: -> value=-device, 00:01:50.509 ==> default: -> value=nvme,id=nvme-2,serial=12342,addr=0x12, 00:01:50.509 ==> default: -> value=-drive, 00:01:50.509 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex6-nvme-multi0.img,if=none,id=nvme-2-drive0, 00:01:50.509 ==> default: -> value=-device, 00:01:50.509 ==> default: -> value=nvme-ns,drive=nvme-2-drive0,bus=nvme-2,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:50.509 ==> default: -> value=-drive, 00:01:50.509 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex6-nvme-multi1.img,if=none,id=nvme-2-drive1, 00:01:50.509 ==> default: -> value=-device, 00:01:50.509 ==> default: -> value=nvme-ns,drive=nvme-2-drive1,bus=nvme-2,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:50.509 ==> default: -> value=-drive, 00:01:50.509 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex6-nvme-multi2.img,if=none,id=nvme-2-drive2, 00:01:50.509 ==> default: -> value=-device, 00:01:50.509 ==> default: -> 
value=nvme-ns,drive=nvme-2-drive2,bus=nvme-2,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:50.509 ==> default: -> value=-device, 00:01:50.509 ==> default: -> value=nvme-subsys,id=fdp-subsys3,fdp=on,fdp.runs=96M,fdp.nrg=2,fdp.nruh=8, 00:01:50.509 ==> default: -> value=-device, 00:01:50.509 ==> default: -> value=nvme,id=nvme-3,serial=12343,addr=0x13,subsys=fdp-subsys3, 00:01:50.509 ==> default: -> value=-drive, 00:01:50.509 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex6-nvme-fdp.img,if=none,id=nvme-3-drive0, 00:01:50.509 ==> default: -> value=-device, 00:01:50.509 ==> default: -> value=nvme-ns,drive=nvme-3-drive0,bus=nvme-3,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:50.771 ==> default: Creating shared folders metadata... 00:01:50.771 ==> default: Starting domain. 00:01:52.688 ==> default: Waiting for domain to get an IP address... 00:02:10.807 ==> default: Waiting for SSH to become available... 00:02:10.807 ==> default: Configuring and enabling network interfaces... 00:02:15.043 default: SSH address: 192.168.121.61:22 00:02:15.043 default: SSH username: vagrant 00:02:15.043 default: SSH auth method: private key 00:02:16.958 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk 00:02:25.098 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/dpdk/ => /home/vagrant/spdk_repo/dpdk 00:02:31.686 ==> default: Mounting SSHFS shared folder... 00:02:32.630 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt/output => /home/vagrant/spdk_repo/output 00:02:32.630 ==> default: Checking Mount.. 00:02:34.011 ==> default: Folder Successfully Mounted! 00:02:34.011 00:02:34.011 SUCCESS! 00:02:34.011 00:02:34.011 cd to /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt and type "vagrant ssh" to use. 00:02:34.011 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:02:34.011 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt" to destroy all trace of vm. 00:02:34.011 00:02:34.022 [Pipeline] } 00:02:34.038 [Pipeline] // stage 00:02:34.049 [Pipeline] dir 00:02:34.049 Running in /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt 00:02:34.051 [Pipeline] { 00:02:34.065 [Pipeline] catchError 00:02:34.067 [Pipeline] { 00:02:34.080 [Pipeline] sh 00:02:34.368 + vagrant ssh-config --host vagrant 00:02:34.368 + sed -ne '/^Host/,$p' 00:02:34.368 + tee ssh_conf 00:02:37.671 Host vagrant 00:02:37.671 HostName 192.168.121.61 00:02:37.671 User vagrant 00:02:37.671 Port 22 00:02:37.671 UserKnownHostsFile /dev/null 00:02:37.671 StrictHostKeyChecking no 00:02:37.671 PasswordAuthentication no 00:02:37.671 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39 00:02:37.671 IdentitiesOnly yes 00:02:37.671 LogLevel FATAL 00:02:37.671 ForwardAgent yes 00:02:37.671 ForwardX11 yes 00:02:37.671 00:02:37.686 [Pipeline] withEnv 00:02:37.689 [Pipeline] { 00:02:37.702 [Pipeline] sh 00:02:38.046 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant '#!/bin/bash 00:02:38.046 source /etc/os-release 00:02:38.046 [[ -e /image.version ]] && img=$(< /image.version) 00:02:38.046 # Minimal, systemd-like check. 
00:02:38.046 if [[ -e /.dockerenv ]]; then 00:02:38.046 # Clear garbage from the node'\''s name: 00:02:38.046 # agt-er_autotest_547-896 -> autotest_547-896 00:02:38.046 # $HOSTNAME is the actual container id 00:02:38.046 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:02:38.046 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:02:38.046 # We can assume this is a mount from a host where container is running, 00:02:38.046 # so fetch its hostname to easily identify the target swarm worker. 00:02:38.046 container="$(< /etc/hostname) ($agent)" 00:02:38.046 else 00:02:38.046 # Fallback 00:02:38.046 container=$agent 00:02:38.046 fi 00:02:38.046 fi 00:02:38.046 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:02:38.046 ' 00:02:38.060 [Pipeline] } 00:02:38.077 [Pipeline] // withEnv 00:02:38.085 [Pipeline] setCustomBuildProperty 00:02:38.100 [Pipeline] stage 00:02:38.102 [Pipeline] { (Tests) 00:02:38.119 [Pipeline] sh 00:02:38.407 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:02:38.683 [Pipeline] sh 00:02:38.969 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:02:39.248 [Pipeline] timeout 00:02:39.248 Timeout set to expire in 50 min 00:02:39.250 [Pipeline] { 00:02:39.265 [Pipeline] sh 00:02:39.551 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'git -C spdk_repo/spdk reset --hard' 00:02:40.124 HEAD is now at b18e1bd62 version: v24.09.1-pre 00:02:40.140 [Pipeline] sh 00:02:40.432 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'sudo chown vagrant:vagrant spdk_repo' 00:02:40.713 [Pipeline] sh 00:02:41.000 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:02:41.279 [Pipeline] sh 00:02:41.566 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'JOB_BASE_NAME=nvme-vg-autotest ./autoruner.sh spdk_repo' 00:02:41.828 ++ readlink -f spdk_repo 00:02:41.828 + DIR_ROOT=/home/vagrant/spdk_repo 00:02:41.828 + [[ -n /home/vagrant/spdk_repo ]] 00:02:41.828 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:02:41.828 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:02:41.828 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:02:41.828 + [[ ! 
-d /home/vagrant/spdk_repo/output ]] 00:02:41.828 + [[ -d /home/vagrant/spdk_repo/output ]] 00:02:41.828 + [[ nvme-vg-autotest == pkgdep-* ]] 00:02:41.828 + cd /home/vagrant/spdk_repo 00:02:41.828 + source /etc/os-release 00:02:41.828 ++ NAME='Fedora Linux' 00:02:41.828 ++ VERSION='39 (Cloud Edition)' 00:02:41.829 ++ ID=fedora 00:02:41.829 ++ VERSION_ID=39 00:02:41.829 ++ VERSION_CODENAME= 00:02:41.829 ++ PLATFORM_ID=platform:f39 00:02:41.829 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:02:41.829 ++ ANSI_COLOR='0;38;2;60;110;180' 00:02:41.829 ++ LOGO=fedora-logo-icon 00:02:41.829 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:02:41.829 ++ HOME_URL=https://fedoraproject.org/ 00:02:41.829 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:02:41.829 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:02:41.829 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:02:41.829 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:02:41.829 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:02:41.829 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:02:41.829 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:02:41.829 ++ SUPPORT_END=2024-11-12 00:02:41.829 ++ VARIANT='Cloud Edition' 00:02:41.829 ++ VARIANT_ID=cloud 00:02:41.829 + uname -a 00:02:41.829 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:02:41.829 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:02:42.091 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:02:42.350 Hugepages 00:02:42.350 node hugesize free / total 00:02:42.350 node0 1048576kB 0 / 0 00:02:42.350 node0 2048kB 0 / 0 00:02:42.350 00:02:42.350 Type BDF Vendor Device NUMA Driver Device Block devices 00:02:42.350 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:02:42.608 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:02:42.608 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 00:02:42.608 NVMe 0000:00:12.0 1b36 0010 unknown nvme nvme2 nvme2n1 nvme2n2 nvme2n3 00:02:42.608 NVMe 0000:00:13.0 1b36 0010 unknown nvme nvme3 nvme3n1 00:02:42.608 + rm -f /tmp/spdk-ld-path 00:02:42.608 + source autorun-spdk.conf 00:02:42.608 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:02:42.608 ++ SPDK_TEST_NVME=1 00:02:42.608 ++ SPDK_TEST_FTL=1 00:02:42.608 ++ SPDK_TEST_ISAL=1 00:02:42.608 ++ SPDK_RUN_ASAN=1 00:02:42.608 ++ SPDK_RUN_UBSAN=1 00:02:42.608 ++ SPDK_TEST_XNVME=1 00:02:42.608 ++ SPDK_TEST_NVME_FDP=1 00:02:42.608 ++ SPDK_TEST_NATIVE_DPDK=v22.11.4 00:02:42.608 ++ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:02:42.608 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:42.608 ++ RUN_NIGHTLY=1 00:02:42.608 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:02:42.608 + [[ -n '' ]] 00:02:42.608 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:02:42.608 + for M in /var/spdk/build-*-manifest.txt 00:02:42.608 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:02:42.608 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:42.608 + for M in /var/spdk/build-*-manifest.txt 00:02:42.608 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:02:42.608 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:42.608 + for M in /var/spdk/build-*-manifest.txt 00:02:42.608 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:02:42.608 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:42.608 ++ uname 00:02:42.608 + [[ Linux == 
\L\i\n\u\x ]] 00:02:42.608 + sudo dmesg -T 00:02:42.608 + sudo dmesg --clear 00:02:42.608 + dmesg_pid=5768 00:02:42.608 + [[ Fedora Linux == FreeBSD ]] 00:02:42.608 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:42.608 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:42.608 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:02:42.608 + [[ -x /usr/src/fio-static/fio ]] 00:02:42.608 + sudo dmesg -Tw 00:02:42.608 + export FIO_BIN=/usr/src/fio-static/fio 00:02:42.608 + FIO_BIN=/usr/src/fio-static/fio 00:02:42.608 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:02:42.608 + [[ ! -v VFIO_QEMU_BIN ]] 00:02:42.608 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:02:42.608 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:42.608 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:42.608 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:02:42.608 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:42.608 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:42.608 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:02:42.608 Test configuration: 00:02:42.608 SPDK_RUN_FUNCTIONAL_TEST=1 00:02:42.608 SPDK_TEST_NVME=1 00:02:42.608 SPDK_TEST_FTL=1 00:02:42.608 SPDK_TEST_ISAL=1 00:02:42.608 SPDK_RUN_ASAN=1 00:02:42.608 SPDK_RUN_UBSAN=1 00:02:42.608 SPDK_TEST_XNVME=1 00:02:42.608 SPDK_TEST_NVME_FDP=1 00:02:42.608 SPDK_TEST_NATIVE_DPDK=v22.11.4 00:02:42.608 SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:02:42.608 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:42.608 RUN_NIGHTLY=1 04:50:20 -- common/autotest_common.sh@1680 -- $ [[ n == y ]] 00:02:42.608 04:50:20 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:02:42.608 04:50:20 -- scripts/common.sh@15 -- $ shopt -s extglob 00:02:42.608 04:50:20 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:02:42.608 04:50:20 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:42.608 04:50:20 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:42.608 04:50:20 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:42.608 04:50:20 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:42.608 04:50:20 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:42.608 04:50:20 -- paths/export.sh@5 -- $ export PATH 00:02:42.608 04:50:20 -- paths/export.sh@6 -- $ echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:42.608 04:50:20 -- common/autobuild_common.sh@478 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:02:42.608 04:50:20 -- common/autobuild_common.sh@479 -- $ date +%s 00:02:42.867 04:50:20 -- common/autobuild_common.sh@479 -- $ mktemp -dt spdk_1733460620.XXXXXX 00:02:42.867 04:50:20 -- common/autobuild_common.sh@479 -- $ SPDK_WORKSPACE=/tmp/spdk_1733460620.MSPZvk 00:02:42.867 04:50:20 -- common/autobuild_common.sh@481 -- $ [[ -n '' ]] 00:02:42.867 04:50:20 -- common/autobuild_common.sh@485 -- $ '[' -n v22.11.4 ']' 00:02:42.867 04:50:20 -- common/autobuild_common.sh@486 -- $ dirname /home/vagrant/spdk_repo/dpdk/build 00:02:42.867 04:50:20 -- common/autobuild_common.sh@486 -- $ scanbuild_exclude=' --exclude /home/vagrant/spdk_repo/dpdk' 00:02:42.867 04:50:20 -- common/autobuild_common.sh@492 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:02:42.867 04:50:20 -- common/autobuild_common.sh@494 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/dpdk --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:02:42.867 04:50:20 -- common/autobuild_common.sh@495 -- $ get_config_params 00:02:42.867 04:50:20 -- common/autotest_common.sh@407 -- $ xtrace_disable 00:02:42.867 04:50:20 -- common/autotest_common.sh@10 -- $ set +x 00:02:42.867 04:50:20 -- common/autobuild_common.sh@495 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-dpdk=/home/vagrant/spdk_repo/dpdk/build --with-xnvme' 00:02:42.867 04:50:20 -- common/autobuild_common.sh@497 -- $ start_monitor_resources 00:02:42.867 04:50:20 -- pm/common@17 -- $ local monitor 00:02:42.867 04:50:20 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:42.867 04:50:20 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:42.867 04:50:20 -- pm/common@25 -- $ sleep 1 00:02:42.867 04:50:20 -- pm/common@21 -- $ date +%s 00:02:42.867 04:50:20 -- pm/common@21 -- $ date +%s 00:02:42.867 04:50:20 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1733460620 00:02:42.867 04:50:20 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1733460620 00:02:42.867 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1733460620_collect-vmstat.pm.log 00:02:42.867 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1733460620_collect-cpu-load.pm.log 00:02:43.803 04:50:21 -- common/autobuild_common.sh@498 -- $ trap stop_monitor_resources EXIT 00:02:43.803 04:50:21 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:02:43.803 04:50:21 -- spdk/autobuild.sh@12 -- $ umask 022 00:02:43.803 04:50:21 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:02:43.803 04:50:21 -- spdk/autobuild.sh@16 -- $ date -u 00:02:43.803 
Fri Dec 6 04:50:21 AM UTC 2024 00:02:43.803 04:50:21 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:02:43.803 v24.09-1-gb18e1bd62 00:02:43.803 04:50:21 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']' 00:02:43.803 04:50:21 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan' 00:02:43.803 04:50:21 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']' 00:02:43.803 04:50:21 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:02:43.803 04:50:21 -- common/autotest_common.sh@10 -- $ set +x 00:02:43.803 ************************************ 00:02:43.803 START TEST asan 00:02:43.803 ************************************ 00:02:43.803 using asan 00:02:43.803 04:50:21 asan -- common/autotest_common.sh@1125 -- $ echo 'using asan' 00:02:43.803 00:02:43.803 real 0m0.000s 00:02:43.803 user 0m0.000s 00:02:43.803 sys 0m0.000s 00:02:43.803 04:50:21 asan -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:02:43.803 ************************************ 00:02:43.803 END TEST asan 00:02:43.803 ************************************ 00:02:43.803 04:50:21 asan -- common/autotest_common.sh@10 -- $ set +x 00:02:43.803 04:50:21 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:02:43.803 04:50:21 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:02:43.803 04:50:21 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']' 00:02:43.803 04:50:21 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:02:43.803 04:50:21 -- common/autotest_common.sh@10 -- $ set +x 00:02:43.803 ************************************ 00:02:43.803 START TEST ubsan 00:02:43.803 ************************************ 00:02:43.803 using ubsan 00:02:43.803 04:50:21 ubsan -- common/autotest_common.sh@1125 -- $ echo 'using ubsan' 00:02:43.803 00:02:43.803 real 0m0.000s 00:02:43.803 user 0m0.000s 00:02:43.803 sys 0m0.000s 00:02:43.803 04:50:21 ubsan -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:02:43.803 ************************************ 00:02:43.803 END TEST ubsan 00:02:43.803 04:50:21 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:02:43.803 ************************************ 00:02:43.803 04:50:21 -- spdk/autobuild.sh@27 -- $ '[' -n v22.11.4 ']' 00:02:43.803 04:50:21 -- spdk/autobuild.sh@28 -- $ build_native_dpdk 00:02:43.803 04:50:21 -- common/autobuild_common.sh@442 -- $ run_test build_native_dpdk _build_native_dpdk 00:02:43.803 04:50:21 -- common/autotest_common.sh@1101 -- $ '[' 2 -le 1 ']' 00:02:43.803 04:50:21 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:02:43.803 04:50:21 -- common/autotest_common.sh@10 -- $ set +x 00:02:43.803 ************************************ 00:02:43.803 START TEST build_native_dpdk 00:02:43.803 ************************************ 00:02:43.804 04:50:21 build_native_dpdk -- common/autotest_common.sh@1125 -- $ _build_native_dpdk 00:02:43.804 04:50:21 build_native_dpdk -- common/autobuild_common.sh@48 -- $ local external_dpdk_dir 00:02:43.804 04:50:21 build_native_dpdk -- common/autobuild_common.sh@49 -- $ local external_dpdk_base_dir 00:02:43.804 04:50:21 build_native_dpdk -- common/autobuild_common.sh@50 -- $ local compiler_version 00:02:43.804 04:50:21 build_native_dpdk -- common/autobuild_common.sh@51 -- $ local compiler 00:02:43.804 04:50:21 build_native_dpdk -- common/autobuild_common.sh@52 -- $ local dpdk_kmods 00:02:43.804 04:50:21 build_native_dpdk -- common/autobuild_common.sh@53 -- $ local repo=dpdk 00:02:43.804 04:50:21 build_native_dpdk -- common/autobuild_common.sh@55 -- $ compiler=gcc 00:02:43.804 04:50:21 build_native_dpdk -- 
common/autobuild_common.sh@61 -- $ export CC=gcc 00:02:43.804 04:50:21 build_native_dpdk -- common/autobuild_common.sh@61 -- $ CC=gcc 00:02:43.804 04:50:21 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *clang* ]] 00:02:43.804 04:50:21 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *gcc* ]] 00:02:43.804 04:50:21 build_native_dpdk -- common/autobuild_common.sh@68 -- $ gcc -dumpversion 00:02:43.804 04:50:21 build_native_dpdk -- common/autobuild_common.sh@68 -- $ compiler_version=13 00:02:43.804 04:50:21 build_native_dpdk -- common/autobuild_common.sh@69 -- $ compiler_version=13 00:02:43.804 04:50:22 build_native_dpdk -- common/autobuild_common.sh@70 -- $ external_dpdk_dir=/home/vagrant/spdk_repo/dpdk/build 00:02:43.804 04:50:22 build_native_dpdk -- common/autobuild_common.sh@71 -- $ dirname /home/vagrant/spdk_repo/dpdk/build 00:02:43.804 04:50:22 build_native_dpdk -- common/autobuild_common.sh@71 -- $ external_dpdk_base_dir=/home/vagrant/spdk_repo/dpdk 00:02:43.804 04:50:22 build_native_dpdk -- common/autobuild_common.sh@73 -- $ [[ ! -d /home/vagrant/spdk_repo/dpdk ]] 00:02:43.804 04:50:22 build_native_dpdk -- common/autobuild_common.sh@82 -- $ orgdir=/home/vagrant/spdk_repo/spdk 00:02:43.804 04:50:22 build_native_dpdk -- common/autobuild_common.sh@83 -- $ git -C /home/vagrant/spdk_repo/dpdk log --oneline -n 5 00:02:43.804 caf0f5d395 version: 22.11.4 00:02:43.804 7d6f1cc05f Revert "net/iavf: fix abnormal disable HW interrupt" 00:02:43.804 dc9c799c7d vhost: fix missing spinlock unlock 00:02:43.804 4307659a90 net/mlx5: fix LACP redirection in Rx domain 00:02:43.804 6ef77f2a5e net/gve: fix RX buffer size alignment 00:02:43.804 04:50:22 build_native_dpdk -- common/autobuild_common.sh@85 -- $ dpdk_cflags='-fPIC -g -fcommon' 00:02:43.804 04:50:22 build_native_dpdk -- common/autobuild_common.sh@86 -- $ dpdk_ldflags= 00:02:43.804 04:50:22 build_native_dpdk -- common/autobuild_common.sh@87 -- $ dpdk_ver=22.11.4 00:02:43.804 04:50:22 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ gcc == *gcc* ]] 00:02:43.804 04:50:22 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ 13 -ge 5 ]] 00:02:43.804 04:50:22 build_native_dpdk -- common/autobuild_common.sh@90 -- $ dpdk_cflags+=' -Werror' 00:02:43.804 04:50:22 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ gcc == *gcc* ]] 00:02:43.804 04:50:22 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ 13 -ge 10 ]] 00:02:43.804 04:50:22 build_native_dpdk -- common/autobuild_common.sh@94 -- $ dpdk_cflags+=' -Wno-stringop-overflow' 00:02:43.804 04:50:22 build_native_dpdk -- common/autobuild_common.sh@100 -- $ DPDK_DRIVERS=("bus" "bus/pci" "bus/vdev" "mempool/ring" "net/i40e" "net/i40e/base") 00:02:43.804 04:50:22 build_native_dpdk -- common/autobuild_common.sh@102 -- $ local mlx5_libs_added=n 00:02:43.804 04:50:22 build_native_dpdk -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:02:43.804 04:50:22 build_native_dpdk -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:02:43.804 04:50:22 build_native_dpdk -- common/autobuild_common.sh@139 -- $ [[ 0 -eq 1 ]] 00:02:43.804 04:50:22 build_native_dpdk -- common/autobuild_common.sh@167 -- $ cd /home/vagrant/spdk_repo/dpdk 00:02:43.804 04:50:22 build_native_dpdk -- common/autobuild_common.sh@168 -- $ uname -s 00:02:43.804 04:50:22 build_native_dpdk -- common/autobuild_common.sh@168 -- $ '[' Linux = Linux ']' 00:02:43.804 04:50:22 build_native_dpdk -- common/autobuild_common.sh@169 -- $ lt 22.11.4 21.11.0 00:02:43.804 04:50:22 build_native_dpdk 
-- scripts/common.sh@373 -- $ cmp_versions 22.11.4 '<' 21.11.0 00:02:43.804 04:50:22 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:02:43.804 04:50:22 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:02:43.804 04:50:22 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-: 00:02:43.804 04:50:22 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1 00:02:43.804 04:50:22 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-: 00:02:43.804 04:50:22 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2 00:02:43.804 04:50:22 build_native_dpdk -- scripts/common.sh@338 -- $ local 'op=<' 00:02:43.804 04:50:22 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=3 00:02:43.804 04:50:22 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3 00:02:43.804 04:50:22 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:02:43.804 04:50:22 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in 00:02:43.804 04:50:22 build_native_dpdk -- scripts/common.sh@345 -- $ : 1 00:02:43.804 04:50:22 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 )) 00:02:43.804 04:50:22 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:02:43.804 04:50:22 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 22 00:02:43.804 04:50:22 build_native_dpdk -- scripts/common.sh@353 -- $ local d=22 00:02:43.804 04:50:22 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 22 =~ ^[0-9]+$ ]] 00:02:43.804 04:50:22 build_native_dpdk -- scripts/common.sh@355 -- $ echo 22 00:02:43.804 04:50:22 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=22 00:02:43.804 04:50:22 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 21 00:02:43.804 04:50:22 build_native_dpdk -- scripts/common.sh@353 -- $ local d=21 00:02:43.804 04:50:22 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 21 =~ ^[0-9]+$ ]] 00:02:43.804 04:50:22 build_native_dpdk -- scripts/common.sh@355 -- $ echo 21 00:02:43.804 04:50:22 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=21 00:02:43.804 04:50:22 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:02:43.804 04:50:22 build_native_dpdk -- scripts/common.sh@367 -- $ return 1 00:02:43.804 04:50:22 build_native_dpdk -- common/autobuild_common.sh@173 -- $ patch -p1 00:02:43.804 patching file config/rte_config.h 00:02:43.804 Hunk #1 succeeded at 60 (offset 1 line). 
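The xtrace above is scripts/common.sh's cmp_versions walking two dotted versions field by field: each string is split on ".", "-" and ":" into an array, each field is normalized through decimal, and the fields are compared numerically left to right; here 22 > 21 in the first field, so "22.11.4 < 21.11.0" fails with return 1 and the build script proceeds to patch rte_config.h for the newer DPDK. A minimal, standalone sketch of that comparison, reduced to the strict "<" case (the helper's name and the IFS=.-: splitting match the trace; the rest is a simplification, not the full scripts/common.sh implementation):

  lt() {
      # Split both versions on . - : as the traced helper does.
      local -a v1 v2
      local i
      IFS=.-: read -ra v1 <<< "$1"
      IFS=.-: read -ra v2 <<< "$2"
      # Compare numerically, left to right; missing fields count as 0.
      for ((i = 0; i < ${#v1[@]} || i < ${#v2[@]}; i++)); do
          (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
          (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0
      done
      return 1   # equal versions are not strictly less
  }

  lt 22.11.4 21.11.0; echo $?   # 1 -- matches the trace above
  lt 22.11.4 24.07.0; echo $?   # 0 -- matches the check that follows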
00:02:43.804 04:50:22 build_native_dpdk -- common/autobuild_common.sh@176 -- $ lt 22.11.4 24.07.0 00:02:43.804 04:50:22 build_native_dpdk -- scripts/common.sh@373 -- $ cmp_versions 22.11.4 '<' 24.07.0 00:02:43.804 04:50:22 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:02:43.804 04:50:22 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:02:43.804 04:50:22 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-: 00:02:43.804 04:50:22 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1 00:02:43.804 04:50:22 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-: 00:02:43.804 04:50:22 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2 00:02:43.804 04:50:22 build_native_dpdk -- scripts/common.sh@338 -- $ local 'op=<' 00:02:43.804 04:50:22 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=3 00:02:43.804 04:50:22 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3 00:02:43.804 04:50:22 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:02:43.804 04:50:22 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in 00:02:43.804 04:50:22 build_native_dpdk -- scripts/common.sh@345 -- $ : 1 00:02:43.804 04:50:22 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 )) 00:02:43.804 04:50:22 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:02:44.063 04:50:22 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 22 00:02:44.063 04:50:22 build_native_dpdk -- scripts/common.sh@353 -- $ local d=22 00:02:44.063 04:50:22 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 22 =~ ^[0-9]+$ ]] 00:02:44.063 04:50:22 build_native_dpdk -- scripts/common.sh@355 -- $ echo 22 00:02:44.063 04:50:22 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=22 00:02:44.063 04:50:22 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 24 00:02:44.063 04:50:22 build_native_dpdk -- scripts/common.sh@353 -- $ local d=24 00:02:44.063 04:50:22 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 24 =~ ^[0-9]+$ ]] 00:02:44.063 04:50:22 build_native_dpdk -- scripts/common.sh@355 -- $ echo 24 00:02:44.063 04:50:22 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=24 00:02:44.063 04:50:22 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:02:44.063 04:50:22 build_native_dpdk -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] )) 00:02:44.063 04:50:22 build_native_dpdk -- scripts/common.sh@368 -- $ return 0 00:02:44.063 04:50:22 build_native_dpdk -- common/autobuild_common.sh@177 -- $ patch -p1 00:02:44.063 patching file lib/pcapng/rte_pcapng.c 00:02:44.063 Hunk #1 succeeded at 110 (offset -18 lines). 
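As the trace shows, each carried DPDK patch is gated on the checked-out DPDK version: cmp_versions reported 22.11.4 < 24.07.0 (return 0), so the lib/pcapng/rte_pcapng.c patch was applied, with patch relocating the hunk by -18 lines. The guard pattern, sketched with variables visible earlier in the trace (the patch file path is not shown in the log, so the one below is a hypothetical placeholder):

  # dpdk_ver=22.11.4 and external_dpdk_base_dir=/home/vagrant/spdk_repo/dpdk
  # per the trace; $patch_dir/pcapng.patch is a hypothetical name for illustration.
  if lt "$dpdk_ver" 24.07.0; then
      patch -p1 -d "$external_dpdk_base_dir" < "$patch_dir/pcapng.patch"
  fi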
00:02:44.063 04:50:22 build_native_dpdk -- common/autobuild_common.sh@179 -- $ ge 22.11.4 24.07.0 00:02:44.063 04:50:22 build_native_dpdk -- scripts/common.sh@376 -- $ cmp_versions 22.11.4 '>=' 24.07.0 00:02:44.063 04:50:22 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:02:44.063 04:50:22 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:02:44.063 04:50:22 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-: 00:02:44.063 04:50:22 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1 00:02:44.063 04:50:22 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-: 00:02:44.063 04:50:22 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2 00:02:44.063 04:50:22 build_native_dpdk -- scripts/common.sh@338 -- $ local 'op=>=' 00:02:44.063 04:50:22 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=3 00:02:44.063 04:50:22 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3 00:02:44.063 04:50:22 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:02:44.063 04:50:22 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in 00:02:44.063 04:50:22 build_native_dpdk -- scripts/common.sh@348 -- $ : 1 00:02:44.063 04:50:22 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 )) 00:02:44.063 04:50:22 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:02:44.063 04:50:22 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 22 00:02:44.063 04:50:22 build_native_dpdk -- scripts/common.sh@353 -- $ local d=22 00:02:44.063 04:50:22 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 22 =~ ^[0-9]+$ ]] 00:02:44.063 04:50:22 build_native_dpdk -- scripts/common.sh@355 -- $ echo 22 00:02:44.063 04:50:22 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=22 00:02:44.063 04:50:22 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 24 00:02:44.063 04:50:22 build_native_dpdk -- scripts/common.sh@353 -- $ local d=24 00:02:44.063 04:50:22 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 24 =~ ^[0-9]+$ ]] 00:02:44.063 04:50:22 build_native_dpdk -- scripts/common.sh@355 -- $ echo 24 00:02:44.063 04:50:22 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=24 00:02:44.063 04:50:22 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:02:44.063 04:50:22 build_native_dpdk -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] )) 00:02:44.063 04:50:22 build_native_dpdk -- scripts/common.sh@368 -- $ return 1 00:02:44.063 04:50:22 build_native_dpdk -- common/autobuild_common.sh@183 -- $ dpdk_kmods=false 00:02:44.063 04:50:22 build_native_dpdk -- common/autobuild_common.sh@184 -- $ uname -s 00:02:44.063 04:50:22 build_native_dpdk -- common/autobuild_common.sh@184 -- $ '[' Linux = FreeBSD ']' 00:02:44.063 04:50:22 build_native_dpdk -- common/autobuild_common.sh@188 -- $ printf %s, bus bus/pci bus/vdev mempool/ring net/i40e net/i40e/base 00:02:44.063 04:50:22 build_native_dpdk -- common/autobuild_common.sh@188 -- $ meson build-tmp --prefix=/home/vagrant/spdk_repo/dpdk/build --libdir lib -Denable_docs=false -Denable_kmods=false -Dtests=false -Dc_link_args= '-Dc_args=-fPIC -g -fcommon -Werror -Wno-stringop-overflow' -Dmachine=native -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:02:48.250 The Meson build system 00:02:48.250 Version: 1.5.0 00:02:48.250 Source dir: /home/vagrant/spdk_repo/dpdk 00:02:48.250 Build dir: /home/vagrant/spdk_repo/dpdk/build-tmp 00:02:48.250 Build type: native build 00:02:48.250 Program cat found: YES 
(/usr/bin/cat) 00:02:48.250 Project name: DPDK 00:02:48.250 Project version: 22.11.4 00:02:48.250 C compiler for the host machine: gcc (gcc 13.3.1 "gcc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:02:48.250 C linker for the host machine: gcc ld.bfd 2.40-14 00:02:48.250 Host machine cpu family: x86_64 00:02:48.250 Host machine cpu: x86_64 00:02:48.250 Message: ## Building in Developer Mode ## 00:02:48.250 Program pkg-config found: YES (/usr/bin/pkg-config) 00:02:48.250 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/dpdk/buildtools/check-symbols.sh) 00:02:48.250 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/dpdk/buildtools/options-ibverbs-static.sh) 00:02:48.250 Program objdump found: YES (/usr/bin/objdump) 00:02:48.250 Program python3 found: YES (/usr/bin/python3) 00:02:48.250 Program cat found: YES (/usr/bin/cat) 00:02:48.250 config/meson.build:83: WARNING: The "machine" option is deprecated. Please use "cpu_instruction_set" instead. 00:02:48.250 Checking for size of "void *" : 8 00:02:48.250 Checking for size of "void *" : 8 (cached) 00:02:48.250 Library m found: YES 00:02:48.250 Library numa found: YES 00:02:48.250 Has header "numaif.h" : YES 00:02:48.250 Library fdt found: NO 00:02:48.250 Library execinfo found: NO 00:02:48.250 Has header "execinfo.h" : YES 00:02:48.250 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:02:48.250 Run-time dependency libarchive found: NO (tried pkgconfig) 00:02:48.250 Run-time dependency libbsd found: NO (tried pkgconfig) 00:02:48.250 Run-time dependency jansson found: NO (tried pkgconfig) 00:02:48.250 Run-time dependency openssl found: YES 3.1.1 00:02:48.250 Run-time dependency libpcap found: YES 1.10.4 00:02:48.250 Has header "pcap.h" with dependency libpcap: YES 00:02:48.250 Compiler for C supports arguments -Wcast-qual: YES 00:02:48.250 Compiler for C supports arguments -Wdeprecated: YES 00:02:48.250 Compiler for C supports arguments -Wformat: YES 00:02:48.250 Compiler for C supports arguments -Wformat-nonliteral: NO 00:02:48.250 Compiler for C supports arguments -Wformat-security: NO 00:02:48.250 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:48.250 Compiler for C supports arguments -Wmissing-prototypes: YES 00:02:48.250 Compiler for C supports arguments -Wnested-externs: YES 00:02:48.250 Compiler for C supports arguments -Wold-style-definition: YES 00:02:48.250 Compiler for C supports arguments -Wpointer-arith: YES 00:02:48.250 Compiler for C supports arguments -Wsign-compare: YES 00:02:48.250 Compiler for C supports arguments -Wstrict-prototypes: YES 00:02:48.250 Compiler for C supports arguments -Wundef: YES 00:02:48.250 Compiler for C supports arguments -Wwrite-strings: YES 00:02:48.250 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:02:48.250 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:02:48.250 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:48.250 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:02:48.250 Compiler for C supports arguments -mavx512f: YES 00:02:48.250 Checking if "AVX512 checking" compiles: YES 00:02:48.250 Fetching value of define "__SSE4_2__" : 1 00:02:48.250 Fetching value of define "__AES__" : 1 00:02:48.250 Fetching value of define "__AVX__" : 1 00:02:48.250 Fetching value of define "__AVX2__" : 1 00:02:48.250 Fetching value of define "__AVX512BW__" : 1 00:02:48.250 Fetching value of define "__AVX512CD__" : 1 00:02:48.250 Fetching value of define "__AVX512DQ__" : 1 
00:02:48.250 Fetching value of define "__AVX512F__" : 1 00:02:48.250 Fetching value of define "__AVX512VL__" : 1 00:02:48.250 Fetching value of define "__PCLMUL__" : 1 00:02:48.250 Fetching value of define "__RDRND__" : 1 00:02:48.250 Fetching value of define "__RDSEED__" : 1 00:02:48.250 Fetching value of define "__VPCLMULQDQ__" : 1 00:02:48.250 Compiler for C supports arguments -Wno-format-truncation: YES 00:02:48.250 Message: lib/kvargs: Defining dependency "kvargs" 00:02:48.250 Message: lib/telemetry: Defining dependency "telemetry" 00:02:48.250 Checking for function "getentropy" : YES 00:02:48.250 Message: lib/eal: Defining dependency "eal" 00:02:48.250 Message: lib/ring: Defining dependency "ring" 00:02:48.250 Message: lib/rcu: Defining dependency "rcu" 00:02:48.250 Message: lib/mempool: Defining dependency "mempool" 00:02:48.250 Message: lib/mbuf: Defining dependency "mbuf" 00:02:48.250 Fetching value of define "__PCLMUL__" : 1 (cached) 00:02:48.250 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:48.250 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:48.250 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:02:48.250 Fetching value of define "__AVX512VL__" : 1 (cached) 00:02:48.250 Fetching value of define "__VPCLMULQDQ__" : 1 (cached) 00:02:48.250 Compiler for C supports arguments -mpclmul: YES 00:02:48.250 Compiler for C supports arguments -maes: YES 00:02:48.250 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:48.250 Compiler for C supports arguments -mavx512bw: YES 00:02:48.250 Compiler for C supports arguments -mavx512dq: YES 00:02:48.250 Compiler for C supports arguments -mavx512vl: YES 00:02:48.250 Compiler for C supports arguments -mvpclmulqdq: YES 00:02:48.250 Compiler for C supports arguments -mavx2: YES 00:02:48.250 Compiler for C supports arguments -mavx: YES 00:02:48.250 Message: lib/net: Defining dependency "net" 00:02:48.250 Message: lib/meter: Defining dependency "meter" 00:02:48.250 Message: lib/ethdev: Defining dependency "ethdev" 00:02:48.250 Message: lib/pci: Defining dependency "pci" 00:02:48.250 Message: lib/cmdline: Defining dependency "cmdline" 00:02:48.250 Message: lib/metrics: Defining dependency "metrics" 00:02:48.250 Message: lib/hash: Defining dependency "hash" 00:02:48.250 Message: lib/timer: Defining dependency "timer" 00:02:48.250 Fetching value of define "__AVX2__" : 1 (cached) 00:02:48.250 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:48.250 Fetching value of define "__AVX512VL__" : 1 (cached) 00:02:48.250 Fetching value of define "__AVX512CD__" : 1 (cached) 00:02:48.250 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:48.251 Message: lib/acl: Defining dependency "acl" 00:02:48.251 Message: lib/bbdev: Defining dependency "bbdev" 00:02:48.251 Message: lib/bitratestats: Defining dependency "bitratestats" 00:02:48.251 Run-time dependency libelf found: YES 0.191 00:02:48.251 Message: lib/bpf: Defining dependency "bpf" 00:02:48.251 Message: lib/cfgfile: Defining dependency "cfgfile" 00:02:48.251 Message: lib/compressdev: Defining dependency "compressdev" 00:02:48.251 Message: lib/cryptodev: Defining dependency "cryptodev" 00:02:48.251 Message: lib/distributor: Defining dependency "distributor" 00:02:48.251 Message: lib/efd: Defining dependency "efd" 00:02:48.251 Message: lib/eventdev: Defining dependency "eventdev" 00:02:48.251 Message: lib/gpudev: Defining dependency "gpudev" 00:02:48.251 Message: lib/gro: Defining dependency "gro" 00:02:48.251 Message: lib/gso: Defining dependency "gso" 
00:02:48.251 Message: lib/ip_frag: Defining dependency "ip_frag" 00:02:48.251 Message: lib/jobstats: Defining dependency "jobstats" 00:02:48.251 Message: lib/latencystats: Defining dependency "latencystats" 00:02:48.251 Message: lib/lpm: Defining dependency "lpm" 00:02:48.251 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:48.251 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:02:48.251 Fetching value of define "__AVX512IFMA__" : 1 00:02:48.251 Message: lib/member: Defining dependency "member" 00:02:48.251 Message: lib/pcapng: Defining dependency "pcapng" 00:02:48.251 Compiler for C supports arguments -Wno-cast-qual: YES 00:02:48.251 Message: lib/power: Defining dependency "power" 00:02:48.251 Message: lib/rawdev: Defining dependency "rawdev" 00:02:48.251 Message: lib/regexdev: Defining dependency "regexdev" 00:02:48.251 Message: lib/dmadev: Defining dependency "dmadev" 00:02:48.251 Message: lib/rib: Defining dependency "rib" 00:02:48.251 Message: lib/reorder: Defining dependency "reorder" 00:02:48.251 Message: lib/sched: Defining dependency "sched" 00:02:48.251 Message: lib/security: Defining dependency "security" 00:02:48.251 Message: lib/stack: Defining dependency "stack" 00:02:48.251 Has header "linux/userfaultfd.h" : YES 00:02:48.251 Message: lib/vhost: Defining dependency "vhost" 00:02:48.251 Message: lib/ipsec: Defining dependency "ipsec" 00:02:48.251 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:48.251 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:02:48.251 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:48.251 Message: lib/fib: Defining dependency "fib" 00:02:48.251 Message: lib/port: Defining dependency "port" 00:02:48.251 Message: lib/pdump: Defining dependency "pdump" 00:02:48.251 Message: lib/table: Defining dependency "table" 00:02:48.251 Message: lib/pipeline: Defining dependency "pipeline" 00:02:48.251 Message: lib/graph: Defining dependency "graph" 00:02:48.251 Message: lib/node: Defining dependency "node" 00:02:48.251 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:02:48.251 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:02:48.251 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:02:48.251 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:02:48.251 Compiler for C supports arguments -Wno-sign-compare: YES 00:02:48.251 Compiler for C supports arguments -Wno-unused-value: YES 00:02:48.251 Compiler for C supports arguments -Wno-format: YES 00:02:48.251 Compiler for C supports arguments -Wno-format-security: YES 00:02:48.251 Compiler for C supports arguments -Wno-format-nonliteral: YES 00:02:48.251 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:02:48.251 Compiler for C supports arguments -Wno-unused-but-set-variable: YES 00:02:48.251 Compiler for C supports arguments -Wno-unused-parameter: YES 00:02:49.188 Fetching value of define "__AVX2__" : 1 (cached) 00:02:49.188 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:49.188 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:49.188 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:49.188 Compiler for C supports arguments -mavx512bw: YES (cached) 00:02:49.188 Compiler for C supports arguments -march=skylake-avx512: YES 00:02:49.188 Message: drivers/net/i40e: Defining dependency "net_i40e" 00:02:49.188 Program doxygen found: YES (/usr/local/bin/doxygen) 00:02:49.188 Configuring doxy-api.conf using configuration 00:02:49.188 Program sphinx-build found: NO 00:02:49.188 
Configuring rte_build_config.h using configuration 00:02:49.188 Message: 00:02:49.188 ================= 00:02:49.188 Applications Enabled 00:02:49.188 ================= 00:02:49.188 00:02:49.188 apps: 00:02:49.188 dumpcap, pdump, proc-info, test-acl, test-bbdev, test-cmdline, test-compress-perf, test-crypto-perf, 00:02:49.188 test-eventdev, test-fib, test-flow-perf, test-gpudev, test-pipeline, test-pmd, test-regex, test-sad, 00:02:49.188 test-security-perf, 00:02:49.188 00:02:49.188 Message: 00:02:49.188 ================= 00:02:49.188 Libraries Enabled 00:02:49.188 ================= 00:02:49.188 00:02:49.188 libs: 00:02:49.188 kvargs, telemetry, eal, ring, rcu, mempool, mbuf, net, 00:02:49.188 meter, ethdev, pci, cmdline, metrics, hash, timer, acl, 00:02:49.188 bbdev, bitratestats, bpf, cfgfile, compressdev, cryptodev, distributor, efd, 00:02:49.188 eventdev, gpudev, gro, gso, ip_frag, jobstats, latencystats, lpm, 00:02:49.188 member, pcapng, power, rawdev, regexdev, dmadev, rib, reorder, 00:02:49.188 sched, security, stack, vhost, ipsec, fib, port, pdump, 00:02:49.188 table, pipeline, graph, node, 00:02:49.188 00:02:49.188 Message: 00:02:49.188 =============== 00:02:49.188 Drivers Enabled 00:02:49.188 =============== 00:02:49.188 00:02:49.188 common: 00:02:49.188 00:02:49.188 bus: 00:02:49.188 pci, vdev, 00:02:49.188 mempool: 00:02:49.188 ring, 00:02:49.188 dma: 00:02:49.188 00:02:49.188 net: 00:02:49.188 i40e, 00:02:49.188 raw: 00:02:49.188 00:02:49.188 crypto: 00:02:49.188 00:02:49.188 compress: 00:02:49.188 00:02:49.188 regex: 00:02:49.188 00:02:49.188 vdpa: 00:02:49.188 00:02:49.188 event: 00:02:49.188 00:02:49.188 baseband: 00:02:49.188 00:02:49.188 gpu: 00:02:49.188 00:02:49.188 00:02:49.188 Message: 00:02:49.188 ================= 00:02:49.188 Content Skipped 00:02:49.188 ================= 00:02:49.188 00:02:49.188 apps: 00:02:49.188 00:02:49.188 libs: 00:02:49.188 kni: explicitly disabled via build config (deprecated lib) 00:02:49.188 flow_classify: explicitly disabled via build config (deprecated lib) 00:02:49.188 00:02:49.188 drivers: 00:02:49.188 common/cpt: not in enabled drivers build config 00:02:49.188 common/dpaax: not in enabled drivers build config 00:02:49.188 common/iavf: not in enabled drivers build config 00:02:49.188 common/idpf: not in enabled drivers build config 00:02:49.188 common/mvep: not in enabled drivers build config 00:02:49.188 common/octeontx: not in enabled drivers build config 00:02:49.188 bus/auxiliary: not in enabled drivers build config 00:02:49.188 bus/dpaa: not in enabled drivers build config 00:02:49.188 bus/fslmc: not in enabled drivers build config 00:02:49.188 bus/ifpga: not in enabled drivers build config 00:02:49.188 bus/vmbus: not in enabled drivers build config 00:02:49.188 common/cnxk: not in enabled drivers build config 00:02:49.188 common/mlx5: not in enabled drivers build config 00:02:49.188 common/qat: not in enabled drivers build config 00:02:49.188 common/sfc_efx: not in enabled drivers build config 00:02:49.188 mempool/bucket: not in enabled drivers build config 00:02:49.188 mempool/cnxk: not in enabled drivers build config 00:02:49.188 mempool/dpaa: not in enabled drivers build config 00:02:49.188 mempool/dpaa2: not in enabled drivers build config 00:02:49.188 mempool/octeontx: not in enabled drivers build config 00:02:49.188 mempool/stack: not in enabled drivers build config 00:02:49.188 dma/cnxk: not in enabled drivers build config 00:02:49.188 dma/dpaa: not in enabled drivers build config 00:02:49.188 dma/dpaa2: not in enabled 
drivers build config 00:02:49.188 dma/hisilicon: not in enabled drivers build config 00:02:49.188 dma/idxd: not in enabled drivers build config 00:02:49.188 dma/ioat: not in enabled drivers build config 00:02:49.188 dma/skeleton: not in enabled drivers build config 00:02:49.188 net/af_packet: not in enabled drivers build config 00:02:49.188 net/af_xdp: not in enabled drivers build config 00:02:49.188 net/ark: not in enabled drivers build config 00:02:49.188 net/atlantic: not in enabled drivers build config 00:02:49.188 net/avp: not in enabled drivers build config 00:02:49.188 net/axgbe: not in enabled drivers build config 00:02:49.188 net/bnx2x: not in enabled drivers build config 00:02:49.188 net/bnxt: not in enabled drivers build config 00:02:49.188 net/bonding: not in enabled drivers build config 00:02:49.188 net/cnxk: not in enabled drivers build config 00:02:49.188 net/cxgbe: not in enabled drivers build config 00:02:49.188 net/dpaa: not in enabled drivers build config 00:02:49.188 net/dpaa2: not in enabled drivers build config 00:02:49.188 net/e1000: not in enabled drivers build config 00:02:49.188 net/ena: not in enabled drivers build config 00:02:49.188 net/enetc: not in enabled drivers build config 00:02:49.188 net/enetfec: not in enabled drivers build config 00:02:49.188 net/enic: not in enabled drivers build config 00:02:49.188 net/failsafe: not in enabled drivers build config 00:02:49.188 net/fm10k: not in enabled drivers build config 00:02:49.188 net/gve: not in enabled drivers build config 00:02:49.188 net/hinic: not in enabled drivers build config 00:02:49.188 net/hns3: not in enabled drivers build config 00:02:49.188 net/iavf: not in enabled drivers build config 00:02:49.188 net/ice: not in enabled drivers build config 00:02:49.188 net/idpf: not in enabled drivers build config 00:02:49.188 net/igc: not in enabled drivers build config 00:02:49.188 net/ionic: not in enabled drivers build config 00:02:49.188 net/ipn3ke: not in enabled drivers build config 00:02:49.188 net/ixgbe: not in enabled drivers build config 00:02:49.188 net/kni: not in enabled drivers build config 00:02:49.188 net/liquidio: not in enabled drivers build config 00:02:49.188 net/mana: not in enabled drivers build config 00:02:49.188 net/memif: not in enabled drivers build config 00:02:49.188 net/mlx4: not in enabled drivers build config 00:02:49.188 net/mlx5: not in enabled drivers build config 00:02:49.188 net/mvneta: not in enabled drivers build config 00:02:49.188 net/mvpp2: not in enabled drivers build config 00:02:49.188 net/netvsc: not in enabled drivers build config 00:02:49.188 net/nfb: not in enabled drivers build config 00:02:49.188 net/nfp: not in enabled drivers build config 00:02:49.188 net/ngbe: not in enabled drivers build config 00:02:49.188 net/null: not in enabled drivers build config 00:02:49.188 net/octeontx: not in enabled drivers build config 00:02:49.188 net/octeon_ep: not in enabled drivers build config 00:02:49.188 net/pcap: not in enabled drivers build config 00:02:49.188 net/pfe: not in enabled drivers build config 00:02:49.188 net/qede: not in enabled drivers build config 00:02:49.188 net/ring: not in enabled drivers build config 00:02:49.188 net/sfc: not in enabled drivers build config 00:02:49.188 net/softnic: not in enabled drivers build config 00:02:49.188 net/tap: not in enabled drivers build config 00:02:49.188 net/thunderx: not in enabled drivers build config 00:02:49.188 net/txgbe: not in enabled drivers build config 00:02:49.188 net/vdev_netvsc: not in enabled drivers 
build config 00:02:49.188 net/vhost: not in enabled drivers build config 00:02:49.188 net/virtio: not in enabled drivers build config 00:02:49.188 net/vmxnet3: not in enabled drivers build config 00:02:49.188 raw/cnxk_bphy: not in enabled drivers build config 00:02:49.188 raw/cnxk_gpio: not in enabled drivers build config 00:02:49.188 raw/dpaa2_cmdif: not in enabled drivers build config 00:02:49.188 raw/ifpga: not in enabled drivers build config 00:02:49.188 raw/ntb: not in enabled drivers build config 00:02:49.188 raw/skeleton: not in enabled drivers build config 00:02:49.188 crypto/armv8: not in enabled drivers build config 00:02:49.188 crypto/bcmfs: not in enabled drivers build config 00:02:49.188 crypto/caam_jr: not in enabled drivers build config 00:02:49.188 crypto/ccp: not in enabled drivers build config 00:02:49.188 crypto/cnxk: not in enabled drivers build config 00:02:49.188 crypto/dpaa_sec: not in enabled drivers build config 00:02:49.188 crypto/dpaa2_sec: not in enabled drivers build config 00:02:49.188 crypto/ipsec_mb: not in enabled drivers build config 00:02:49.188 crypto/mlx5: not in enabled drivers build config 00:02:49.188 crypto/mvsam: not in enabled drivers build config 00:02:49.188 crypto/nitrox: not in enabled drivers build config 00:02:49.188 crypto/null: not in enabled drivers build config 00:02:49.188 crypto/octeontx: not in enabled drivers build config 00:02:49.188 crypto/openssl: not in enabled drivers build config 00:02:49.188 crypto/scheduler: not in enabled drivers build config 00:02:49.188 crypto/uadk: not in enabled drivers build config 00:02:49.188 crypto/virtio: not in enabled drivers build config 00:02:49.188 compress/isal: not in enabled drivers build config 00:02:49.188 compress/mlx5: not in enabled drivers build config 00:02:49.188 compress/octeontx: not in enabled drivers build config 00:02:49.188 compress/zlib: not in enabled drivers build config 00:02:49.188 regex/mlx5: not in enabled drivers build config 00:02:49.188 regex/cn9k: not in enabled drivers build config 00:02:49.188 vdpa/ifc: not in enabled drivers build config 00:02:49.188 vdpa/mlx5: not in enabled drivers build config 00:02:49.188 vdpa/sfc: not in enabled drivers build config 00:02:49.188 event/cnxk: not in enabled drivers build config 00:02:49.188 event/dlb2: not in enabled drivers build config 00:02:49.189 event/dpaa: not in enabled drivers build config 00:02:49.189 event/dpaa2: not in enabled drivers build config 00:02:49.189 event/dsw: not in enabled drivers build config 00:02:49.189 event/opdl: not in enabled drivers build config 00:02:49.189 event/skeleton: not in enabled drivers build config 00:02:49.189 event/sw: not in enabled drivers build config 00:02:49.189 event/octeontx: not in enabled drivers build config 00:02:49.189 baseband/acc: not in enabled drivers build config 00:02:49.189 baseband/fpga_5gnr_fec: not in enabled drivers build config 00:02:49.189 baseband/fpga_lte_fec: not in enabled drivers build config 00:02:49.189 baseband/la12xx: not in enabled drivers build config 00:02:49.189 baseband/null: not in enabled drivers build config 00:02:49.189 baseband/turbo_sw: not in enabled drivers build config 00:02:49.189 gpu/cuda: not in enabled drivers build config 00:02:49.189 00:02:49.189 00:02:49.189 Build targets in project: 309 00:02:49.189 00:02:49.189 DPDK 22.11.4 00:02:49.189 00:02:49.189 User defined options 00:02:49.189 libdir : lib 00:02:49.189 prefix : /home/vagrant/spdk_repo/dpdk/build 00:02:49.189 c_args : -fPIC -g -fcommon -Werror -Wno-stringop-overflow 
00:02:49.189 c_link_args : 00:02:49.189 enable_docs : false 00:02:49.189 enable_drivers: bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:02:49.189 enable_kmods : false 00:02:49.189 machine : native 00:02:49.189 tests : false 00:02:49.189 00:02:49.189 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:49.189 WARNING: Running the setup command as `meson [options]` instead of `meson setup [options]` is ambiguous and deprecated. 00:02:49.189 04:50:27 build_native_dpdk -- common/autobuild_common.sh@192 -- $ ninja -C /home/vagrant/spdk_repo/dpdk/build-tmp -j10 00:02:49.189 ninja: Entering directory `/home/vagrant/spdk_repo/dpdk/build-tmp' 00:02:49.189 [1/738] Generating lib/rte_kvargs_mingw with a custom command 00:02:49.189 [2/738] Generating lib/rte_kvargs_def with a custom command 00:02:49.189 [3/738] Generating lib/rte_telemetry_def with a custom command 00:02:49.189 [4/738] Generating lib/rte_telemetry_mingw with a custom command 00:02:49.447 [5/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:49.447 [6/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:49.447 [7/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:49.447 [8/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:49.447 [9/738] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:49.447 [10/738] Linking static target lib/librte_kvargs.a 00:02:49.447 [11/738] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:49.447 [12/738] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:49.447 [13/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:49.447 [14/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:49.447 [15/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:49.447 [16/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:49.447 [17/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:49.448 [18/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:49.448 [19/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:49.448 [20/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:49.706 [21/738] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:49.706 [22/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:49.706 [23/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_log.c.o 00:02:49.706 [24/738] Linking target lib/librte_kvargs.so.23.0 00:02:49.706 [25/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:49.706 [26/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:49.706 [27/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:49.706 [28/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:49.706 [29/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:49.706 [30/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:49.706 [31/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:49.706 [32/738] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:49.706 [33/738] 
Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:49.706 [34/738] Linking static target lib/librte_telemetry.a 00:02:49.965 [35/738] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:49.965 [36/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:49.965 [37/738] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:49.965 [38/738] Generating symbol file lib/librte_kvargs.so.23.0.p/librte_kvargs.so.23.0.symbols 00:02:49.965 [39/738] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:49.965 [40/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:49.965 [41/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:49.965 [42/738] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:49.965 [43/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:49.965 [44/738] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:50.224 [45/738] Linking target lib/librte_telemetry.so.23.0 00:02:50.224 [46/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:50.224 [47/738] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:50.224 [48/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:50.224 [49/738] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:50.224 [50/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:50.224 [51/738] Generating symbol file lib/librte_telemetry.so.23.0.p/librte_telemetry.so.23.0.symbols 00:02:50.224 [52/738] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:50.224 [53/738] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:50.224 [54/738] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:50.224 [55/738] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:50.224 [56/738] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:50.224 [57/738] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:50.224 [58/738] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:50.224 [59/738] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:50.224 [60/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:50.224 [61/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:50.224 [62/738] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:50.224 [63/738] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:50.224 [64/738] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:50.224 [65/738] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:50.224 [66/738] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:50.483 [67/738] Compiling C object lib/librte_eal.a.p/eal_linux_eal_log.c.o 00:02:50.483 [68/738] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:50.483 [69/738] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:50.483 [70/738] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:50.483 [71/738] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:50.483 [72/738] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:50.483 
[73/738] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:50.483 [74/738] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:50.483 [75/738] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:50.483 [76/738] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:50.483 [77/738] Generating lib/rte_eal_def with a custom command 00:02:50.483 [78/738] Generating lib/rte_eal_mingw with a custom command 00:02:50.483 [79/738] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:50.483 [80/738] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:50.483 [81/738] Generating lib/rte_ring_def with a custom command 00:02:50.483 [82/738] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:50.483 [83/738] Generating lib/rte_ring_mingw with a custom command 00:02:50.483 [84/738] Generating lib/rte_rcu_def with a custom command 00:02:50.483 [85/738] Generating lib/rte_rcu_mingw with a custom command 00:02:50.483 [86/738] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:50.741 [87/738] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:50.741 [88/738] Generating lib/rte_mempool_def with a custom command 00:02:50.741 [89/738] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:50.741 [90/738] Linking static target lib/librte_ring.a 00:02:50.741 [91/738] Generating lib/rte_mempool_mingw with a custom command 00:02:50.741 [92/738] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:50.741 [93/738] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:50.741 [94/738] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:51.000 [95/738] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:51.000 [96/738] Generating lib/rte_mbuf_def with a custom command 00:02:51.000 [97/738] Generating lib/rte_mbuf_mingw with a custom command 00:02:51.000 [98/738] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:51.000 [99/738] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:51.000 [100/738] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:51.000 [101/738] Linking static target lib/librte_eal.a 00:02:51.000 [102/738] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:51.000 [103/738] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:51.000 [104/738] Linking static target lib/librte_rcu.a 00:02:51.258 [105/738] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:51.258 [106/738] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:51.258 [107/738] Linking static target lib/librte_mempool.a 00:02:51.258 [108/738] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:51.258 [109/738] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:51.258 [110/738] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:51.258 [111/738] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:51.258 [112/738] Generating lib/rte_net_mingw with a custom command 00:02:51.258 [113/738] Generating lib/rte_net_def with a custom command 00:02:51.258 [114/738] Generating lib/rte_meter_def with a custom command 00:02:51.258 [115/738] Generating lib/rte_meter_mingw with a custom command 00:02:51.517 [116/738] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 
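librte_eal, linked above at [101/738], is the environment abstraction layer that every DPDK program initializes before anything else. A minimal sketch of that lifecycle, using the standard rte_eal_init/rte_eal_cleanup calls rather than anything specific to this build:

    #include <stdlib.h>
    #include <rte_eal.h>
    #include <rte_debug.h>
    #include <rte_lcore.h>

    int main(int argc, char **argv)
    {
        /* Parses and consumes EAL options (cores, hugepages, devices, ...). */
        int ret = rte_eal_init(argc, argv);
        if (ret < 0)
            rte_exit(EXIT_FAILURE, "EAL init failed\n");

        /* lcores, memory, and probed devices are usable from here on. */
        unsigned int lcores = rte_lcore_count();
        (void)lcores;

        rte_eal_cleanup();   /* release hugepage memory and other resources */
        return 0;
    }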
00:02:51.517 [117/738] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:51.517 [118/738] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:51.517 [119/738] Linking static target lib/librte_meter.a 00:02:51.517 [120/738] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:51.517 [121/738] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:51.775 [122/738] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:51.775 [123/738] Compiling C object lib/librte_net.a.p/net_net_crc_avx512.c.o 00:02:51.775 [124/738] Linking static target lib/librte_net.a 00:02:51.775 [125/738] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:51.775 [126/738] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:51.775 [127/738] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:51.775 [128/738] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:51.775 [129/738] Linking static target lib/librte_mbuf.a 00:02:51.775 [130/738] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:51.775 [131/738] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:52.032 [132/738] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:52.032 [133/738] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:52.290 [134/738] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:52.290 [135/738] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:52.290 [136/738] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:52.290 [137/738] Generating lib/rte_ethdev_def with a custom command 00:02:52.290 [138/738] Generating lib/rte_ethdev_mingw with a custom command 00:02:52.290 [139/738] Generating lib/rte_pci_def with a custom command 00:02:52.290 [140/738] Generating lib/rte_pci_mingw with a custom command 00:02:52.290 [141/738] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:52.290 [142/738] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:52.291 [143/738] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:52.291 [144/738] Linking static target lib/librte_pci.a 00:02:52.291 [145/738] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:52.291 [146/738] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:52.291 [147/738] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:52.549 [148/738] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:52.549 [149/738] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:52.549 [150/738] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:52.549 [151/738] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:52.549 [152/738] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:52.549 [153/738] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:52.549 [154/738] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:52.549 [155/738] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:52.549 [156/738] Generating lib/rte_cmdline_def with a custom command 00:02:52.549 [157/738] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 
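librte_mbuf, linked above at [129/738], sits on top of mempool and provides the packet buffers that the net and driver code later in this log consumes. A minimal sketch of creating a pktmbuf pool; the name and sizes are illustrative, not values taken from this build:

    #include <rte_lcore.h>
    #include <rte_mbuf.h>

    /* Assumes rte_eal_init() has already succeeded. */
    struct rte_mempool *make_pool(void)
    {
        return rte_pktmbuf_pool_create("pktmbuf_pool",
                                       8192,  /* number of mbufs */
                                       256,   /* per-lcore cache size */
                                       0,     /* private data size */
                                       RTE_MBUF_DEFAULT_BUF_SIZE,
                                       rte_socket_id());
    }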
00:02:52.549 [158/738] Generating lib/rte_cmdline_mingw with a custom command 00:02:52.549 [159/738] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:52.549 [160/738] Generating lib/rte_metrics_def with a custom command 00:02:52.549 [161/738] Generating lib/rte_metrics_mingw with a custom command 00:02:52.549 [162/738] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:52.549 [163/738] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:52.549 [164/738] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:52.549 [165/738] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics.c.o 00:02:52.549 [166/738] Generating lib/rte_hash_def with a custom command 00:02:52.549 [167/738] Linking static target lib/librte_cmdline.a 00:02:52.807 [168/738] Generating lib/rte_hash_mingw with a custom command 00:02:52.807 [169/738] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:52.807 [170/738] Generating lib/rte_timer_def with a custom command 00:02:52.807 [171/738] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:52.807 [172/738] Generating lib/rte_timer_mingw with a custom command 00:02:52.807 [173/738] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics_telemetry.c.o 00:02:52.807 [174/738] Linking static target lib/librte_metrics.a 00:02:53.066 [175/738] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:53.066 [176/738] Linking static target lib/librte_timer.a 00:02:53.066 [177/738] Generating lib/metrics.sym_chk with a custom command (wrapped by meson to capture output) 00:02:53.066 [178/738] Compiling C object lib/librte_acl.a.p/acl_acl_gen.c.o 00:02:53.324 [179/738] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:53.324 [180/738] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:53.324 [181/738] Compiling C object lib/librte_acl.a.p/acl_acl_run_scalar.c.o 00:02:53.324 [182/738] Compiling C object lib/librte_acl.a.p/acl_tb_mem.c.o 00:02:53.324 [183/738] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:53.324 [184/738] Generating lib/rte_acl_def with a custom command 00:02:53.324 [185/738] Generating lib/rte_acl_mingw with a custom command 00:02:53.324 [186/738] Generating lib/rte_bbdev_def with a custom command 00:02:53.324 [187/738] Generating lib/rte_bbdev_mingw with a custom command 00:02:53.582 [188/738] Compiling C object lib/librte_acl.a.p/acl_rte_acl.c.o 00:02:53.582 [189/738] Generating lib/rte_bitratestats_def with a custom command 00:02:53.582 [190/738] Generating lib/rte_bitratestats_mingw with a custom command 00:02:53.875 [191/738] Compiling C object lib/librte_bitratestats.a.p/bitratestats_rte_bitrate.c.o 00:02:53.875 [192/738] Linking static target lib/librte_bitratestats.a 00:02:53.875 [193/738] Compiling C object lib/librte_acl.a.p/acl_acl_bld.c.o 00:02:53.875 [194/738] Generating lib/bitratestats.sym_chk with a custom command (wrapped by meson to capture output) 00:02:53.875 [195/738] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:53.875 [196/738] Compiling C object lib/librte_bbdev.a.p/bbdev_rte_bbdev.c.o 00:02:53.875 [197/738] Linking static target lib/librte_ethdev.a 00:02:53.875 [198/738] Linking static target lib/librte_bbdev.a 00:02:53.875 [199/738] Compiling C object lib/librte_bpf.a.p/bpf_bpf.c.o 00:02:54.147 [200/738] Compiling C object lib/librte_bpf.a.p/bpf_bpf_dump.c.o 
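librte_ethdev, linked above at [197/738], is the device API that the i40e driver built at the end of this log plugs into. A minimal single-queue port bring-up sketch; the port number and queue sizes are hypothetical, and error handling is trimmed:

    #include <rte_ethdev.h>
    #include <rte_mbuf.h>

    /* Configure port 0 with one RX and one TX queue. */
    static int port_up(struct rte_mempool *pool)
    {
        struct rte_eth_conf conf = {0};
        uint16_t port = 0;

        if (rte_eth_dev_configure(port, 1, 1, &conf) < 0)
            return -1;
        if (rte_eth_rx_queue_setup(port, 0, 1024,
                rte_eth_dev_socket_id(port), NULL, pool) < 0)
            return -1;
        if (rte_eth_tx_queue_setup(port, 0, 1024,
                rte_eth_dev_socket_id(port), NULL) < 0)
            return -1;
        return rte_eth_dev_start(port);   /* negative errno on failure */
    }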
00:02:54.404 [201/738] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load.c.o 00:02:54.404 [202/738] Generating lib/bbdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:54.405 [203/738] Compiling C object lib/librte_bpf.a.p/bpf_bpf_exec.c.o 00:02:54.405 [204/738] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:54.405 [205/738] Linking static target lib/librte_hash.a 00:02:54.662 [206/738] Compiling C object lib/librte_bpf.a.p/bpf_bpf_stub.c.o 00:02:54.662 [207/738] Compiling C object lib/librte_acl.a.p/acl_acl_run_sse.c.o 00:02:54.662 [208/738] Compiling C object lib/librte_bpf.a.p/bpf_bpf_pkt.c.o 00:02:54.662 [209/738] Generating lib/rte_bpf_def with a custom command 00:02:54.662 [210/738] Generating lib/rte_bpf_mingw with a custom command 00:02:54.921 [211/738] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load_elf.c.o 00:02:54.921 [212/738] Compiling C object lib/librte_cfgfile.a.p/cfgfile_rte_cfgfile.c.o 00:02:54.921 [213/738] Linking static target lib/librte_cfgfile.a 00:02:54.921 [214/738] Generating lib/rte_cfgfile_def with a custom command 00:02:54.921 [215/738] Generating lib/rte_cfgfile_mingw with a custom command 00:02:54.921 [216/738] Compiling C object lib/librte_acl.a.p/acl_acl_run_avx2.c.o 00:02:54.921 [217/738] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:54.921 [218/738] Compiling C object lib/librte_bpf.a.p/bpf_bpf_validate.c.o 00:02:54.921 [219/738] Generating lib/rte_compressdev_def with a custom command 00:02:54.921 [220/738] Generating lib/rte_compressdev_mingw with a custom command 00:02:55.179 [221/738] Compiling C object lib/librte_bpf.a.p/bpf_bpf_convert.c.o 00:02:55.179 [222/738] Generating lib/cfgfile.sym_chk with a custom command (wrapped by meson to capture output) 00:02:55.179 [223/738] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:55.179 [224/738] Generating lib/rte_cryptodev_def with a custom command 00:02:55.179 [225/738] Generating lib/rte_cryptodev_mingw with a custom command 00:02:55.179 [226/738] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:55.437 [227/738] Compiling C object lib/librte_acl.a.p/acl_acl_run_avx512.c.o 00:02:55.437 [228/738] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:55.437 [229/738] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:55.437 [230/738] Linking static target lib/librte_compressdev.a 00:02:55.437 [231/738] Linking static target lib/librte_acl.a 00:02:55.437 [232/738] Compiling C object lib/librte_bpf.a.p/bpf_bpf_jit_x86.c.o 00:02:55.437 [233/738] Linking static target lib/librte_bpf.a 00:02:55.437 [234/738] Generating lib/acl.sym_chk with a custom command (wrapped by meson to capture output) 00:02:55.694 [235/738] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:55.694 [236/738] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_match_sse.c.o 00:02:55.694 [237/738] Generating lib/rte_distributor_def with a custom command 00:02:55.694 [238/738] Generating lib/rte_distributor_mingw with a custom command 00:02:55.694 [239/738] Generating lib/rte_efd_def with a custom command 00:02:55.694 [240/738] Generating lib/rte_efd_mingw with a custom command 00:02:55.694 [241/738] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_single.c.o 00:02:55.694 [242/738] Generating lib/bpf.sym_chk with a custom command (wrapped 
by meson to capture output) 00:02:55.694 [243/738] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor.c.o 00:02:55.694 [244/738] Linking static target lib/librte_distributor.a 00:02:55.952 [245/738] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_private.c.o 00:02:55.952 [246/738] Generating lib/distributor.sym_chk with a custom command (wrapped by meson to capture output) 00:02:55.952 [247/738] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_trace_points.c.o 00:02:55.952 [248/738] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:56.210 [249/738] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:56.210 [250/738] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_ring.c.o 00:02:56.210 [251/738] Linking target lib/librte_eal.so.23.0 00:02:56.210 [252/738] Generating lib/rte_eventdev_def with a custom command 00:02:56.210 [253/738] Generating lib/rte_eventdev_mingw with a custom command 00:02:56.210 [254/738] Generating symbol file lib/librte_eal.so.23.0.p/librte_eal.so.23.0.symbols 00:02:56.469 [255/738] Linking target lib/librte_ring.so.23.0 00:02:56.469 [256/738] Generating symbol file lib/librte_ring.so.23.0.p/librte_ring.so.23.0.symbols 00:02:56.469 [257/738] Linking target lib/librte_rcu.so.23.0 00:02:56.469 [258/738] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_crypto_adapter.c.o 00:02:56.469 [259/738] Linking target lib/librte_mempool.so.23.0 00:02:56.469 [260/738] Generating symbol file lib/librte_rcu.so.23.0.p/librte_rcu.so.23.0.symbols 00:02:56.469 [261/738] Linking target lib/librte_meter.so.23.0 00:02:56.469 [262/738] Compiling C object lib/librte_efd.a.p/efd_rte_efd.c.o 00:02:56.727 [263/738] Generating symbol file lib/librte_mempool.so.23.0.p/librte_mempool.so.23.0.symbols 00:02:56.727 [264/738] Linking target lib/librte_pci.so.23.0 00:02:56.727 [265/738] Linking target lib/librte_mbuf.so.23.0 00:02:56.727 [266/738] Generating symbol file lib/librte_meter.so.23.0.p/librte_meter.so.23.0.symbols 00:02:56.727 [267/738] Linking target lib/librte_timer.so.23.0 00:02:56.727 [268/738] Generating symbol file lib/librte_mbuf.so.23.0.p/librte_mbuf.so.23.0.symbols 00:02:56.727 [269/738] Generating symbol file lib/librte_pci.so.23.0.p/librte_pci.so.23.0.symbols 00:02:56.727 [270/738] Compiling C object lib/librte_gpudev.a.p/gpudev_gpudev.c.o 00:02:56.727 [271/738] Linking target lib/librte_net.so.23.0 00:02:56.727 [272/738] Linking target lib/librte_bbdev.so.23.0 00:02:56.727 [273/738] Linking target lib/librte_acl.so.23.0 00:02:56.727 [274/738] Linking target lib/librte_cfgfile.so.23.0 00:02:56.727 [275/738] Generating symbol file lib/librte_timer.so.23.0.p/librte_timer.so.23.0.symbols 00:02:56.727 [276/738] Linking target lib/librte_compressdev.so.23.0 00:02:56.727 [277/738] Linking static target lib/librte_efd.a 00:02:56.727 [278/738] Generating symbol file lib/librte_net.so.23.0.p/librte_net.so.23.0.symbols 00:02:56.727 [279/738] Linking target lib/librte_distributor.so.23.0 00:02:56.727 [280/738] Generating symbol file lib/librte_acl.so.23.0.p/librte_acl.so.23.0.symbols 00:02:56.727 [281/738] Linking static target lib/librte_gpudev.a 00:02:56.997 [282/738] Linking target lib/librte_cmdline.so.23.0 00:02:56.997 [283/738] Linking target lib/librte_hash.so.23.0 00:02:56.997 [284/738] Generating lib/rte_gpudev_def with a custom command 00:02:56.997 [285/738] Generating lib/rte_gpudev_mingw with a custom command 
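librte_ring, whose shared object is linked above at [255/738], is the lock-free FIFO that mempool and the event ring compiled here are built on. A minimal single-producer/single-consumer sketch with made-up names and sizes:

    #include <stdint.h>
    #include <rte_lcore.h>
    #include <rte_ring.h>

    static int ring_demo(void)
    {
        struct rte_ring *r = rte_ring_create("demo_ring", 1024,
                rte_socket_id(), RING_F_SP_ENQ | RING_F_SC_DEQ);
        void *obj = (void *)(uintptr_t)42, *out = NULL;

        if (r == NULL)
            return -1;
        if (rte_ring_enqueue(r, obj) != 0)    /* 0 on success */
            return -1;
        if (rte_ring_dequeue(r, &out) != 0)   /* 0 on success */
            return -1;
        rte_ring_free(r);
        return out == obj ? 0 : -1;
    }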
00:02:56.997 [286/738] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:56.997 [287/738] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_tx_adapter.c.o 00:02:56.997 [288/738] Linking static target lib/librte_cryptodev.a 00:02:56.997 [289/738] Generating symbol file lib/librte_hash.so.23.0.p/librte_hash.so.23.0.symbols 00:02:56.997 [290/738] Generating lib/efd.sym_chk with a custom command (wrapped by meson to capture output) 00:02:56.997 [291/738] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_timer_adapter.c.o 00:02:56.997 [292/738] Linking target lib/librte_efd.so.23.0 00:02:57.254 [293/738] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_eventdev.c.o 00:02:57.254 [294/738] Generating lib/rte_gro_def with a custom command 00:02:57.254 [295/738] Generating lib/rte_gro_mingw with a custom command 00:02:57.254 [296/738] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:57.254 [297/738] Compiling C object lib/librte_gro.a.p/gro_gro_tcp4.c.o 00:02:57.254 [298/738] Compiling C object lib/librte_gro.a.p/gro_rte_gro.c.o 00:02:57.254 [299/738] Linking target lib/librte_ethdev.so.23.0 00:02:57.512 [300/738] Generating symbol file lib/librte_ethdev.so.23.0.p/librte_ethdev.so.23.0.symbols 00:02:57.512 [301/738] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_tcp4.c.o 00:02:57.512 [302/738] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_rx_adapter.c.o 00:02:57.512 [303/738] Linking target lib/librte_metrics.so.23.0 00:02:57.512 [304/738] Linking static target lib/librte_eventdev.a 00:02:57.512 [305/738] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_udp4.c.o 00:02:57.512 [306/738] Linking target lib/librte_bpf.so.23.0 00:02:57.512 [307/738] Compiling C object lib/librte_gro.a.p/gro_gro_udp4.c.o 00:02:57.512 [308/738] Linking static target lib/librte_gro.a 00:02:57.512 [309/738] Generating lib/gpudev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:57.512 [310/738] Generating symbol file lib/librte_metrics.so.23.0.p/librte_metrics.so.23.0.symbols 00:02:57.512 [311/738] Linking target lib/librte_bitratestats.so.23.0 00:02:57.512 [312/738] Linking target lib/librte_gpudev.so.23.0 00:02:57.512 [313/738] Generating symbol file lib/librte_bpf.so.23.0.p/librte_bpf.so.23.0.symbols 00:02:57.512 [314/738] Compiling C object lib/librte_gso.a.p/gso_gso_tcp4.c.o 00:02:57.512 [315/738] Generating lib/rte_gso_def with a custom command 00:02:57.512 [316/738] Generating lib/rte_gso_mingw with a custom command 00:02:57.769 [317/738] Compiling C object lib/librte_gso.a.p/gso_gso_udp4.c.o 00:02:57.769 [318/738] Generating lib/gro.sym_chk with a custom command (wrapped by meson to capture output) 00:02:57.769 [319/738] Linking target lib/librte_gro.so.23.0 00:02:57.769 [320/738] Compiling C object lib/librte_gso.a.p/gso_gso_common.c.o 00:02:57.769 [321/738] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_tcp4.c.o 00:02:57.769 [322/738] Compiling C object lib/librte_gso.a.p/gso_rte_gso.c.o 00:02:57.769 [323/738] Generating lib/rte_ip_frag_def with a custom command 00:02:58.027 [324/738] Generating lib/rte_ip_frag_mingw with a custom command 00:02:58.027 [325/738] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_udp4.c.o 00:02:58.027 [326/738] Linking static target lib/librte_gso.a 00:02:58.027 [327/738] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_reassembly.c.o 00:02:58.027 [328/738] Compiling C object 
lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_reassembly.c.o 00:02:58.027 [329/738] Compiling C object lib/librte_jobstats.a.p/jobstats_rte_jobstats.c.o 00:02:58.027 [330/738] Generating lib/rte_jobstats_mingw with a custom command 00:02:58.027 [331/738] Generating lib/rte_jobstats_def with a custom command 00:02:58.027 [332/738] Linking static target lib/librte_jobstats.a 00:02:58.027 [333/738] Generating lib/gso.sym_chk with a custom command (wrapped by meson to capture output) 00:02:58.027 [334/738] Generating lib/rte_latencystats_def with a custom command 00:02:58.027 [335/738] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_fragmentation.c.o 00:02:58.027 [336/738] Generating lib/rte_latencystats_mingw with a custom command 00:02:58.027 [337/738] Linking target lib/librte_gso.so.23.0 00:02:58.027 [338/738] Generating lib/rte_lpm_def with a custom command 00:02:58.027 [339/738] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_fragmentation.c.o 00:02:58.027 [340/738] Generating lib/rte_lpm_mingw with a custom command 00:02:58.285 [341/738] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ip_frag_common.c.o 00:02:58.285 [342/738] Compiling C object lib/librte_ip_frag.a.p/ip_frag_ip_frag_internal.c.o 00:02:58.285 [343/738] Linking static target lib/librte_ip_frag.a 00:02:58.285 [344/738] Generating lib/jobstats.sym_chk with a custom command (wrapped by meson to capture output) 00:02:58.285 [345/738] Linking target lib/librte_jobstats.so.23.0 00:02:58.544 [346/738] Compiling C object lib/librte_latencystats.a.p/latencystats_rte_latencystats.c.o 00:02:58.544 [347/738] Linking static target lib/librte_latencystats.a 00:02:58.544 [348/738] Generating lib/ip_frag.sym_chk with a custom command (wrapped by meson to capture output) 00:02:58.544 [349/738] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:58.544 [350/738] Linking target lib/librte_ip_frag.so.23.0 00:02:58.544 [351/738] Linking target lib/librte_cryptodev.so.23.0 00:02:58.544 [352/738] Compiling C object lib/librte_member.a.p/member_rte_member.c.o 00:02:58.544 [353/738] Generating lib/latencystats.sym_chk with a custom command (wrapped by meson to capture output) 00:02:58.544 [354/738] Generating symbol file lib/librte_ip_frag.so.23.0.p/librte_ip_frag.so.23.0.symbols 00:02:58.544 [355/738] Linking target lib/librte_latencystats.so.23.0 00:02:58.544 [356/738] Generating lib/rte_member_def with a custom command 00:02:58.544 [357/738] Generating symbol file lib/librte_cryptodev.so.23.0.p/librte_cryptodev.so.23.0.symbols 00:02:58.544 [358/738] Generating lib/rte_member_mingw with a custom command 00:02:58.544 [359/738] Generating lib/rte_pcapng_def with a custom command 00:02:58.544 [360/738] Generating lib/rte_pcapng_mingw with a custom command 00:02:58.802 [361/738] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm.c.o 00:02:58.802 [362/738] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:58.802 [363/738] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:58.802 [364/738] Generating lib/eventdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:58.802 [365/738] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:58.802 [366/738] Linking target lib/librte_eventdev.so.23.0 00:02:58.802 [367/738] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm6.c.o 00:02:58.802 [368/738] Linking static target lib/librte_lpm.a 00:02:59.061 [369/738] Generating symbol file 
lib/librte_eventdev.so.23.0.p/librte_eventdev.so.23.0.symbols 00:02:59.061 [370/738] Compiling C object lib/librte_member.a.p/member_rte_member_ht.c.o 00:02:59.061 [371/738] Compiling C object lib/librte_member.a.p/member_rte_member_sketch_avx512.c.o 00:02:59.061 [372/738] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:59.061 [373/738] Compiling C object lib/librte_member.a.p/member_rte_member_vbf.c.o 00:02:59.061 [374/738] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:59.061 [375/738] Generating lib/rte_power_def with a custom command 00:02:59.061 [376/738] Generating lib/rte_power_mingw with a custom command 00:02:59.061 [377/738] Generating lib/rte_rawdev_def with a custom command 00:02:59.061 [378/738] Generating lib/rte_rawdev_mingw with a custom command 00:02:59.061 [379/738] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:59.061 [380/738] Generating lib/lpm.sym_chk with a custom command (wrapped by meson to capture output) 00:02:59.061 [381/738] Compiling C object lib/librte_pcapng.a.p/pcapng_rte_pcapng.c.o 00:02:59.061 [382/738] Linking static target lib/librte_pcapng.a 00:02:59.061 [383/738] Generating lib/rte_regexdev_def with a custom command 00:02:59.061 [384/738] Compiling C object lib/librte_power.a.p/power_rte_power_empty_poll.c.o 00:02:59.061 [385/738] Linking target lib/librte_lpm.so.23.0 00:02:59.061 [386/738] Generating lib/rte_regexdev_mingw with a custom command 00:02:59.061 [387/738] Generating lib/rte_dmadev_def with a custom command 00:02:59.319 [388/738] Generating lib/rte_dmadev_mingw with a custom command 00:02:59.319 [389/738] Generating symbol file lib/librte_lpm.so.23.0.p/librte_lpm.so.23.0.symbols 00:02:59.319 [390/738] Generating lib/pcapng.sym_chk with a custom command (wrapped by meson to capture output) 00:02:59.319 [391/738] Linking target lib/librte_pcapng.so.23.0 00:02:59.319 [392/738] Compiling C object lib/librte_rawdev.a.p/rawdev_rte_rawdev.c.o 00:02:59.319 [393/738] Linking static target lib/librte_rawdev.a 00:02:59.319 [394/738] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:59.319 [395/738] Compiling C object lib/librte_power.a.p/power_rte_power_intel_uncore.c.o 00:02:59.319 [396/738] Generating lib/rte_rib_mingw with a custom command 00:02:59.319 [397/738] Generating lib/rte_rib_def with a custom command 00:02:59.319 [398/738] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:59.319 [399/738] Linking static target lib/librte_power.a 00:02:59.319 [400/738] Generating lib/rte_reorder_def with a custom command 00:02:59.578 [401/738] Generating symbol file lib/librte_pcapng.so.23.0.p/librte_pcapng.so.23.0.symbols 00:02:59.578 [402/738] Generating lib/rte_reorder_mingw with a custom command 00:02:59.578 [403/738] Compiling C object lib/librte_regexdev.a.p/regexdev_rte_regexdev.c.o 00:02:59.578 [404/738] Linking static target lib/librte_regexdev.a 00:02:59.578 [405/738] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:59.578 [406/738] Linking static target lib/librte_dmadev.a 00:02:59.578 [407/738] Compiling C object lib/librte_sched.a.p/sched_rte_red.c.o 00:02:59.578 [408/738] Compiling C object lib/librte_sched.a.p/sched_rte_approx.c.o 00:02:59.837 [409/738] Compiling C object lib/librte_sched.a.p/sched_rte_pie.c.o 00:02:59.837 [410/738] Generating lib/rawdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:59.837 [411/738] Generating lib/rte_sched_def with a custom command 
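librte_lpm, linked a few entries back at [368/738], provides the longest-prefix-match tables that the fib and node libraries later in this build draw on. A minimal IPv4 sketch; the table sizing, route, and next-hop id are invented for illustration:

    #include <rte_ip.h>
    #include <rte_lcore.h>
    #include <rte_lpm.h>

    static int lpm_demo(void)
    {
        struct rte_lpm_config cfg = {
            .max_rules = 1024,
            .number_tbl8s = 256,
        };
        struct rte_lpm *lpm = rte_lpm_create("demo_lpm", rte_socket_id(), &cfg);
        uint32_t hop = 0;

        if (lpm == NULL)
            return -1;
        if (rte_lpm_add(lpm, RTE_IPV4(10, 0, 0, 0), 8, 7) != 0)
            return -1;   /* add 10.0.0.0/8 -> next hop 7 */
        if (rte_lpm_lookup(lpm, RTE_IPV4(10, 1, 2, 3), &hop) != 0)
            return -1;   /* no matching rule */
        rte_lpm_free(lpm);
        return hop == 7 ? 0 : -1;
    }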
00:02:59.837 [412/738] Compiling C object lib/librte_rib.a.p/rib_rte_rib.c.o 00:02:59.837 [413/738] Linking target lib/librte_rawdev.so.23.0 00:02:59.837 [414/738] Generating lib/rte_sched_mingw with a custom command 00:02:59.837 [415/738] Generating lib/rte_security_def with a custom command 00:02:59.837 [416/738] Generating lib/rte_security_mingw with a custom command 00:02:59.837 [417/738] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:59.837 [418/738] Linking static target lib/librte_reorder.a 00:02:59.837 [419/738] Compiling C object lib/librte_rib.a.p/rib_rte_rib6.c.o 00:02:59.837 [420/738] Linking static target lib/librte_rib.a 00:02:59.837 [421/738] Compiling C object lib/librte_stack.a.p/stack_rte_stack_std.c.o 00:02:59.837 [422/738] Compiling C object lib/librte_stack.a.p/stack_rte_stack.c.o 00:02:59.837 [423/738] Generating lib/rte_stack_def with a custom command 00:02:59.837 [424/738] Generating lib/rte_stack_mingw with a custom command 00:02:59.837 [425/738] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:59.837 [426/738] Compiling C object lib/librte_stack.a.p/stack_rte_stack_lf.c.o 00:02:59.837 [427/738] Linking static target lib/librte_stack.a 00:03:00.096 [428/738] Linking target lib/librte_dmadev.so.23.0 00:03:00.096 [429/738] Compiling C object lib/librte_member.a.p/member_rte_member_sketch.c.o 00:03:00.096 [430/738] Linking static target lib/librte_member.a 00:03:00.096 [431/738] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:03:00.096 [432/738] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:03:00.096 [433/738] Generating symbol file lib/librte_dmadev.so.23.0.p/librte_dmadev.so.23.0.symbols 00:03:00.096 [434/738] Linking target lib/librte_reorder.so.23.0 00:03:00.096 [435/738] Generating lib/regexdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:00.096 [436/738] Linking target lib/librte_regexdev.so.23.0 00:03:00.096 [437/738] Generating lib/stack.sym_chk with a custom command (wrapped by meson to capture output) 00:03:00.096 [438/738] Linking target lib/librte_stack.so.23.0 00:03:00.096 [439/738] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:03:00.096 [440/738] Linking static target lib/librte_security.a 00:03:00.360 [441/738] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:03:00.360 [442/738] Linking target lib/librte_power.so.23.0 00:03:00.360 [443/738] Generating lib/rib.sym_chk with a custom command (wrapped by meson to capture output) 00:03:00.360 [444/738] Generating lib/member.sym_chk with a custom command (wrapped by meson to capture output) 00:03:00.360 [445/738] Linking target lib/librte_rib.so.23.0 00:03:00.360 [446/738] Linking target lib/librte_member.so.23.0 00:03:00.360 [447/738] Generating symbol file lib/librte_rib.so.23.0.p/librte_rib.so.23.0.symbols 00:03:00.360 [448/738] Generating lib/rte_vhost_def with a custom command 00:03:00.360 [449/738] Generating lib/rte_vhost_mingw with a custom command 00:03:00.618 [450/738] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:03:00.618 [451/738] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:03:00.618 [452/738] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:03:00.618 [453/738] Linking target lib/librte_security.so.23.0 00:03:00.618 [454/738] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:03:00.618 [455/738] 
Generating symbol file lib/librte_security.so.23.0.p/librte_security.so.23.0.symbols
00:03:00.618 [456/738] Compiling C object lib/librte_sched.a.p/sched_rte_sched.c.o
00:03:00.618 [457/738] Linking static target lib/librte_sched.a
00:03:00.877 [458/738] Compiling C object lib/librte_ipsec.a.p/ipsec_ses.c.o
00:03:00.877 [459/738] Generating lib/sched.sym_chk with a custom command (wrapped by meson to capture output)
00:03:01.135 [460/738] Linking target lib/librte_sched.so.23.0
00:03:01.135 [461/738] Compiling C object lib/librte_ipsec.a.p/ipsec_sa.c.o
00:03:01.135 [462/738] Generating lib/rte_ipsec_def with a custom command
00:03:01.135 [463/738] Generating lib/rte_ipsec_mingw with a custom command
00:03:01.135 [464/738] Generating symbol file lib/librte_sched.so.23.0.p/librte_sched.so.23.0.symbols
00:03:01.135 [465/738] Compiling C object lib/librte_fib.a.p/fib_rte_fib.c.o
00:03:01.135 [466/738] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o
00:03:01.394 [467/738] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_sad.c.o
00:03:01.394 [468/738] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_telemetry.c.o
00:03:01.394 [469/738] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o
00:03:01.394 [470/738] Generating lib/rte_fib_def with a custom command
00:03:01.394 [471/738] Generating lib/rte_fib_mingw with a custom command
00:03:01.394 [472/738] Compiling C object lib/librte_fib.a.p/fib_rte_fib6.c.o
00:03:01.652 [473/738] Compiling C object lib/librte_fib.a.p/fib_dir24_8_avx512.c.o
00:03:01.652 [474/738] Compiling C object lib/librte_fib.a.p/fib_trie_avx512.c.o
00:03:01.911 [475/738] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_inb.c.o
00:03:01.911 [476/738] Compiling C object lib/librte_port.a.p/port_rte_port_ethdev.c.o
00:03:01.911 [477/738] Compiling C object lib/librte_fib.a.p/fib_trie.c.o
00:03:01.911 [478/738] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_outb.c.o
00:03:01.911 [479/738] Linking static target lib/librte_ipsec.a
00:03:01.911 [480/738] Compiling C object lib/librte_fib.a.p/fib_dir24_8.c.o
00:03:01.911 [481/738] Linking static target lib/librte_fib.a
00:03:02.169 [482/738] Compiling C object lib/librte_port.a.p/port_rte_port_frag.c.o
00:03:02.169 [483/738] Compiling C object lib/librte_port.a.p/port_rte_port_sched.c.o
00:03:02.169 [484/738] Compiling C object lib/librte_port.a.p/port_rte_port_fd.c.o
00:03:02.169 [485/738] Compiling C object lib/librte_port.a.p/port_rte_port_ras.c.o
00:03:02.169 [486/738] Generating lib/fib.sym_chk with a custom command (wrapped by meson to capture output)
00:03:02.169 [487/738] Generating lib/ipsec.sym_chk with a custom command (wrapped by meson to capture output)
00:03:02.169 [488/738] Linking target lib/librte_fib.so.23.0
00:03:02.427 [489/738] Linking target lib/librte_ipsec.so.23.0
00:03:02.427 [490/738] Compiling C object lib/librte_port.a.p/port_rte_port_source_sink.c.o
00:03:02.427 [491/738] Generating lib/rte_port_def with a custom command
00:03:02.427 [492/738] Generating lib/rte_port_mingw with a custom command
00:03:02.684 [493/738] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ethdev.c.o
00:03:02.684 [494/738] Generating lib/rte_pdump_def with a custom command
00:03:02.684 [495/738] Generating lib/rte_pdump_mingw with a custom command
00:03:02.684 [496/738] Compiling C object lib/librte_port.a.p/port_rte_port_eventdev.c.o
00:03:02.684 [497/738] Compiling C object lib/librte_port.a.p/port_rte_swx_port_fd.c.o
00:03:02.684 [498/738] Compiling C object lib/librte_table.a.p/table_rte_swx_keycmp.c.o
00:03:02.684 [499/738] Compiling C object lib/librte_port.a.p/port_rte_swx_port_source_sink.c.o
00:03:02.942 [500/738] Compiling C object lib/librte_port.a.p/port_rte_port_sym_crypto.c.o
00:03:02.942 [501/738] Compiling C object lib/librte_table.a.p/table_rte_swx_table_learner.c.o
00:03:02.942 [502/738] Compiling C object lib/librte_table.a.p/table_rte_swx_table_em.c.o
00:03:02.942 [503/738] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ring.c.o
00:03:03.199 [504/738] Compiling C object lib/librte_table.a.p/table_rte_swx_table_wm.c.o
00:03:03.199 [505/738] Compiling C object lib/librte_table.a.p/table_rte_swx_table_selector.c.o
00:03:03.199 [506/738] Compiling C object lib/librte_port.a.p/port_rte_port_ring.c.o
00:03:03.199 [507/738] Linking static target lib/librte_port.a
00:03:03.199 [508/738] Compiling C object lib/librte_table.a.p/table_rte_table_array.c.o
00:03:03.199 [509/738] Compiling C object lib/librte_table.a.p/table_rte_table_hash_cuckoo.c.o
00:03:03.199 [510/738] Compiling C object lib/librte_pdump.a.p/pdump_rte_pdump.c.o
00:03:03.199 [511/738] Linking static target lib/librte_pdump.a
00:03:03.458 [512/738] Compiling C object lib/librte_table.a.p/table_rte_table_acl.c.o
00:03:03.458 [513/738] Generating lib/pdump.sym_chk with a custom command (wrapped by meson to capture output)
00:03:03.458 [514/738] Linking target lib/librte_pdump.so.23.0
00:03:03.458 [515/738] Generating lib/port.sym_chk with a custom command (wrapped by meson to capture output)
00:03:03.742 [516/738] Compiling C object lib/librte_table.a.p/table_rte_table_hash_ext.c.o
00:03:03.742 [517/738] Linking target lib/librte_port.so.23.0
00:03:03.742 [518/738] Compiling C object lib/librte_table.a.p/table_rte_table_lpm.c.o
00:03:03.742 [519/738] Generating symbol file lib/librte_port.so.23.0.p/librte_port.so.23.0.symbols
00:03:03.742 [520/738] Generating lib/rte_table_def with a custom command
00:03:03.742 [521/738] Generating lib/rte_table_mingw with a custom command
00:03:03.742 [522/738] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key8.c.o
00:03:03.742 [523/738] Compiling C object lib/librte_table.a.p/table_rte_table_lpm_ipv6.c.o
00:03:03.742 [524/738] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key16.c.o
00:03:03.742 [525/738] Compiling C object lib/librte_table.a.p/table_rte_table_stub.c.o
00:03:04.000 [526/738] Compiling C object lib/librte_table.a.p/table_rte_table_hash_lru.c.o
00:03:04.000 [527/738] Generating lib/rte_pipeline_def with a custom command
00:03:04.000 [528/738] Generating lib/rte_pipeline_mingw with a custom command
00:03:04.000 [529/738] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key32.c.o
00:03:04.000 [530/738] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_port_in_action.c.o
00:03:04.000 [531/738] Linking static target lib/librte_table.a
00:03:04.257 [532/738] Compiling C object lib/librte_graph.a.p/graph_node.c.o
00:03:04.257 [533/738] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_pipeline.c.o
00:03:04.515 [534/738] Generating lib/table.sym_chk with a custom command (wrapped by meson to capture output)
00:03:04.515 [535/738] Compiling C object lib/librte_graph.a.p/graph_graph.c.o
00:03:04.515 [536/738] Linking target lib/librte_table.so.23.0
00:03:04.515 [537/738] Generating symbol file lib/librte_table.so.23.0.p/librte_table.so.23.0.symbols
00:03:04.515 [538/738] Compiling C object lib/librte_graph.a.p/graph_graph_ops.c.o
00:03:04.515 [539/738] Generating lib/rte_graph_def with a custom command
00:03:04.515 [540/738] Compiling C object lib/librte_graph.a.p/graph_graph_debug.c.o
00:03:04.515 [541/738] Generating lib/rte_graph_mingw with a custom command
00:03:04.773 [542/738] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o
00:03:04.773 [543/738] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ctl.c.o
00:03:04.773 [544/738] Compiling C object lib/librte_graph.a.p/graph_graph_populate.c.o
00:03:04.773 [545/738] Compiling C object lib/librte_graph.a.p/graph_graph_stats.c.o
00:03:04.773 [546/738] Compiling C object lib/librte_node.a.p/node_ethdev_ctrl.c.o
00:03:04.773 [547/738] Linking static target lib/librte_graph.a
00:03:05.031 [548/738] Compiling C object lib/librte_node.a.p/node_ethdev_rx.c.o
00:03:05.031 [549/738] Compiling C object lib/librte_node.a.p/node_null.c.o
00:03:05.031 [550/738] Compiling C object lib/librte_node.a.p/node_ethdev_tx.c.o
00:03:05.031 [551/738] Compiling C object lib/librte_node.a.p/node_log.c.o
00:03:05.288 [552/738] Generating lib/rte_node_def with a custom command
00:03:05.288 [553/738] Generating lib/rte_node_mingw with a custom command
00:03:05.288 [554/738] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline_spec.c.o
00:03:05.288 [555/738] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o
00:03:05.288 [556/738] Compiling C object lib/librte_node.a.p/node_pkt_drop.c.o
00:03:05.288 [557/738] Compiling C object lib/librte_node.a.p/node_ip4_lookup.c.o
00:03:05.288 [558/738] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o
00:03:05.546 [559/738] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o
00:03:05.546 [560/738] Generating drivers/rte_bus_pci_def with a custom command
00:03:05.546 [561/738] Generating drivers/rte_bus_pci_mingw with a custom command
00:03:05.546 [562/738] Compiling C object lib/librte_node.a.p/node_pkt_cls.c.o
00:03:05.546 [563/738] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o
00:03:05.546 [564/738] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o
00:03:05.546 [565/738] Generating drivers/rte_bus_vdev_def with a custom command
00:03:05.546 [566/738] Compiling C object lib/librte_node.a.p/node_ip4_rewrite.c.o
00:03:05.546 [567/738] Generating drivers/rte_bus_vdev_mingw with a custom command
00:03:05.546 [568/738] Generating lib/graph.sym_chk with a custom command (wrapped by meson to capture output)
00:03:05.546 [569/738] Linking static target lib/librte_node.a
00:03:05.546 [570/738] Generating drivers/rte_mempool_ring_def with a custom command
00:03:05.546 [571/738] Linking target lib/librte_graph.so.23.0
00:03:05.546 [572/738] Generating drivers/rte_mempool_ring_mingw with a custom command
00:03:05.546 [573/738] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o
00:03:05.802 [574/738] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o
00:03:05.802 [575/738] Generating symbol file lib/librte_graph.so.23.0.p/librte_graph.so.23.0.symbols
00:03:05.802 [576/738] Linking static target drivers/libtmp_rte_bus_vdev.a
00:03:05.802 [577/738] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o
00:03:05.802 [578/738] Linking static target drivers/libtmp_rte_bus_pci.a
00:03:05.802 [579/738] Generating lib/node.sym_chk with a custom command (wrapped by meson to capture output)
00:03:05.802 [580/738] Linking target lib/librte_node.so.23.0
00:03:05.802 [581/738] Generating drivers/rte_bus_vdev.pmd.c with a custom command
00:03:05.802 [582/738] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o
00:03:05.802 [583/738] Linking static target drivers/librte_bus_vdev.a
00:03:05.802 [584/738] Generating drivers/rte_bus_pci.pmd.c with a custom command
00:03:06.060 [585/738] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o
00:03:06.060 [586/738] Linking static target drivers/librte_bus_pci.a
00:03:06.060 [587/738] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output)
00:03:06.060 [588/738] Compiling C object drivers/librte_bus_pci.so.23.0.p/meson-generated_.._rte_bus_pci.pmd.c.o
00:03:06.060 [589/738] Compiling C object drivers/librte_bus_vdev.so.23.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o
00:03:06.060 [590/738] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_diag.c.o
00:03:06.060 [591/738] Linking target drivers/librte_bus_vdev.so.23.0
00:03:06.060 [592/738] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_dcb.c.o
00:03:06.060 [593/738] Generating symbol file drivers/librte_bus_vdev.so.23.0.p/librte_bus_vdev.so.23.0.symbols
00:03:06.318 [594/738] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_adminq.c.o
00:03:06.318 [595/738] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output)
00:03:06.318 [596/738] Linking target drivers/librte_bus_pci.so.23.0
00:03:06.318 [597/738] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o
00:03:06.318 [598/738] Linking static target drivers/libtmp_rte_mempool_ring.a
00:03:06.318 [599/738] Generating symbol file drivers/librte_bus_pci.so.23.0.p/librte_bus_pci.so.23.0.symbols
00:03:06.577 [600/738] Generating drivers/rte_mempool_ring.pmd.c with a custom command
00:03:06.577 [601/738] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o
00:03:06.577 [602/738] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_hmc.c.o
00:03:06.577 [603/738] Linking static target drivers/librte_mempool_ring.a
00:03:06.577 [604/738] Compiling C object drivers/librte_mempool_ring.so.23.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o
00:03:06.577 [605/738] Linking target drivers/librte_mempool_ring.so.23.0
00:03:06.577 [606/738] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_lan_hmc.c.o
00:03:06.836 [607/738] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_nvm.c.o
00:03:07.094 [608/738] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_common.c.o
00:03:07.094 [609/738] Linking static target drivers/net/i40e/base/libi40e_base.a
00:03:07.094 [610/738] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_pf.c.o
00:03:07.351 [611/738] Compiling C object drivers/net/i40e/libi40e_avx512_lib.a.p/i40e_rxtx_vec_avx512.c.o
00:03:07.351 [612/738] Linking static target drivers/net/i40e/libi40e_avx512_lib.a
00:03:07.608 [613/738] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_tm.c.o
00:03:07.608 [614/738] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_fdir.c.o
00:03:07.608 [615/738] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_flow.c.o
00:03:07.608 [616/738] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_hash.c.o
00:03:07.865 [617/738] Generating drivers/rte_net_i40e_def with a custom command
00:03:07.865 [618/738] Generating drivers/rte_net_i40e_mingw with a custom command
00:03:07.865 [619/738] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_vf_representor.c.o
00:03:08.123 [620/738] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline.c.o
00:03:08.381 [621/738] Compiling C object app/dpdk-dumpcap.p/dumpcap_main.c.o
00:03:08.641 [622/738] Compiling C object app/dpdk-pdump.p/pdump_main.c.o
00:03:08.641 [623/738] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_rte_pmd_i40e.c.o
00:03:08.641 [624/738] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_main.c.o
00:03:08.641 [625/738] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_sse.c.o
00:03:08.641 [626/738] Compiling C object app/dpdk-proc-info.p/proc-info_main.c.o
00:03:08.899 [627/738] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx.c.o
00:03:08.899 [628/738] Compiling C object app/dpdk-test-acl.p/test-acl_main.c.o
00:03:08.899 [629/738] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_commands.c.o
00:03:08.899 [630/738] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_cmdline_test.c.o
00:03:08.899 [631/738] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_avx2.c.o
00:03:09.157 [632/738] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_options_parse.c.o
00:03:09.415 [633/738] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_ethdev.c.o
00:03:09.415 [634/738] Linking static target drivers/libtmp_rte_net_i40e.a
00:03:09.415 [635/738] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_vector.c.o
00:03:09.415 [636/738] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_throughput.c.o
00:03:09.415 [637/738] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_common.c.o
00:03:09.415 [638/738] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_cyclecount.c.o
00:03:09.415 [639/738] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev.c.o
00:03:09.415 [640/738] Generating drivers/rte_net_i40e.pmd.c with a custom command
00:03:09.674 [641/738] Compiling C object drivers/librte_net_i40e.a.p/meson-generated_.._rte_net_i40e.pmd.c.o
00:03:09.674 [642/738] Linking static target drivers/librte_net_i40e.a
00:03:09.674 [643/738] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_main.c.o
00:03:09.674 [644/738] Compiling C object drivers/librte_net_i40e.so.23.0.p/meson-generated_.._rte_net_i40e.pmd.c.o
00:03:09.674 [645/738] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_verify.c.o
00:03:09.932 [646/738] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_ops.c.o
00:03:09.932 [647/738] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_common.c.o
00:03:09.932 [648/738] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_options_parsing.c.o
00:03:09.932 [649/738] Generating drivers/rte_net_i40e.sym_chk with a custom command (wrapped by meson to capture output)
00:03:10.190 [650/738] Linking target drivers/librte_net_i40e.so.23.0
00:03:10.190 [651/738] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_latency.c.o
00:03:10.190 [652/738] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vectors.c.o
00:03:10.190 [653/738] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_throughput.c.o
00:03:10.190 [654/738] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_pmd_cyclecount.c.o
00:03:10.190 [655/738] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_test.c.o
00:03:10.449 [656/738] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vector_parsing.c.o
00:03:10.449 [657/738] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_parser.c.o
00:03:10.449 [658/738] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_main.c.o
00:03:10.449 [659/738] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_main.c.o
00:03:10.449 [660/738] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_verify.c.o
00:03:10.449 [661/738] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_options.c.o
00:03:10.707 [662/738] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_atq.c.o
00:03:10.964 [663/738] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_common.c.o
00:03:10.964 [664/738] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_queue.c.o
00:03:11.223 [665/738] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_atq.c.o
00:03:11.223 [666/738] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o
00:03:11.223 [667/738] Linking static target lib/librte_vhost.a
00:03:11.223 [668/738] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_atq.c.o
00:03:11.482 [669/738] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_queue.c.o
00:03:11.482 [670/738] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_common.c.o
00:03:11.482 [671/738] Compiling C object app/dpdk-test-fib.p/test-fib_main.c.o
00:03:11.482 [672/738] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_flow_gen.c.o
00:03:11.740 [673/738] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_actions_gen.c.o
00:03:11.740 [674/738] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_items_gen.c.o
00:03:11.740 [675/738] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_common.c.o
00:03:11.740 [676/738] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_queue.c.o
00:03:11.740 [677/738] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_config.c.o
00:03:11.740 [678/738] Compiling C object app/dpdk-test-gpudev.p/test-gpudev_main.c.o
00:03:12.011 [679/738] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output)
00:03:12.011 [680/738] Linking target lib/librte_vhost.so.23.0
00:03:12.011 [681/738] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_init.c.o
00:03:12.011 [682/738] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_main.c.o
00:03:12.011 [683/738] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_acl.c.o
00:03:12.288 [684/738] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm_ipv6.c.o
00:03:12.288 [685/738] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm.c.o
00:03:12.288 [686/738] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_hash.c.o
00:03:12.288 [687/738] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_stub.c.o
00:03:12.546 [688/738] Compiling C object app/dpdk-testpmd.p/test-pmd_5tswap.c.o
00:03:12.546 [689/738] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_main.c.o
00:03:12.546 [690/738] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_runtime.c.o
00:03:12.546 [691/738] Compiling C object app/dpdk-testpmd.p/test-pmd_cmd_flex_item.c.o
00:03:12.806 [692/738] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_mtr.c.o
00:03:12.806 [693/738] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_tm.c.o
00:03:12.806 [694/738] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_perf.c.o
00:03:13.065 [695/738] Compiling C object app/dpdk-testpmd.p/test-pmd_flowgen.c.o
00:03:13.065 [696/738] Compiling C object app/dpdk-testpmd.p/test-pmd_iofwd.c.o
00:03:13.065 [697/738] Compiling C object app/dpdk-testpmd.p/test-pmd_ieee1588fwd.c.o
00:03:13.065 [698/738] Compiling C object app/dpdk-testpmd.p/test-pmd_icmpecho.c.o
00:03:13.324 [699/738] Compiling C object app/dpdk-testpmd.p/test-pmd_macfwd.c.o
00:03:13.324 [700/738] Compiling C object app/dpdk-testpmd.p/test-pmd_macswap.c.o
00:03:13.582 [701/738] Compiling C object app/dpdk-testpmd.p/test-pmd_rxonly.c.o
00:03:13.582 [702/738] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline.c.o
00:03:13.582 [703/738] Compiling C object app/dpdk-testpmd.p/test-pmd_shared_rxq_fwd.c.o
00:03:13.841 [704/738] Compiling C object app/dpdk-testpmd.p/test-pmd_parameters.c.o
00:03:13.842 [705/738] Compiling C object app/dpdk-testpmd.p/test-pmd_csumonly.c.o
00:03:14.101 [706/738] Compiling C object app/dpdk-testpmd.p/test-pmd_util.c.o
00:03:14.101 [707/738] Compiling C object app/dpdk-testpmd.p/test-pmd_bpf_cmd.c.o
00:03:14.101 [708/738] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_table_action.c.o
00:03:14.101 [709/738] Linking static target lib/librte_pipeline.a
00:03:14.360 [710/738] Compiling C object app/dpdk-testpmd.p/.._drivers_net_i40e_i40e_testpmd.c.o
00:03:14.360 [711/738] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_flow.c.o
00:03:14.360 [712/738] Compiling C object app/dpdk-test-regex.p/test-regex_main.c.o
00:03:14.360 [713/738] Compiling C object app/dpdk-testpmd.p/test-pmd_config.c.o
00:03:14.360 [714/738] Compiling C object app/dpdk-test-sad.p/test-sad_main.c.o
00:03:14.360 [715/738] Compiling C object app/dpdk-test-security-perf.p/test-security-perf_test_security_perf.c.o
00:03:14.360 [716/738] Compiling C object app/dpdk-testpmd.p/test-pmd_txonly.c.o
00:03:14.619 [717/738] Linking target app/dpdk-dumpcap
00:03:14.619 [718/738] Linking target app/dpdk-proc-info
00:03:14.619 [719/738] Linking target app/dpdk-test-acl
00:03:14.619 [720/738] Linking target app/dpdk-pdump
00:03:14.619 [721/738] Linking target app/dpdk-test-bbdev
00:03:14.619 [722/738] Compiling C object app/dpdk-testpmd.p/test-pmd_noisy_vnf.c.o
00:03:14.619 [723/738] Linking target app/dpdk-test-cmdline
00:03:14.619 [724/738] Compiling C object app/dpdk-testpmd.p/test-pmd_testpmd.c.o
00:03:14.619 [725/738] Linking target app/dpdk-test-compress-perf
00:03:14.878 [726/738] Linking target app/dpdk-test-crypto-perf
00:03:14.878 [727/738] Linking target app/dpdk-test-eventdev
00:03:14.878 [728/738] Linking target app/dpdk-test-fib
00:03:14.878 [729/738] Linking target app/dpdk-test-flow-perf
00:03:14.878 [730/738] Linking target app/dpdk-test-pipeline
00:03:14.878 [731/738] Linking target app/dpdk-test-regex
00:03:14.878 [732/738] Linking target app/dpdk-test-sad
00:03:14.878 [733/738] Linking target app/dpdk-test-gpudev
00:03:14.878 [734/738] Linking target app/dpdk-testpmd
00:03:15.816 [735/738] Compiling C object app/dpdk-test-security-perf.p/test_test_cryptodev_security_ipsec.c.o
00:03:15.816 [736/738] Linking target app/dpdk-test-security-perf
00:03:17.196 [737/738] Generating lib/pipeline.sym_chk with a custom command (wrapped by meson to capture output)
00:03:17.454 [738/738] Linking target lib/librte_pipeline.so.23.0
00:03:17.454 04:50:55 build_native_dpdk -- common/autobuild_common.sh@194 -- $ uname -s
00:03:17.454 04:50:55 build_native_dpdk -- common/autobuild_common.sh@194 -- $ [[ Linux == \F\r\e\e\B\S\D ]]
00:03:17.454 04:50:55 build_native_dpdk -- common/autobuild_common.sh@207 -- $ ninja -C /home/vagrant/spdk_repo/dpdk/build-tmp -j10 install
00:03:17.454 ninja: Entering directory `/home/vagrant/spdk_repo/dpdk/build-tmp'
00:03:17.715 [0/1] Installing files.
00:03:17.715 Installing subdir /home/vagrant/spdk_repo/dpdk/examples to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples
00:03:17.715 Installing /home/vagrant/spdk_repo/dpdk/examples/bbdev_app/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bbdev_app
00:03:17.715 Installing /home/vagrant/spdk_repo/dpdk/examples/bbdev_app/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bbdev_app
00:03:17.715 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond
00:03:17.715 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond
00:03:17.715 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond
00:03:17.715 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/README to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf
00:03:17.715 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/dummy.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf
00:03:17.715 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t1.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf
00:03:17.715 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t2.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf
00:03:17.715 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t3.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf
00:03:17.715 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline
00:03:17.715 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline
00:03:17.715 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline
00:03:17.715 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline
00:03:17.715 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/parse_obj_list.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline
00:03:17.715 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/parse_obj_list.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline
00:03:17.715 Installing /home/vagrant/spdk_repo/dpdk/examples/common/pkt_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common
00:03:17.715 Installing /home/vagrant/spdk_repo/dpdk/examples/common/altivec/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/altivec
00:03:17.715 Installing /home/vagrant/spdk_repo/dpdk/examples/common/neon/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/neon
00:03:17.715 Installing /home/vagrant/spdk_repo/dpdk/examples/common/sse/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/sse
00:03:17.715 Installing /home/vagrant/spdk_repo/dpdk/examples/distributor/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/distributor
00:03:17.715 Installing /home/vagrant/spdk_repo/dpdk/examples/distributor/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/distributor
00:03:17.715 Installing /home/vagrant/spdk_repo/dpdk/examples/dma/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/dma
00:03:17.715 Installing /home/vagrant/spdk_repo/dpdk/examples/dma/dmafwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/dma
00:03:17.715 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool
00:03:17.715 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app
00:03:17.715 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/ethapp.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app
00:03:17.715 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/ethapp.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app
00:03:17.715 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app
00:03:17.715 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib
00:03:17.715 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/rte_ethtool.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib
00:03:17.715 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/rte_ethtool.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib
00:03:17.715 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline
00:03:17.715 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline
00:03:17.715 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline
00:03:17.715 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_worker_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline
00:03:17.715 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_worker_tx.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline
00:03:17.715 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation
00:03:17.716 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_dev_self_test.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation
00:03:17.716 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_dev_self_test.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation
00:03:17.716 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation
00:03:17.716 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation
00:03:17.716 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_aes.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation
00:03:17.716 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_ccm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation
00:03:17.716 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_cmac.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation
00:03:17.716 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_ecdsa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation
00:03:17.716 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_gcm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation
00:03:17.716 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_hmac.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation
00:03:17.716 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_rsa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation
00:03:17.716 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_sha.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation
00:03:17.716 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_tdes.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation
00:03:17.716 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_xts.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation
00:03:17.716 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation
00:03:17.716 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_classify/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_classify
00:03:17.716 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_classify/flow_classify.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_classify
00:03:17.716 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_classify/ipv4_rules_file.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_classify
00:03:17.716 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering
00:03:17.716 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/flow_blocks.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering
00:03:17.716 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering
00:03:17.716 Installing /home/vagrant/spdk_repo/dpdk/examples/helloworld/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/helloworld
00:03:17.716 Installing /home/vagrant/spdk_repo/dpdk/examples/helloworld/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/helloworld
00:03:17.716 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_fragmentation/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_fragmentation
00:03:17.716 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_fragmentation/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_fragmentation
00:03:17.716 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:17.716 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/action.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:17.716 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/action.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:17.716 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:17.716 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:17.716 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:17.716 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/conn.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:17.716 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/conn.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:17.716 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cryptodev.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:17.716 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cryptodev.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:17.716 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/kni.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:17.716 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/kni.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:17.716 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/link.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:17.716 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/link.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:17.716 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:17.716 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/mempool.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:17.716 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/mempool.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:17.716 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/parser.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:17.716 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/parser.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:17.716 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/pipeline.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:17.716 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/pipeline.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:17.716 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/swq.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:17.716 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/swq.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:17.716 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tap.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:17.716 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tap.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:17.716 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:17.716 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/thread.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:17.716 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tmgr.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:17.716 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tmgr.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:17.716 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/firewall.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples
00:03:17.716 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/flow.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples
00:03:17.716 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/flow_crypto.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples
00:03:17.716 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/kni.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples
00:03:17.716 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/l2fwd.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples
00:03:17.716 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/route.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples
00:03:17.716 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/route_ecmp.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples
00:03:17.716 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/rss.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples
00:03:17.716 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/tap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples
00:03:17.716 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_reassembly/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_reassembly
00:03:17.716 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_reassembly/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_reassembly
00:03:17.716 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:03:17.716 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ep0.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:03:17.716 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ep1.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:03:17.716 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/esp.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:03:17.716 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/esp.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:03:17.716 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/event_helper.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:03:17.716 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/event_helper.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:03:17.716 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/flow.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:03:17.716 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/flow.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:03:17.716 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipip.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:03:17.716 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec-secgw.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:03:17.717 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec-secgw.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:03:17.717 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:03:17.717 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:03:17.717 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:03:17.717 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:03:17.717 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_process.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:03:17.717 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_worker.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:03:17.717 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_worker.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:03:17.717 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/parser.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:03:17.717 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/parser.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:03:17.717 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/rt.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:03:17.717 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:03:17.717 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sad.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:03:17.717 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sad.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:03:17.717 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sp4.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:03:17.717 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sp6.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:03:17.717 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/bypass_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:17.717 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:17.717 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/common_defs_secgw.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:17.717 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/data_rxtx.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:17.717 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/linux_test.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:17.717 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/load_env.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:17.717 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/pkttest.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:17.717 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/pkttest.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:17.717 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/run_test.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:17.717 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:17.717 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:17.717 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:17.717 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:17.717 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:17.717 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:17.717 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesgcm_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:17.717 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesgcm_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:17.717 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_ipv6opts.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:17.717 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:17.717 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:17.717 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:17.717 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:17.717 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:17.717 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:17.717 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesgcm_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:17.717 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesgcm_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:17.717 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_null_header_reconstruct.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:17.717 Installing /home/vagrant/spdk_repo/dpdk/examples/ipv4_multicast/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipv4_multicast
00:03:17.717 Installing /home/vagrant/spdk_repo/dpdk/examples/ipv4_multicast/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipv4_multicast
00:03:17.717 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat
00:03:17.717 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/cat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat
00:03:17.717 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/cat.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat
00:03:17.717 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/l2fwd-cat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat
00:03:17.717 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-crypto/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-crypto
00:03:17.717 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-crypto/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-crypto
00:03:17.717 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event
00:03:17.717 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_common.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event
00:03:17.717 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event
00:03:17.717 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event
00:03:17.717 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event
00:03:17.717 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event
00:03:17.717 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event_internal_port.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event
00:03:17.717 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_poll.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event
00:03:17.717 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_poll.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event
00:03:17.717 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event
00:03:17.717 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-jobstats/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-jobstats
00:03:17.717 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-jobstats/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-jobstats
00:03:17.717 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive
00:03:17.717 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive
00:03:17.717 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/shm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive
00:03:17.717 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/shm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive
00:03:17.717 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/ka-agent/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent
00:03:17.717 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/ka-agent/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent
00:03:17.717 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd
00:03:17.717 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd
00:03:17.717 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-graph/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-graph
00:03:17.717 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-graph/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-graph
00:03:17.717 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power
00:03:17.717 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power
00:03:17.717 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power
00:03:17.718 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/perf_core.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power
00:03:17.718 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/perf_core.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power
00:03:17.718 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:03:17.718 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_default_v4.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:03:17.718 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_default_v6.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:03:17.718 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_route_parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:03:17.718 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:03:17.718 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:03:17.718 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:03:17.718 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl_scalar.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:03:17.718 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_altivec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:03:17.718 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:03:17.718 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:03:17.718 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:03:17.718 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:03:17.718 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:03:17.718 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:03:17.718 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_sequential.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:03:17.718 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:03:17.718 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:03:17.718 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:03:17.718 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event_internal_port.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:03:17.718 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_fib.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:03:17.718 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:03:17.718 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:03:17.718 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_altivec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:03:17.718 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:03:17.718 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:03:17.718 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:03:17.718 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_route.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:03:17.718 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:03:17.718 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_default_v4.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:03:17.718 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_default_v6.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:03:17.718 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_route_parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:03:17.718 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:03:17.718 Installing /home/vagrant/spdk_repo/dpdk/examples/link_status_interrupt/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/link_status_interrupt
00:03:17.718 Installing /home/vagrant/spdk_repo/dpdk/examples/link_status_interrupt/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/link_status_interrupt
00:03:17.718 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process
00:03:17.718 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp
00:03:17.718 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_client/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client
00:03:17.718 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_client/client.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client
00:03:17.718 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server
00:03:17.718 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server
00:03:17.718 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/args.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server
00:03:17.718 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server
00:03:17.718 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/init.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server
00:03:17.718 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server
00:03:17.718 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/shared/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/shared
00:03:17.718 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp
00:03:17.718 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp
00:03:17.718 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp
00:03:17.718 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp
00:03:17.718 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp
00:03:17.718 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp
00:03:17.718 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/mp_commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp
00:03:17.718 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/mp_commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp
00:03:17.718 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/symmetric_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp
00:03:17.718 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/symmetric_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp
00:03:17.718 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb
00:03:17.718 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/ntb_fwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb
00:03:17.718 Installing /home/vagrant/spdk_repo/dpdk/examples/packet_ordering/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/packet_ordering
00:03:17.718 Installing /home/vagrant/spdk_repo/dpdk/examples/packet_ordering/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/packet_ordering
00:03:17.718 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline
00:03:17.718 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline
00:03:17.718 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline
00:03:17.718 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/conn.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline
00:03:17.718 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/conn.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline
00:03:17.718 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline
00:03:17.718 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/obj.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline
00:03:17.718 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/obj.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline
00:03:17.718 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline
00:03:17.718 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/thread.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline
00:03:17.718 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ethdev.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:17.718 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:17.718 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:17.718 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_nexthop_group_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:17.719 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_nexthop_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:17.719 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_routing_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:17.719 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/hash_func.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:17.719 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/hash_func.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:17.719 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:17.719 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:17.719 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:17.719 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:17.719 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:17.719 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:17.719 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/learner.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:17.719 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/learner.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:17.719 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/meter.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:17.719 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/meter.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:17.719 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/mirroring.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:17.719 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/mirroring.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:17.719 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/packet.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:17.719 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/pcap.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:17.719 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/recirculation.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:17.719 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/recirculation.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:17.719 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/registers.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:17.719 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/registers.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:17.719 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:17.719 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:17.719 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:17.719 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/varbit.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:17.719 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/varbit.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:17.719 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:17.719 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:17.719 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:17.719 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_table.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:17.719 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:17.719 Installing /home/vagrant/spdk_repo/dpdk/examples/ptpclient/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ptpclient
00:03:17.719 Installing /home/vagrant/spdk_repo/dpdk/examples/ptpclient/ptpclient.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ptpclient
00:03:17.719 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter
00:03:17.719 Installing
/home/vagrant/spdk_repo/dpdk/examples/qos_meter/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:17.719 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:17.719 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/rte_policer.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:17.719 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/rte_policer.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:17.719 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:17.719 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/app_thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:17.719 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:17.719 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cfg_file.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:17.719 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cfg_file.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:17.719 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cmdline.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:17.719 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:17.719 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:17.719 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:17.719 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:17.719 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_ov.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:17.719 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_pie.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:17.719 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_red.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:17.719 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/stats.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:17.719 Installing /home/vagrant/spdk_repo/dpdk/examples/rxtx_callbacks/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:03:17.719 Installing /home/vagrant/spdk_repo/dpdk/examples/rxtx_callbacks/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:03:17.719 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd 00:03:17.719 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/node/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/node 00:03:17.719 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/node/node.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/node 00:03:17.719 Installing 
/home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server 00:03:17.719 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server 00:03:17.719 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/args.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server 00:03:17.719 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server 00:03:17.719 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/init.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server 00:03:17.719 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server 00:03:17.719 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/shared/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/shared 00:03:17.719 Installing /home/vagrant/spdk_repo/dpdk/examples/service_cores/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/service_cores 00:03:17.719 Installing /home/vagrant/spdk_repo/dpdk/examples/service_cores/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/service_cores 00:03:17.719 Installing /home/vagrant/spdk_repo/dpdk/examples/skeleton/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/skeleton 00:03:17.719 Installing /home/vagrant/spdk_repo/dpdk/examples/skeleton/basicfwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/skeleton 00:03:17.719 Installing /home/vagrant/spdk_repo/dpdk/examples/timer/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/timer 00:03:17.719 Installing /home/vagrant/spdk_repo/dpdk/examples/timer/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/timer 00:03:17.719 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:03:17.719 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:03:17.719 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/vdpa_blk_compact.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:03:17.719 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:03:17.719 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:03:17.720 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:03:17.720 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/virtio_net.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:03:17.720 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:17.720 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/blk.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:17.720 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/blk_spec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:17.720 Installing 
/home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:17.720 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:17.720 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk_compat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:17.720 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_crypto/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_crypto 00:03:17.720 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_crypto/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_crypto 00:03:17.720 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:17.720 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_manager.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:17.720 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_manager.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:17.720 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_monitor.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:17.720 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_monitor.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:17.720 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:17.720 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:17.720 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor_nop.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:17.720 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor_x86.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:17.720 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:17.720 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/parse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:17.720 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/power_manager.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:17.720 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/power_manager.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:17.720 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/vm_power_cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:17.720 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/vm_power_cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:17.720 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:17.720 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/main.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:17.720 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:17.720 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/parse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:17.720 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:17.720 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:17.720 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq 00:03:17.720 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq 00:03:17.720 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq_dcb/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq_dcb 00:03:17.720 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq_dcb/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq_dcb 00:03:17.720 Installing lib/librte_kvargs.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:17.720 Installing lib/librte_kvargs.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:17.720 Installing lib/librte_telemetry.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:17.720 Installing lib/librte_telemetry.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:17.720 Installing lib/librte_eal.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:17.720 Installing lib/librte_eal.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:17.720 Installing lib/librte_ring.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:17.720 Installing lib/librte_ring.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:17.720 Installing lib/librte_rcu.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:17.720 Installing lib/librte_rcu.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:17.720 Installing lib/librte_mempool.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:17.720 Installing lib/librte_mempool.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:17.720 Installing lib/librte_mbuf.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:17.720 Installing lib/librte_mbuf.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:17.720 Installing lib/librte_net.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:17.981 Installing lib/librte_net.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:17.981 Installing lib/librte_meter.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:17.981 Installing lib/librte_meter.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:17.981 Installing lib/librte_ethdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:17.981 Installing lib/librte_ethdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:17.982 Installing lib/librte_pci.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:17.982 Installing lib/librte_pci.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:17.982 Installing lib/librte_cmdline.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:17.982 Installing lib/librte_cmdline.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:17.982 Installing 
lib/librte_metrics.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:17.982 Installing lib/librte_metrics.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:17.982 Installing lib/librte_hash.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:17.982 Installing lib/librte_hash.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:17.982 Installing lib/librte_timer.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:17.982 Installing lib/librte_timer.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:17.982 Installing lib/librte_acl.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:17.982 Installing lib/librte_acl.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:17.982 Installing lib/librte_bbdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:17.982 Installing lib/librte_bbdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:17.982 Installing lib/librte_bitratestats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:17.982 Installing lib/librte_bitratestats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:17.982 Installing lib/librte_bpf.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:17.982 Installing lib/librte_bpf.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:17.982 Installing lib/librte_cfgfile.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:17.982 Installing lib/librte_cfgfile.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:17.982 Installing lib/librte_compressdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:17.982 Installing lib/librte_compressdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:17.982 Installing lib/librte_cryptodev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:17.982 Installing lib/librte_cryptodev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:17.982 Installing lib/librte_distributor.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:17.982 Installing lib/librte_distributor.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:17.982 Installing lib/librte_efd.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:17.982 Installing lib/librte_efd.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:17.982 Installing lib/librte_eventdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:17.982 Installing lib/librte_eventdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:17.982 Installing lib/librte_gpudev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:17.982 Installing lib/librte_gpudev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:17.982 Installing lib/librte_gro.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:17.982 Installing lib/librte_gro.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:17.982 Installing lib/librte_gso.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:17.982 Installing lib/librte_gso.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:17.982 Installing lib/librte_ip_frag.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:17.982 Installing lib/librte_ip_frag.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:17.982 Installing lib/librte_jobstats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:17.982 Installing lib/librte_jobstats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:17.982 Installing lib/librte_latencystats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:17.982 Installing lib/librte_latencystats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:17.982 Installing lib/librte_lpm.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:17.982 Installing lib/librte_lpm.so.23.0 to 
/home/vagrant/spdk_repo/dpdk/build/lib 00:03:17.982 Installing lib/librte_member.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:17.982 Installing lib/librte_member.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:17.982 Installing lib/librte_pcapng.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:17.982 Installing lib/librte_pcapng.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:17.982 Installing lib/librte_power.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:17.982 Installing lib/librte_power.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:17.982 Installing lib/librte_rawdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:17.982 Installing lib/librte_rawdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:17.982 Installing lib/librte_regexdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:17.982 Installing lib/librte_regexdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:17.982 Installing lib/librte_dmadev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:17.982 Installing lib/librte_dmadev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:17.982 Installing lib/librte_rib.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:17.982 Installing lib/librte_rib.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:17.982 Installing lib/librte_reorder.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:17.982 Installing lib/librte_reorder.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:17.982 Installing lib/librte_sched.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:17.982 Installing lib/librte_sched.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:17.982 Installing lib/librte_security.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:17.982 Installing lib/librte_security.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:17.982 Installing lib/librte_stack.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:17.982 Installing lib/librte_stack.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:17.982 Installing lib/librte_vhost.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:17.982 Installing lib/librte_vhost.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:17.982 Installing lib/librte_ipsec.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:17.982 Installing lib/librte_ipsec.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:17.982 Installing lib/librte_fib.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:17.982 Installing lib/librte_fib.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:17.982 Installing lib/librte_port.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:17.982 Installing lib/librte_port.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:17.982 Installing lib/librte_pdump.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:17.982 Installing lib/librte_pdump.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:17.982 Installing lib/librte_table.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:17.982 Installing lib/librte_table.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:17.982 Installing lib/librte_pipeline.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:17.982 Installing lib/librte_pipeline.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:17.982 Installing lib/librte_graph.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:17.982 Installing lib/librte_graph.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:17.982 Installing lib/librte_node.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:17.982 Installing lib/librte_node.so.23.0 to 
/home/vagrant/spdk_repo/dpdk/build/lib 00:03:17.982 Installing drivers/librte_bus_pci.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:17.982 Installing drivers/librte_bus_pci.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0 00:03:17.982 Installing drivers/librte_bus_vdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:17.982 Installing drivers/librte_bus_vdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0 00:03:17.982 Installing drivers/librte_mempool_ring.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:17.982 Installing drivers/librte_mempool_ring.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0 00:03:17.982 Installing drivers/librte_net_i40e.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:17.982 Installing drivers/librte_net_i40e.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0 00:03:17.982 Installing app/dpdk-dumpcap to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:17.982 Installing app/dpdk-pdump to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:17.982 Installing app/dpdk-proc-info to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:17.982 Installing app/dpdk-test-acl to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:17.982 Installing app/dpdk-test-bbdev to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:17.982 Installing app/dpdk-test-cmdline to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:17.982 Installing app/dpdk-test-compress-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:17.982 Installing app/dpdk-test-crypto-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:17.982 Installing app/dpdk-test-eventdev to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:17.982 Installing app/dpdk-test-fib to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:17.982 Installing app/dpdk-test-flow-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:17.982 Installing app/dpdk-test-gpudev to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:17.982 Installing app/dpdk-test-pipeline to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:17.982 Installing app/dpdk-testpmd to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:17.982 Installing app/dpdk-test-regex to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:17.982 Installing app/dpdk-test-sad to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:17.982 Installing app/dpdk-test-security-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:17.982 Installing /home/vagrant/spdk_repo/dpdk/config/rte_config.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.982 Installing /home/vagrant/spdk_repo/dpdk/lib/kvargs/rte_kvargs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.982 Installing /home/vagrant/spdk_repo/dpdk/lib/telemetry/rte_telemetry.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.982 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_atomic.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:17.982 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_byteorder.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:17.982 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_cpuflags.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:17.982 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_cycles.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:17.982 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_io.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:17.982 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_memcpy.h to 
/home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:17.982 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_pause.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:17.983 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_power_intrinsics.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:17.983 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_prefetch.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:17.983 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_rwlock.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:17.983 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_spinlock.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:17.983 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_vect.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:17.983 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.983 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.983 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_cpuflags.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.983 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_cycles.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.983 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_io.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.983 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_memcpy.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.983 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_pause.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.983 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_power_intrinsics.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.983 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_prefetch.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.983 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_rtm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.983 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_rwlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.983 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_spinlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.983 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_vect.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.983 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic_32.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.983 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic_64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.983 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder_32.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.983 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder_64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.983 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_alarm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.983 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bitmap.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.983 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bitops.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:03:17.983 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_branch_prediction.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.983 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bus.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.983 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_class.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.983 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_common.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.983 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_compat.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.983 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_debug.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.983 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_dev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.983 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_devargs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.983 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.983 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal_memconfig.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.983 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.983 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_errno.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.983 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_epoll.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.983 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_fbarray.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.983 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_hexdump.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.983 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_hypervisor.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.983 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_interrupts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.983 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_keepalive.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.983 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_launch.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.983 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_lcore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.983 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_log.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.983 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_malloc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.983 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_mcslock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.983 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_memory.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.983 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_memzone.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.983 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pci_dev_feature_defs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.983 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pci_dev_features.h to /home/vagrant/spdk_repo/dpdk/build/include 
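(Orientation aside, not emitted by the build itself: the rte_eal.h, rte_launch.h and rte_lcore.h headers copied above into /home/vagrant/spdk_repo/dpdk/build/include are the entry point for any application compiled against this install tree. A minimal consumer sketch, assuming the standard DPDK 22.11 EAL API and that compile/link flags come from this prefix; everything below is illustrative, not part of the log:)

```c
/* Illustrative sketch only (not produced by this build): a minimal
 * application against the rte_eal.h / rte_lcore.h headers installed
 * above. Assumes the DPDK 22.11 EAL API. */
#include <stdio.h>
#include <rte_eal.h>
#include <rte_lcore.h>

int main(int argc, char **argv)
{
	/* rte_eal_init() parses and consumes the EAL arguments (-l, -a, ...). */
	int ret = rte_eal_init(argc, argv);
	if (ret < 0) {
		fprintf(stderr, "EAL init failed\n");
		return 1;
	}
	printf("EAL up, %u lcore(s) available\n", rte_lcore_count());
	rte_eal_cleanup();
	return 0;
}
```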
00:03:17.983 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_per_lcore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.983 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pflock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.983 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_random.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.983 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_reciprocal.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.983 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_seqcount.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.983 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_seqlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.983 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_service.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.983 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_service_component.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.983 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_string_fns.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.983 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_tailq.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.983 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_thread.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.983 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_ticketlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.983 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_time.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.983 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.983 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace_point.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.983 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace_point_register.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.983 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_uuid.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.983 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_version.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.983 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_vfio.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.983 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/linux/include/rte_os.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.983 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.983 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.983 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_elem.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.983 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.983 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_c11_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.983 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_generic_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.983 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_hts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.983 Installing 
/home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_hts_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.983 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.983 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.983 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek_zc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.983 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_rts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.983 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_rts_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.983 Installing /home/vagrant/spdk_repo/dpdk/lib/rcu/rte_rcu_qsbr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.983 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.983 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.983 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.983 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.983 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.984 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_ptype.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.984 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_pool_ops.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.984 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_dyn.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.984 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ip.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.984 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_tcp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.984 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_udp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.984 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_esp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.984 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_sctp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.984 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_icmp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.984 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_arp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.984 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ether.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.984 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_macsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.984 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_vxlan.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.984 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_gre.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.984 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_gtp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.984 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_net.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.984 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_net_crc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.984 Installing 
/home/vagrant/spdk_repo/dpdk/lib/net/rte_mpls.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.984 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_higig.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.984 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ecpri.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.984 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_geneve.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.984 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_l2tpv2.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.984 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ppp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.984 Installing /home/vagrant/spdk_repo/dpdk/lib/meter/rte_meter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.984 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_cman.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.984 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.984 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.984 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.984 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_dev_info.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.984 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_flow.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.984 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_flow_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.984 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_mtr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.984 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_mtr_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.984 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_tm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.984 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_tm_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.984 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.984 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_eth_ctrl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.984 Installing /home/vagrant/spdk_repo/dpdk/lib/pci/rte_pci.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.984 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.984 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.984 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_num.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.984 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_ipaddr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.984 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_etheraddr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.984 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_string.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.984 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_rdline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.984 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_vt100.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:03:17.984 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_socket.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.984 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_cirbuf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.984 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_portlist.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.984 Installing /home/vagrant/spdk_repo/dpdk/lib/metrics/rte_metrics.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.984 Installing /home/vagrant/spdk_repo/dpdk/lib/metrics/rte_metrics_telemetry.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.984 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_fbk_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.984 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_hash_crc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.984 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.984 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_jhash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.984 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.984 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash_gfni.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.984 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.984 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_generic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.984 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_sw.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.984 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_x86.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.984 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash_x86_gfni.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.984 Installing /home/vagrant/spdk_repo/dpdk/lib/timer/rte_timer.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.984 Installing /home/vagrant/spdk_repo/dpdk/lib/acl/rte_acl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.984 Installing /home/vagrant/spdk_repo/dpdk/lib/acl/rte_acl_osdep.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.984 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.984 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev_pmd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.984 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev_op.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.984 Installing /home/vagrant/spdk_repo/dpdk/lib/bitratestats/rte_bitrate.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.984 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/bpf_def.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.984 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/rte_bpf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.984 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/rte_bpf_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.984 Installing /home/vagrant/spdk_repo/dpdk/lib/cfgfile/rte_cfgfile.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.984 Installing /home/vagrant/spdk_repo/dpdk/lib/compressdev/rte_compressdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.984 Installing 
/home/vagrant/spdk_repo/dpdk/lib/compressdev/rte_comp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.984 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.984 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.984 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.984 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.984 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto_sym.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.984 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto_asym.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.984 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.984 Installing /home/vagrant/spdk_repo/dpdk/lib/distributor/rte_distributor.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.984 Installing /home/vagrant/spdk_repo/dpdk/lib/efd/rte_efd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.984 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_crypto_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.984 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_eth_rx_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.984 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_eth_tx_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.984 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.984 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_timer_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.984 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.984 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.984 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.984 Installing /home/vagrant/spdk_repo/dpdk/lib/gpudev/rte_gpudev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.985 Installing /home/vagrant/spdk_repo/dpdk/lib/gro/rte_gro.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.985 Installing /home/vagrant/spdk_repo/dpdk/lib/gso/rte_gso.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.985 Installing /home/vagrant/spdk_repo/dpdk/lib/ip_frag/rte_ip_frag.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.985 Installing /home/vagrant/spdk_repo/dpdk/lib/jobstats/rte_jobstats.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.985 Installing /home/vagrant/spdk_repo/dpdk/lib/latencystats/rte_latencystats.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.985 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.985 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.985 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_altivec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.985 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_neon.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:03:17.985 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_scalar.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.985 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_sse.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.985 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_sve.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.985 Installing /home/vagrant/spdk_repo/dpdk/lib/member/rte_member.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.985 Installing /home/vagrant/spdk_repo/dpdk/lib/pcapng/rte_pcapng.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.985 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.985 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_empty_poll.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.985 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_intel_uncore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.985 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_pmd_mgmt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.985 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_guest_channel.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.985 Installing /home/vagrant/spdk_repo/dpdk/lib/rawdev/rte_rawdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.985 Installing /home/vagrant/spdk_repo/dpdk/lib/rawdev/rte_rawdev_pmd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.985 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.985 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.985 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.985 Installing /home/vagrant/spdk_repo/dpdk/lib/dmadev/rte_dmadev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.985 Installing /home/vagrant/spdk_repo/dpdk/lib/dmadev/rte_dmadev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.985 Installing /home/vagrant/spdk_repo/dpdk/lib/rib/rte_rib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.985 Installing /home/vagrant/spdk_repo/dpdk/lib/rib/rte_rib6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.985 Installing /home/vagrant/spdk_repo/dpdk/lib/reorder/rte_reorder.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.985 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_approx.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.985 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_red.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.985 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_sched.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.985 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_sched_common.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.985 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_pie.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.985 Installing /home/vagrant/spdk_repo/dpdk/lib/security/rte_security.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.985 Installing /home/vagrant/spdk_repo/dpdk/lib/security/rte_security_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.985 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack.h to /home/vagrant/spdk_repo/dpdk/build/include 
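(A second orientation aside: rte_ring.h, installed a few entries back, is DPDK's lock-free FIFO and one of the libraries SPDK links against. A hedged sketch of the core create/enqueue/dequeue calls, assuming rte_eal_init() has already run as in the previous snippet; the ring name and size are made up for illustration:)

```c
/* Illustrative only: basic use of the rte_ring.h API installed above.
 * Assumes rte_eal_init() has already run (rings live in EAL-managed
 * memory); "demo_ring" and the size 1024 are arbitrary choices. */
#include <rte_ring.h>
#include <rte_lcore.h>
#include <rte_errno.h>

static int ring_demo(void)
{
	/* Single-producer/single-consumer ring; count must be a power of 2. */
	struct rte_ring *r = rte_ring_create("demo_ring", 1024,
			rte_socket_id(), RING_F_SP_ENQ | RING_F_SC_DEQ);
	if (r == NULL)
		return -rte_errno;

	int value = 42;
	void *obj = NULL;

	rte_ring_enqueue(r, &value);   /* returns 0 on success */
	rte_ring_dequeue(r, &obj);     /* obj now points at value */

	rte_ring_free(r);
	return 0;
}
```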
00:03:17.985 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_std.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.985 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.985 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_generic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.985 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_c11.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.985 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_stubs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.985 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vdpa.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.985 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.985 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost_async.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.985 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.985 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.985 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_sa.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.985 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_sad.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.985 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_group.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.985 Installing /home/vagrant/spdk_repo/dpdk/lib/fib/rte_fib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.985 Installing /home/vagrant/spdk_repo/dpdk/lib/fib/rte_fib6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.985 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.985 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_fd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.985 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_frag.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.985 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ras.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.985 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.985 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.985 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_sched.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.985 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_source_sink.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.985 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_sym_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.985 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_eventdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.985 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.985 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.985 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_fd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.985 Installing 
/home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.985 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_source_sink.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.985 Installing /home/vagrant/spdk_repo/dpdk/lib/pdump/rte_pdump.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.985 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.985 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_hash_func.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.985 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.985 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_em.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.985 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_learner.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.985 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_selector.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.985 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_wm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.985 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.985 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_acl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.985 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_array.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.985 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.985 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_cuckoo.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.985 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_func.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.985 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_lpm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.985 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_lpm_ipv6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.985 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_stub.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.985 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.985 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru_x86.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.985 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_func_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.986 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_pipeline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.986 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_port_in_action.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.986 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_table_action.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.986 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_pipeline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.986 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_extern.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.986 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_ctl.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:03:17.986 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.986 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_worker.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.986 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_ip4_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.986 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_eth_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.986 Installing /home/vagrant/spdk_repo/dpdk/drivers/bus/pci/rte_bus_pci.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.986 Installing /home/vagrant/spdk_repo/dpdk/drivers/bus/vdev/rte_bus_vdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.986 Installing /home/vagrant/spdk_repo/dpdk/drivers/net/i40e/rte_pmd_i40e.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.986 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-devbind.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:17.986 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-pmdinfo.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:17.986 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-telemetry.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:17.986 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-hugepages.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:17.986 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/rte_build_config.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.986 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/meson-private/libdpdk-libs.pc to /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig 00:03:17.986 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/meson-private/libdpdk.pc to /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig 00:03:17.986 Installing symlink pointing to librte_kvargs.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_kvargs.so.23 00:03:17.986 Installing symlink pointing to librte_kvargs.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_kvargs.so 00:03:17.986 Installing symlink pointing to librte_telemetry.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_telemetry.so.23 00:03:17.986 Installing symlink pointing to librte_telemetry.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_telemetry.so 00:03:17.986 Installing symlink pointing to librte_eal.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eal.so.23 00:03:17.986 Installing symlink pointing to librte_eal.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eal.so 00:03:17.986 Installing symlink pointing to librte_ring.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ring.so.23 00:03:17.986 Installing symlink pointing to librte_ring.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ring.so 00:03:17.986 Installing symlink pointing to librte_rcu.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rcu.so.23 00:03:17.986 Installing symlink pointing to librte_rcu.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rcu.so 00:03:17.986 Installing symlink pointing to librte_mempool.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mempool.so.23 00:03:17.986 Installing symlink pointing to librte_mempool.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mempool.so 00:03:17.986 Installing symlink pointing to librte_mbuf.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mbuf.so.23 00:03:17.986 Installing symlink pointing to librte_mbuf.so.23 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_mbuf.so 00:03:17.986 Installing symlink pointing to librte_net.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_net.so.23 00:03:17.986 Installing symlink pointing to librte_net.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_net.so 00:03:17.986 Installing symlink pointing to librte_meter.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_meter.so.23 00:03:17.986 Installing symlink pointing to librte_meter.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_meter.so 00:03:17.986 Installing symlink pointing to librte_ethdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ethdev.so.23 00:03:17.986 Installing symlink pointing to librte_ethdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ethdev.so 00:03:17.986 Installing symlink pointing to librte_pci.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pci.so.23 00:03:17.986 Installing symlink pointing to librte_pci.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pci.so 00:03:17.986 Installing symlink pointing to librte_cmdline.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cmdline.so.23 00:03:17.986 Installing symlink pointing to librte_cmdline.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cmdline.so 00:03:17.986 Installing symlink pointing to librte_metrics.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_metrics.so.23 00:03:17.986 Installing symlink pointing to librte_metrics.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_metrics.so 00:03:17.986 Installing symlink pointing to librte_hash.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_hash.so.23 00:03:17.986 Installing symlink pointing to librte_hash.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_hash.so 00:03:17.986 Installing symlink pointing to librte_timer.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_timer.so.23 00:03:17.986 Installing symlink pointing to librte_timer.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_timer.so 00:03:17.986 Installing symlink pointing to librte_acl.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_acl.so.23 00:03:17.986 Installing symlink pointing to librte_acl.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_acl.so 00:03:17.986 Installing symlink pointing to librte_bbdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bbdev.so.23 00:03:17.986 Installing symlink pointing to librte_bbdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bbdev.so 00:03:17.986 Installing symlink pointing to librte_bitratestats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bitratestats.so.23 00:03:17.986 Installing symlink pointing to librte_bitratestats.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bitratestats.so 00:03:17.986 Installing symlink pointing to librte_bpf.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bpf.so.23 00:03:17.986 Installing symlink pointing to librte_bpf.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bpf.so 00:03:17.986 Installing symlink pointing to librte_cfgfile.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cfgfile.so.23 00:03:17.986 Installing symlink pointing to librte_cfgfile.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cfgfile.so 00:03:17.986 Installing symlink pointing to librte_compressdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_compressdev.so.23 00:03:17.986 Installing symlink pointing to librte_compressdev.so.23 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_compressdev.so 00:03:17.986 Installing symlink pointing to librte_cryptodev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cryptodev.so.23 00:03:17.986 Installing symlink pointing to librte_cryptodev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cryptodev.so 00:03:17.986 Installing symlink pointing to librte_distributor.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_distributor.so.23 00:03:17.986 Installing symlink pointing to librte_distributor.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_distributor.so 00:03:17.986 Installing symlink pointing to librte_efd.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_efd.so.23 00:03:17.986 Installing symlink pointing to librte_efd.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_efd.so 00:03:17.986 Installing symlink pointing to librte_eventdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eventdev.so.23 00:03:17.986 Installing symlink pointing to librte_eventdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eventdev.so 00:03:17.986 Installing symlink pointing to librte_gpudev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gpudev.so.23 00:03:17.986 './librte_bus_pci.so' -> 'dpdk/pmds-23.0/librte_bus_pci.so' 00:03:17.986 './librte_bus_pci.so.23' -> 'dpdk/pmds-23.0/librte_bus_pci.so.23' 00:03:17.986 './librte_bus_pci.so.23.0' -> 'dpdk/pmds-23.0/librte_bus_pci.so.23.0' 00:03:17.986 './librte_bus_vdev.so' -> 'dpdk/pmds-23.0/librte_bus_vdev.so' 00:03:17.986 './librte_bus_vdev.so.23' -> 'dpdk/pmds-23.0/librte_bus_vdev.so.23' 00:03:17.986 './librte_bus_vdev.so.23.0' -> 'dpdk/pmds-23.0/librte_bus_vdev.so.23.0' 00:03:17.986 './librte_mempool_ring.so' -> 'dpdk/pmds-23.0/librte_mempool_ring.so' 00:03:17.986 './librte_mempool_ring.so.23' -> 'dpdk/pmds-23.0/librte_mempool_ring.so.23' 00:03:17.986 './librte_mempool_ring.so.23.0' -> 'dpdk/pmds-23.0/librte_mempool_ring.so.23.0' 00:03:17.986 './librte_net_i40e.so' -> 'dpdk/pmds-23.0/librte_net_i40e.so' 00:03:17.986 './librte_net_i40e.so.23' -> 'dpdk/pmds-23.0/librte_net_i40e.so.23' 00:03:17.986 './librte_net_i40e.so.23.0' -> 'dpdk/pmds-23.0/librte_net_i40e.so.23.0' 00:03:17.986 Installing symlink pointing to librte_gpudev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gpudev.so 00:03:17.986 Installing symlink pointing to librte_gro.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gro.so.23 00:03:17.986 Installing symlink pointing to librte_gro.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gro.so 00:03:17.986 Installing symlink pointing to librte_gso.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gso.so.23 00:03:17.986 Installing symlink pointing to librte_gso.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gso.so 00:03:17.986 Installing symlink pointing to librte_ip_frag.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ip_frag.so.23 00:03:17.986 Installing symlink pointing to librte_ip_frag.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ip_frag.so 00:03:17.986 Installing symlink pointing to librte_jobstats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_jobstats.so.23 00:03:17.986 Installing symlink pointing to librte_jobstats.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_jobstats.so 00:03:17.986 Installing symlink pointing to librte_latencystats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_latencystats.so.23 00:03:17.986 Installing symlink pointing to librte_latencystats.so.23 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_latencystats.so 00:03:17.986 Installing symlink pointing to librte_lpm.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_lpm.so.23 00:03:17.986 Installing symlink pointing to librte_lpm.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_lpm.so 00:03:17.986 Installing symlink pointing to librte_member.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_member.so.23 00:03:17.986 Installing symlink pointing to librte_member.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_member.so 00:03:17.987 Installing symlink pointing to librte_pcapng.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pcapng.so.23 00:03:17.987 Installing symlink pointing to librte_pcapng.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pcapng.so 00:03:17.987 Installing symlink pointing to librte_power.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_power.so.23 00:03:17.987 Installing symlink pointing to librte_power.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_power.so 00:03:17.987 Installing symlink pointing to librte_rawdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rawdev.so.23 00:03:17.987 Installing symlink pointing to librte_rawdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rawdev.so 00:03:17.987 Installing symlink pointing to librte_regexdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_regexdev.so.23 00:03:17.987 Installing symlink pointing to librte_regexdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_regexdev.so 00:03:17.987 Installing symlink pointing to librte_dmadev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dmadev.so.23 00:03:17.987 Installing symlink pointing to librte_dmadev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dmadev.so 00:03:17.987 Installing symlink pointing to librte_rib.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rib.so.23 00:03:17.987 Installing symlink pointing to librte_rib.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rib.so 00:03:17.987 Installing symlink pointing to librte_reorder.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_reorder.so.23 00:03:17.987 Installing symlink pointing to librte_reorder.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_reorder.so 00:03:17.987 Installing symlink pointing to librte_sched.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_sched.so.23 00:03:17.987 Installing symlink pointing to librte_sched.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_sched.so 00:03:17.987 Installing symlink pointing to librte_security.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_security.so.23 00:03:17.987 Installing symlink pointing to librte_security.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_security.so 00:03:17.987 Installing symlink pointing to librte_stack.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_stack.so.23 00:03:17.987 Installing symlink pointing to librte_stack.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_stack.so 00:03:17.987 Installing symlink pointing to librte_vhost.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_vhost.so.23 00:03:17.987 Installing symlink pointing to librte_vhost.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_vhost.so 00:03:17.987 Installing symlink pointing to librte_ipsec.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ipsec.so.23 00:03:17.987 Installing symlink pointing to librte_ipsec.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ipsec.so 
00:03:17.987 Installing symlink pointing to librte_fib.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_fib.so.23 00:03:17.987 Installing symlink pointing to librte_fib.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_fib.so 00:03:17.987 Installing symlink pointing to librte_port.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_port.so.23 00:03:17.987 Installing symlink pointing to librte_port.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_port.so 00:03:17.987 Installing symlink pointing to librte_pdump.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdump.so.23 00:03:17.987 Installing symlink pointing to librte_pdump.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdump.so 00:03:17.987 Installing symlink pointing to librte_table.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_table.so.23 00:03:17.987 Installing symlink pointing to librte_table.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_table.so 00:03:17.987 Installing symlink pointing to librte_pipeline.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pipeline.so.23 00:03:17.987 Installing symlink pointing to librte_pipeline.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pipeline.so 00:03:17.987 Installing symlink pointing to librte_graph.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_graph.so.23 00:03:17.987 Installing symlink pointing to librte_graph.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_graph.so 00:03:17.987 Installing symlink pointing to librte_node.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_node.so.23 00:03:17.987 Installing symlink pointing to librte_node.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_node.so 00:03:17.987 Installing symlink pointing to librte_bus_pci.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so.23 00:03:17.987 Installing symlink pointing to librte_bus_pci.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so 00:03:17.987 Installing symlink pointing to librte_bus_vdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so.23 00:03:17.987 Installing symlink pointing to librte_bus_vdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so 00:03:17.987 Installing symlink pointing to librte_mempool_ring.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so.23 00:03:17.987 Installing symlink pointing to librte_mempool_ring.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so 00:03:17.987 Installing symlink pointing to librte_net_i40e.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so.23 00:03:17.987 Installing symlink pointing to librte_net_i40e.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so 00:03:17.987 Running custom install script '/bin/sh /home/vagrant/spdk_repo/dpdk/config/../buildtools/symlink-drivers-solibs.sh lib dpdk/pmds-23.0' 00:03:17.987 04:50:56 build_native_dpdk -- common/autobuild_common.sh@213 -- $ cat 00:03:17.987 04:50:56 build_native_dpdk -- common/autobuild_common.sh@218 -- $ cd /home/vagrant/spdk_repo/spdk 00:03:17.987 00:03:17.987 real 0m34.177s 00:03:17.987 user 3m40.526s 00:03:17.987 sys 0m36.928s 00:03:17.987 04:50:56 build_native_dpdk -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:03:17.987 04:50:56 build_native_dpdk -- common/autotest_common.sh@10 -- $ set +x 00:03:17.987 
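The symlink pass above reproduces DPDK's standard shared-object versioning: the real file carries the full ABI version (librte_eal.so.23.0), a SONAME link drops the minor version (librte_eal.so.23), and an unversioned link serves the compile-time linker. The libdpdk.pc files installed into build/lib/pkgconfig let downstream builds discover all of it. A minimal sketch of inspecting and consuming the result, using the paths from the log; the annotated ls output is illustrative, not captured from this run:

    # chain of links, as installed above:
    #   librte_eal.so    -> librte_eal.so.23    (used at link time via -lrte_eal)
    #   librte_eal.so.23 -> librte_eal.so.23.0  (resolved at run time via SONAME)
    ls -l /home/vagrant/spdk_repo/dpdk/build/lib/librte_eal.so*

    # point pkg-config at the freshly installed tree and query it
    export PKG_CONFIG_PATH=/home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig
    pkg-config --modversion libdpdk
    pkg-config --cflags --libs libdpdk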
************************************ 00:03:17.987 END TEST build_native_dpdk 00:03:17.987 ************************************ 00:03:17.987 04:50:56 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:03:17.987 04:50:56 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:03:17.987 04:50:56 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:03:17.987 04:50:56 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:03:17.987 04:50:56 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:03:17.987 04:50:56 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:03:17.987 04:50:56 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:03:17.987 04:50:56 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-dpdk=/home/vagrant/spdk_repo/dpdk/build --with-xnvme --with-shared 00:03:18.245 Using /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig for additional libs... 00:03:18.245 DPDK libraries: /home/vagrant/spdk_repo/dpdk/build/lib 00:03:18.245 DPDK includes: //home/vagrant/spdk_repo/dpdk/build/include 00:03:18.245 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:03:18.503 Using 'verbs' RDMA provider 00:03:29.418 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done. 00:03:41.638 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done. 00:03:41.638 Creating mk/config.mk...done. 00:03:41.638 Creating mk/cc.flags.mk...done. 00:03:41.638 Type 'make' to build. 00:03:41.638 04:51:18 -- spdk/autobuild.sh@70 -- $ run_test make make -j10 00:03:41.638 04:51:18 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']' 00:03:41.638 04:51:18 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:03:41.638 04:51:18 -- common/autotest_common.sh@10 -- $ set +x 00:03:41.638 ************************************ 00:03:41.638 START TEST make 00:03:41.638 ************************************ 00:03:41.639 04:51:18 make -- common/autotest_common.sh@1125 -- $ make -j10 00:03:41.639 (cd /home/vagrant/spdk_repo/spdk/xnvme && \ 00:03:41.639 export PKG_CONFIG_PATH=$PKG_CONFIG_PATH:/usr/lib/pkgconfig:/usr/lib64/pkgconfig && \ 00:03:41.639 meson setup builddir \ 00:03:41.639 -Dwith-libaio=enabled \ 00:03:41.639 -Dwith-liburing=enabled \ 00:03:41.639 -Dwith-libvfn=disabled \ 00:03:41.639 -Dwith-spdk=false && \ 00:03:41.639 meson compile -C builddir && \ 00:03:41.639 cd -) 00:03:41.639 make[1]: Nothing to be done for 'all'. 
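The xnvme subproject above is configured with explicit feature toggles (-Dwith-libaio, -Dwith-liburing, -Dwith-libvfn, -Dwith-spdk). Once meson setup has run, those values can be inspected or flipped in place without re-creating the build directory; a sketch, assuming the builddir created by the commands above:

    cd /home/vagrant/spdk_repo/spdk/xnvme
    meson configure builddir                         # list current option values
    meson configure builddir -Dwith-libvfn=enabled   # flip a single toggle
    meson compile -C builddir                        # rebuild with the new setting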
00:03:43.015 The Meson build system 00:03:43.015 Version: 1.5.0 00:03:43.015 Source dir: /home/vagrant/spdk_repo/spdk/xnvme 00:03:43.015 Build dir: /home/vagrant/spdk_repo/spdk/xnvme/builddir 00:03:43.015 Build type: native build 00:03:43.015 Project name: xnvme 00:03:43.015 Project version: 0.7.3 00:03:43.015 C compiler for the host machine: gcc (gcc 13.3.1 "gcc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:03:43.015 C linker for the host machine: gcc ld.bfd 2.40-14 00:03:43.015 Host machine cpu family: x86_64 00:03:43.015 Host machine cpu: x86_64 00:03:43.015 Message: host_machine.system: linux 00:03:43.015 Compiler for C supports arguments -Wno-missing-braces: YES 00:03:43.015 Compiler for C supports arguments -Wno-cast-function-type: YES 00:03:43.015 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:03:43.015 Run-time dependency threads found: YES 00:03:43.015 Has header "setupapi.h" : NO 00:03:43.015 Has header "linux/blkzoned.h" : YES 00:03:43.015 Has header "linux/blkzoned.h" : YES (cached) 00:03:43.015 Has header "libaio.h" : YES 00:03:43.015 Library aio found: YES 00:03:43.015 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:03:43.015 Run-time dependency liburing found: YES 2.2 00:03:43.015 Dependency libvfn skipped: feature with-libvfn disabled 00:03:43.015 Run-time dependency appleframeworks found: NO (tried framework) 00:03:43.015 Run-time dependency appleframeworks found: NO (tried framework) 00:03:43.015 Configuring xnvme_config.h using configuration 00:03:43.015 Configuring xnvme.spec using configuration 00:03:43.015 Run-time dependency bash-completion found: YES 2.11 00:03:43.015 Message: Bash-completions: /usr/share/bash-completion/completions 00:03:43.015 Program cp found: YES (/usr/bin/cp) 00:03:43.015 Has header "winsock2.h" : NO 00:03:43.015 Has header "dbghelp.h" : NO 00:03:43.015 Library rpcrt4 found: NO 00:03:43.015 Library rt found: YES 00:03:43.015 Checking for function "clock_gettime" with dependency -lrt: YES 00:03:43.015 Found CMake: /usr/bin/cmake (3.27.7) 00:03:43.015 Run-time dependency _spdk found: NO (tried pkgconfig and cmake) 00:03:43.015 Run-time dependency wpdk found: NO (tried pkgconfig and cmake) 00:03:43.015 Run-time dependency spdk-win found: NO (tried pkgconfig and cmake) 00:03:43.015 Build targets in project: 32 00:03:43.015 00:03:43.015 xnvme 0.7.3 00:03:43.015 00:03:43.015 User defined options 00:03:43.015 with-libaio : enabled 00:03:43.015 with-liburing: enabled 00:03:43.015 with-libvfn : disabled 00:03:43.015 with-spdk : false 00:03:43.015 00:03:43.015 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:03:43.272 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/xnvme/builddir' 00:03:43.272 [1/203] Generating toolbox/xnvme-driver-script with a custom command 00:03:43.272 [2/203] Compiling C object lib/libxnvme.so.p/xnvme_be_fbsd.c.o 00:03:43.272 [3/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_async_nil.c.o 00:03:43.272 [4/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_mem_posix.c.o 00:03:43.272 [5/203] Compiling C object lib/libxnvme.so.p/xnvme_be_fbsd_async.c.o 00:03:43.272 [6/203] Compiling C object lib/libxnvme.so.p/xnvme_be_fbsd_nvme.c.o 00:03:43.272 [7/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_admin_shim.c.o 00:03:43.272 [8/203] Compiling C object lib/libxnvme.so.p/xnvme_be_fbsd_dev.c.o 00:03:43.529 [9/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_async_posix.c.o 00:03:43.529 [10/203] Compiling C object lib/libxnvme.so.p/xnvme_adm.c.o 00:03:43.529 [11/203] 
Compiling C object lib/libxnvme.so.p/xnvme_be_linux.c.o 00:03:43.529 [12/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_async_emu.c.o 00:03:43.529 [13/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_sync_psync.c.o 00:03:43.529 [14/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_async_thrpool.c.o 00:03:43.530 [15/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_async_libaio.c.o 00:03:43.530 [16/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_hugepage.c.o 00:03:43.530 [17/203] Compiling C object lib/libxnvme.so.p/xnvme_be_macos.c.o 00:03:43.530 [18/203] Compiling C object lib/libxnvme.so.p/xnvme_be_macos_dev.c.o 00:03:43.530 [19/203] Compiling C object lib/libxnvme.so.p/xnvme_be_macos_admin.c.o 00:03:43.530 [20/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_dev.c.o 00:03:43.530 [21/203] Compiling C object lib/libxnvme.so.p/xnvme_be_macos_sync.c.o 00:03:43.530 [22/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_async_liburing.c.o 00:03:43.530 [23/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_nvme.c.o 00:03:43.530 [24/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_async_ucmd.c.o 00:03:43.530 [25/203] Compiling C object lib/libxnvme.so.p/xnvme_be_nosys.c.o 00:03:43.530 [26/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk.c.o 00:03:43.530 [27/203] Compiling C object lib/libxnvme.so.p/xnvme_be_ramdisk.c.o 00:03:43.530 [28/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_admin.c.o 00:03:43.530 [29/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_block.c.o 00:03:43.530 [30/203] Compiling C object lib/libxnvme.so.p/xnvme_be_ramdisk_dev.c.o 00:03:43.530 [31/203] Compiling C object lib/libxnvme.so.p/xnvme_be_ramdisk_admin.c.o 00:03:43.530 [32/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_async.c.o 00:03:43.530 [33/203] Compiling C object lib/libxnvme.so.p/xnvme_be_ramdisk_sync.c.o 00:03:43.530 [34/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_admin.c.o 00:03:43.530 [35/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_dev.c.o 00:03:43.530 [36/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_sync.c.o 00:03:43.530 [37/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_mem.c.o 00:03:43.530 [38/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio.c.o 00:03:43.788 [39/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_async.c.o 00:03:43.788 [40/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_async_iocp.c.o 00:03:43.788 [41/203] Compiling C object lib/libxnvme.so.p/xnvme_be.c.o 00:03:43.788 [42/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_dev.c.o 00:03:43.788 [43/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_mem.c.o 00:03:43.788 [44/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_sync.c.o 00:03:43.788 [45/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows.c.o 00:03:43.788 [46/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_dev.c.o 00:03:43.788 [47/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_async_iocp_th.c.o 00:03:43.788 [48/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_fs.c.o 00:03:43.788 [49/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_block.c.o 00:03:43.788 [50/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_async_ioring.c.o 00:03:43.788 [51/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_mem.c.o 00:03:43.788 [52/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_nvme.c.o 00:03:43.788 [53/203] Compiling C object 
lib/libxnvme.so.p/xnvme_libconf_entries.c.o 00:03:43.788 [54/203] Compiling C object lib/libxnvme.so.p/xnvme_file.c.o 00:03:43.788 [55/203] Compiling C object lib/libxnvme.so.p/xnvme_dev.c.o 00:03:43.788 [56/203] Compiling C object lib/libxnvme.so.p/xnvme_libconf.c.o 00:03:43.788 [57/203] Compiling C object lib/libxnvme.so.p/xnvme_geo.c.o 00:03:43.788 [58/203] Compiling C object lib/libxnvme.so.p/xnvme_req.c.o 00:03:43.788 [59/203] Compiling C object lib/libxnvme.so.p/xnvme_ident.c.o 00:03:43.788 [60/203] Compiling C object lib/libxnvme.so.p/xnvme_lba.c.o 00:03:43.788 [61/203] Compiling C object lib/libxnvme.so.p/xnvme_cmd.c.o 00:03:43.788 [62/203] Compiling C object lib/libxnvme.so.p/xnvme_buf.c.o 00:03:43.788 [63/203] Compiling C object lib/libxnvme.so.p/xnvme_nvm.c.o 00:03:43.788 [64/203] Compiling C object lib/libxnvme.so.p/xnvme_kvs.c.o 00:03:43.788 [65/203] Compiling C object lib/libxnvme.so.p/xnvme_opts.c.o 00:03:43.788 [66/203] Compiling C object lib/libxnvme.so.p/xnvme_topology.c.o 00:03:43.788 [67/203] Compiling C object lib/libxnvme.so.p/xnvme_ver.c.o 00:03:43.788 [68/203] Compiling C object lib/libxnvme.so.p/xnvme_queue.c.o 00:03:44.047 [69/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_async_nil.c.o 00:03:44.047 [70/203] Compiling C object lib/libxnvme.a.p/xnvme_adm.c.o 00:03:44.047 [71/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_async_emu.c.o 00:03:44.047 [72/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_mem_posix.c.o 00:03:44.047 [73/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_admin_shim.c.o 00:03:44.047 [74/203] Compiling C object lib/libxnvme.a.p/xnvme_be_fbsd.c.o 00:03:44.047 [75/203] Compiling C object lib/libxnvme.so.p/xnvme_spec_pp.c.o 00:03:44.047 [76/203] Compiling C object lib/libxnvme.a.p/xnvme_be_fbsd_async.c.o 00:03:44.047 [77/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_async_posix.c.o 00:03:44.047 [78/203] Compiling C object lib/libxnvme.a.p/xnvme_be_fbsd_dev.c.o 00:03:44.047 [79/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_sync_psync.c.o 00:03:44.047 [80/203] Compiling C object lib/libxnvme.a.p/xnvme_be_fbsd_nvme.c.o 00:03:44.047 [81/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_async_thrpool.c.o 00:03:44.047 [82/203] Compiling C object lib/libxnvme.so.p/xnvme_znd.c.o 00:03:44.047 [83/203] Compiling C object lib/libxnvme.a.p/xnvme_be.c.o 00:03:44.047 [84/203] Compiling C object lib/libxnvme.a.p/xnvme_be_macos.c.o 00:03:44.047 [85/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux.c.o 00:03:44.047 [86/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_async_libaio.c.o 00:03:44.307 [87/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_hugepage.c.o 00:03:44.307 [88/203] Compiling C object lib/libxnvme.a.p/xnvme_be_macos_admin.c.o 00:03:44.307 [89/203] Compiling C object lib/libxnvme.a.p/xnvme_be_macos_dev.c.o 00:03:44.307 [90/203] Compiling C object lib/libxnvme.so.p/xnvme_cli.c.o 00:03:44.307 [91/203] Compiling C object lib/libxnvme.a.p/xnvme_be_macos_sync.c.o 00:03:44.307 [92/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_nvme.c.o 00:03:44.307 [93/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_block.c.o 00:03:44.307 [94/203] Compiling C object lib/libxnvme.a.p/xnvme_be_ramdisk.c.o 00:03:44.307 [95/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_dev.c.o 00:03:44.307 [96/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk.c.o 00:03:44.307 [97/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_async_ucmd.c.o 00:03:44.307 [98/203] Compiling C object 
lib/libxnvme.a.p/xnvme_be_nosys.c.o 00:03:44.307 [99/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_async_liburing.c.o 00:03:44.307 [100/203] Compiling C object lib/libxnvme.a.p/xnvme_be_ramdisk_admin.c.o 00:03:44.307 [101/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_mem.c.o 00:03:44.307 [102/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_async.c.o 00:03:44.307 [103/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_admin.c.o 00:03:44.307 [104/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio.c.o 00:03:44.307 [105/203] Compiling C object lib/libxnvme.a.p/xnvme_be_ramdisk_sync.c.o 00:03:44.307 [106/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_dev.c.o 00:03:44.307 [107/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_admin.c.o 00:03:44.307 [108/203] Compiling C object lib/libxnvme.a.p/xnvme_be_ramdisk_dev.c.o 00:03:44.307 [109/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_sync.c.o 00:03:44.307 [110/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_mem.c.o 00:03:44.307 [111/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_async_iocp_th.c.o 00:03:44.307 [112/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_dev.c.o 00:03:44.307 [113/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_async.c.o 00:03:44.307 [114/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_sync.c.o 00:03:44.307 [115/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows.c.o 00:03:44.307 [116/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_async_ioring.c.o 00:03:44.307 [117/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_mem.c.o 00:03:44.307 [118/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_dev.c.o 00:03:44.307 [119/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_block.c.o 00:03:44.307 [120/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_async_iocp.c.o 00:03:44.307 [121/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_fs.c.o 00:03:44.307 [122/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_nvme.c.o 00:03:44.307 [123/203] Compiling C object lib/libxnvme.a.p/xnvme_libconf_entries.c.o 00:03:44.565 [124/203] Compiling C object lib/libxnvme.a.p/xnvme_cmd.c.o 00:03:44.565 [125/203] Compiling C object lib/libxnvme.a.p/xnvme_lba.c.o 00:03:44.565 [126/203] Compiling C object lib/libxnvme.a.p/xnvme_geo.c.o 00:03:44.565 [127/203] Compiling C object lib/libxnvme.a.p/xnvme_ident.c.o 00:03:44.565 [128/203] Compiling C object lib/libxnvme.a.p/xnvme_file.c.o 00:03:44.565 [129/203] Compiling C object lib/libxnvme.a.p/xnvme_req.c.o 00:03:44.565 [130/203] Compiling C object lib/libxnvme.a.p/xnvme_dev.c.o 00:03:44.565 [131/203] Compiling C object lib/libxnvme.a.p/xnvme_libconf.c.o 00:03:44.565 [132/203] Compiling C object lib/libxnvme.a.p/xnvme_buf.c.o 00:03:44.565 [133/203] Compiling C object lib/libxnvme.a.p/xnvme_nvm.c.o 00:03:44.565 [134/203] Compiling C object lib/libxnvme.a.p/xnvme_kvs.c.o 00:03:44.565 [135/203] Compiling C object lib/libxnvme.a.p/xnvme_opts.c.o 00:03:44.565 [136/203] Compiling C object lib/libxnvme.a.p/xnvme_topology.c.o 00:03:44.565 [137/203] Compiling C object lib/libxnvme.a.p/xnvme_ver.c.o 00:03:44.565 [138/203] Compiling C object tests/xnvme_tests_async_intf.p/async_intf.c.o 00:03:44.565 [139/203] Compiling C object lib/libxnvme.a.p/xnvme_queue.c.o 00:03:44.565 [140/203] Compiling C object tests/xnvme_tests_buf.p/buf.c.o 00:03:44.565 [141/203] Compiling C object tests/xnvme_tests_cli.p/cli.c.o 00:03:44.565 [142/203] Compiling C object 
lib/libxnvme.a.p/xnvme_spec_pp.c.o 00:03:44.565 [143/203] Compiling C object tests/xnvme_tests_enum.p/enum.c.o 00:03:44.565 [144/203] Compiling C object lib/libxnvme.so.p/xnvme_spec.c.o 00:03:44.823 [145/203] Compiling C object tests/xnvme_tests_xnvme_cli.p/xnvme_cli.c.o 00:03:44.823 [146/203] Compiling C object lib/libxnvme.a.p/xnvme_znd.c.o 00:03:44.823 [147/203] Compiling C object tests/xnvme_tests_xnvme_file.p/xnvme_file.c.o 00:03:44.823 [148/203] Compiling C object tests/xnvme_tests_znd_append.p/znd_append.c.o 00:03:44.823 [149/203] Linking target lib/libxnvme.so 00:03:44.823 [150/203] Compiling C object tests/xnvme_tests_znd_state.p/znd_state.c.o 00:03:44.823 [151/203] Compiling C object tests/xnvme_tests_scc.p/scc.c.o 00:03:44.823 [152/203] Compiling C object tests/xnvme_tests_znd_explicit_open.p/znd_explicit_open.c.o 00:03:44.823 [153/203] Compiling C object tests/xnvme_tests_znd_zrwa.p/znd_zrwa.c.o 00:03:44.823 [154/203] Compiling C object tests/xnvme_tests_map.p/map.c.o 00:03:44.823 [155/203] Compiling C object tests/xnvme_tests_lblk.p/lblk.c.o 00:03:44.823 [156/203] Compiling C object tests/xnvme_tests_kvs.p/kvs.c.o 00:03:44.823 [157/203] Compiling C object tools/xdd.p/xdd.c.o 00:03:44.823 [158/203] Compiling C object lib/libxnvme.a.p/xnvme_cli.c.o 00:03:44.823 [159/203] Compiling C object examples/xnvme_hello.p/xnvme_hello.c.o 00:03:44.823 [160/203] Compiling C object examples/xnvme_enum.p/xnvme_enum.c.o 00:03:44.823 [161/203] Compiling C object examples/xnvme_dev.p/xnvme_dev.c.o 00:03:44.823 [162/203] Compiling C object tests/xnvme_tests_ioworker.p/ioworker.c.o 00:03:44.823 [163/203] Compiling C object tools/lblk.p/lblk.c.o 00:03:44.823 [164/203] Compiling C object examples/xnvme_single_async.p/xnvme_single_async.c.o 00:03:44.823 [165/203] Compiling C object tools/zoned.p/zoned.c.o 00:03:45.082 [166/203] Compiling C object tools/kvs.p/kvs.c.o 00:03:45.082 [167/203] Compiling C object examples/xnvme_single_sync.p/xnvme_single_sync.c.o 00:03:45.082 [168/203] Compiling C object examples/zoned_io_sync.p/zoned_io_sync.c.o 00:03:45.082 [169/203] Compiling C object examples/xnvme_io_async.p/xnvme_io_async.c.o 00:03:45.082 [170/203] Compiling C object examples/zoned_io_async.p/zoned_io_async.c.o 00:03:45.082 [171/203] Compiling C object tools/xnvme_file.p/xnvme_file.c.o 00:03:45.082 [172/203] Compiling C object tools/xnvme.p/xnvme.c.o 00:03:45.082 [173/203] Compiling C object lib/libxnvme.a.p/xnvme_spec.c.o 00:03:45.082 [174/203] Linking static target lib/libxnvme.a 00:03:45.082 [175/203] Linking target tests/xnvme_tests_async_intf 00:03:45.082 [176/203] Linking target tests/xnvme_tests_cli 00:03:45.082 [177/203] Linking target tests/xnvme_tests_buf 00:03:45.082 [178/203] Linking target tests/xnvme_tests_xnvme_file 00:03:45.082 [179/203] Linking target tests/xnvme_tests_scc 00:03:45.082 [180/203] Linking target tests/xnvme_tests_znd_state 00:03:45.082 [181/203] Linking target tests/xnvme_tests_xnvme_cli 00:03:45.082 [182/203] Linking target tests/xnvme_tests_znd_append 00:03:45.082 [183/203] Linking target tests/xnvme_tests_znd_explicit_open 00:03:45.082 [184/203] Linking target tests/xnvme_tests_ioworker 00:03:45.082 [185/203] Linking target tests/xnvme_tests_lblk 00:03:45.082 [186/203] Linking target tests/xnvme_tests_enum 00:03:45.082 [187/203] Linking target tests/xnvme_tests_map 00:03:45.082 [188/203] Linking target tools/lblk 00:03:45.082 [189/203] Linking target tests/xnvme_tests_znd_zrwa 00:03:45.082 [190/203] Linking target tests/xnvme_tests_kvs 00:03:45.082 [191/203] Linking 
target tools/xnvme 00:03:45.082 [192/203] Linking target examples/xnvme_dev 00:03:45.082 [193/203] Linking target examples/xnvme_enum 00:03:45.341 [194/203] Linking target tools/kvs 00:03:45.341 [195/203] Linking target tools/xnvme_file 00:03:45.341 [196/203] Linking target examples/xnvme_hello 00:03:45.341 [197/203] Linking target tools/xdd 00:03:45.341 [198/203] Linking target examples/xnvme_io_async 00:03:45.341 [199/203] Linking target tools/zoned 00:03:45.341 [200/203] Linking target examples/zoned_io_sync 00:03:45.341 [201/203] Linking target examples/xnvme_single_async 00:03:45.341 [202/203] Linking target examples/zoned_io_async 00:03:45.341 [203/203] Linking target examples/xnvme_single_sync 00:03:45.341 INFO: autodetecting backend as ninja 00:03:45.341 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/xnvme/builddir 00:03:45.341 /home/vagrant/spdk_repo/spdk/xnvmebuild 00:04:17.433 CC lib/log/log.o 00:04:17.433 CC lib/log/log_flags.o 00:04:17.433 CC lib/log/log_deprecated.o 00:04:17.433 CC lib/ut/ut.o 00:04:17.433 CC lib/ut_mock/mock.o 00:04:17.433 LIB libspdk_log.a 00:04:17.433 LIB libspdk_ut.a 00:04:17.433 LIB libspdk_ut_mock.a 00:04:17.433 SO libspdk_log.so.7.0 00:04:17.433 SO libspdk_ut.so.2.0 00:04:17.433 SO libspdk_ut_mock.so.6.0 00:04:17.433 SYMLINK libspdk_ut.so 00:04:17.433 SYMLINK libspdk_ut_mock.so 00:04:17.433 SYMLINK libspdk_log.so 00:04:17.433 CC lib/util/base64.o 00:04:17.433 CC lib/util/bit_array.o 00:04:17.433 CC lib/util/cpuset.o 00:04:17.433 CC lib/ioat/ioat.o 00:04:17.433 CC lib/util/crc32.o 00:04:17.433 CC lib/util/crc32c.o 00:04:17.433 CC lib/util/crc16.o 00:04:17.433 CC lib/dma/dma.o 00:04:17.433 CXX lib/trace_parser/trace.o 00:04:17.433 CC lib/vfio_user/host/vfio_user_pci.o 00:04:17.433 CC lib/util/crc32_ieee.o 00:04:17.433 CC lib/util/crc64.o 00:04:17.433 CC lib/util/dif.o 00:04:17.433 CC lib/util/fd.o 00:04:17.433 CC lib/vfio_user/host/vfio_user.o 00:04:17.433 CC lib/util/fd_group.o 00:04:17.433 LIB libspdk_dma.a 00:04:17.433 SO libspdk_dma.so.5.0 00:04:17.433 CC lib/util/file.o 00:04:17.433 CC lib/util/hexlify.o 00:04:17.433 SYMLINK libspdk_dma.so 00:04:17.433 CC lib/util/iov.o 00:04:17.433 LIB libspdk_ioat.a 00:04:17.433 CC lib/util/math.o 00:04:17.433 CC lib/util/net.o 00:04:17.433 SO libspdk_ioat.so.7.0 00:04:17.433 SYMLINK libspdk_ioat.so 00:04:17.433 CC lib/util/pipe.o 00:04:17.433 CC lib/util/strerror_tls.o 00:04:17.433 CC lib/util/string.o 00:04:17.433 CC lib/util/uuid.o 00:04:17.433 LIB libspdk_vfio_user.a 00:04:17.433 CC lib/util/xor.o 00:04:17.433 SO libspdk_vfio_user.so.5.0 00:04:17.433 SYMLINK libspdk_vfio_user.so 00:04:17.433 CC lib/util/zipf.o 00:04:17.433 CC lib/util/md5.o 00:04:17.433 LIB libspdk_util.a 00:04:17.433 SO libspdk_util.so.10.0 00:04:17.433 LIB libspdk_trace_parser.a 00:04:17.433 SYMLINK libspdk_util.so 00:04:17.433 SO libspdk_trace_parser.so.6.0 00:04:17.433 SYMLINK libspdk_trace_parser.so 00:04:17.433 CC lib/rdma_utils/rdma_utils.o 00:04:17.433 CC lib/conf/conf.o 00:04:17.433 CC lib/rdma_provider/common.o 00:04:17.433 CC lib/rdma_provider/rdma_provider_verbs.o 00:04:17.433 CC lib/idxd/idxd.o 00:04:17.433 CC lib/idxd/idxd_user.o 00:04:17.433 CC lib/idxd/idxd_kernel.o 00:04:17.433 CC lib/vmd/vmd.o 00:04:17.433 CC lib/env_dpdk/env.o 00:04:17.433 CC lib/json/json_parse.o 00:04:17.433 CC lib/env_dpdk/memory.o 00:04:17.433 CC lib/json/json_util.o 00:04:17.433 LIB libspdk_rdma_provider.a 00:04:17.433 LIB libspdk_conf.a 00:04:17.433 SO libspdk_rdma_provider.so.6.0 00:04:17.433 SO 
libspdk_conf.so.6.0 00:04:17.433 LIB libspdk_rdma_utils.a 00:04:17.433 CC lib/json/json_write.o 00:04:17.433 CC lib/env_dpdk/pci.o 00:04:17.433 SO libspdk_rdma_utils.so.1.0 00:04:17.433 SYMLINK libspdk_rdma_provider.so 00:04:17.433 SYMLINK libspdk_conf.so 00:04:17.433 CC lib/vmd/led.o 00:04:17.433 CC lib/env_dpdk/init.o 00:04:17.433 SYMLINK libspdk_rdma_utils.so 00:04:17.433 CC lib/env_dpdk/threads.o 00:04:17.433 CC lib/env_dpdk/pci_ioat.o 00:04:17.433 CC lib/env_dpdk/pci_virtio.o 00:04:17.433 CC lib/env_dpdk/pci_vmd.o 00:04:17.433 LIB libspdk_json.a 00:04:17.433 CC lib/env_dpdk/pci_idxd.o 00:04:17.433 SO libspdk_json.so.6.0 00:04:17.433 CC lib/env_dpdk/pci_event.o 00:04:17.433 CC lib/env_dpdk/sigbus_handler.o 00:04:17.433 SYMLINK libspdk_json.so 00:04:17.433 CC lib/env_dpdk/pci_dpdk.o 00:04:17.433 CC lib/env_dpdk/pci_dpdk_2207.o 00:04:17.433 LIB libspdk_vmd.a 00:04:17.433 SO libspdk_vmd.so.6.0 00:04:17.433 CC lib/env_dpdk/pci_dpdk_2211.o 00:04:17.433 SYMLINK libspdk_vmd.so 00:04:17.433 LIB libspdk_idxd.a 00:04:17.433 SO libspdk_idxd.so.12.1 00:04:17.433 SYMLINK libspdk_idxd.so 00:04:17.433 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:04:17.433 CC lib/jsonrpc/jsonrpc_server.o 00:04:17.433 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:04:17.433 CC lib/jsonrpc/jsonrpc_client.o 00:04:17.433 LIB libspdk_jsonrpc.a 00:04:17.433 SO libspdk_jsonrpc.so.6.0 00:04:17.433 SYMLINK libspdk_jsonrpc.so 00:04:17.433 CC lib/rpc/rpc.o 00:04:17.433 LIB libspdk_env_dpdk.a 00:04:17.433 LIB libspdk_rpc.a 00:04:17.433 SO libspdk_env_dpdk.so.15.0 00:04:17.433 SO libspdk_rpc.so.6.0 00:04:17.433 SYMLINK libspdk_rpc.so 00:04:17.433 SYMLINK libspdk_env_dpdk.so 00:04:17.434 CC lib/trace/trace.o 00:04:17.434 CC lib/trace/trace_flags.o 00:04:17.434 CC lib/trace/trace_rpc.o 00:04:17.434 CC lib/keyring/keyring.o 00:04:17.434 CC lib/keyring/keyring_rpc.o 00:04:17.434 CC lib/notify/notify.o 00:04:17.434 CC lib/notify/notify_rpc.o 00:04:17.690 LIB libspdk_notify.a 00:04:17.690 SO libspdk_notify.so.6.0 00:04:17.690 LIB libspdk_keyring.a 00:04:17.690 SYMLINK libspdk_notify.so 00:04:17.690 LIB libspdk_trace.a 00:04:17.690 SO libspdk_keyring.so.2.0 00:04:17.690 SO libspdk_trace.so.11.0 00:04:17.946 SYMLINK libspdk_keyring.so 00:04:17.946 SYMLINK libspdk_trace.so 00:04:17.946 CC lib/sock/sock.o 00:04:17.946 CC lib/sock/sock_rpc.o 00:04:18.203 CC lib/thread/thread.o 00:04:18.203 CC lib/thread/iobuf.o 00:04:18.460 LIB libspdk_sock.a 00:04:18.460 SO libspdk_sock.so.10.0 00:04:18.460 SYMLINK libspdk_sock.so 00:04:18.717 CC lib/nvme/nvme_ctrlr_cmd.o 00:04:18.717 CC lib/nvme/nvme_ctrlr.o 00:04:18.717 CC lib/nvme/nvme_ns.o 00:04:18.717 CC lib/nvme/nvme_fabric.o 00:04:18.717 CC lib/nvme/nvme.o 00:04:18.717 CC lib/nvme/nvme_ns_cmd.o 00:04:18.717 CC lib/nvme/nvme_qpair.o 00:04:18.717 CC lib/nvme/nvme_pcie.o 00:04:18.717 CC lib/nvme/nvme_pcie_common.o 00:04:19.283 LIB libspdk_thread.a 00:04:19.283 CC lib/nvme/nvme_quirks.o 00:04:19.541 SO libspdk_thread.so.10.1 00:04:19.541 CC lib/nvme/nvme_transport.o 00:04:19.541 CC lib/nvme/nvme_discovery.o 00:04:19.541 SYMLINK libspdk_thread.so 00:04:19.541 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:04:19.541 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:04:19.541 CC lib/nvme/nvme_tcp.o 00:04:19.541 CC lib/nvme/nvme_opal.o 00:04:19.541 CC lib/nvme/nvme_io_msg.o 00:04:19.799 CC lib/nvme/nvme_poll_group.o 00:04:19.799 CC lib/nvme/nvme_zns.o 00:04:20.058 CC lib/accel/accel.o 00:04:20.058 CC lib/blob/blobstore.o 00:04:20.058 CC lib/init/json_config.o 00:04:20.058 CC lib/nvme/nvme_stubs.o 00:04:20.316 CC lib/virtio/virtio.o 00:04:20.316 
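The jsonrpc and rpc libraries built here back SPDK's management plane: a running SPDK application listens on a Unix-domain socket and scripts/rpc.py (part of the same repo) drives it. A usage sketch, assuming this build completes and the default socket path is used:

    cd /home/vagrant/spdk_repo/spdk

    # in one shell: start the target application produced by this build
    sudo build/bin/spdk_tgt &

    # in another: list every JSON-RPC method the target exposes
    sudo scripts/rpc.py rpc_get_methods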
CC lib/init/subsystem.o 00:04:20.316 CC lib/fsdev/fsdev.o 00:04:20.316 CC lib/accel/accel_rpc.o 00:04:20.316 CC lib/fsdev/fsdev_io.o 00:04:20.316 CC lib/init/subsystem_rpc.o 00:04:20.316 CC lib/init/rpc.o 00:04:20.575 CC lib/virtio/virtio_vhost_user.o 00:04:20.575 CC lib/virtio/virtio_vfio_user.o 00:04:20.575 LIB libspdk_init.a 00:04:20.575 SO libspdk_init.so.6.0 00:04:20.575 CC lib/virtio/virtio_pci.o 00:04:20.575 SYMLINK libspdk_init.so 00:04:20.575 CC lib/blob/request.o 00:04:20.833 CC lib/accel/accel_sw.o 00:04:20.833 CC lib/blob/zeroes.o 00:04:20.833 CC lib/blob/blob_bs_dev.o 00:04:20.833 CC lib/fsdev/fsdev_rpc.o 00:04:20.833 CC lib/nvme/nvme_auth.o 00:04:20.833 CC lib/nvme/nvme_cuse.o 00:04:20.833 LIB libspdk_virtio.a 00:04:20.833 CC lib/nvme/nvme_rdma.o 00:04:20.833 SO libspdk_virtio.so.7.0 00:04:20.833 CC lib/event/app.o 00:04:21.092 CC lib/event/reactor.o 00:04:21.092 SYMLINK libspdk_virtio.so 00:04:21.092 CC lib/event/log_rpc.o 00:04:21.092 LIB libspdk_fsdev.a 00:04:21.092 SO libspdk_fsdev.so.1.0 00:04:21.092 CC lib/event/app_rpc.o 00:04:21.092 SYMLINK libspdk_fsdev.so 00:04:21.092 CC lib/event/scheduler_static.o 00:04:21.092 LIB libspdk_accel.a 00:04:21.351 SO libspdk_accel.so.16.0 00:04:21.351 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:04:21.351 SYMLINK libspdk_accel.so 00:04:21.351 LIB libspdk_event.a 00:04:21.351 SO libspdk_event.so.14.0 00:04:21.611 CC lib/bdev/bdev.o 00:04:21.611 CC lib/bdev/part.o 00:04:21.611 CC lib/bdev/bdev_rpc.o 00:04:21.611 CC lib/bdev/bdev_zone.o 00:04:21.611 SYMLINK libspdk_event.so 00:04:21.611 CC lib/bdev/scsi_nvme.o 00:04:21.869 LIB libspdk_fuse_dispatcher.a 00:04:21.869 SO libspdk_fuse_dispatcher.so.1.0 00:04:21.869 LIB libspdk_nvme.a 00:04:21.869 SYMLINK libspdk_fuse_dispatcher.so 00:04:22.127 SO libspdk_nvme.so.14.0 00:04:22.385 SYMLINK libspdk_nvme.so 00:04:23.321 LIB libspdk_blob.a 00:04:23.321 SO libspdk_blob.so.11.0 00:04:23.321 SYMLINK libspdk_blob.so 00:04:23.321 CC lib/blobfs/tree.o 00:04:23.321 CC lib/blobfs/blobfs.o 00:04:23.321 CC lib/lvol/lvol.o 00:04:24.258 LIB libspdk_bdev.a 00:04:24.258 LIB libspdk_blobfs.a 00:04:24.258 SO libspdk_bdev.so.16.0 00:04:24.258 SO libspdk_blobfs.so.10.0 00:04:24.258 SYMLINK libspdk_blobfs.so 00:04:24.517 LIB libspdk_lvol.a 00:04:24.517 SYMLINK libspdk_bdev.so 00:04:24.517 SO libspdk_lvol.so.10.0 00:04:24.517 SYMLINK libspdk_lvol.so 00:04:24.517 CC lib/ublk/ublk.o 00:04:24.517 CC lib/ublk/ublk_rpc.o 00:04:24.517 CC lib/ftl/ftl_core.o 00:04:24.517 CC lib/ftl/ftl_init.o 00:04:24.517 CC lib/ftl/ftl_layout.o 00:04:24.517 CC lib/scsi/dev.o 00:04:24.517 CC lib/ftl/ftl_debug.o 00:04:24.517 CC lib/scsi/lun.o 00:04:24.517 CC lib/nvmf/ctrlr.o 00:04:24.517 CC lib/nbd/nbd.o 00:04:24.775 CC lib/ftl/ftl_io.o 00:04:24.775 CC lib/nbd/nbd_rpc.o 00:04:24.775 CC lib/ftl/ftl_sb.o 00:04:24.775 CC lib/ftl/ftl_l2p.o 00:04:24.775 CC lib/scsi/port.o 00:04:24.775 CC lib/ftl/ftl_l2p_flat.o 00:04:24.775 CC lib/ftl/ftl_nv_cache.o 00:04:24.775 CC lib/ftl/ftl_band.o 00:04:24.775 CC lib/ftl/ftl_band_ops.o 00:04:25.033 CC lib/nvmf/ctrlr_discovery.o 00:04:25.033 CC lib/ftl/ftl_writer.o 00:04:25.033 CC lib/scsi/scsi.o 00:04:25.033 LIB libspdk_nbd.a 00:04:25.033 SO libspdk_nbd.so.7.0 00:04:25.033 CC lib/ftl/ftl_rq.o 00:04:25.033 SYMLINK libspdk_nbd.so 00:04:25.033 CC lib/ftl/ftl_reloc.o 00:04:25.033 CC lib/scsi/scsi_bdev.o 00:04:25.033 CC lib/ftl/ftl_l2p_cache.o 00:04:25.292 CC lib/ftl/ftl_p2l.o 00:04:25.292 LIB libspdk_ublk.a 00:04:25.292 CC lib/scsi/scsi_pr.o 00:04:25.292 SO libspdk_ublk.so.3.0 00:04:25.292 CC lib/ftl/ftl_p2l_log.o 
00:04:25.292 SYMLINK libspdk_ublk.so 00:04:25.292 CC lib/nvmf/ctrlr_bdev.o 00:04:25.292 CC lib/ftl/mngt/ftl_mngt.o 00:04:25.292 CC lib/scsi/scsi_rpc.o 00:04:25.292 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:04:25.551 CC lib/scsi/task.o 00:04:25.551 CC lib/nvmf/subsystem.o 00:04:25.551 CC lib/nvmf/nvmf.o 00:04:25.551 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:04:25.551 CC lib/ftl/mngt/ftl_mngt_startup.o 00:04:25.551 CC lib/ftl/mngt/ftl_mngt_md.o 00:04:25.551 LIB libspdk_scsi.a 00:04:25.810 CC lib/ftl/mngt/ftl_mngt_misc.o 00:04:25.810 CC lib/nvmf/nvmf_rpc.o 00:04:25.810 CC lib/nvmf/transport.o 00:04:25.810 SO libspdk_scsi.so.9.0 00:04:25.810 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:04:25.810 SYMLINK libspdk_scsi.so 00:04:25.810 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:04:25.810 CC lib/ftl/mngt/ftl_mngt_band.o 00:04:26.068 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:04:26.068 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:04:26.068 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:04:26.068 CC lib/iscsi/conn.o 00:04:26.068 CC lib/vhost/vhost.o 00:04:26.068 CC lib/vhost/vhost_rpc.o 00:04:26.068 CC lib/vhost/vhost_scsi.o 00:04:26.326 CC lib/iscsi/init_grp.o 00:04:26.326 CC lib/iscsi/iscsi.o 00:04:26.326 CC lib/iscsi/param.o 00:04:26.326 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:04:26.326 CC lib/ftl/utils/ftl_conf.o 00:04:26.585 CC lib/ftl/utils/ftl_md.o 00:04:26.585 CC lib/ftl/utils/ftl_mempool.o 00:04:26.585 CC lib/vhost/vhost_blk.o 00:04:26.585 CC lib/vhost/rte_vhost_user.o 00:04:26.585 CC lib/ftl/utils/ftl_bitmap.o 00:04:26.585 CC lib/ftl/utils/ftl_property.o 00:04:26.585 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:04:26.585 CC lib/nvmf/tcp.o 00:04:26.843 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:04:26.843 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:04:26.843 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:04:26.843 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:04:26.843 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:04:26.843 CC lib/iscsi/portal_grp.o 00:04:26.843 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:04:27.102 CC lib/ftl/upgrade/ftl_sb_v3.o 00:04:27.102 CC lib/ftl/upgrade/ftl_sb_v5.o 00:04:27.102 CC lib/ftl/nvc/ftl_nvc_dev.o 00:04:27.102 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:04:27.102 CC lib/iscsi/tgt_node.o 00:04:27.102 CC lib/iscsi/iscsi_subsystem.o 00:04:27.102 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:04:27.102 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:04:27.359 CC lib/ftl/base/ftl_base_dev.o 00:04:27.359 CC lib/ftl/base/ftl_base_bdev.o 00:04:27.359 CC lib/ftl/ftl_trace.o 00:04:27.359 CC lib/nvmf/stubs.o 00:04:27.359 CC lib/nvmf/mdns_server.o 00:04:27.359 CC lib/nvmf/rdma.o 00:04:27.359 CC lib/nvmf/auth.o 00:04:27.359 LIB libspdk_vhost.a 00:04:27.359 LIB libspdk_ftl.a 00:04:27.617 SO libspdk_vhost.so.8.0 00:04:27.617 CC lib/iscsi/iscsi_rpc.o 00:04:27.617 SYMLINK libspdk_vhost.so 00:04:27.617 CC lib/iscsi/task.o 00:04:27.617 SO libspdk_ftl.so.9.0 00:04:27.874 SYMLINK libspdk_ftl.so 00:04:27.874 LIB libspdk_iscsi.a 00:04:28.132 SO libspdk_iscsi.so.8.0 00:04:28.132 SYMLINK libspdk_iscsi.so 00:04:29.065 LIB libspdk_nvmf.a 00:04:29.065 SO libspdk_nvmf.so.19.0 00:04:29.323 SYMLINK libspdk_nvmf.so 00:04:29.581 CC module/env_dpdk/env_dpdk_rpc.o 00:04:29.581 CC module/scheduler/gscheduler/gscheduler.o 00:04:29.581 CC module/scheduler/dynamic/scheduler_dynamic.o 00:04:29.581 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:04:29.581 CC module/keyring/linux/keyring.o 00:04:29.581 CC module/blob/bdev/blob_bdev.o 00:04:29.581 CC module/keyring/file/keyring.o 00:04:29.581 CC module/sock/posix/posix.o 00:04:29.581 CC module/fsdev/aio/fsdev_aio.o 00:04:29.581 CC 
module/accel/error/accel_error.o 00:04:29.581 LIB libspdk_env_dpdk_rpc.a 00:04:29.841 SO libspdk_env_dpdk_rpc.so.6.0 00:04:29.841 LIB libspdk_scheduler_gscheduler.a 00:04:29.841 LIB libspdk_scheduler_dpdk_governor.a 00:04:29.841 SO libspdk_scheduler_gscheduler.so.4.0 00:04:29.841 CC module/keyring/linux/keyring_rpc.o 00:04:29.841 SO libspdk_scheduler_dpdk_governor.so.4.0 00:04:29.841 SYMLINK libspdk_env_dpdk_rpc.so 00:04:29.841 CC module/keyring/file/keyring_rpc.o 00:04:29.841 CC module/fsdev/aio/fsdev_aio_rpc.o 00:04:29.841 SYMLINK libspdk_scheduler_gscheduler.so 00:04:29.841 CC module/fsdev/aio/linux_aio_mgr.o 00:04:29.841 LIB libspdk_scheduler_dynamic.a 00:04:29.841 SYMLINK libspdk_scheduler_dpdk_governor.so 00:04:29.841 CC module/accel/error/accel_error_rpc.o 00:04:29.841 SO libspdk_scheduler_dynamic.so.4.0 00:04:29.841 LIB libspdk_keyring_linux.a 00:04:29.841 SYMLINK libspdk_scheduler_dynamic.so 00:04:29.841 SO libspdk_keyring_linux.so.1.0 00:04:29.841 LIB libspdk_blob_bdev.a 00:04:29.841 LIB libspdk_keyring_file.a 00:04:29.841 SO libspdk_blob_bdev.so.11.0 00:04:30.099 SO libspdk_keyring_file.so.2.0 00:04:30.099 LIB libspdk_accel_error.a 00:04:30.099 CC module/accel/ioat/accel_ioat.o 00:04:30.099 SYMLINK libspdk_keyring_linux.so 00:04:30.099 CC module/accel/ioat/accel_ioat_rpc.o 00:04:30.099 SYMLINK libspdk_blob_bdev.so 00:04:30.099 SO libspdk_accel_error.so.2.0 00:04:30.099 SYMLINK libspdk_keyring_file.so 00:04:30.099 SYMLINK libspdk_accel_error.so 00:04:30.099 CC module/accel/dsa/accel_dsa.o 00:04:30.099 CC module/accel/dsa/accel_dsa_rpc.o 00:04:30.099 CC module/accel/iaa/accel_iaa.o 00:04:30.099 CC module/accel/iaa/accel_iaa_rpc.o 00:04:30.099 LIB libspdk_accel_ioat.a 00:04:30.099 SO libspdk_accel_ioat.so.6.0 00:04:30.099 LIB libspdk_fsdev_aio.a 00:04:30.099 CC module/bdev/error/vbdev_error.o 00:04:30.099 CC module/bdev/delay/vbdev_delay.o 00:04:30.358 SO libspdk_fsdev_aio.so.1.0 00:04:30.358 CC module/blobfs/bdev/blobfs_bdev.o 00:04:30.358 CC module/bdev/delay/vbdev_delay_rpc.o 00:04:30.358 SYMLINK libspdk_accel_ioat.so 00:04:30.358 LIB libspdk_accel_iaa.a 00:04:30.358 SYMLINK libspdk_fsdev_aio.so 00:04:30.358 SO libspdk_accel_iaa.so.3.0 00:04:30.358 CC module/bdev/gpt/gpt.o 00:04:30.358 LIB libspdk_accel_dsa.a 00:04:30.358 SO libspdk_accel_dsa.so.5.0 00:04:30.358 SYMLINK libspdk_accel_iaa.so 00:04:30.358 CC module/bdev/lvol/vbdev_lvol.o 00:04:30.358 SYMLINK libspdk_accel_dsa.so 00:04:30.358 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:04:30.358 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:04:30.358 CC module/bdev/malloc/bdev_malloc.o 00:04:30.358 CC module/bdev/error/vbdev_error_rpc.o 00:04:30.358 LIB libspdk_sock_posix.a 00:04:30.358 SO libspdk_sock_posix.so.6.0 00:04:30.618 CC module/bdev/gpt/vbdev_gpt.o 00:04:30.618 CC module/bdev/null/bdev_null.o 00:04:30.618 LIB libspdk_bdev_error.a 00:04:30.618 CC module/bdev/nvme/bdev_nvme.o 00:04:30.618 SYMLINK libspdk_sock_posix.so 00:04:30.618 CC module/bdev/nvme/bdev_nvme_rpc.o 00:04:30.618 SO libspdk_bdev_error.so.6.0 00:04:30.618 LIB libspdk_blobfs_bdev.a 00:04:30.618 LIB libspdk_bdev_delay.a 00:04:30.618 SO libspdk_blobfs_bdev.so.6.0 00:04:30.618 SYMLINK libspdk_bdev_error.so 00:04:30.618 SO libspdk_bdev_delay.so.6.0 00:04:30.618 CC module/bdev/nvme/nvme_rpc.o 00:04:30.618 SYMLINK libspdk_blobfs_bdev.so 00:04:30.618 SYMLINK libspdk_bdev_delay.so 00:04:30.618 CC module/bdev/nvme/bdev_mdns_client.o 00:04:30.876 CC module/bdev/nvme/vbdev_opal.o 00:04:30.876 CC module/bdev/null/bdev_null_rpc.o 00:04:30.876 LIB libspdk_bdev_gpt.a 00:04:30.876 
CC module/bdev/malloc/bdev_malloc_rpc.o 00:04:30.876 SO libspdk_bdev_gpt.so.6.0 00:04:30.876 CC module/bdev/passthru/vbdev_passthru.o 00:04:30.876 LIB libspdk_bdev_lvol.a 00:04:30.876 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:04:30.876 SO libspdk_bdev_lvol.so.6.0 00:04:30.876 SYMLINK libspdk_bdev_gpt.so 00:04:30.876 SYMLINK libspdk_bdev_lvol.so 00:04:30.876 LIB libspdk_bdev_null.a 00:04:30.876 LIB libspdk_bdev_malloc.a 00:04:30.876 SO libspdk_bdev_null.so.6.0 00:04:30.876 SO libspdk_bdev_malloc.so.6.0 00:04:30.876 CC module/bdev/raid/bdev_raid.o 00:04:31.135 CC module/bdev/split/vbdev_split.o 00:04:31.135 CC module/bdev/nvme/vbdev_opal_rpc.o 00:04:31.135 LIB libspdk_bdev_passthru.a 00:04:31.135 SYMLINK libspdk_bdev_null.so 00:04:31.135 SYMLINK libspdk_bdev_malloc.so 00:04:31.135 SO libspdk_bdev_passthru.so.6.0 00:04:31.135 CC module/bdev/zone_block/vbdev_zone_block.o 00:04:31.135 CC module/bdev/xnvme/bdev_xnvme.o 00:04:31.135 SYMLINK libspdk_bdev_passthru.so 00:04:31.135 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:04:31.135 CC module/bdev/aio/bdev_aio.o 00:04:31.135 CC module/bdev/ftl/bdev_ftl.o 00:04:31.135 CC module/bdev/split/vbdev_split_rpc.o 00:04:31.394 CC module/bdev/ftl/bdev_ftl_rpc.o 00:04:31.394 CC module/bdev/xnvme/bdev_xnvme_rpc.o 00:04:31.394 CC module/bdev/iscsi/bdev_iscsi.o 00:04:31.394 LIB libspdk_bdev_split.a 00:04:31.394 CC module/bdev/virtio/bdev_virtio_scsi.o 00:04:31.394 SO libspdk_bdev_split.so.6.0 00:04:31.394 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:04:31.394 SYMLINK libspdk_bdev_split.so 00:04:31.394 CC module/bdev/raid/bdev_raid_rpc.o 00:04:31.394 CC module/bdev/raid/bdev_raid_sb.o 00:04:31.394 LIB libspdk_bdev_xnvme.a 00:04:31.394 LIB libspdk_bdev_ftl.a 00:04:31.394 SO libspdk_bdev_xnvme.so.3.0 00:04:31.394 CC module/bdev/aio/bdev_aio_rpc.o 00:04:31.653 SO libspdk_bdev_ftl.so.6.0 00:04:31.653 LIB libspdk_bdev_zone_block.a 00:04:31.653 SYMLINK libspdk_bdev_xnvme.so 00:04:31.653 SO libspdk_bdev_zone_block.so.6.0 00:04:31.653 CC module/bdev/virtio/bdev_virtio_blk.o 00:04:31.653 SYMLINK libspdk_bdev_ftl.so 00:04:31.653 CC module/bdev/virtio/bdev_virtio_rpc.o 00:04:31.653 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:04:31.653 CC module/bdev/raid/raid0.o 00:04:31.653 SYMLINK libspdk_bdev_zone_block.so 00:04:31.653 CC module/bdev/raid/raid1.o 00:04:31.653 LIB libspdk_bdev_aio.a 00:04:31.653 SO libspdk_bdev_aio.so.6.0 00:04:31.653 LIB libspdk_bdev_iscsi.a 00:04:31.653 SO libspdk_bdev_iscsi.so.6.0 00:04:31.653 CC module/bdev/raid/concat.o 00:04:31.653 SYMLINK libspdk_bdev_aio.so 00:04:31.653 SYMLINK libspdk_bdev_iscsi.so 00:04:31.913 LIB libspdk_bdev_virtio.a 00:04:31.913 SO libspdk_bdev_virtio.so.6.0 00:04:31.913 SYMLINK libspdk_bdev_virtio.so 00:04:31.913 LIB libspdk_bdev_raid.a 00:04:32.246 SO libspdk_bdev_raid.so.6.0 00:04:32.246 SYMLINK libspdk_bdev_raid.so 00:04:32.842 LIB libspdk_bdev_nvme.a 00:04:33.102 SO libspdk_bdev_nvme.so.7.0 00:04:33.102 SYMLINK libspdk_bdev_nvme.so 00:04:33.361 CC module/event/subsystems/sock/sock.o 00:04:33.361 CC module/event/subsystems/iobuf/iobuf.o 00:04:33.361 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:04:33.361 CC module/event/subsystems/keyring/keyring.o 00:04:33.361 CC module/event/subsystems/vmd/vmd_rpc.o 00:04:33.361 CC module/event/subsystems/vmd/vmd.o 00:04:33.361 CC module/event/subsystems/scheduler/scheduler.o 00:04:33.361 CC module/event/subsystems/fsdev/fsdev.o 00:04:33.361 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:04:33.619 LIB libspdk_event_scheduler.a 00:04:33.619 LIB libspdk_event_keyring.a 
00:04:33.619 LIB libspdk_event_fsdev.a 00:04:33.619 LIB libspdk_event_sock.a 00:04:33.619 SO libspdk_event_scheduler.so.4.0 00:04:33.619 LIB libspdk_event_vmd.a 00:04:33.619 SO libspdk_event_keyring.so.1.0 00:04:33.619 LIB libspdk_event_iobuf.a 00:04:33.619 LIB libspdk_event_vhost_blk.a 00:04:33.619 SO libspdk_event_fsdev.so.1.0 00:04:33.619 SO libspdk_event_sock.so.5.0 00:04:33.619 SO libspdk_event_vmd.so.6.0 00:04:33.619 SO libspdk_event_vhost_blk.so.3.0 00:04:33.619 SO libspdk_event_iobuf.so.3.0 00:04:33.619 SYMLINK libspdk_event_keyring.so 00:04:33.619 SYMLINK libspdk_event_scheduler.so 00:04:33.619 SYMLINK libspdk_event_fsdev.so 00:04:33.619 SYMLINK libspdk_event_sock.so 00:04:33.619 SYMLINK libspdk_event_vmd.so 00:04:33.619 SYMLINK libspdk_event_vhost_blk.so 00:04:33.619 SYMLINK libspdk_event_iobuf.so 00:04:33.880 CC module/event/subsystems/accel/accel.o 00:04:33.880 LIB libspdk_event_accel.a 00:04:34.140 SO libspdk_event_accel.so.6.0 00:04:34.140 SYMLINK libspdk_event_accel.so 00:04:34.401 CC module/event/subsystems/bdev/bdev.o 00:04:34.401 LIB libspdk_event_bdev.a 00:04:34.401 SO libspdk_event_bdev.so.6.0 00:04:34.660 SYMLINK libspdk_event_bdev.so 00:04:34.660 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:04:34.660 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:04:34.660 CC module/event/subsystems/ublk/ublk.o 00:04:34.660 CC module/event/subsystems/scsi/scsi.o 00:04:34.660 CC module/event/subsystems/nbd/nbd.o 00:04:34.920 LIB libspdk_event_scsi.a 00:04:34.920 LIB libspdk_event_ublk.a 00:04:34.920 LIB libspdk_event_nbd.a 00:04:34.920 SO libspdk_event_scsi.so.6.0 00:04:34.920 SO libspdk_event_ublk.so.3.0 00:04:34.920 SO libspdk_event_nbd.so.6.0 00:04:34.920 LIB libspdk_event_nvmf.a 00:04:34.920 SYMLINK libspdk_event_ublk.so 00:04:34.920 SYMLINK libspdk_event_scsi.so 00:04:34.920 SYMLINK libspdk_event_nbd.so 00:04:34.920 SO libspdk_event_nvmf.so.6.0 00:04:34.920 SYMLINK libspdk_event_nvmf.so 00:04:35.181 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:04:35.181 CC module/event/subsystems/iscsi/iscsi.o 00:04:35.181 LIB libspdk_event_vhost_scsi.a 00:04:35.181 SO libspdk_event_vhost_scsi.so.3.0 00:04:35.181 LIB libspdk_event_iscsi.a 00:04:35.181 SO libspdk_event_iscsi.so.6.0 00:04:35.181 SYMLINK libspdk_event_vhost_scsi.so 00:04:35.181 SYMLINK libspdk_event_iscsi.so 00:04:35.440 SO libspdk.so.6.0 00:04:35.440 SYMLINK libspdk.so 00:04:35.698 CXX app/trace/trace.o 00:04:35.698 TEST_HEADER include/spdk/accel.h 00:04:35.698 TEST_HEADER include/spdk/accel_module.h 00:04:35.698 TEST_HEADER include/spdk/assert.h 00:04:35.698 TEST_HEADER include/spdk/barrier.h 00:04:35.698 TEST_HEADER include/spdk/base64.h 00:04:35.698 CC app/trace_record/trace_record.o 00:04:35.698 TEST_HEADER include/spdk/bdev.h 00:04:35.698 TEST_HEADER include/spdk/bdev_module.h 00:04:35.698 TEST_HEADER include/spdk/bdev_zone.h 00:04:35.698 TEST_HEADER include/spdk/bit_array.h 00:04:35.698 TEST_HEADER include/spdk/bit_pool.h 00:04:35.698 TEST_HEADER include/spdk/blob_bdev.h 00:04:35.698 TEST_HEADER include/spdk/blobfs_bdev.h 00:04:35.698 TEST_HEADER include/spdk/blobfs.h 00:04:35.698 TEST_HEADER include/spdk/blob.h 00:04:35.698 TEST_HEADER include/spdk/conf.h 00:04:35.698 TEST_HEADER include/spdk/config.h 00:04:35.698 TEST_HEADER include/spdk/cpuset.h 00:04:35.698 CC app/nvmf_tgt/nvmf_main.o 00:04:35.698 TEST_HEADER include/spdk/crc16.h 00:04:35.698 TEST_HEADER include/spdk/crc32.h 00:04:35.698 TEST_HEADER include/spdk/crc64.h 00:04:35.698 TEST_HEADER include/spdk/dif.h 00:04:35.698 TEST_HEADER include/spdk/dma.h 
00:04:35.698 TEST_HEADER include/spdk/endian.h 00:04:35.698 TEST_HEADER include/spdk/env_dpdk.h 00:04:35.698 TEST_HEADER include/spdk/env.h 00:04:35.698 TEST_HEADER include/spdk/event.h 00:04:35.698 TEST_HEADER include/spdk/fd_group.h 00:04:35.698 TEST_HEADER include/spdk/fd.h 00:04:35.698 TEST_HEADER include/spdk/file.h 00:04:35.698 TEST_HEADER include/spdk/fsdev.h 00:04:35.698 TEST_HEADER include/spdk/fsdev_module.h 00:04:35.698 TEST_HEADER include/spdk/ftl.h 00:04:35.698 TEST_HEADER include/spdk/fuse_dispatcher.h 00:04:35.698 TEST_HEADER include/spdk/gpt_spec.h 00:04:35.698 CC examples/ioat/perf/perf.o 00:04:35.698 TEST_HEADER include/spdk/hexlify.h 00:04:35.698 TEST_HEADER include/spdk/histogram_data.h 00:04:35.698 TEST_HEADER include/spdk/idxd.h 00:04:35.698 CC test/thread/poller_perf/poller_perf.o 00:04:35.698 TEST_HEADER include/spdk/idxd_spec.h 00:04:35.698 TEST_HEADER include/spdk/init.h 00:04:35.698 TEST_HEADER include/spdk/ioat.h 00:04:35.698 TEST_HEADER include/spdk/ioat_spec.h 00:04:35.698 TEST_HEADER include/spdk/iscsi_spec.h 00:04:35.698 TEST_HEADER include/spdk/json.h 00:04:35.698 TEST_HEADER include/spdk/jsonrpc.h 00:04:35.698 CC examples/util/zipf/zipf.o 00:04:35.698 TEST_HEADER include/spdk/keyring.h 00:04:35.698 TEST_HEADER include/spdk/keyring_module.h 00:04:35.698 TEST_HEADER include/spdk/likely.h 00:04:35.698 TEST_HEADER include/spdk/log.h 00:04:35.698 TEST_HEADER include/spdk/lvol.h 00:04:35.698 TEST_HEADER include/spdk/md5.h 00:04:35.698 TEST_HEADER include/spdk/memory.h 00:04:35.698 TEST_HEADER include/spdk/mmio.h 00:04:35.698 TEST_HEADER include/spdk/nbd.h 00:04:35.698 TEST_HEADER include/spdk/net.h 00:04:35.698 TEST_HEADER include/spdk/notify.h 00:04:35.698 CC test/app/bdev_svc/bdev_svc.o 00:04:35.698 TEST_HEADER include/spdk/nvme.h 00:04:35.698 TEST_HEADER include/spdk/nvme_intel.h 00:04:35.698 CC test/dma/test_dma/test_dma.o 00:04:35.698 TEST_HEADER include/spdk/nvme_ocssd.h 00:04:35.698 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:04:35.698 TEST_HEADER include/spdk/nvme_spec.h 00:04:35.698 TEST_HEADER include/spdk/nvme_zns.h 00:04:35.698 TEST_HEADER include/spdk/nvmf_cmd.h 00:04:35.698 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:04:35.698 TEST_HEADER include/spdk/nvmf.h 00:04:35.698 TEST_HEADER include/spdk/nvmf_spec.h 00:04:35.698 TEST_HEADER include/spdk/nvmf_transport.h 00:04:35.698 TEST_HEADER include/spdk/opal.h 00:04:35.698 CC test/env/mem_callbacks/mem_callbacks.o 00:04:35.698 TEST_HEADER include/spdk/opal_spec.h 00:04:35.698 TEST_HEADER include/spdk/pci_ids.h 00:04:35.698 TEST_HEADER include/spdk/pipe.h 00:04:35.698 TEST_HEADER include/spdk/queue.h 00:04:35.698 TEST_HEADER include/spdk/reduce.h 00:04:35.698 TEST_HEADER include/spdk/rpc.h 00:04:35.698 TEST_HEADER include/spdk/scheduler.h 00:04:35.698 TEST_HEADER include/spdk/scsi.h 00:04:35.698 TEST_HEADER include/spdk/scsi_spec.h 00:04:35.698 TEST_HEADER include/spdk/sock.h 00:04:35.698 TEST_HEADER include/spdk/stdinc.h 00:04:35.698 TEST_HEADER include/spdk/string.h 00:04:35.698 TEST_HEADER include/spdk/thread.h 00:04:35.698 TEST_HEADER include/spdk/trace.h 00:04:35.698 TEST_HEADER include/spdk/trace_parser.h 00:04:35.698 TEST_HEADER include/spdk/tree.h 00:04:35.698 TEST_HEADER include/spdk/ublk.h 00:04:35.698 TEST_HEADER include/spdk/util.h 00:04:35.698 TEST_HEADER include/spdk/uuid.h 00:04:35.698 TEST_HEADER include/spdk/version.h 00:04:35.698 TEST_HEADER include/spdk/vfio_user_pci.h 00:04:35.698 TEST_HEADER include/spdk/vfio_user_spec.h 00:04:35.698 TEST_HEADER include/spdk/vhost.h 00:04:35.698 
TEST_HEADER include/spdk/vmd.h 00:04:35.698 TEST_HEADER include/spdk/xor.h 00:04:35.698 TEST_HEADER include/spdk/zipf.h 00:04:35.698 CXX test/cpp_headers/accel.o 00:04:35.698 LINK zipf 00:04:35.698 LINK nvmf_tgt 00:04:35.957 LINK poller_perf 00:04:35.957 LINK spdk_trace_record 00:04:35.957 LINK bdev_svc 00:04:35.957 LINK mem_callbacks 00:04:35.957 LINK ioat_perf 00:04:35.957 CXX test/cpp_headers/accel_module.o 00:04:35.957 LINK spdk_trace 00:04:35.957 CC examples/ioat/verify/verify.o 00:04:35.957 CC test/env/vtophys/vtophys.o 00:04:35.957 CC app/iscsi_tgt/iscsi_tgt.o 00:04:35.957 CC examples/interrupt_tgt/interrupt_tgt.o 00:04:35.957 CXX test/cpp_headers/assert.o 00:04:36.215 CC examples/thread/thread/thread_ex.o 00:04:36.215 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:04:36.215 CC examples/sock/hello_world/hello_sock.o 00:04:36.215 LINK vtophys 00:04:36.215 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:04:36.215 LINK verify 00:04:36.215 LINK test_dma 00:04:36.215 CXX test/cpp_headers/barrier.o 00:04:36.215 LINK iscsi_tgt 00:04:36.215 LINK interrupt_tgt 00:04:36.215 CXX test/cpp_headers/base64.o 00:04:36.215 LINK thread 00:04:36.215 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:04:36.474 LINK hello_sock 00:04:36.474 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:04:36.474 CXX test/cpp_headers/bdev.o 00:04:36.474 LINK env_dpdk_post_init 00:04:36.474 CC examples/vmd/lsvmd/lsvmd.o 00:04:36.474 CC examples/idxd/perf/perf.o 00:04:36.474 CC app/spdk_tgt/spdk_tgt.o 00:04:36.474 CC examples/vmd/led/led.o 00:04:36.474 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:04:36.474 LINK nvme_fuzz 00:04:36.474 CXX test/cpp_headers/bdev_module.o 00:04:36.474 LINK lsvmd 00:04:36.732 CC test/env/memory/memory_ut.o 00:04:36.732 CC examples/accel/perf/accel_perf.o 00:04:36.732 LINK led 00:04:36.732 LINK spdk_tgt 00:04:36.732 CXX test/cpp_headers/bdev_zone.o 00:04:36.732 CC test/env/pci/pci_ut.o 00:04:36.732 LINK idxd_perf 00:04:36.732 CC examples/blob/hello_world/hello_blob.o 00:04:36.999 CC examples/blob/cli/blobcli.o 00:04:36.999 CXX test/cpp_headers/bit_array.o 00:04:36.999 CC app/spdk_lspci/spdk_lspci.o 00:04:36.999 LINK vhost_fuzz 00:04:36.999 CXX test/cpp_headers/bit_pool.o 00:04:36.999 LINK hello_blob 00:04:36.999 CC examples/nvme/hello_world/hello_world.o 00:04:36.999 CXX test/cpp_headers/blob_bdev.o 00:04:36.999 LINK spdk_lspci 00:04:36.999 LINK pci_ut 00:04:37.264 LINK accel_perf 00:04:37.264 CXX test/cpp_headers/blobfs_bdev.o 00:04:37.264 CXX test/cpp_headers/blobfs.o 00:04:37.264 CC app/spdk_nvme_perf/perf.o 00:04:37.264 CC examples/nvme/reconnect/reconnect.o 00:04:37.264 LINK hello_world 00:04:37.264 CXX test/cpp_headers/blob.o 00:04:37.264 CXX test/cpp_headers/conf.o 00:04:37.264 LINK memory_ut 00:04:37.522 LINK blobcli 00:04:37.522 CC examples/nvme/nvme_manage/nvme_manage.o 00:04:37.522 CXX test/cpp_headers/config.o 00:04:37.522 CC examples/nvme/arbitration/arbitration.o 00:04:37.522 CC examples/nvme/hotplug/hotplug.o 00:04:37.522 CXX test/cpp_headers/cpuset.o 00:04:37.522 CC examples/nvme/cmb_copy/cmb_copy.o 00:04:37.522 LINK iscsi_fuzz 00:04:37.522 CXX test/cpp_headers/crc16.o 00:04:37.522 LINK reconnect 00:04:37.781 LINK cmb_copy 00:04:37.781 CXX test/cpp_headers/crc32.o 00:04:37.781 CC examples/fsdev/hello_world/hello_fsdev.o 00:04:37.781 CC app/spdk_nvme_identify/identify.o 00:04:37.781 LINK hotplug 00:04:37.781 CC test/app/histogram_perf/histogram_perf.o 00:04:37.781 LINK arbitration 00:04:37.781 CC examples/nvme/abort/abort.o 00:04:37.781 CXX test/cpp_headers/crc64.o 00:04:37.781 LINK 
nvme_manage 00:04:37.781 LINK histogram_perf 00:04:37.781 LINK hello_fsdev 00:04:37.781 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:04:38.040 CXX test/cpp_headers/dif.o 00:04:38.040 CXX test/cpp_headers/dma.o 00:04:38.040 CC examples/bdev/hello_world/hello_bdev.o 00:04:38.040 CC test/app/jsoncat/jsoncat.o 00:04:38.040 CC test/rpc_client/rpc_client_test.o 00:04:38.040 LINK pmr_persistence 00:04:38.040 CXX test/cpp_headers/endian.o 00:04:38.040 CXX test/cpp_headers/env_dpdk.o 00:04:38.040 LINK spdk_nvme_perf 00:04:38.040 LINK jsoncat 00:04:38.040 LINK hello_bdev 00:04:38.040 LINK rpc_client_test 00:04:38.298 LINK abort 00:04:38.298 CC test/app/stub/stub.o 00:04:38.298 CXX test/cpp_headers/env.o 00:04:38.298 CXX test/cpp_headers/event.o 00:04:38.298 CXX test/cpp_headers/fd_group.o 00:04:38.298 CXX test/cpp_headers/fd.o 00:04:38.298 CC app/spdk_nvme_discover/discovery_aer.o 00:04:38.298 LINK stub 00:04:38.298 CC test/accel/dif/dif.o 00:04:38.298 CC test/blobfs/mkfs/mkfs.o 00:04:38.298 CC examples/bdev/bdevperf/bdevperf.o 00:04:38.298 CXX test/cpp_headers/file.o 00:04:38.557 CC test/event/event_perf/event_perf.o 00:04:38.557 CXX test/cpp_headers/fsdev.o 00:04:38.557 LINK spdk_nvme_identify 00:04:38.557 LINK spdk_nvme_discover 00:04:38.557 CC test/nvme/aer/aer.o 00:04:38.557 CC test/lvol/esnap/esnap.o 00:04:38.557 LINK mkfs 00:04:38.557 CC test/nvme/reset/reset.o 00:04:38.557 LINK event_perf 00:04:38.557 CXX test/cpp_headers/fsdev_module.o 00:04:38.557 CC app/spdk_top/spdk_top.o 00:04:38.557 CC test/event/reactor/reactor.o 00:04:38.815 CXX test/cpp_headers/ftl.o 00:04:38.815 CXX test/cpp_headers/fuse_dispatcher.o 00:04:38.815 LINK aer 00:04:38.815 LINK reactor 00:04:38.815 LINK reset 00:04:38.815 CC test/event/reactor_perf/reactor_perf.o 00:04:38.815 CXX test/cpp_headers/gpt_spec.o 00:04:38.815 CC test/event/app_repeat/app_repeat.o 00:04:38.815 LINK reactor_perf 00:04:39.073 CC test/event/scheduler/scheduler.o 00:04:39.073 CC test/nvme/sgl/sgl.o 00:04:39.073 CXX test/cpp_headers/hexlify.o 00:04:39.073 CC app/vhost/vhost.o 00:04:39.073 LINK app_repeat 00:04:39.073 LINK dif 00:04:39.073 CXX test/cpp_headers/histogram_data.o 00:04:39.073 CC test/nvme/e2edp/nvme_dp.o 00:04:39.073 LINK scheduler 00:04:39.073 CXX test/cpp_headers/idxd.o 00:04:39.073 LINK vhost 00:04:39.073 LINK bdevperf 00:04:39.073 CXX test/cpp_headers/idxd_spec.o 00:04:39.073 LINK sgl 00:04:39.331 CC app/spdk_dd/spdk_dd.o 00:04:39.331 CXX test/cpp_headers/init.o 00:04:39.331 CXX test/cpp_headers/ioat.o 00:04:39.331 LINK nvme_dp 00:04:39.331 CC app/fio/nvme/fio_plugin.o 00:04:39.331 CC test/nvme/overhead/overhead.o 00:04:39.331 CXX test/cpp_headers/ioat_spec.o 00:04:39.589 CC test/bdev/bdevio/bdevio.o 00:04:39.589 CXX test/cpp_headers/iscsi_spec.o 00:04:39.589 CC examples/nvmf/nvmf/nvmf.o 00:04:39.589 CC app/fio/bdev/fio_plugin.o 00:04:39.589 LINK spdk_top 00:04:39.589 CXX test/cpp_headers/json.o 00:04:39.589 CXX test/cpp_headers/jsonrpc.o 00:04:39.589 CXX test/cpp_headers/keyring.o 00:04:39.589 LINK spdk_dd 00:04:39.589 LINK overhead 00:04:39.847 CXX test/cpp_headers/keyring_module.o 00:04:39.847 CXX test/cpp_headers/likely.o 00:04:39.847 CXX test/cpp_headers/log.o 00:04:39.847 LINK nvmf 00:04:39.847 CXX test/cpp_headers/lvol.o 00:04:39.847 LINK bdevio 00:04:39.847 CXX test/cpp_headers/md5.o 00:04:39.847 CXX test/cpp_headers/memory.o 00:04:39.847 CXX test/cpp_headers/mmio.o 00:04:39.847 CC test/nvme/err_injection/err_injection.o 00:04:39.847 LINK spdk_nvme 00:04:39.847 CC test/nvme/startup/startup.o 00:04:39.847 CC 
test/nvme/reserve/reserve.o 00:04:40.106 LINK spdk_bdev 00:04:40.106 CC test/nvme/simple_copy/simple_copy.o 00:04:40.106 CXX test/cpp_headers/nbd.o 00:04:40.106 LINK err_injection 00:04:40.106 CC test/nvme/connect_stress/connect_stress.o 00:04:40.106 CXX test/cpp_headers/net.o 00:04:40.106 CC test/nvme/boot_partition/boot_partition.o 00:04:40.106 CC test/nvme/compliance/nvme_compliance.o 00:04:40.106 LINK startup 00:04:40.106 CXX test/cpp_headers/notify.o 00:04:40.106 LINK reserve 00:04:40.106 LINK connect_stress 00:04:40.106 CC test/nvme/fused_ordering/fused_ordering.o 00:04:40.106 LINK boot_partition 00:04:40.365 CXX test/cpp_headers/nvme.o 00:04:40.365 CC test/nvme/doorbell_aers/doorbell_aers.o 00:04:40.365 LINK simple_copy 00:04:40.365 CC test/nvme/fdp/fdp.o 00:04:40.365 CXX test/cpp_headers/nvme_intel.o 00:04:40.365 CC test/nvme/cuse/cuse.o 00:04:40.365 CXX test/cpp_headers/nvme_ocssd.o 00:04:40.365 LINK nvme_compliance 00:04:40.365 CXX test/cpp_headers/nvme_ocssd_spec.o 00:04:40.365 CXX test/cpp_headers/nvme_spec.o 00:04:40.365 LINK fused_ordering 00:04:40.365 CXX test/cpp_headers/nvme_zns.o 00:04:40.365 LINK doorbell_aers 00:04:40.365 CXX test/cpp_headers/nvmf_cmd.o 00:04:40.365 CXX test/cpp_headers/nvmf_fc_spec.o 00:04:40.623 CXX test/cpp_headers/nvmf.o 00:04:40.623 CXX test/cpp_headers/nvmf_spec.o 00:04:40.623 CXX test/cpp_headers/nvmf_transport.o 00:04:40.623 CXX test/cpp_headers/opal.o 00:04:40.623 CXX test/cpp_headers/opal_spec.o 00:04:40.623 LINK fdp 00:04:40.623 CXX test/cpp_headers/pci_ids.o 00:04:40.623 CXX test/cpp_headers/pipe.o 00:04:40.623 CXX test/cpp_headers/queue.o 00:04:40.623 CXX test/cpp_headers/reduce.o 00:04:40.623 CXX test/cpp_headers/rpc.o 00:04:40.623 CXX test/cpp_headers/scheduler.o 00:04:40.623 CXX test/cpp_headers/scsi.o 00:04:40.623 CXX test/cpp_headers/scsi_spec.o 00:04:40.623 CXX test/cpp_headers/sock.o 00:04:40.881 CXX test/cpp_headers/stdinc.o 00:04:40.881 CXX test/cpp_headers/string.o 00:04:40.881 CXX test/cpp_headers/thread.o 00:04:40.881 CXX test/cpp_headers/trace.o 00:04:40.881 CXX test/cpp_headers/trace_parser.o 00:04:40.881 CXX test/cpp_headers/tree.o 00:04:40.881 CXX test/cpp_headers/ublk.o 00:04:40.881 CXX test/cpp_headers/util.o 00:04:40.881 CXX test/cpp_headers/uuid.o 00:04:40.881 CXX test/cpp_headers/version.o 00:04:40.881 CXX test/cpp_headers/vfio_user_pci.o 00:04:40.881 CXX test/cpp_headers/vfio_user_spec.o 00:04:40.881 CXX test/cpp_headers/vhost.o 00:04:40.881 CXX test/cpp_headers/vmd.o 00:04:40.881 CXX test/cpp_headers/xor.o 00:04:40.881 CXX test/cpp_headers/zipf.o 00:04:41.446 LINK cuse 00:04:43.347 LINK esnap 00:04:43.347 00:04:43.347 real 1m2.616s 00:04:43.347 user 5m5.502s 00:04:43.347 sys 0m52.046s 00:04:43.347 04:52:21 make -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:04:43.347 04:52:21 make -- common/autotest_common.sh@10 -- $ set +x 00:04:43.347 ************************************ 00:04:43.347 END TEST make 00:04:43.347 ************************************ 00:04:43.347 04:52:21 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:04:43.347 04:52:21 -- pm/common@29 -- $ signal_monitor_resources TERM 00:04:43.347 04:52:21 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:04:43.347 04:52:21 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:43.347 04:52:21 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:04:43.347 04:52:21 -- pm/common@44 -- $ pid=5801 00:04:43.347 04:52:21 -- pm/common@50 -- $ kill -TERM 5801 00:04:43.347 04:52:21 -- 
pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:43.347 04:52:21 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:04:43.347 04:52:21 -- pm/common@44 -- $ pid=5803 00:04:43.347 04:52:21 -- pm/common@50 -- $ kill -TERM 5803 00:04:43.347 04:52:21 -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:04:43.347 04:52:21 -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:04:43.347 04:52:21 -- common/autotest_common.sh@1681 -- # lcov --version 00:04:43.347 04:52:21 -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:04:43.347 04:52:21 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:43.347 04:52:21 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:43.347 04:52:21 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:43.347 04:52:21 -- scripts/common.sh@336 -- # IFS=.-: 00:04:43.348 04:52:21 -- scripts/common.sh@336 -- # read -ra ver1 00:04:43.348 04:52:21 -- scripts/common.sh@337 -- # IFS=.-: 00:04:43.348 04:52:21 -- scripts/common.sh@337 -- # read -ra ver2 00:04:43.348 04:52:21 -- scripts/common.sh@338 -- # local 'op=<' 00:04:43.348 04:52:21 -- scripts/common.sh@340 -- # ver1_l=2 00:04:43.348 04:52:21 -- scripts/common.sh@341 -- # ver2_l=1 00:04:43.348 04:52:21 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:43.348 04:52:21 -- scripts/common.sh@344 -- # case "$op" in 00:04:43.348 04:52:21 -- scripts/common.sh@345 -- # : 1 00:04:43.348 04:52:21 -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:43.348 04:52:21 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:43.348 04:52:21 -- scripts/common.sh@365 -- # decimal 1 00:04:43.348 04:52:21 -- scripts/common.sh@353 -- # local d=1 00:04:43.348 04:52:21 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:43.348 04:52:21 -- scripts/common.sh@355 -- # echo 1 00:04:43.348 04:52:21 -- scripts/common.sh@365 -- # ver1[v]=1 00:04:43.348 04:52:21 -- scripts/common.sh@366 -- # decimal 2 00:04:43.348 04:52:21 -- scripts/common.sh@353 -- # local d=2 00:04:43.348 04:52:21 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:43.348 04:52:21 -- scripts/common.sh@355 -- # echo 2 00:04:43.348 04:52:21 -- scripts/common.sh@366 -- # ver2[v]=2 00:04:43.348 04:52:21 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:43.348 04:52:21 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:43.348 04:52:21 -- scripts/common.sh@368 -- # return 0 00:04:43.348 04:52:21 -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:43.348 04:52:21 -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:04:43.348 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:43.348 --rc genhtml_branch_coverage=1 00:04:43.348 --rc genhtml_function_coverage=1 00:04:43.348 --rc genhtml_legend=1 00:04:43.348 --rc geninfo_all_blocks=1 00:04:43.348 --rc geninfo_unexecuted_blocks=1 00:04:43.348 00:04:43.348 ' 00:04:43.348 04:52:21 -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:04:43.348 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:43.348 --rc genhtml_branch_coverage=1 00:04:43.348 --rc genhtml_function_coverage=1 00:04:43.348 --rc genhtml_legend=1 00:04:43.348 --rc geninfo_all_blocks=1 00:04:43.348 --rc geninfo_unexecuted_blocks=1 00:04:43.348 00:04:43.348 ' 00:04:43.348 04:52:21 -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:04:43.348 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:43.348 --rc genhtml_branch_coverage=1 
00:04:43.348 --rc genhtml_function_coverage=1 00:04:43.348 --rc genhtml_legend=1 00:04:43.348 --rc geninfo_all_blocks=1 00:04:43.348 --rc geninfo_unexecuted_blocks=1 00:04:43.348 00:04:43.348 ' 00:04:43.348 04:52:21 -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:04:43.348 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:43.348 --rc genhtml_branch_coverage=1 00:04:43.348 --rc genhtml_function_coverage=1 00:04:43.348 --rc genhtml_legend=1 00:04:43.348 --rc geninfo_all_blocks=1 00:04:43.348 --rc geninfo_unexecuted_blocks=1 00:04:43.348 00:04:43.348 ' 00:04:43.348 04:52:21 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:43.348 04:52:21 -- nvmf/common.sh@7 -- # uname -s 00:04:43.348 04:52:21 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:43.348 04:52:21 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:43.348 04:52:21 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:43.348 04:52:21 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:43.348 04:52:21 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:43.348 04:52:21 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:43.348 04:52:21 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:43.348 04:52:21 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:43.348 04:52:21 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:43.348 04:52:21 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:43.348 04:52:21 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:87da07b1-043b-42fc-9015-b8e07d74ef22 00:04:43.348 04:52:21 -- nvmf/common.sh@18 -- # NVME_HOSTID=87da07b1-043b-42fc-9015-b8e07d74ef22 00:04:43.348 04:52:21 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:43.348 04:52:21 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:43.348 04:52:21 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:43.348 04:52:21 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:43.348 04:52:21 -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:43.348 04:52:21 -- scripts/common.sh@15 -- # shopt -s extglob 00:04:43.348 04:52:21 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:43.348 04:52:21 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:43.348 04:52:21 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:43.348 04:52:21 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:43.348 04:52:21 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:43.348 04:52:21 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:43.348 04:52:21 -- paths/export.sh@5 -- # export PATH 00:04:43.348 04:52:21 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:43.348 04:52:21 -- nvmf/common.sh@51 -- # : 0 00:04:43.348 04:52:21 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:43.348 04:52:21 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:43.348 04:52:21 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:43.348 04:52:21 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:43.348 04:52:21 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:43.348 04:52:21 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:43.348 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:43.348 04:52:21 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:43.348 04:52:21 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:43.348 04:52:21 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:43.348 04:52:21 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:04:43.348 04:52:21 -- spdk/autotest.sh@32 -- # uname -s 00:04:43.348 04:52:21 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:04:43.348 04:52:21 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:04:43.348 04:52:21 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:04:43.348 04:52:21 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:04:43.348 04:52:21 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:04:43.348 04:52:21 -- spdk/autotest.sh@44 -- # modprobe nbd 00:04:43.348 04:52:21 -- spdk/autotest.sh@46 -- # type -P udevadm 00:04:43.348 04:52:21 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:04:43.348 04:52:21 -- spdk/autotest.sh@48 -- # udevadm_pid=66629 00:04:43.348 04:52:21 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:04:43.348 04:52:21 -- pm/common@17 -- # local monitor 00:04:43.348 04:52:21 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:43.348 04:52:21 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:43.348 04:52:21 -- pm/common@25 -- # sleep 1 00:04:43.348 04:52:21 -- pm/common@21 -- # date +%s 00:04:43.348 04:52:21 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:04:43.348 04:52:21 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1733460741 00:04:43.348 04:52:21 -- pm/common@21 -- # date +%s 00:04:43.348 04:52:21 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1733460741 00:04:43.615 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1733460741_collect-cpu-load.pm.log 00:04:43.615 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1733460741_collect-vmstat.pm.log 00:04:44.564 04:52:22 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:04:44.564 04:52:22 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:04:44.564 04:52:22 -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:44.564 04:52:22 -- common/autotest_common.sh@10 -- # set +x 00:04:44.564 04:52:22 -- spdk/autotest.sh@59 -- # create_test_list 
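The --rc lcov_* option names used for the coverage baseline just below were selected by the version gate traced earlier (common/autotest_common.sh calling into scripts/common.sh): both version strings are split on ".", "-", and ":" and compared numerically field by field, and since lcov 1.15 sorts before 2 the 1.x option names are kept. A condensed, runnable sketch of that comparison, assuming the same split-and-compare approach as the traced cmp_versions:

cmp_lt() {  # succeeds when dotted version $1 sorts before $2
    local IFS='.-:'
    local -a v1 v2
    read -ra v1 <<<"$1"
    read -ra v2 <<<"$2"
    local i n=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
    for ((i = 0; i < n; i++)); do
        (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0  # earliest differing field decides
        (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
    done
    return 1  # equal versions are not less-than
}
# The trace extracts the installed version with: lcov --version | awk '{print $NF}'
cmp_lt 1.15 2 && echo "lcov 1.x detected: keep the --rc lcov_branch_coverage style options"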
00:04:44.564 04:52:22 -- common/autotest_common.sh@748 -- # xtrace_disable 00:04:44.564 04:52:22 -- common/autotest_common.sh@10 -- # set +x 00:04:44.564 04:52:22 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:04:44.564 04:52:22 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:04:44.564 04:52:22 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:04:44.564 04:52:22 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:04:44.564 04:52:22 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:04:44.564 04:52:22 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:04:44.564 04:52:22 -- common/autotest_common.sh@1455 -- # uname 00:04:44.564 04:52:22 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:04:44.564 04:52:22 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:04:44.564 04:52:22 -- common/autotest_common.sh@1475 -- # uname 00:04:44.564 04:52:22 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:04:44.564 04:52:22 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:04:44.564 04:52:22 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:04:44.564 lcov: LCOV version 1.15 00:04:44.564 04:52:22 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:04:59.462 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:04:59.462 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:05:14.375 04:52:50 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:05:14.375 04:52:50 -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:14.375 04:52:50 -- common/autotest_common.sh@10 -- # set +x 00:05:14.375 04:52:50 -- spdk/autotest.sh@78 -- # rm -f 00:05:14.375 04:52:50 -- spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:14.375 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:14.375 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:05:14.375 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:05:14.375 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:05:14.375 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:05:14.375 04:52:51 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:05:14.375 04:52:51 -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:05:14.375 04:52:51 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:05:14.375 04:52:51 -- common/autotest_common.sh@1656 -- # local nvme bdf 00:05:14.375 04:52:51 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:05:14.375 04:52:51 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:05:14.375 04:52:51 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:05:14.375 04:52:51 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:05:14.375 04:52:51 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:05:14.375 04:52:51 -- 
common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:05:14.375 04:52:51 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n1 00:05:14.375 04:52:51 -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:05:14.375 04:52:51 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:05:14.375 04:52:51 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:05:14.375 04:52:51 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:05:14.375 04:52:51 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n1 00:05:14.375 04:52:51 -- common/autotest_common.sh@1648 -- # local device=nvme2n1 00:05:14.375 04:52:51 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:05:14.375 04:52:51 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:05:14.375 04:52:51 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:05:14.375 04:52:51 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n2 00:05:14.375 04:52:51 -- common/autotest_common.sh@1648 -- # local device=nvme2n2 00:05:14.375 04:52:51 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:05:14.375 04:52:51 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:05:14.375 04:52:51 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:05:14.375 04:52:51 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n3 00:05:14.375 04:52:51 -- common/autotest_common.sh@1648 -- # local device=nvme2n3 00:05:14.375 04:52:51 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:05:14.375 04:52:51 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:05:14.375 04:52:51 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:05:14.375 04:52:51 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme3c3n1 00:05:14.375 04:52:51 -- common/autotest_common.sh@1648 -- # local device=nvme3c3n1 00:05:14.375 04:52:51 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:05:14.375 04:52:51 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:05:14.375 04:52:51 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:05:14.375 04:52:51 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme3n1 00:05:14.375 04:52:51 -- common/autotest_common.sh@1648 -- # local device=nvme3n1 00:05:14.375 04:52:51 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:05:14.375 04:52:51 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:05:14.375 04:52:51 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:05:14.375 04:52:51 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:14.375 04:52:51 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:14.375 04:52:51 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:05:14.375 04:52:51 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:05:14.375 04:52:51 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:05:14.375 No valid GPT data, bailing 00:05:14.375 04:52:51 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:05:14.375 04:52:51 -- scripts/common.sh@394 -- # pt= 00:05:14.375 04:52:51 -- scripts/common.sh@395 -- # return 1 00:05:14.375 04:52:51 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:05:14.375 1+0 records in 00:05:14.375 1+0 records out 00:05:14.375 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0103036 s, 102 MB/s 
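The probe-and-wipe sequence above repeats below for each remaining namespace: every non-partition NVMe block device (the !(*p*) extglob) is checked for a partition table with scripts/spdk-gpt.py and blkid, and when neither finds one ("No valid GPT data, bailing", empty PTTYPE) the device is treated as free and its first MiB is zeroed. A simplified sketch of the loop; the exit-status handling here is an assumption, since the traced block_in_use derives its verdict through a few more steps:

shopt -s extglob
for dev in /dev/nvme*n!(*p*); do
    # Assumes spdk-gpt.py fails when no valid GPT is present (hedged);
    # blkid finding no PTTYPE confirms the device carries no partition table.
    if ! scripts/spdk-gpt.py "$dev" && [[ -z $(blkid -s PTTYPE -o value "$dev") ]]; then
        dd if=/dev/zero of="$dev" bs=1M count=1  # clear stale metadata
    fi
done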
00:05:14.375 04:52:51 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:14.375 04:52:51 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:14.375 04:52:51 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1 00:05:14.375 04:52:51 -- scripts/common.sh@381 -- # local block=/dev/nvme1n1 pt 00:05:14.375 04:52:51 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:05:14.375 No valid GPT data, bailing 00:05:14.375 04:52:51 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:05:14.375 04:52:51 -- scripts/common.sh@394 -- # pt= 00:05:14.375 04:52:51 -- scripts/common.sh@395 -- # return 1 00:05:14.375 04:52:51 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:05:14.375 1+0 records in 00:05:14.375 1+0 records out 00:05:14.375 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00560876 s, 187 MB/s 00:05:14.375 04:52:51 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:14.375 04:52:51 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:14.375 04:52:51 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n1 00:05:14.375 04:52:51 -- scripts/common.sh@381 -- # local block=/dev/nvme2n1 pt 00:05:14.375 04:52:51 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n1 00:05:14.375 No valid GPT data, bailing 00:05:14.375 04:52:52 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n1 00:05:14.375 04:52:52 -- scripts/common.sh@394 -- # pt= 00:05:14.375 04:52:52 -- scripts/common.sh@395 -- # return 1 00:05:14.375 04:52:52 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n1 bs=1M count=1 00:05:14.375 1+0 records in 00:05:14.375 1+0 records out 00:05:14.375 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00334056 s, 314 MB/s 00:05:14.375 04:52:52 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:14.375 04:52:52 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:14.375 04:52:52 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n2 00:05:14.375 04:52:52 -- scripts/common.sh@381 -- # local block=/dev/nvme2n2 pt 00:05:14.375 04:52:52 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n2 00:05:14.375 No valid GPT data, bailing 00:05:14.375 04:52:52 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n2 00:05:14.375 04:52:52 -- scripts/common.sh@394 -- # pt= 00:05:14.375 04:52:52 -- scripts/common.sh@395 -- # return 1 00:05:14.375 04:52:52 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n2 bs=1M count=1 00:05:14.375 1+0 records in 00:05:14.375 1+0 records out 00:05:14.375 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00571558 s, 183 MB/s 00:05:14.375 04:52:52 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:14.375 04:52:52 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:14.375 04:52:52 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n3 00:05:14.375 04:52:52 -- scripts/common.sh@381 -- # local block=/dev/nvme2n3 pt 00:05:14.375 04:52:52 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n3 00:05:14.375 No valid GPT data, bailing 00:05:14.375 04:52:52 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n3 00:05:14.375 04:52:52 -- scripts/common.sh@394 -- # pt= 00:05:14.375 04:52:52 -- scripts/common.sh@395 -- # return 1 00:05:14.375 04:52:52 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n3 bs=1M count=1 00:05:14.375 1+0 records in 00:05:14.375 1+0 records out 00:05:14.375 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00623966 s, 168 
MB/s 00:05:14.375 04:52:52 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:14.375 04:52:52 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:14.375 04:52:52 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme3n1 00:05:14.375 04:52:52 -- scripts/common.sh@381 -- # local block=/dev/nvme3n1 pt 00:05:14.375 04:52:52 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme3n1 00:05:14.375 No valid GPT data, bailing 00:05:14.375 04:52:52 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme3n1 00:05:14.375 04:52:52 -- scripts/common.sh@394 -- # pt= 00:05:14.375 04:52:52 -- scripts/common.sh@395 -- # return 1 00:05:14.375 04:52:52 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme3n1 bs=1M count=1 00:05:14.375 1+0 records in 00:05:14.375 1+0 records out 00:05:14.375 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00626056 s, 167 MB/s 00:05:14.375 04:52:52 -- spdk/autotest.sh@105 -- # sync 00:05:14.945 04:52:52 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:05:14.945 04:52:52 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:05:14.945 04:52:52 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:05:16.860 04:52:54 -- spdk/autotest.sh@111 -- # uname -s 00:05:16.860 04:52:54 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:05:16.860 04:52:54 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:05:16.860 04:52:54 -- spdk/autotest.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:05:16.860 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:17.430 Hugepages 00:05:17.430 node hugesize free / total 00:05:17.430 node0 1048576kB 0 / 0 00:05:17.430 node0 2048kB 0 / 0 00:05:17.430 00:05:17.430 Type BDF Vendor Device NUMA Driver Device Block devices 00:05:17.430 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:05:17.430 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:05:17.691 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 00:05:17.691 NVMe 0000:00:12.0 1b36 0010 unknown nvme nvme2 nvme2n1 nvme2n2 nvme2n3 00:05:17.691 NVMe 0000:00:13.0 1b36 0010 unknown nvme nvme3 nvme3n1 00:05:17.691 04:52:55 -- spdk/autotest.sh@117 -- # uname -s 00:05:17.691 04:52:55 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:05:17.691 04:52:55 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:05:17.691 04:52:55 -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:18.263 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:18.836 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:05:18.836 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:05:18.836 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:05:18.836 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:05:18.836 04:52:56 -- common/autotest_common.sh@1515 -- # sleep 1 00:05:19.781 04:52:57 -- common/autotest_common.sh@1516 -- # bdfs=() 00:05:19.781 04:52:57 -- common/autotest_common.sh@1516 -- # local bdfs 00:05:19.781 04:52:57 -- common/autotest_common.sh@1518 -- # bdfs=($(get_nvme_bdfs)) 00:05:19.781 04:52:57 -- common/autotest_common.sh@1518 -- # get_nvme_bdfs 00:05:19.781 04:52:57 -- common/autotest_common.sh@1496 -- # bdfs=() 00:05:19.781 04:52:57 -- common/autotest_common.sh@1496 -- # local bdfs 00:05:19.781 04:52:57 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 
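setup.sh status prints the hugepage and NVMe inventory above, and the "nvme -> uio_pci_generic" lines that follow it show each controller being handed from the kernel driver to a userspace-capable stub. A hedged sketch of the generic sysfs mechanism behind such a rebind -- the real setup.sh also handles VFIO, IOMMU checks, and module loading, so treat the exact steps as an assumption:

# Run as root; assumes the uio_pci_generic module is already loaded.
bdf=0000:00:10.0                                          # example controller
echo "$bdf" > "/sys/bus/pci/devices/$bdf/driver/unbind"   # detach from nvme
echo uio_pci_generic > "/sys/bus/pci/devices/$bdf/driver_override"
echo "$bdf" > /sys/bus/pci/drivers_probe                  # bind to the override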
00:05:19.781 04:52:57 -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:05:19.781 04:52:57 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:05:20.042 04:52:58 -- common/autotest_common.sh@1498 -- # (( 4 == 0 )) 00:05:20.042 04:52:58 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:05:20.042 04:52:58 -- common/autotest_common.sh@1520 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:20.303 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:20.638 Waiting for block devices as requested 00:05:20.638 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:05:20.638 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:05:20.638 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:05:20.900 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:05:26.189 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:05:26.189 04:53:03 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:05:26.189 04:53:03 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:05:26.189 04:53:03 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:05:26.189 04:53:03 -- common/autotest_common.sh@1485 -- # grep 0000:00:10.0/nvme/nvme 00:05:26.189 04:53:03 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:05:26.189 04:53:03 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:05:26.189 04:53:03 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:05:26.189 04:53:03 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme1 00:05:26.189 04:53:03 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme1 00:05:26.189 04:53:03 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme1 ]] 00:05:26.189 04:53:03 -- common/autotest_common.sh@1529 -- # grep oacs 00:05:26.189 04:53:03 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:05:26.189 04:53:03 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme1 00:05:26.189 04:53:03 -- common/autotest_common.sh@1529 -- # oacs=' 0x12a' 00:05:26.189 04:53:03 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:05:26.189 04:53:03 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:05:26.189 04:53:03 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme1 00:05:26.189 04:53:03 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:05:26.189 04:53:03 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:05:26.189 04:53:03 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:05:26.189 04:53:03 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:05:26.189 04:53:03 -- common/autotest_common.sh@1541 -- # continue 00:05:26.189 04:53:03 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:05:26.189 04:53:03 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:05:26.189 04:53:03 -- common/autotest_common.sh@1485 -- # grep 0000:00:11.0/nvme/nvme 00:05:26.189 04:53:03 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:05:26.189 04:53:03 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 
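The traces above walk the device mapping in both directions: get_nvme_bdfs asks scripts/gen_nvme.sh for a JSON config and pulls each controller's PCI address out with jq -r '.config[].params.traddr', while get_nvme_ctrlr_from_bdf resolves a PCI address back to its /dev/nvmeX controller through sysfs symlinks. A condensed sketch of the sysfs half, following the readlink/basename steps in the trace:

get_ctrlr_from_bdf() {  # e.g. get_ctrlr_from_bdf 0000:00:10.0 -> nvme1
    local bdf=$1 link
    for link in /sys/class/nvme/nvme*; do
        # A match resolves to .../pci0000:00/<bdf>/nvme/nvmeN, as in the trace above.
        if [[ $(readlink -f "$link") == *"/$bdf/nvme/"* ]]; then
            basename "$link"
            return 0
        fi
    done
    return 1
}
get_ctrlr_from_bdf 0000:00:10.0  # prints nvme1 in the run traced above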
00:05:26.189 04:53:03 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:05:26.189 04:53:03 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:05:26.189 04:53:03 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme0 00:05:26.189 04:53:03 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme0 00:05:26.189 04:53:03 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme0 ]] 00:05:26.189 04:53:03 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme0 00:05:26.189 04:53:03 -- common/autotest_common.sh@1529 -- # grep oacs 00:05:26.189 04:53:03 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:05:26.189 04:53:04 -- common/autotest_common.sh@1529 -- # oacs=' 0x12a' 00:05:26.189 04:53:04 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:05:26.189 04:53:04 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:05:26.190 04:53:04 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme0 00:05:26.190 04:53:04 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:05:26.190 04:53:04 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:05:26.190 04:53:04 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:05:26.190 04:53:04 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:05:26.190 04:53:04 -- common/autotest_common.sh@1541 -- # continue 00:05:26.190 04:53:04 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:05:26.190 04:53:04 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:00:12.0 00:05:26.190 04:53:04 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:05:26.190 04:53:04 -- common/autotest_common.sh@1485 -- # grep 0000:00:12.0/nvme/nvme 00:05:26.190 04:53:04 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 00:05:26.190 04:53:04 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 ]] 00:05:26.190 04:53:04 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 00:05:26.190 04:53:04 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme2 00:05:26.190 04:53:04 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme2 00:05:26.190 04:53:04 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme2 ]] 00:05:26.190 04:53:04 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme2 00:05:26.190 04:53:04 -- common/autotest_common.sh@1529 -- # grep oacs 00:05:26.190 04:53:04 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:05:26.190 04:53:04 -- common/autotest_common.sh@1529 -- # oacs=' 0x12a' 00:05:26.190 04:53:04 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:05:26.190 04:53:04 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:05:26.190 04:53:04 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme2 00:05:26.190 04:53:04 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:05:26.190 04:53:04 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:05:26.190 04:53:04 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:05:26.190 04:53:04 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:05:26.190 04:53:04 -- common/autotest_common.sh@1541 -- # continue 00:05:26.190 04:53:04 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:05:26.190 04:53:04 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:00:13.0 00:05:26.190 04:53:04 -- 
common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:05:26.190 04:53:04 -- common/autotest_common.sh@1485 -- # grep 0000:00:13.0/nvme/nvme 00:05:26.190 04:53:04 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 00:05:26.190 04:53:04 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 ]] 00:05:26.190 04:53:04 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 00:05:26.190 04:53:04 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme3 00:05:26.190 04:53:04 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme3 00:05:26.190 04:53:04 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme3 ]] 00:05:26.190 04:53:04 -- common/autotest_common.sh@1529 -- # grep oacs 00:05:26.190 04:53:04 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme3 00:05:26.190 04:53:04 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:05:26.190 04:53:04 -- common/autotest_common.sh@1529 -- # oacs=' 0x12a' 00:05:26.190 04:53:04 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:05:26.190 04:53:04 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:05:26.190 04:53:04 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme3 00:05:26.190 04:53:04 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:05:26.190 04:53:04 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:05:26.190 04:53:04 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:05:26.190 04:53:04 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:05:26.190 04:53:04 -- common/autotest_common.sh@1541 -- # continue 00:05:26.190 04:53:04 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:05:26.190 04:53:04 -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:26.190 04:53:04 -- common/autotest_common.sh@10 -- # set +x 00:05:26.190 04:53:04 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:05:26.190 04:53:04 -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:26.190 04:53:04 -- common/autotest_common.sh@10 -- # set +x 00:05:26.190 04:53:04 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:26.448 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:27.015 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:05:27.015 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:05:27.015 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:05:27.015 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:05:27.015 04:53:05 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:05:27.015 04:53:05 -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:27.015 04:53:05 -- common/autotest_common.sh@10 -- # set +x 00:05:27.015 04:53:05 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:05:27.015 04:53:05 -- common/autotest_common.sh@1576 -- # mapfile -t bdfs 00:05:27.015 04:53:05 -- common/autotest_common.sh@1576 -- # get_nvme_bdfs_by_id 0x0a54 00:05:27.015 04:53:05 -- common/autotest_common.sh@1561 -- # bdfs=() 00:05:27.015 04:53:05 -- common/autotest_common.sh@1561 -- # _bdfs=() 00:05:27.015 04:53:05 -- common/autotest_common.sh@1561 -- # local bdfs _bdfs 00:05:27.015 04:53:05 -- common/autotest_common.sh@1562 -- # _bdfs=($(get_nvme_bdfs)) 00:05:27.015 04:53:05 -- common/autotest_common.sh@1562 -- # get_nvme_bdfs 00:05:27.015 04:53:05 -- common/autotest_common.sh@1496 -- # bdfs=() 00:05:27.015 
04:53:05 -- common/autotest_common.sh@1496 -- # local bdfs 00:05:27.015 04:53:05 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:27.015 04:53:05 -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:05:27.015 04:53:05 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:05:27.015 04:53:05 -- common/autotest_common.sh@1498 -- # (( 4 == 0 )) 00:05:27.015 04:53:05 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:05:27.015 04:53:05 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:05:27.015 04:53:05 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:05:27.015 04:53:05 -- common/autotest_common.sh@1564 -- # device=0x0010 00:05:27.015 04:53:05 -- common/autotest_common.sh@1565 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:27.015 04:53:05 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:05:27.015 04:53:05 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:05:27.015 04:53:05 -- common/autotest_common.sh@1564 -- # device=0x0010 00:05:27.015 04:53:05 -- common/autotest_common.sh@1565 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:27.015 04:53:05 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:05:27.015 04:53:05 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:00:12.0/device 00:05:27.015 04:53:05 -- common/autotest_common.sh@1564 -- # device=0x0010 00:05:27.015 04:53:05 -- common/autotest_common.sh@1565 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:27.015 04:53:05 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:05:27.015 04:53:05 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:00:13.0/device 00:05:27.015 04:53:05 -- common/autotest_common.sh@1564 -- # device=0x0010 00:05:27.015 04:53:05 -- common/autotest_common.sh@1565 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:27.015 04:53:05 -- common/autotest_common.sh@1570 -- # (( 0 > 0 )) 00:05:27.015 04:53:05 -- common/autotest_common.sh@1570 -- # return 0 00:05:27.015 04:53:05 -- common/autotest_common.sh@1577 -- # [[ -z '' ]] 00:05:27.015 04:53:05 -- common/autotest_common.sh@1578 -- # return 0 00:05:27.015 04:53:05 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:05:27.015 04:53:05 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:05:27.015 04:53:05 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:05:27.015 04:53:05 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:05:27.015 04:53:05 -- spdk/autotest.sh@149 -- # timing_enter lib 00:05:27.015 04:53:05 -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:27.015 04:53:05 -- common/autotest_common.sh@10 -- # set +x 00:05:27.015 04:53:05 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:05:27.015 04:53:05 -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:05:27.015 04:53:05 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:27.015 04:53:05 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:27.015 04:53:05 -- common/autotest_common.sh@10 -- # set +x 00:05:27.273 ************************************ 00:05:27.273 START TEST env 00:05:27.273 ************************************ 00:05:27.273 04:53:05 env -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:05:27.273 * Looking for test storage... 
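
The opal_revert_cleanup pass above reduces to a per-controller sysfs read: take the BDFs that gen_nvme.sh reports, read each PCI function's 16-bit device ID, and keep only those matching 0x0a54 (the hardware the Opal revert targets). The emulated 1b36:0010 controllers all fail the comparison, so the loop ends with nothing to revert. A minimal sketch of the same filter, assuming the four BDFs printed above:

    for bdf in 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0; do
        # each PCI function exposes its device ID in sysfs; the QEMU NVMe model reports 0x0010
        dev=$(cat "/sys/bus/pci/devices/$bdf/device")
        [[ $dev == 0x0a54 ]] && echo "$bdf would get an Opal revert"
    done
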
00:05:27.273 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:05:27.273 04:53:05 env -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:27.273 04:53:05 env -- common/autotest_common.sh@1681 -- # lcov --version 00:05:27.273 04:53:05 env -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:27.273 04:53:05 env -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:27.273 04:53:05 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:27.273 04:53:05 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:27.273 04:53:05 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:27.273 04:53:05 env -- scripts/common.sh@336 -- # IFS=.-: 00:05:27.273 04:53:05 env -- scripts/common.sh@336 -- # read -ra ver1 00:05:27.273 04:53:05 env -- scripts/common.sh@337 -- # IFS=.-: 00:05:27.273 04:53:05 env -- scripts/common.sh@337 -- # read -ra ver2 00:05:27.273 04:53:05 env -- scripts/common.sh@338 -- # local 'op=<' 00:05:27.273 04:53:05 env -- scripts/common.sh@340 -- # ver1_l=2 00:05:27.273 04:53:05 env -- scripts/common.sh@341 -- # ver2_l=1 00:05:27.273 04:53:05 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:27.273 04:53:05 env -- scripts/common.sh@344 -- # case "$op" in 00:05:27.273 04:53:05 env -- scripts/common.sh@345 -- # : 1 00:05:27.273 04:53:05 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:27.273 04:53:05 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:27.273 04:53:05 env -- scripts/common.sh@365 -- # decimal 1 00:05:27.273 04:53:05 env -- scripts/common.sh@353 -- # local d=1 00:05:27.273 04:53:05 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:27.273 04:53:05 env -- scripts/common.sh@355 -- # echo 1 00:05:27.273 04:53:05 env -- scripts/common.sh@365 -- # ver1[v]=1 00:05:27.273 04:53:05 env -- scripts/common.sh@366 -- # decimal 2 00:05:27.273 04:53:05 env -- scripts/common.sh@353 -- # local d=2 00:05:27.273 04:53:05 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:27.273 04:53:05 env -- scripts/common.sh@355 -- # echo 2 00:05:27.273 04:53:05 env -- scripts/common.sh@366 -- # ver2[v]=2 00:05:27.273 04:53:05 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:27.273 04:53:05 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:27.273 04:53:05 env -- scripts/common.sh@368 -- # return 0 00:05:27.273 04:53:05 env -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:27.273 04:53:05 env -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:27.273 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:27.273 --rc genhtml_branch_coverage=1 00:05:27.273 --rc genhtml_function_coverage=1 00:05:27.273 --rc genhtml_legend=1 00:05:27.273 --rc geninfo_all_blocks=1 00:05:27.273 --rc geninfo_unexecuted_blocks=1 00:05:27.273 00:05:27.273 ' 00:05:27.273 04:53:05 env -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:27.273 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:27.273 --rc genhtml_branch_coverage=1 00:05:27.273 --rc genhtml_function_coverage=1 00:05:27.273 --rc genhtml_legend=1 00:05:27.273 --rc geninfo_all_blocks=1 00:05:27.273 --rc geninfo_unexecuted_blocks=1 00:05:27.273 00:05:27.273 ' 00:05:27.273 04:53:05 env -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:27.273 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:27.273 --rc genhtml_branch_coverage=1 00:05:27.273 --rc genhtml_function_coverage=1 00:05:27.273 --rc 
genhtml_legend=1 00:05:27.273 --rc geninfo_all_blocks=1 00:05:27.273 --rc geninfo_unexecuted_blocks=1 00:05:27.273 00:05:27.273 ' 00:05:27.273 04:53:05 env -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:27.273 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:27.273 --rc genhtml_branch_coverage=1 00:05:27.273 --rc genhtml_function_coverage=1 00:05:27.273 --rc genhtml_legend=1 00:05:27.273 --rc geninfo_all_blocks=1 00:05:27.273 --rc geninfo_unexecuted_blocks=1 00:05:27.273 00:05:27.273 ' 00:05:27.273 04:53:05 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:05:27.273 04:53:05 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:27.273 04:53:05 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:27.273 04:53:05 env -- common/autotest_common.sh@10 -- # set +x 00:05:27.273 ************************************ 00:05:27.273 START TEST env_memory 00:05:27.273 ************************************ 00:05:27.273 04:53:05 env.env_memory -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:05:27.273 00:05:27.273 00:05:27.273 CUnit - A unit testing framework for C - Version 2.1-3 00:05:27.273 http://cunit.sourceforge.net/ 00:05:27.273 00:05:27.273 00:05:27.273 Suite: memory 00:05:27.273 Test: alloc and free memory map ...[2024-12-06 04:53:05.440282] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:05:27.273 passed 00:05:27.273 Test: mem map translation ...[2024-12-06 04:53:05.479129] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:05:27.273 [2024-12-06 04:53:05.479239] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:05:27.273 [2024-12-06 04:53:05.479359] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:05:27.273 [2024-12-06 04:53:05.479437] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:05:27.530 passed 00:05:27.530 Test: mem map registration ...[2024-12-06 04:53:05.547820] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:05:27.530 [2024-12-06 04:53:05.547927] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:05:27.530 passed 00:05:27.530 Test: mem map adjacent registrations ...passed 00:05:27.530 00:05:27.530 Run Summary: Type Total Ran Passed Failed Inactive 00:05:27.530 suites 1 1 n/a 0 0 00:05:27.530 tests 4 4 4 0 0 00:05:27.530 asserts 152 152 152 0 n/a 00:05:27.530 00:05:27.530 Elapsed time = 0.233 seconds 00:05:27.530 00:05:27.530 real 0m0.256s 00:05:27.530 user 0m0.237s 00:05:27.530 sys 0m0.013s 00:05:27.530 04:53:05 env.env_memory -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:27.530 ************************************ 00:05:27.530 END TEST env_memory 00:05:27.531 ************************************ 00:05:27.531 04:53:05 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:05:27.531 04:53:05 env -- env/env.sh@11 -- # run_test env_vtophys 
/home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:05:27.531 04:53:05 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:27.531 04:53:05 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:27.531 04:53:05 env -- common/autotest_common.sh@10 -- # set +x 00:05:27.531 ************************************ 00:05:27.531 START TEST env_vtophys 00:05:27.531 ************************************ 00:05:27.531 04:53:05 env.env_vtophys -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:05:27.531 EAL: lib.eal log level changed from notice to debug 00:05:27.531 EAL: Detected lcore 0 as core 0 on socket 0 00:05:27.531 EAL: Detected lcore 1 as core 0 on socket 0 00:05:27.531 EAL: Detected lcore 2 as core 0 on socket 0 00:05:27.531 EAL: Detected lcore 3 as core 0 on socket 0 00:05:27.531 EAL: Detected lcore 4 as core 0 on socket 0 00:05:27.531 EAL: Detected lcore 5 as core 0 on socket 0 00:05:27.531 EAL: Detected lcore 6 as core 0 on socket 0 00:05:27.531 EAL: Detected lcore 7 as core 0 on socket 0 00:05:27.531 EAL: Detected lcore 8 as core 0 on socket 0 00:05:27.531 EAL: Detected lcore 9 as core 0 on socket 0 00:05:27.531 EAL: Maximum logical cores by configuration: 128 00:05:27.531 EAL: Detected CPU lcores: 10 00:05:27.531 EAL: Detected NUMA nodes: 1 00:05:27.531 EAL: Checking presence of .so 'librte_eal.so.23.0' 00:05:27.531 EAL: Detected shared linkage of DPDK 00:05:27.531 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so.23.0 00:05:27.531 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so.23.0 00:05:27.531 EAL: Registered [vdev] bus. 00:05:27.531 EAL: bus.vdev log level changed from disabled to notice 00:05:27.531 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so.23.0 00:05:27.531 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so.23.0 00:05:27.531 EAL: pmd.net.i40e.init log level changed from disabled to notice 00:05:27.531 EAL: pmd.net.i40e.driver log level changed from disabled to notice 00:05:27.531 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so 00:05:27.531 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so 00:05:27.531 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so 00:05:27.531 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so 00:05:27.531 EAL: No shared files mode enabled, IPC will be disabled 00:05:27.531 EAL: No shared files mode enabled, IPC is disabled 00:05:27.531 EAL: Selected IOVA mode 'PA' 00:05:27.531 EAL: Probing VFIO support... 00:05:27.531 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:05:27.531 EAL: VFIO modules not loaded, skipping VFIO support... 00:05:27.531 EAL: Ask a virtual area of 0x2e000 bytes 00:05:27.531 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:05:27.531 EAL: Setting up physically contiguous memory... 
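
EAL settles on IOVA mode PA here because its VFIO probe fails: neither /sys/module/vfio nor /sys/module/vfio_pci exists (the "error 2 (No such file or directory)" messages above and below), so the devices stay on uio_pci_generic. The probe amounts to a module-directory check, roughly:

    # mirrors EAL's "Module /sys/module/vfio not found! error 2" probe
    for m in vfio vfio_pci; do
        [ -d "/sys/module/$m" ] && echo "$m loaded" || echo "$m missing"
    done
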
00:05:27.531 EAL: Setting maximum number of open files to 524288 00:05:27.531 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:05:27.531 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:05:27.531 EAL: Ask a virtual area of 0x61000 bytes 00:05:27.531 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:05:27.531 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:27.531 EAL: Ask a virtual area of 0x400000000 bytes 00:05:27.531 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:05:27.531 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:05:27.531 EAL: Ask a virtual area of 0x61000 bytes 00:05:27.531 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:05:27.531 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:27.531 EAL: Ask a virtual area of 0x400000000 bytes 00:05:27.531 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:05:27.531 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:05:27.531 EAL: Ask a virtual area of 0x61000 bytes 00:05:27.531 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:05:27.531 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:27.531 EAL: Ask a virtual area of 0x400000000 bytes 00:05:27.531 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:05:27.531 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:05:27.531 EAL: Ask a virtual area of 0x61000 bytes 00:05:27.531 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:05:27.531 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:27.531 EAL: Ask a virtual area of 0x400000000 bytes 00:05:27.531 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:05:27.531 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:05:27.531 EAL: Hugepages will be freed exactly as allocated. 00:05:27.531 EAL: No shared files mode enabled, IPC is disabled 00:05:27.531 EAL: No shared files mode enabled, IPC is disabled 00:05:27.789 EAL: TSC frequency is ~2600000 KHz 00:05:27.789 EAL: Main lcore 0 is ready (tid=7f9969c25a40;cpuset=[0]) 00:05:27.789 EAL: Trying to obtain current memory policy. 00:05:27.789 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:27.789 EAL: Restoring previous memory policy: 0 00:05:27.789 EAL: request: mp_malloc_sync 00:05:27.789 EAL: No shared files mode enabled, IPC is disabled 00:05:27.789 EAL: Heap on socket 0 was expanded by 2MB 00:05:27.789 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:05:27.789 EAL: No shared files mode enabled, IPC is disabled 00:05:27.789 EAL: No PCI address specified using 'addr=' in: bus=pci 00:05:27.789 EAL: Mem event callback 'spdk:(nil)' registered 00:05:27.789 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:05:27.789 00:05:27.789 00:05:27.789 CUnit - A unit testing framework for C - Version 2.1-3 00:05:27.789 http://cunit.sourceforge.net/ 00:05:27.789 00:05:27.789 00:05:27.789 Suite: components_suite 00:05:28.048 Test: vtophys_malloc_test ...passed 00:05:28.048 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 
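
The memseg geometry above is internally consistent: each of the 4 lists tracks n_segs:8192 segments of 2 MiB hugepages, which is exactly the 0x400000000 bytes (16 GiB) of virtual address space reserved per list. A quick arithmetic check:

    echo $(( 8192 * 2 * 1024 * 1024 ))              # 17179869184 bytes per memseg list
    printf '0x%x\n' $(( 8192 * 2 * 1024 * 1024 ))   # 0x400000000, matching the VA reservations above
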
00:05:28.048 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:28.048 EAL: Restoring previous memory policy: 4 00:05:28.048 EAL: Calling mem event callback 'spdk:(nil)' 00:05:28.048 EAL: request: mp_malloc_sync 00:05:28.048 EAL: No shared files mode enabled, IPC is disabled 00:05:28.048 EAL: Heap on socket 0 was expanded by 4MB 00:05:28.048 EAL: Calling mem event callback 'spdk:(nil)' 00:05:28.048 EAL: request: mp_malloc_sync 00:05:28.048 EAL: No shared files mode enabled, IPC is disabled 00:05:28.048 EAL: Heap on socket 0 was shrunk by 4MB 00:05:28.048 EAL: Trying to obtain current memory policy. 00:05:28.048 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:28.048 EAL: Restoring previous memory policy: 4 00:05:28.048 EAL: Calling mem event callback 'spdk:(nil)' 00:05:28.048 EAL: request: mp_malloc_sync 00:05:28.048 EAL: No shared files mode enabled, IPC is disabled 00:05:28.048 EAL: Heap on socket 0 was expanded by 6MB 00:05:28.048 EAL: Calling mem event callback 'spdk:(nil)' 00:05:28.048 EAL: request: mp_malloc_sync 00:05:28.048 EAL: No shared files mode enabled, IPC is disabled 00:05:28.048 EAL: Heap on socket 0 was shrunk by 6MB 00:05:28.048 EAL: Trying to obtain current memory policy. 00:05:28.048 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:28.048 EAL: Restoring previous memory policy: 4 00:05:28.048 EAL: Calling mem event callback 'spdk:(nil)' 00:05:28.048 EAL: request: mp_malloc_sync 00:05:28.048 EAL: No shared files mode enabled, IPC is disabled 00:05:28.048 EAL: Heap on socket 0 was expanded by 10MB 00:05:28.048 EAL: Calling mem event callback 'spdk:(nil)' 00:05:28.048 EAL: request: mp_malloc_sync 00:05:28.048 EAL: No shared files mode enabled, IPC is disabled 00:05:28.048 EAL: Heap on socket 0 was shrunk by 10MB 00:05:28.048 EAL: Trying to obtain current memory policy. 00:05:28.048 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:28.048 EAL: Restoring previous memory policy: 4 00:05:28.048 EAL: Calling mem event callback 'spdk:(nil)' 00:05:28.048 EAL: request: mp_malloc_sync 00:05:28.048 EAL: No shared files mode enabled, IPC is disabled 00:05:28.048 EAL: Heap on socket 0 was expanded by 18MB 00:05:28.048 EAL: Calling mem event callback 'spdk:(nil)' 00:05:28.048 EAL: request: mp_malloc_sync 00:05:28.048 EAL: No shared files mode enabled, IPC is disabled 00:05:28.048 EAL: Heap on socket 0 was shrunk by 18MB 00:05:28.048 EAL: Trying to obtain current memory policy. 00:05:28.048 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:28.048 EAL: Restoring previous memory policy: 4 00:05:28.048 EAL: Calling mem event callback 'spdk:(nil)' 00:05:28.048 EAL: request: mp_malloc_sync 00:05:28.048 EAL: No shared files mode enabled, IPC is disabled 00:05:28.048 EAL: Heap on socket 0 was expanded by 34MB 00:05:28.048 EAL: Calling mem event callback 'spdk:(nil)' 00:05:28.048 EAL: request: mp_malloc_sync 00:05:28.048 EAL: No shared files mode enabled, IPC is disabled 00:05:28.048 EAL: Heap on socket 0 was shrunk by 34MB 00:05:28.048 EAL: Trying to obtain current memory policy. 
00:05:28.048 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:28.048 EAL: Restoring previous memory policy: 4 00:05:28.048 EAL: Calling mem event callback 'spdk:(nil)' 00:05:28.048 EAL: request: mp_malloc_sync 00:05:28.048 EAL: No shared files mode enabled, IPC is disabled 00:05:28.048 EAL: Heap on socket 0 was expanded by 66MB 00:05:28.048 EAL: Calling mem event callback 'spdk:(nil)' 00:05:28.048 EAL: request: mp_malloc_sync 00:05:28.048 EAL: No shared files mode enabled, IPC is disabled 00:05:28.048 EAL: Heap on socket 0 was shrunk by 66MB 00:05:28.048 EAL: Trying to obtain current memory policy. 00:05:28.048 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:28.048 EAL: Restoring previous memory policy: 4 00:05:28.048 EAL: Calling mem event callback 'spdk:(nil)' 00:05:28.048 EAL: request: mp_malloc_sync 00:05:28.048 EAL: No shared files mode enabled, IPC is disabled 00:05:28.048 EAL: Heap on socket 0 was expanded by 130MB 00:05:28.048 EAL: Calling mem event callback 'spdk:(nil)' 00:05:28.048 EAL: request: mp_malloc_sync 00:05:28.048 EAL: No shared files mode enabled, IPC is disabled 00:05:28.048 EAL: Heap on socket 0 was shrunk by 130MB 00:05:28.048 EAL: Trying to obtain current memory policy. 00:05:28.048 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:28.308 EAL: Restoring previous memory policy: 4 00:05:28.308 EAL: Calling mem event callback 'spdk:(nil)' 00:05:28.308 EAL: request: mp_malloc_sync 00:05:28.308 EAL: No shared files mode enabled, IPC is disabled 00:05:28.308 EAL: Heap on socket 0 was expanded by 258MB 00:05:28.308 EAL: Calling mem event callback 'spdk:(nil)' 00:05:28.308 EAL: request: mp_malloc_sync 00:05:28.308 EAL: No shared files mode enabled, IPC is disabled 00:05:28.308 EAL: Heap on socket 0 was shrunk by 258MB 00:05:28.308 EAL: Trying to obtain current memory policy. 00:05:28.308 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:28.308 EAL: Restoring previous memory policy: 4 00:05:28.308 EAL: Calling mem event callback 'spdk:(nil)' 00:05:28.308 EAL: request: mp_malloc_sync 00:05:28.308 EAL: No shared files mode enabled, IPC is disabled 00:05:28.308 EAL: Heap on socket 0 was expanded by 514MB 00:05:28.568 EAL: Calling mem event callback 'spdk:(nil)' 00:05:28.568 EAL: request: mp_malloc_sync 00:05:28.568 EAL: No shared files mode enabled, IPC is disabled 00:05:28.568 EAL: Heap on socket 0 was shrunk by 514MB 00:05:28.568 EAL: Trying to obtain current memory policy. 
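
The expand/shrink pairs logged by vtophys_malloc_test are a size sweep, not noise: 4, 6, 10, 18, 34, 66, 130, 258, 514 and (in the final cycle below) 1026 MB all fit 2^k + 2 MB, which suggests allocations doubling from 2 MB upward on top of the initial 2 MB heap. The series is easy to reproduce:

    # prints 4MB 6MB 10MB 18MB 34MB 66MB 130MB 258MB 514MB 1026MB
    for k in $(seq 1 10); do printf '%dMB ' $(( (1 << k) + 2 )); done; echo
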
00:05:28.568 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:28.829 EAL: Restoring previous memory policy: 4 00:05:28.829 EAL: Calling mem event callback 'spdk:(nil)' 00:05:28.829 EAL: request: mp_malloc_sync 00:05:28.829 EAL: No shared files mode enabled, IPC is disabled 00:05:28.829 EAL: Heap on socket 0 was expanded by 1026MB 00:05:29.088 EAL: Calling mem event callback 'spdk:(nil)' 00:05:29.348 passed 00:05:29.348 00:05:29.348 Run Summary: Type Total Ran Passed Failed Inactive 00:05:29.348 suites 1 1 n/a 0 0 00:05:29.348 tests 2 2 2 0 0 00:05:29.348 asserts 5316 5316 5316 0 n/a 00:05:29.348 00:05:29.348 Elapsed time = 1.474 seconds 00:05:29.348 EAL: request: mp_malloc_sync 00:05:29.348 EAL: No shared files mode enabled, IPC is disabled 00:05:29.348 EAL: Heap on socket 0 was shrunk by 1026MB 00:05:29.348 EAL: Calling mem event callback 'spdk:(nil)' 00:05:29.348 EAL: request: mp_malloc_sync 00:05:29.348 EAL: No shared files mode enabled, IPC is disabled 00:05:29.348 EAL: Heap on socket 0 was shrunk by 2MB 00:05:29.348 EAL: No shared files mode enabled, IPC is disabled 00:05:29.348 EAL: No shared files mode enabled, IPC is disabled 00:05:29.348 EAL: No shared files mode enabled, IPC is disabled 00:05:29.348 ************************************ 00:05:29.348 END TEST env_vtophys 00:05:29.348 ************************************ 00:05:29.348 00:05:29.348 real 0m1.692s 00:05:29.348 user 0m0.739s 00:05:29.348 sys 0m0.807s 00:05:29.348 04:53:07 env.env_vtophys -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:29.348 04:53:07 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:05:29.348 04:53:07 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:05:29.348 04:53:07 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:29.349 04:53:07 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:29.349 04:53:07 env -- common/autotest_common.sh@10 -- # set +x 00:05:29.349 ************************************ 00:05:29.349 START TEST env_pci 00:05:29.349 ************************************ 00:05:29.349 04:53:07 env.env_pci -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:05:29.349 00:05:29.349 00:05:29.349 CUnit - A unit testing framework for C - Version 2.1-3 00:05:29.349 http://cunit.sourceforge.net/ 00:05:29.349 00:05:29.349 00:05:29.349 Suite: pci 00:05:29.349 Test: pci_hook ...[2024-12-06 04:53:07.452345] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1049:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 69355 has claimed it 00:05:29.349 passed 00:05:29.349 00:05:29.349 Run Summary: Type Total Ran Passed Failed Inactive 00:05:29.349 suites 1 1 n/a 0 0 00:05:29.349 tests 1 1 1 0 0 00:05:29.349 asserts 25 25 25 0 n/a 00:05:29.349 00:05:29.349 Elapsed time = 0.003 seconds 00:05:29.349 EAL: Cannot find device (10000:00:01.0) 00:05:29.349 EAL: Failed to attach device on primary process 00:05:29.349 00:05:29.349 real 0m0.053s 00:05:29.349 user 0m0.020s 00:05:29.349 sys 0m0.033s 00:05:29.349 04:53:07 env.env_pci -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:29.349 04:53:07 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:05:29.349 ************************************ 00:05:29.349 END TEST env_pci 00:05:29.349 ************************************ 00:05:29.349 04:53:07 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:05:29.349 04:53:07 env -- env/env.sh@15 -- # uname 00:05:29.349 04:53:07 env 
-- env/env.sh@15 -- # '[' Linux = Linux ']' 00:05:29.349 04:53:07 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:05:29.349 04:53:07 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:29.349 04:53:07 env -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:05:29.349 04:53:07 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:29.349 04:53:07 env -- common/autotest_common.sh@10 -- # set +x 00:05:29.349 ************************************ 00:05:29.349 START TEST env_dpdk_post_init 00:05:29.349 ************************************ 00:05:29.349 04:53:07 env.env_dpdk_post_init -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:29.606 EAL: Detected CPU lcores: 10 00:05:29.606 EAL: Detected NUMA nodes: 1 00:05:29.606 EAL: Detected shared linkage of DPDK 00:05:29.606 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:29.606 EAL: Selected IOVA mode 'PA' 00:05:29.606 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:29.606 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:05:29.606 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:05:29.606 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:12.0 (socket -1) 00:05:29.606 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:13.0 (socket -1) 00:05:29.606 Starting DPDK initialization... 00:05:29.606 Starting SPDK post initialization... 00:05:29.606 SPDK NVMe probe 00:05:29.606 Attaching to 0000:00:10.0 00:05:29.606 Attaching to 0000:00:11.0 00:05:29.606 Attaching to 0000:00:12.0 00:05:29.606 Attaching to 0000:00:13.0 00:05:29.606 Attached to 0000:00:10.0 00:05:29.606 Attached to 0000:00:11.0 00:05:29.606 Attached to 0000:00:13.0 00:05:29.606 Attached to 0000:00:12.0 00:05:29.606 Cleaning up... 
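
env_dpdk_post_init attached all four controllers through the spdk_nvme PCI driver; note that 0000:00:13.0 reports attached before 0000:00:12.0, since probe completion order need not match enumeration order. To confirm from outside which kernel driver owns each BDF at any point, a sysfs loop along these lines works:

    for bdf in 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0; do
        # prints e.g. "0000:00:10.0 -> uio_pci_generic", mirroring setup.sh's earlier output
        printf '%s -> %s\n' "$bdf" "$(basename "$(readlink "/sys/bus/pci/devices/$bdf/driver")")"
    done
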
00:05:29.606 00:05:29.606 real 0m0.199s 00:05:29.606 user 0m0.049s 00:05:29.606 sys 0m0.054s 00:05:29.606 04:53:07 env.env_dpdk_post_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:29.606 04:53:07 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:05:29.606 ************************************ 00:05:29.606 END TEST env_dpdk_post_init 00:05:29.606 ************************************ 00:05:29.606 04:53:07 env -- env/env.sh@26 -- # uname 00:05:29.606 04:53:07 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:05:29.606 04:53:07 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:05:29.606 04:53:07 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:29.606 04:53:07 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:29.606 04:53:07 env -- common/autotest_common.sh@10 -- # set +x 00:05:29.606 ************************************ 00:05:29.606 START TEST env_mem_callbacks 00:05:29.606 ************************************ 00:05:29.606 04:53:07 env.env_mem_callbacks -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:05:29.606 EAL: Detected CPU lcores: 10 00:05:29.606 EAL: Detected NUMA nodes: 1 00:05:29.606 EAL: Detected shared linkage of DPDK 00:05:29.606 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:29.606 EAL: Selected IOVA mode 'PA' 00:05:29.862 00:05:29.862 00:05:29.862 CUnit - A unit testing framework for C - Version 2.1-3 00:05:29.862 http://cunit.sourceforge.net/ 00:05:29.862 00:05:29.862 00:05:29.862 Suite: memory 00:05:29.862 Test: test ... 00:05:29.862 register 0x200000200000 2097152 00:05:29.862 malloc 3145728 00:05:29.862 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:29.862 register 0x200000400000 4194304 00:05:29.862 buf 0x200000500000 len 3145728 PASSED 00:05:29.862 malloc 64 00:05:29.862 buf 0x2000004fff40 len 64 PASSED 00:05:29.862 malloc 4194304 00:05:29.862 register 0x200000800000 6291456 00:05:29.863 buf 0x200000a00000 len 4194304 PASSED 00:05:29.863 free 0x200000500000 3145728 00:05:29.863 free 0x2000004fff40 64 00:05:29.863 unregister 0x200000400000 4194304 PASSED 00:05:29.863 free 0x200000a00000 4194304 00:05:29.863 unregister 0x200000800000 6291456 PASSED 00:05:29.863 malloc 8388608 00:05:29.863 register 0x200000400000 10485760 00:05:29.863 buf 0x200000600000 len 8388608 PASSED 00:05:29.863 free 0x200000600000 8388608 00:05:29.863 unregister 0x200000400000 10485760 PASSED 00:05:29.863 passed 00:05:29.863 00:05:29.863 Run Summary: Type Total Ran Passed Failed Inactive 00:05:29.863 suites 1 1 n/a 0 0 00:05:29.863 tests 1 1 1 0 0 00:05:29.863 asserts 15 15 15 0 n/a 00:05:29.863 00:05:29.863 Elapsed time = 0.009 seconds 00:05:29.863 00:05:29.863 real 0m0.149s 00:05:29.863 user 0m0.019s 00:05:29.863 sys 0m0.029s 00:05:29.863 04:53:07 env.env_mem_callbacks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:29.863 04:53:07 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:05:29.863 ************************************ 00:05:29.863 END TEST env_mem_callbacks 00:05:29.863 ************************************ 00:05:29.863 00:05:29.863 real 0m2.731s 00:05:29.863 user 0m1.214s 00:05:29.863 sys 0m1.133s 00:05:29.863 04:53:07 env -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:29.863 04:53:07 env -- common/autotest_common.sh@10 -- # set +x 00:05:29.863 ************************************ 00:05:29.863 END TEST env 00:05:29.863 
************************************ 00:05:29.863 04:53:08 -- spdk/autotest.sh@156 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:05:29.863 04:53:08 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:29.863 04:53:08 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:29.863 04:53:08 -- common/autotest_common.sh@10 -- # set +x 00:05:29.863 ************************************ 00:05:29.863 START TEST rpc 00:05:29.863 ************************************ 00:05:29.863 04:53:08 rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:05:29.863 * Looking for test storage... 00:05:29.863 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:05:29.863 04:53:08 rpc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:29.863 04:53:08 rpc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:29.863 04:53:08 rpc -- common/autotest_common.sh@1681 -- # lcov --version 00:05:30.124 04:53:08 rpc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:30.124 04:53:08 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:30.124 04:53:08 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:30.124 04:53:08 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:30.124 04:53:08 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:30.124 04:53:08 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:30.124 04:53:08 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:30.124 04:53:08 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:30.124 04:53:08 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:30.124 04:53:08 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:30.124 04:53:08 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:30.124 04:53:08 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:30.124 04:53:08 rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:30.124 04:53:08 rpc -- scripts/common.sh@345 -- # : 1 00:05:30.124 04:53:08 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:30.124 04:53:08 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:30.124 04:53:08 rpc -- scripts/common.sh@365 -- # decimal 1 00:05:30.124 04:53:08 rpc -- scripts/common.sh@353 -- # local d=1 00:05:30.124 04:53:08 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:30.124 04:53:08 rpc -- scripts/common.sh@355 -- # echo 1 00:05:30.124 04:53:08 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:30.124 04:53:08 rpc -- scripts/common.sh@366 -- # decimal 2 00:05:30.124 04:53:08 rpc -- scripts/common.sh@353 -- # local d=2 00:05:30.124 04:53:08 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:30.124 04:53:08 rpc -- scripts/common.sh@355 -- # echo 2 00:05:30.124 04:53:08 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:30.124 04:53:08 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:30.124 04:53:08 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:30.124 04:53:08 rpc -- scripts/common.sh@368 -- # return 0 00:05:30.124 04:53:08 rpc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:30.124 04:53:08 rpc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:30.124 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:30.124 --rc genhtml_branch_coverage=1 00:05:30.124 --rc genhtml_function_coverage=1 00:05:30.124 --rc genhtml_legend=1 00:05:30.124 --rc geninfo_all_blocks=1 00:05:30.124 --rc geninfo_unexecuted_blocks=1 00:05:30.124 00:05:30.124 ' 00:05:30.124 04:53:08 rpc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:30.124 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:30.124 --rc genhtml_branch_coverage=1 00:05:30.124 --rc genhtml_function_coverage=1 00:05:30.124 --rc genhtml_legend=1 00:05:30.124 --rc geninfo_all_blocks=1 00:05:30.124 --rc geninfo_unexecuted_blocks=1 00:05:30.124 00:05:30.124 ' 00:05:30.124 04:53:08 rpc -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:30.124 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:30.124 --rc genhtml_branch_coverage=1 00:05:30.124 --rc genhtml_function_coverage=1 00:05:30.124 --rc genhtml_legend=1 00:05:30.124 --rc geninfo_all_blocks=1 00:05:30.124 --rc geninfo_unexecuted_blocks=1 00:05:30.124 00:05:30.124 ' 00:05:30.124 04:53:08 rpc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:30.124 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:30.124 --rc genhtml_branch_coverage=1 00:05:30.124 --rc genhtml_function_coverage=1 00:05:30.124 --rc genhtml_legend=1 00:05:30.124 --rc geninfo_all_blocks=1 00:05:30.124 --rc geninfo_unexecuted_blocks=1 00:05:30.124 00:05:30.124 ' 00:05:30.124 04:53:08 rpc -- rpc/rpc.sh@65 -- # spdk_pid=69482 00:05:30.124 04:53:08 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:30.124 04:53:08 rpc -- rpc/rpc.sh@67 -- # waitforlisten 69482 00:05:30.124 04:53:08 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:05:30.124 04:53:08 rpc -- common/autotest_common.sh@831 -- # '[' -z 69482 ']' 00:05:30.124 04:53:08 rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:30.124 04:53:08 rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:30.124 04:53:08 rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:30.124 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
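
rpc.sh has just launched a fresh spdk_tgt (pid 69482, with the bdev tracepoint group enabled via -e bdev) and now blocks in waitforlisten until the RPC socket answers. A bare-bones equivalent of that wait, assuming the default socket path shown in the message above:

    # poll for the UNIX socket, then confirm the target actually services RPCs
    until [ -S /var/tmp/spdk.sock ]; do sleep 0.1; done
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py rpc_get_methods > /dev/null
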
00:05:30.124 04:53:08 rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:30.124 04:53:08 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:30.124 [2024-12-06 04:53:08.217845] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:05:30.124 [2024-12-06 04:53:08.217961] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69482 ] 00:05:30.387 [2024-12-06 04:53:08.354245] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:30.387 [2024-12-06 04:53:08.388194] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:05:30.387 [2024-12-06 04:53:08.388243] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 69482' to capture a snapshot of events at runtime. 00:05:30.387 [2024-12-06 04:53:08.388255] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:30.387 [2024-12-06 04:53:08.388268] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:30.387 [2024-12-06 04:53:08.388286] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid69482 for offline analysis/debug. 00:05:30.387 [2024-12-06 04:53:08.388320] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:30.954 04:53:09 rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:30.954 04:53:09 rpc -- common/autotest_common.sh@864 -- # return 0 00:05:30.954 04:53:09 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:05:30.954 04:53:09 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:05:30.954 04:53:09 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:05:30.954 04:53:09 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:05:30.954 04:53:09 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:30.954 04:53:09 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:30.954 04:53:09 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:30.954 ************************************ 00:05:30.954 START TEST rpc_integrity 00:05:30.954 ************************************ 00:05:30.954 04:53:09 rpc.rpc_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:05:30.954 04:53:09 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:30.954 04:53:09 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:30.954 04:53:09 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:30.954 04:53:09 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:30.954 04:53:09 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:30.954 04:53:09 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:30.954 04:53:09 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:30.954 04:53:09 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:30.954 04:53:09 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:30.954 
04:53:09 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:30.954 04:53:09 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:30.954 04:53:09 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:05:30.954 04:53:09 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:30.954 04:53:09 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:30.954 04:53:09 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:30.954 04:53:09 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:30.954 04:53:09 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:30.954 { 00:05:30.954 "name": "Malloc0", 00:05:30.954 "aliases": [ 00:05:30.954 "67be2a10-1f8b-4baa-9d3a-f37fcc0b3c9c" 00:05:30.954 ], 00:05:30.954 "product_name": "Malloc disk", 00:05:30.954 "block_size": 512, 00:05:30.954 "num_blocks": 16384, 00:05:30.954 "uuid": "67be2a10-1f8b-4baa-9d3a-f37fcc0b3c9c", 00:05:30.954 "assigned_rate_limits": { 00:05:30.954 "rw_ios_per_sec": 0, 00:05:30.954 "rw_mbytes_per_sec": 0, 00:05:30.954 "r_mbytes_per_sec": 0, 00:05:30.954 "w_mbytes_per_sec": 0 00:05:30.954 }, 00:05:30.954 "claimed": false, 00:05:30.954 "zoned": false, 00:05:30.954 "supported_io_types": { 00:05:30.954 "read": true, 00:05:30.954 "write": true, 00:05:30.954 "unmap": true, 00:05:30.954 "flush": true, 00:05:30.954 "reset": true, 00:05:30.954 "nvme_admin": false, 00:05:30.954 "nvme_io": false, 00:05:30.954 "nvme_io_md": false, 00:05:30.954 "write_zeroes": true, 00:05:30.954 "zcopy": true, 00:05:30.954 "get_zone_info": false, 00:05:30.954 "zone_management": false, 00:05:30.954 "zone_append": false, 00:05:30.954 "compare": false, 00:05:30.954 "compare_and_write": false, 00:05:30.954 "abort": true, 00:05:30.954 "seek_hole": false, 00:05:30.954 "seek_data": false, 00:05:30.954 "copy": true, 00:05:30.954 "nvme_iov_md": false 00:05:30.954 }, 00:05:30.954 "memory_domains": [ 00:05:30.954 { 00:05:30.954 "dma_device_id": "system", 00:05:30.954 "dma_device_type": 1 00:05:30.954 }, 00:05:30.954 { 00:05:30.954 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:30.954 "dma_device_type": 2 00:05:30.954 } 00:05:30.954 ], 00:05:30.954 "driver_specific": {} 00:05:30.954 } 00:05:30.954 ]' 00:05:30.954 04:53:09 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:30.954 04:53:09 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:30.954 04:53:09 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:05:30.954 04:53:09 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:30.954 04:53:09 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:30.954 [2024-12-06 04:53:09.162740] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:05:30.954 [2024-12-06 04:53:09.162799] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:30.954 [2024-12-06 04:53:09.162825] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:05:30.955 [2024-12-06 04:53:09.162834] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:30.955 [2024-12-06 04:53:09.165045] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:30.955 [2024-12-06 04:53:09.165081] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:30.955 Passthru0 00:05:30.955 04:53:09 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
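
The rpc_integrity sequence above can be replayed by hand against the same target: create the malloc bdev, layer a passthru bdev on top of it, then check that bdev_get_bdevs reports both (the jq length the test compares against 2, as the trace below confirms). Roughly:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc bdev_malloc_create 8 512                 # 8 MB, 512-byte blocks; the first malloc is named Malloc0
    $rpc bdev_passthru_create -b Malloc0 -p Passthru0
    $rpc bdev_get_bdevs | jq length               # 2: Malloc0 plus Passthru0
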
00:05:30.955 04:53:09 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:30.955 04:53:09 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:30.955 04:53:09 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:31.214 04:53:09 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:31.214 04:53:09 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:31.214 { 00:05:31.214 "name": "Malloc0", 00:05:31.214 "aliases": [ 00:05:31.214 "67be2a10-1f8b-4baa-9d3a-f37fcc0b3c9c" 00:05:31.214 ], 00:05:31.214 "product_name": "Malloc disk", 00:05:31.214 "block_size": 512, 00:05:31.214 "num_blocks": 16384, 00:05:31.214 "uuid": "67be2a10-1f8b-4baa-9d3a-f37fcc0b3c9c", 00:05:31.214 "assigned_rate_limits": { 00:05:31.214 "rw_ios_per_sec": 0, 00:05:31.214 "rw_mbytes_per_sec": 0, 00:05:31.214 "r_mbytes_per_sec": 0, 00:05:31.214 "w_mbytes_per_sec": 0 00:05:31.214 }, 00:05:31.214 "claimed": true, 00:05:31.214 "claim_type": "exclusive_write", 00:05:31.214 "zoned": false, 00:05:31.214 "supported_io_types": { 00:05:31.214 "read": true, 00:05:31.214 "write": true, 00:05:31.214 "unmap": true, 00:05:31.214 "flush": true, 00:05:31.214 "reset": true, 00:05:31.215 "nvme_admin": false, 00:05:31.215 "nvme_io": false, 00:05:31.215 "nvme_io_md": false, 00:05:31.215 "write_zeroes": true, 00:05:31.215 "zcopy": true, 00:05:31.215 "get_zone_info": false, 00:05:31.215 "zone_management": false, 00:05:31.215 "zone_append": false, 00:05:31.215 "compare": false, 00:05:31.215 "compare_and_write": false, 00:05:31.215 "abort": true, 00:05:31.215 "seek_hole": false, 00:05:31.215 "seek_data": false, 00:05:31.215 "copy": true, 00:05:31.215 "nvme_iov_md": false 00:05:31.215 }, 00:05:31.215 "memory_domains": [ 00:05:31.215 { 00:05:31.215 "dma_device_id": "system", 00:05:31.215 "dma_device_type": 1 00:05:31.215 }, 00:05:31.215 { 00:05:31.215 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:31.215 "dma_device_type": 2 00:05:31.215 } 00:05:31.215 ], 00:05:31.215 "driver_specific": {} 00:05:31.215 }, 00:05:31.215 { 00:05:31.215 "name": "Passthru0", 00:05:31.215 "aliases": [ 00:05:31.215 "cb32723f-a1b7-56cb-abab-eeaa6996aa56" 00:05:31.215 ], 00:05:31.215 "product_name": "passthru", 00:05:31.215 "block_size": 512, 00:05:31.215 "num_blocks": 16384, 00:05:31.215 "uuid": "cb32723f-a1b7-56cb-abab-eeaa6996aa56", 00:05:31.215 "assigned_rate_limits": { 00:05:31.215 "rw_ios_per_sec": 0, 00:05:31.215 "rw_mbytes_per_sec": 0, 00:05:31.215 "r_mbytes_per_sec": 0, 00:05:31.215 "w_mbytes_per_sec": 0 00:05:31.215 }, 00:05:31.215 "claimed": false, 00:05:31.215 "zoned": false, 00:05:31.215 "supported_io_types": { 00:05:31.215 "read": true, 00:05:31.215 "write": true, 00:05:31.215 "unmap": true, 00:05:31.215 "flush": true, 00:05:31.215 "reset": true, 00:05:31.215 "nvme_admin": false, 00:05:31.215 "nvme_io": false, 00:05:31.215 "nvme_io_md": false, 00:05:31.215 "write_zeroes": true, 00:05:31.215 "zcopy": true, 00:05:31.215 "get_zone_info": false, 00:05:31.215 "zone_management": false, 00:05:31.215 "zone_append": false, 00:05:31.215 "compare": false, 00:05:31.215 "compare_and_write": false, 00:05:31.215 "abort": true, 00:05:31.215 "seek_hole": false, 00:05:31.215 "seek_data": false, 00:05:31.215 "copy": true, 00:05:31.215 "nvme_iov_md": false 00:05:31.215 }, 00:05:31.215 "memory_domains": [ 00:05:31.215 { 00:05:31.215 "dma_device_id": "system", 00:05:31.215 "dma_device_type": 1 00:05:31.215 }, 00:05:31.215 { 00:05:31.215 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:31.215 "dma_device_type": 
2 00:05:31.215 } 00:05:31.215 ], 00:05:31.215 "driver_specific": { 00:05:31.215 "passthru": { 00:05:31.215 "name": "Passthru0", 00:05:31.215 "base_bdev_name": "Malloc0" 00:05:31.215 } 00:05:31.215 } 00:05:31.215 } 00:05:31.215 ]' 00:05:31.215 04:53:09 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:31.215 04:53:09 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:31.215 04:53:09 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:31.215 04:53:09 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:31.215 04:53:09 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:31.215 04:53:09 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:31.215 04:53:09 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:05:31.215 04:53:09 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:31.215 04:53:09 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:31.215 04:53:09 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:31.215 04:53:09 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:31.215 04:53:09 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:31.215 04:53:09 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:31.215 04:53:09 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:31.215 04:53:09 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:31.215 04:53:09 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:31.215 04:53:09 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:31.215 00:05:31.215 real 0m0.227s 00:05:31.215 user 0m0.128s 00:05:31.215 sys 0m0.031s 00:05:31.215 04:53:09 rpc.rpc_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:31.215 04:53:09 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:31.215 ************************************ 00:05:31.215 END TEST rpc_integrity 00:05:31.215 ************************************ 00:05:31.215 04:53:09 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:05:31.215 04:53:09 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:31.215 04:53:09 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:31.215 04:53:09 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:31.215 ************************************ 00:05:31.215 START TEST rpc_plugins 00:05:31.215 ************************************ 00:05:31.215 04:53:09 rpc.rpc_plugins -- common/autotest_common.sh@1125 -- # rpc_plugins 00:05:31.215 04:53:09 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:05:31.215 04:53:09 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:31.215 04:53:09 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:31.215 04:53:09 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:31.215 04:53:09 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:05:31.215 04:53:09 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:05:31.215 04:53:09 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:31.215 04:53:09 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:31.215 04:53:09 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:31.215 04:53:09 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:05:31.215 { 00:05:31.215 "name": "Malloc1", 00:05:31.215 
"aliases": [ 00:05:31.215 "23145134-b42d-4424-9023-925da319b4ce" 00:05:31.215 ], 00:05:31.215 "product_name": "Malloc disk", 00:05:31.215 "block_size": 4096, 00:05:31.215 "num_blocks": 256, 00:05:31.215 "uuid": "23145134-b42d-4424-9023-925da319b4ce", 00:05:31.215 "assigned_rate_limits": { 00:05:31.215 "rw_ios_per_sec": 0, 00:05:31.215 "rw_mbytes_per_sec": 0, 00:05:31.215 "r_mbytes_per_sec": 0, 00:05:31.215 "w_mbytes_per_sec": 0 00:05:31.215 }, 00:05:31.215 "claimed": false, 00:05:31.215 "zoned": false, 00:05:31.215 "supported_io_types": { 00:05:31.215 "read": true, 00:05:31.215 "write": true, 00:05:31.215 "unmap": true, 00:05:31.215 "flush": true, 00:05:31.215 "reset": true, 00:05:31.215 "nvme_admin": false, 00:05:31.215 "nvme_io": false, 00:05:31.215 "nvme_io_md": false, 00:05:31.215 "write_zeroes": true, 00:05:31.215 "zcopy": true, 00:05:31.215 "get_zone_info": false, 00:05:31.215 "zone_management": false, 00:05:31.215 "zone_append": false, 00:05:31.215 "compare": false, 00:05:31.215 "compare_and_write": false, 00:05:31.215 "abort": true, 00:05:31.215 "seek_hole": false, 00:05:31.215 "seek_data": false, 00:05:31.215 "copy": true, 00:05:31.215 "nvme_iov_md": false 00:05:31.215 }, 00:05:31.215 "memory_domains": [ 00:05:31.215 { 00:05:31.215 "dma_device_id": "system", 00:05:31.215 "dma_device_type": 1 00:05:31.215 }, 00:05:31.215 { 00:05:31.215 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:31.215 "dma_device_type": 2 00:05:31.215 } 00:05:31.215 ], 00:05:31.215 "driver_specific": {} 00:05:31.215 } 00:05:31.215 ]' 00:05:31.215 04:53:09 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:05:31.215 04:53:09 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:05:31.215 04:53:09 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:05:31.215 04:53:09 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:31.215 04:53:09 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:31.215 04:53:09 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:31.215 04:53:09 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:05:31.215 04:53:09 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:31.215 04:53:09 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:31.215 04:53:09 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:31.215 04:53:09 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:05:31.215 04:53:09 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:05:31.215 04:53:09 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:05:31.215 00:05:31.215 real 0m0.113s 00:05:31.215 user 0m0.067s 00:05:31.215 sys 0m0.013s 00:05:31.215 04:53:09 rpc.rpc_plugins -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:31.215 04:53:09 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:31.215 ************************************ 00:05:31.215 END TEST rpc_plugins 00:05:31.215 ************************************ 00:05:31.473 04:53:09 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:05:31.473 04:53:09 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:31.473 04:53:09 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:31.473 04:53:09 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:31.473 ************************************ 00:05:31.473 START TEST rpc_trace_cmd_test 00:05:31.473 ************************************ 00:05:31.473 04:53:09 rpc.rpc_trace_cmd_test -- 
common/autotest_common.sh@1125 -- # rpc_trace_cmd_test 00:05:31.473 04:53:09 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:05:31.473 04:53:09 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:05:31.473 04:53:09 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:31.473 04:53:09 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:31.473 04:53:09 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:31.473 04:53:09 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:05:31.473 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid69482", 00:05:31.473 "tpoint_group_mask": "0x8", 00:05:31.473 "iscsi_conn": { 00:05:31.473 "mask": "0x2", 00:05:31.473 "tpoint_mask": "0x0" 00:05:31.473 }, 00:05:31.473 "scsi": { 00:05:31.473 "mask": "0x4", 00:05:31.473 "tpoint_mask": "0x0" 00:05:31.473 }, 00:05:31.473 "bdev": { 00:05:31.473 "mask": "0x8", 00:05:31.473 "tpoint_mask": "0xffffffffffffffff" 00:05:31.473 }, 00:05:31.473 "nvmf_rdma": { 00:05:31.473 "mask": "0x10", 00:05:31.473 "tpoint_mask": "0x0" 00:05:31.473 }, 00:05:31.473 "nvmf_tcp": { 00:05:31.473 "mask": "0x20", 00:05:31.473 "tpoint_mask": "0x0" 00:05:31.473 }, 00:05:31.473 "ftl": { 00:05:31.473 "mask": "0x40", 00:05:31.473 "tpoint_mask": "0x0" 00:05:31.473 }, 00:05:31.473 "blobfs": { 00:05:31.473 "mask": "0x80", 00:05:31.473 "tpoint_mask": "0x0" 00:05:31.473 }, 00:05:31.473 "dsa": { 00:05:31.473 "mask": "0x200", 00:05:31.473 "tpoint_mask": "0x0" 00:05:31.473 }, 00:05:31.473 "thread": { 00:05:31.473 "mask": "0x400", 00:05:31.473 "tpoint_mask": "0x0" 00:05:31.473 }, 00:05:31.473 "nvme_pcie": { 00:05:31.473 "mask": "0x800", 00:05:31.473 "tpoint_mask": "0x0" 00:05:31.473 }, 00:05:31.473 "iaa": { 00:05:31.473 "mask": "0x1000", 00:05:31.473 "tpoint_mask": "0x0" 00:05:31.473 }, 00:05:31.473 "nvme_tcp": { 00:05:31.473 "mask": "0x2000", 00:05:31.473 "tpoint_mask": "0x0" 00:05:31.473 }, 00:05:31.473 "bdev_nvme": { 00:05:31.474 "mask": "0x4000", 00:05:31.474 "tpoint_mask": "0x0" 00:05:31.474 }, 00:05:31.474 "sock": { 00:05:31.474 "mask": "0x8000", 00:05:31.474 "tpoint_mask": "0x0" 00:05:31.474 }, 00:05:31.474 "blob": { 00:05:31.474 "mask": "0x10000", 00:05:31.474 "tpoint_mask": "0x0" 00:05:31.474 }, 00:05:31.474 "bdev_raid": { 00:05:31.474 "mask": "0x20000", 00:05:31.474 "tpoint_mask": "0x0" 00:05:31.474 } 00:05:31.474 }' 00:05:31.474 04:53:09 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:05:31.474 04:53:09 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 18 -gt 2 ']' 00:05:31.474 04:53:09 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:05:31.474 04:53:09 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:05:31.474 04:53:09 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:05:31.474 04:53:09 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:05:31.474 04:53:09 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:05:31.474 04:53:09 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:05:31.474 04:53:09 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:05:31.474 04:53:09 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:05:31.474 00:05:31.474 real 0m0.174s 00:05:31.474 user 0m0.145s 00:05:31.474 sys 0m0.020s 00:05:31.474 04:53:09 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:31.474 04:53:09 rpc.rpc_trace_cmd_test -- 
common/autotest_common.sh@10 -- # set +x 00:05:31.474 ************************************ 00:05:31.474 END TEST rpc_trace_cmd_test 00:05:31.474 ************************************ 00:05:31.474 04:53:09 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:05:31.474 04:53:09 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:05:31.474 04:53:09 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:05:31.474 04:53:09 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:31.474 04:53:09 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:31.474 04:53:09 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:31.474 ************************************ 00:05:31.474 START TEST rpc_daemon_integrity 00:05:31.474 ************************************ 00:05:31.474 04:53:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:05:31.474 04:53:09 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:31.474 04:53:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:31.474 04:53:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:31.474 04:53:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:31.474 04:53:09 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:31.474 04:53:09 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:31.732 04:53:09 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:31.732 04:53:09 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:31.732 04:53:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:31.732 04:53:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:31.732 04:53:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:31.732 04:53:09 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:05:31.732 04:53:09 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:31.732 04:53:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:31.732 04:53:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:31.732 04:53:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:31.732 04:53:09 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:31.732 { 00:05:31.732 "name": "Malloc2", 00:05:31.732 "aliases": [ 00:05:31.732 "fc407374-ba4d-49cc-ba72-023b2f34d2b7" 00:05:31.732 ], 00:05:31.732 "product_name": "Malloc disk", 00:05:31.732 "block_size": 512, 00:05:31.732 "num_blocks": 16384, 00:05:31.732 "uuid": "fc407374-ba4d-49cc-ba72-023b2f34d2b7", 00:05:31.732 "assigned_rate_limits": { 00:05:31.732 "rw_ios_per_sec": 0, 00:05:31.732 "rw_mbytes_per_sec": 0, 00:05:31.733 "r_mbytes_per_sec": 0, 00:05:31.733 "w_mbytes_per_sec": 0 00:05:31.733 }, 00:05:31.733 "claimed": false, 00:05:31.733 "zoned": false, 00:05:31.733 "supported_io_types": { 00:05:31.733 "read": true, 00:05:31.733 "write": true, 00:05:31.733 "unmap": true, 00:05:31.733 "flush": true, 00:05:31.733 "reset": true, 00:05:31.733 "nvme_admin": false, 00:05:31.733 "nvme_io": false, 00:05:31.733 "nvme_io_md": false, 00:05:31.733 "write_zeroes": true, 00:05:31.733 "zcopy": true, 00:05:31.733 "get_zone_info": false, 00:05:31.733 "zone_management": false, 00:05:31.733 "zone_append": false, 00:05:31.733 "compare": false, 00:05:31.733 "compare_and_write": false, 00:05:31.733 "abort": true, 
00:05:31.733 "seek_hole": false, 00:05:31.733 "seek_data": false, 00:05:31.733 "copy": true, 00:05:31.733 "nvme_iov_md": false 00:05:31.733 }, 00:05:31.733 "memory_domains": [ 00:05:31.733 { 00:05:31.733 "dma_device_id": "system", 00:05:31.733 "dma_device_type": 1 00:05:31.733 }, 00:05:31.733 { 00:05:31.733 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:31.733 "dma_device_type": 2 00:05:31.733 } 00:05:31.733 ], 00:05:31.733 "driver_specific": {} 00:05:31.733 } 00:05:31.733 ]' 00:05:31.733 04:53:09 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:31.733 04:53:09 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:31.733 04:53:09 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:05:31.733 04:53:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:31.733 04:53:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:31.733 [2024-12-06 04:53:09.795006] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:05:31.733 [2024-12-06 04:53:09.795055] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:31.733 [2024-12-06 04:53:09.795075] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:05:31.733 [2024-12-06 04:53:09.795085] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:31.733 [2024-12-06 04:53:09.797210] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:31.733 [2024-12-06 04:53:09.797244] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:31.733 Passthru0 00:05:31.733 04:53:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:31.733 04:53:09 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:31.733 04:53:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:31.733 04:53:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:31.733 04:53:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:31.733 04:53:09 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:31.733 { 00:05:31.733 "name": "Malloc2", 00:05:31.733 "aliases": [ 00:05:31.733 "fc407374-ba4d-49cc-ba72-023b2f34d2b7" 00:05:31.733 ], 00:05:31.733 "product_name": "Malloc disk", 00:05:31.733 "block_size": 512, 00:05:31.733 "num_blocks": 16384, 00:05:31.733 "uuid": "fc407374-ba4d-49cc-ba72-023b2f34d2b7", 00:05:31.733 "assigned_rate_limits": { 00:05:31.733 "rw_ios_per_sec": 0, 00:05:31.733 "rw_mbytes_per_sec": 0, 00:05:31.733 "r_mbytes_per_sec": 0, 00:05:31.733 "w_mbytes_per_sec": 0 00:05:31.733 }, 00:05:31.733 "claimed": true, 00:05:31.733 "claim_type": "exclusive_write", 00:05:31.733 "zoned": false, 00:05:31.733 "supported_io_types": { 00:05:31.733 "read": true, 00:05:31.733 "write": true, 00:05:31.733 "unmap": true, 00:05:31.733 "flush": true, 00:05:31.733 "reset": true, 00:05:31.733 "nvme_admin": false, 00:05:31.733 "nvme_io": false, 00:05:31.733 "nvme_io_md": false, 00:05:31.733 "write_zeroes": true, 00:05:31.733 "zcopy": true, 00:05:31.733 "get_zone_info": false, 00:05:31.733 "zone_management": false, 00:05:31.733 "zone_append": false, 00:05:31.733 "compare": false, 00:05:31.733 "compare_and_write": false, 00:05:31.733 "abort": true, 00:05:31.733 "seek_hole": false, 00:05:31.733 "seek_data": false, 00:05:31.733 "copy": true, 00:05:31.733 "nvme_iov_md": 
false 00:05:31.733 }, 00:05:31.733 "memory_domains": [ 00:05:31.733 { 00:05:31.733 "dma_device_id": "system", 00:05:31.733 "dma_device_type": 1 00:05:31.733 }, 00:05:31.733 { 00:05:31.733 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:31.733 "dma_device_type": 2 00:05:31.733 } 00:05:31.733 ], 00:05:31.733 "driver_specific": {} 00:05:31.733 }, 00:05:31.733 { 00:05:31.733 "name": "Passthru0", 00:05:31.733 "aliases": [ 00:05:31.733 "1f4d9a70-147d-5a7e-8864-c30a0cf2d005" 00:05:31.733 ], 00:05:31.733 "product_name": "passthru", 00:05:31.733 "block_size": 512, 00:05:31.733 "num_blocks": 16384, 00:05:31.733 "uuid": "1f4d9a70-147d-5a7e-8864-c30a0cf2d005", 00:05:31.733 "assigned_rate_limits": { 00:05:31.733 "rw_ios_per_sec": 0, 00:05:31.733 "rw_mbytes_per_sec": 0, 00:05:31.733 "r_mbytes_per_sec": 0, 00:05:31.733 "w_mbytes_per_sec": 0 00:05:31.733 }, 00:05:31.733 "claimed": false, 00:05:31.733 "zoned": false, 00:05:31.733 "supported_io_types": { 00:05:31.733 "read": true, 00:05:31.733 "write": true, 00:05:31.733 "unmap": true, 00:05:31.733 "flush": true, 00:05:31.733 "reset": true, 00:05:31.733 "nvme_admin": false, 00:05:31.733 "nvme_io": false, 00:05:31.733 "nvme_io_md": false, 00:05:31.733 "write_zeroes": true, 00:05:31.733 "zcopy": true, 00:05:31.733 "get_zone_info": false, 00:05:31.733 "zone_management": false, 00:05:31.733 "zone_append": false, 00:05:31.733 "compare": false, 00:05:31.733 "compare_and_write": false, 00:05:31.733 "abort": true, 00:05:31.733 "seek_hole": false, 00:05:31.733 "seek_data": false, 00:05:31.733 "copy": true, 00:05:31.733 "nvme_iov_md": false 00:05:31.733 }, 00:05:31.733 "memory_domains": [ 00:05:31.733 { 00:05:31.733 "dma_device_id": "system", 00:05:31.733 "dma_device_type": 1 00:05:31.733 }, 00:05:31.733 { 00:05:31.733 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:31.733 "dma_device_type": 2 00:05:31.733 } 00:05:31.733 ], 00:05:31.733 "driver_specific": { 00:05:31.733 "passthru": { 00:05:31.733 "name": "Passthru0", 00:05:31.733 "base_bdev_name": "Malloc2" 00:05:31.733 } 00:05:31.733 } 00:05:31.733 } 00:05:31.733 ]' 00:05:31.733 04:53:09 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:31.733 04:53:09 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:31.733 04:53:09 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:31.733 04:53:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:31.733 04:53:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:31.733 04:53:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:31.733 04:53:09 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:05:31.733 04:53:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:31.733 04:53:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:31.734 04:53:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:31.734 04:53:09 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:31.734 04:53:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:31.734 04:53:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:31.734 04:53:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:31.734 04:53:09 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:31.734 04:53:09 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 
-- # jq length 00:05:31.734 04:53:09 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:31.734 00:05:31.734 real 0m0.213s 00:05:31.734 user 0m0.129s 00:05:31.734 sys 0m0.024s 00:05:31.734 04:53:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:31.734 04:53:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:31.734 ************************************ 00:05:31.734 END TEST rpc_daemon_integrity 00:05:31.734 ************************************ 00:05:31.734 04:53:09 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:05:31.734 04:53:09 rpc -- rpc/rpc.sh@84 -- # killprocess 69482 00:05:31.734 04:53:09 rpc -- common/autotest_common.sh@950 -- # '[' -z 69482 ']' 00:05:31.734 04:53:09 rpc -- common/autotest_common.sh@954 -- # kill -0 69482 00:05:31.734 04:53:09 rpc -- common/autotest_common.sh@955 -- # uname 00:05:31.734 04:53:09 rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:31.734 04:53:09 rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 69482 00:05:31.734 04:53:09 rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:31.734 killing process with pid 69482 00:05:31.734 04:53:09 rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:31.734 04:53:09 rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 69482' 00:05:31.734 04:53:09 rpc -- common/autotest_common.sh@969 -- # kill 69482 00:05:31.734 04:53:09 rpc -- common/autotest_common.sh@974 -- # wait 69482 00:05:32.299 00:05:32.299 real 0m2.218s 00:05:32.299 user 0m2.690s 00:05:32.299 sys 0m0.534s 00:05:32.299 04:53:10 rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:32.299 04:53:10 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:32.299 ************************************ 00:05:32.299 END TEST rpc 00:05:32.299 ************************************ 00:05:32.299 04:53:10 -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:05:32.299 04:53:10 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:32.299 04:53:10 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:32.299 04:53:10 -- common/autotest_common.sh@10 -- # set +x 00:05:32.299 ************************************ 00:05:32.299 START TEST skip_rpc 00:05:32.299 ************************************ 00:05:32.299 04:53:10 skip_rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:05:32.299 * Looking for test storage... 
00:05:32.299 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:05:32.299 04:53:10 skip_rpc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:32.299 04:53:10 skip_rpc -- common/autotest_common.sh@1681 -- # lcov --version 00:05:32.299 04:53:10 skip_rpc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:32.299 04:53:10 skip_rpc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:32.299 04:53:10 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:32.299 04:53:10 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:32.299 04:53:10 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:32.299 04:53:10 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:32.299 04:53:10 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:32.299 04:53:10 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:32.299 04:53:10 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:32.299 04:53:10 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:32.299 04:53:10 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:32.299 04:53:10 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:32.299 04:53:10 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:32.299 04:53:10 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:32.299 04:53:10 skip_rpc -- scripts/common.sh@345 -- # : 1 00:05:32.299 04:53:10 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:32.299 04:53:10 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:32.299 04:53:10 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:05:32.299 04:53:10 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:05:32.299 04:53:10 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:32.299 04:53:10 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:05:32.299 04:53:10 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:32.299 04:53:10 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:05:32.299 04:53:10 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:05:32.299 04:53:10 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:32.299 04:53:10 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:05:32.299 04:53:10 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:32.299 04:53:10 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:32.299 04:53:10 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:32.299 04:53:10 skip_rpc -- scripts/common.sh@368 -- # return 0 00:05:32.299 04:53:10 skip_rpc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:32.299 04:53:10 skip_rpc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:32.299 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:32.299 --rc genhtml_branch_coverage=1 00:05:32.299 --rc genhtml_function_coverage=1 00:05:32.299 --rc genhtml_legend=1 00:05:32.299 --rc geninfo_all_blocks=1 00:05:32.299 --rc geninfo_unexecuted_blocks=1 00:05:32.299 00:05:32.299 ' 00:05:32.299 04:53:10 skip_rpc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:32.299 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:32.299 --rc genhtml_branch_coverage=1 00:05:32.299 --rc genhtml_function_coverage=1 00:05:32.299 --rc genhtml_legend=1 00:05:32.299 --rc geninfo_all_blocks=1 00:05:32.299 --rc geninfo_unexecuted_blocks=1 00:05:32.299 00:05:32.299 ' 00:05:32.299 04:53:10 skip_rpc -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 
00:05:32.299 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:32.299 --rc genhtml_branch_coverage=1 00:05:32.299 --rc genhtml_function_coverage=1 00:05:32.299 --rc genhtml_legend=1 00:05:32.299 --rc geninfo_all_blocks=1 00:05:32.299 --rc geninfo_unexecuted_blocks=1 00:05:32.299 00:05:32.299 ' 00:05:32.299 04:53:10 skip_rpc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:32.299 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:32.299 --rc genhtml_branch_coverage=1 00:05:32.299 --rc genhtml_function_coverage=1 00:05:32.299 --rc genhtml_legend=1 00:05:32.299 --rc geninfo_all_blocks=1 00:05:32.299 --rc geninfo_unexecuted_blocks=1 00:05:32.299 00:05:32.299 ' 00:05:32.299 04:53:10 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:32.299 04:53:10 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:32.299 04:53:10 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:05:32.299 04:53:10 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:32.299 04:53:10 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:32.299 04:53:10 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:32.299 ************************************ 00:05:32.299 START TEST skip_rpc 00:05:32.299 ************************************ 00:05:32.299 04:53:10 skip_rpc.skip_rpc -- common/autotest_common.sh@1125 -- # test_skip_rpc 00:05:32.299 04:53:10 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=69678 00:05:32.299 04:53:10 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:32.299 04:53:10 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:05:32.299 04:53:10 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:05:32.299 [2024-12-06 04:53:10.482922] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
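The launch just above starts spdk_tgt with --no-rpc-server, so the skip_rpc test can assert that any subsequent RPC fails. A minimal stand-alone sketch of the same check (the $SPDK_DIR checkout path and the use of scripts/rpc.py are assumptions, not taken from this run):

  # start the target with its RPC server disabled
  $SPDK_DIR/build/bin/spdk_tgt --no-rpc-server -m 0x1 &
  pid=$!
  sleep 5   # crude settle time; the real test also sleeps before probing
  # spdk_get_version is the simplest RPC; with no server it must fail
  if $SPDK_DIR/scripts/rpc.py spdk_get_version; then
      echo "unexpected: RPC succeeded without an RPC server" >&2
      kill "$pid"; exit 1
  fi
  kill "$pid"
  wait "$pid" 2>/dev/null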
00:05:32.299 [2024-12-06 04:53:10.483345] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69678 ] 00:05:32.557 [2024-12-06 04:53:10.619052] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:32.557 [2024-12-06 04:53:10.652746] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:37.821 04:53:15 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:05:37.821 04:53:15 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # local es=0 00:05:37.821 04:53:15 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd spdk_get_version 00:05:37.821 04:53:15 skip_rpc.skip_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:05:37.821 04:53:15 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:37.821 04:53:15 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:05:37.821 04:53:15 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:37.821 04:53:15 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # rpc_cmd spdk_get_version 00:05:37.821 04:53:15 skip_rpc.skip_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:37.821 04:53:15 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:37.821 04:53:15 skip_rpc.skip_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:05:37.821 04:53:15 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # es=1 00:05:37.821 04:53:15 skip_rpc.skip_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:37.821 04:53:15 skip_rpc.skip_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:37.821 04:53:15 skip_rpc.skip_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:37.821 04:53:15 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:05:37.821 04:53:15 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 69678 00:05:37.821 04:53:15 skip_rpc.skip_rpc -- common/autotest_common.sh@950 -- # '[' -z 69678 ']' 00:05:37.821 04:53:15 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # kill -0 69678 00:05:37.821 04:53:15 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # uname 00:05:37.821 04:53:15 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:37.821 04:53:15 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 69678 00:05:37.821 killing process with pid 69678 00:05:37.821 04:53:15 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:37.821 04:53:15 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:37.821 04:53:15 skip_rpc.skip_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 69678' 00:05:37.821 04:53:15 skip_rpc.skip_rpc -- common/autotest_common.sh@969 -- # kill 69678 00:05:37.821 04:53:15 skip_rpc.skip_rpc -- common/autotest_common.sh@974 -- # wait 69678 00:05:37.821 00:05:37.821 real 0m5.281s 00:05:37.821 user 0m4.948s 00:05:37.821 sys 0m0.235s 00:05:37.821 04:53:15 skip_rpc.skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:37.821 ************************************ 00:05:37.821 END TEST skip_rpc 00:05:37.821 ************************************ 00:05:37.821 04:53:15 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # 
set +x 00:05:37.821 04:53:15 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:05:37.821 04:53:15 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:37.821 04:53:15 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:37.821 04:53:15 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:37.821 ************************************ 00:05:37.821 START TEST skip_rpc_with_json 00:05:37.821 ************************************ 00:05:37.821 04:53:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_json 00:05:37.821 04:53:15 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:05:37.821 04:53:15 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=69760 00:05:37.821 04:53:15 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:37.821 04:53:15 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 69760 00:05:37.821 04:53:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@831 -- # '[' -z 69760 ']' 00:05:37.821 04:53:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:37.821 04:53:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:37.821 04:53:15 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:37.821 04:53:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:37.821 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:37.821 04:53:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:37.821 04:53:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:37.821 [2024-12-06 04:53:15.808126] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:05:37.821 [2024-12-06 04:53:15.808239] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69760 ] 00:05:37.821 [2024-12-06 04:53:15.945457] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:37.821 [2024-12-06 04:53:15.978813] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:38.756 04:53:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:38.756 04:53:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # return 0 00:05:38.756 04:53:16 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:05:38.756 04:53:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:38.756 04:53:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:38.756 [2024-12-06 04:53:16.648935] nvmf_rpc.c:2703:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:05:38.756 request: 00:05:38.756 { 00:05:38.756 "trtype": "tcp", 00:05:38.756 "method": "nvmf_get_transports", 00:05:38.756 "req_id": 1 00:05:38.756 } 00:05:38.756 Got JSON-RPC error response 00:05:38.756 response: 00:05:38.756 { 00:05:38.756 "code": -19, 00:05:38.756 "message": "No such device" 00:05:38.756 } 00:05:38.756 04:53:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:05:38.756 04:53:16 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:05:38.756 04:53:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:38.756 04:53:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:38.756 [2024-12-06 04:53:16.661059] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:38.756 04:53:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:38.756 04:53:16 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:05:38.756 04:53:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:38.756 04:53:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:38.756 04:53:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:38.756 04:53:16 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:38.756 { 00:05:38.756 "subsystems": [ 00:05:38.756 { 00:05:38.756 "subsystem": "fsdev", 00:05:38.756 "config": [ 00:05:38.756 { 00:05:38.756 "method": "fsdev_set_opts", 00:05:38.756 "params": { 00:05:38.756 "fsdev_io_pool_size": 65535, 00:05:38.756 "fsdev_io_cache_size": 256 00:05:38.756 } 00:05:38.756 } 00:05:38.756 ] 00:05:38.756 }, 00:05:38.756 { 00:05:38.756 "subsystem": "keyring", 00:05:38.756 "config": [] 00:05:38.756 }, 00:05:38.756 { 00:05:38.756 "subsystem": "iobuf", 00:05:38.756 "config": [ 00:05:38.756 { 00:05:38.756 "method": "iobuf_set_options", 00:05:38.756 "params": { 00:05:38.756 "small_pool_count": 8192, 00:05:38.756 "large_pool_count": 1024, 00:05:38.756 "small_bufsize": 8192, 00:05:38.756 "large_bufsize": 135168 00:05:38.756 } 00:05:38.756 } 00:05:38.756 ] 00:05:38.756 }, 00:05:38.756 { 00:05:38.756 "subsystem": "sock", 00:05:38.756 "config": [ 00:05:38.756 { 00:05:38.756 "method": 
"sock_set_default_impl", 00:05:38.756 "params": { 00:05:38.756 "impl_name": "posix" 00:05:38.756 } 00:05:38.756 }, 00:05:38.756 { 00:05:38.756 "method": "sock_impl_set_options", 00:05:38.756 "params": { 00:05:38.756 "impl_name": "ssl", 00:05:38.756 "recv_buf_size": 4096, 00:05:38.756 "send_buf_size": 4096, 00:05:38.756 "enable_recv_pipe": true, 00:05:38.756 "enable_quickack": false, 00:05:38.756 "enable_placement_id": 0, 00:05:38.756 "enable_zerocopy_send_server": true, 00:05:38.756 "enable_zerocopy_send_client": false, 00:05:38.756 "zerocopy_threshold": 0, 00:05:38.756 "tls_version": 0, 00:05:38.756 "enable_ktls": false 00:05:38.756 } 00:05:38.756 }, 00:05:38.756 { 00:05:38.756 "method": "sock_impl_set_options", 00:05:38.756 "params": { 00:05:38.756 "impl_name": "posix", 00:05:38.756 "recv_buf_size": 2097152, 00:05:38.756 "send_buf_size": 2097152, 00:05:38.756 "enable_recv_pipe": true, 00:05:38.756 "enable_quickack": false, 00:05:38.756 "enable_placement_id": 0, 00:05:38.756 "enable_zerocopy_send_server": true, 00:05:38.756 "enable_zerocopy_send_client": false, 00:05:38.756 "zerocopy_threshold": 0, 00:05:38.756 "tls_version": 0, 00:05:38.756 "enable_ktls": false 00:05:38.756 } 00:05:38.756 } 00:05:38.756 ] 00:05:38.756 }, 00:05:38.756 { 00:05:38.756 "subsystem": "vmd", 00:05:38.756 "config": [] 00:05:38.756 }, 00:05:38.756 { 00:05:38.756 "subsystem": "accel", 00:05:38.756 "config": [ 00:05:38.756 { 00:05:38.756 "method": "accel_set_options", 00:05:38.756 "params": { 00:05:38.756 "small_cache_size": 128, 00:05:38.756 "large_cache_size": 16, 00:05:38.756 "task_count": 2048, 00:05:38.756 "sequence_count": 2048, 00:05:38.756 "buf_count": 2048 00:05:38.756 } 00:05:38.756 } 00:05:38.756 ] 00:05:38.756 }, 00:05:38.756 { 00:05:38.756 "subsystem": "bdev", 00:05:38.757 "config": [ 00:05:38.757 { 00:05:38.757 "method": "bdev_set_options", 00:05:38.757 "params": { 00:05:38.757 "bdev_io_pool_size": 65535, 00:05:38.757 "bdev_io_cache_size": 256, 00:05:38.757 "bdev_auto_examine": true, 00:05:38.757 "iobuf_small_cache_size": 128, 00:05:38.757 "iobuf_large_cache_size": 16 00:05:38.757 } 00:05:38.757 }, 00:05:38.757 { 00:05:38.757 "method": "bdev_raid_set_options", 00:05:38.757 "params": { 00:05:38.757 "process_window_size_kb": 1024, 00:05:38.757 "process_max_bandwidth_mb_sec": 0 00:05:38.757 } 00:05:38.757 }, 00:05:38.757 { 00:05:38.757 "method": "bdev_iscsi_set_options", 00:05:38.757 "params": { 00:05:38.757 "timeout_sec": 30 00:05:38.757 } 00:05:38.757 }, 00:05:38.757 { 00:05:38.757 "method": "bdev_nvme_set_options", 00:05:38.757 "params": { 00:05:38.757 "action_on_timeout": "none", 00:05:38.757 "timeout_us": 0, 00:05:38.757 "timeout_admin_us": 0, 00:05:38.757 "keep_alive_timeout_ms": 10000, 00:05:38.757 "arbitration_burst": 0, 00:05:38.757 "low_priority_weight": 0, 00:05:38.757 "medium_priority_weight": 0, 00:05:38.757 "high_priority_weight": 0, 00:05:38.757 "nvme_adminq_poll_period_us": 10000, 00:05:38.757 "nvme_ioq_poll_period_us": 0, 00:05:38.757 "io_queue_requests": 0, 00:05:38.757 "delay_cmd_submit": true, 00:05:38.757 "transport_retry_count": 4, 00:05:38.757 "bdev_retry_count": 3, 00:05:38.757 "transport_ack_timeout": 0, 00:05:38.757 "ctrlr_loss_timeout_sec": 0, 00:05:38.757 "reconnect_delay_sec": 0, 00:05:38.757 "fast_io_fail_timeout_sec": 0, 00:05:38.757 "disable_auto_failback": false, 00:05:38.757 "generate_uuids": false, 00:05:38.757 "transport_tos": 0, 00:05:38.757 "nvme_error_stat": false, 00:05:38.757 "rdma_srq_size": 0, 00:05:38.757 "io_path_stat": false, 00:05:38.757 
"allow_accel_sequence": false, 00:05:38.757 "rdma_max_cq_size": 0, 00:05:38.757 "rdma_cm_event_timeout_ms": 0, 00:05:38.757 "dhchap_digests": [ 00:05:38.757 "sha256", 00:05:38.757 "sha384", 00:05:38.757 "sha512" 00:05:38.757 ], 00:05:38.757 "dhchap_dhgroups": [ 00:05:38.757 "null", 00:05:38.757 "ffdhe2048", 00:05:38.757 "ffdhe3072", 00:05:38.757 "ffdhe4096", 00:05:38.757 "ffdhe6144", 00:05:38.757 "ffdhe8192" 00:05:38.757 ] 00:05:38.757 } 00:05:38.757 }, 00:05:38.757 { 00:05:38.757 "method": "bdev_nvme_set_hotplug", 00:05:38.757 "params": { 00:05:38.757 "period_us": 100000, 00:05:38.757 "enable": false 00:05:38.757 } 00:05:38.757 }, 00:05:38.757 { 00:05:38.757 "method": "bdev_wait_for_examine" 00:05:38.757 } 00:05:38.757 ] 00:05:38.757 }, 00:05:38.757 { 00:05:38.757 "subsystem": "scsi", 00:05:38.757 "config": null 00:05:38.757 }, 00:05:38.757 { 00:05:38.757 "subsystem": "scheduler", 00:05:38.757 "config": [ 00:05:38.757 { 00:05:38.757 "method": "framework_set_scheduler", 00:05:38.757 "params": { 00:05:38.757 "name": "static" 00:05:38.757 } 00:05:38.757 } 00:05:38.757 ] 00:05:38.757 }, 00:05:38.757 { 00:05:38.757 "subsystem": "vhost_scsi", 00:05:38.757 "config": [] 00:05:38.757 }, 00:05:38.757 { 00:05:38.757 "subsystem": "vhost_blk", 00:05:38.757 "config": [] 00:05:38.757 }, 00:05:38.757 { 00:05:38.757 "subsystem": "ublk", 00:05:38.757 "config": [] 00:05:38.757 }, 00:05:38.757 { 00:05:38.757 "subsystem": "nbd", 00:05:38.757 "config": [] 00:05:38.757 }, 00:05:38.757 { 00:05:38.757 "subsystem": "nvmf", 00:05:38.757 "config": [ 00:05:38.757 { 00:05:38.757 "method": "nvmf_set_config", 00:05:38.757 "params": { 00:05:38.757 "discovery_filter": "match_any", 00:05:38.757 "admin_cmd_passthru": { 00:05:38.757 "identify_ctrlr": false 00:05:38.757 }, 00:05:38.757 "dhchap_digests": [ 00:05:38.757 "sha256", 00:05:38.757 "sha384", 00:05:38.757 "sha512" 00:05:38.757 ], 00:05:38.757 "dhchap_dhgroups": [ 00:05:38.757 "null", 00:05:38.757 "ffdhe2048", 00:05:38.757 "ffdhe3072", 00:05:38.757 "ffdhe4096", 00:05:38.757 "ffdhe6144", 00:05:38.757 "ffdhe8192" 00:05:38.757 ] 00:05:38.757 } 00:05:38.757 }, 00:05:38.757 { 00:05:38.757 "method": "nvmf_set_max_subsystems", 00:05:38.757 "params": { 00:05:38.757 "max_subsystems": 1024 00:05:38.757 } 00:05:38.757 }, 00:05:38.757 { 00:05:38.757 "method": "nvmf_set_crdt", 00:05:38.757 "params": { 00:05:38.757 "crdt1": 0, 00:05:38.757 "crdt2": 0, 00:05:38.757 "crdt3": 0 00:05:38.757 } 00:05:38.757 }, 00:05:38.757 { 00:05:38.757 "method": "nvmf_create_transport", 00:05:38.757 "params": { 00:05:38.757 "trtype": "TCP", 00:05:38.757 "max_queue_depth": 128, 00:05:38.757 "max_io_qpairs_per_ctrlr": 127, 00:05:38.757 "in_capsule_data_size": 4096, 00:05:38.757 "max_io_size": 131072, 00:05:38.757 "io_unit_size": 131072, 00:05:38.757 "max_aq_depth": 128, 00:05:38.757 "num_shared_buffers": 511, 00:05:38.757 "buf_cache_size": 4294967295, 00:05:38.757 "dif_insert_or_strip": false, 00:05:38.757 "zcopy": false, 00:05:38.757 "c2h_success": true, 00:05:38.757 "sock_priority": 0, 00:05:38.757 "abort_timeout_sec": 1, 00:05:38.757 "ack_timeout": 0, 00:05:38.757 "data_wr_pool_size": 0 00:05:38.757 } 00:05:38.757 } 00:05:38.757 ] 00:05:38.757 }, 00:05:38.757 { 00:05:38.757 "subsystem": "iscsi", 00:05:38.757 "config": [ 00:05:38.757 { 00:05:38.757 "method": "iscsi_set_options", 00:05:38.757 "params": { 00:05:38.757 "node_base": "iqn.2016-06.io.spdk", 00:05:38.757 "max_sessions": 128, 00:05:38.757 "max_connections_per_session": 2, 00:05:38.757 "max_queue_depth": 64, 00:05:38.757 "default_time2wait": 2, 
00:05:38.757 "default_time2retain": 20, 00:05:38.757 "first_burst_length": 8192, 00:05:38.757 "immediate_data": true, 00:05:38.757 "allow_duplicated_isid": false, 00:05:38.757 "error_recovery_level": 0, 00:05:38.757 "nop_timeout": 60, 00:05:38.757 "nop_in_interval": 30, 00:05:38.757 "disable_chap": false, 00:05:38.757 "require_chap": false, 00:05:38.757 "mutual_chap": false, 00:05:38.757 "chap_group": 0, 00:05:38.757 "max_large_datain_per_connection": 64, 00:05:38.757 "max_r2t_per_connection": 4, 00:05:38.757 "pdu_pool_size": 36864, 00:05:38.757 "immediate_data_pool_size": 16384, 00:05:38.757 "data_out_pool_size": 2048 00:05:38.757 } 00:05:38.757 } 00:05:38.757 ] 00:05:38.757 } 00:05:38.757 ] 00:05:38.757 } 00:05:38.757 04:53:16 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:05:38.757 04:53:16 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 69760 00:05:38.757 04:53:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 69760 ']' 00:05:38.757 04:53:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 69760 00:05:38.757 04:53:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:05:38.757 04:53:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:38.757 04:53:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 69760 00:05:38.757 04:53:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:38.757 killing process with pid 69760 00:05:38.757 04:53:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:38.757 04:53:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 69760' 00:05:38.757 04:53:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 69760 00:05:38.757 04:53:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 69760 00:05:39.015 04:53:17 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=69789 00:05:39.015 04:53:17 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:05:39.015 04:53:17 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:44.281 04:53:22 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 69789 00:05:44.281 04:53:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 69789 ']' 00:05:44.281 04:53:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 69789 00:05:44.281 04:53:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:05:44.281 04:53:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:44.281 04:53:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 69789 00:05:44.281 04:53:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:44.281 killing process with pid 69789 00:05:44.281 04:53:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:44.281 04:53:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 69789' 00:05:44.281 04:53:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 69789 
00:05:44.281 04:53:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 69789 00:05:44.281 04:53:22 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:44.281 04:53:22 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:44.281 00:05:44.281 real 0m6.656s 00:05:44.281 user 0m6.355s 00:05:44.281 sys 0m0.532s 00:05:44.281 04:53:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:44.281 04:53:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:44.281 ************************************ 00:05:44.281 END TEST skip_rpc_with_json 00:05:44.281 ************************************ 00:05:44.281 04:53:22 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:05:44.281 04:53:22 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:44.281 04:53:22 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:44.281 04:53:22 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:44.281 ************************************ 00:05:44.281 START TEST skip_rpc_with_delay 00:05:44.281 ************************************ 00:05:44.281 04:53:22 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_delay 00:05:44.281 04:53:22 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:44.281 04:53:22 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # local es=0 00:05:44.281 04:53:22 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:44.281 04:53:22 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:44.281 04:53:22 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:44.281 04:53:22 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:44.281 04:53:22 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:44.281 04:53:22 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:44.281 04:53:22 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:44.281 04:53:22 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:44.281 04:53:22 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:05:44.281 04:53:22 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:44.281 [2024-12-06 04:53:22.506374] app.c: 840:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
00:05:44.281 [2024-12-06 04:53:22.506483] app.c: 719:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:05:44.541 04:53:22 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # es=1 00:05:44.541 04:53:22 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:44.541 04:53:22 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:44.541 04:53:22 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:44.541 00:05:44.541 real 0m0.114s 00:05:44.541 user 0m0.055s 00:05:44.541 sys 0m0.058s 00:05:44.541 04:53:22 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:44.541 ************************************ 00:05:44.541 END TEST skip_rpc_with_delay 00:05:44.541 ************************************ 00:05:44.541 04:53:22 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:05:44.541 04:53:22 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:05:44.541 04:53:22 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:05:44.541 04:53:22 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:05:44.541 04:53:22 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:44.541 04:53:22 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:44.541 04:53:22 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:44.541 ************************************ 00:05:44.541 START TEST exit_on_failed_rpc_init 00:05:44.541 ************************************ 00:05:44.541 04:53:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1125 -- # test_exit_on_failed_rpc_init 00:05:44.541 04:53:22 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=69900 00:05:44.541 04:53:22 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 69900 00:05:44.541 04:53:22 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:44.541 04:53:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@831 -- # '[' -z 69900 ']' 00:05:44.541 04:53:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:44.541 04:53:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:44.541 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:44.541 04:53:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:44.541 04:53:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:44.541 04:53:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:44.541 [2024-12-06 04:53:22.668135] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:05:44.541 [2024-12-06 04:53:22.668256] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69900 ] 00:05:44.799 [2024-12-06 04:53:22.800853] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:44.799 [2024-12-06 04:53:22.831131] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:45.367 04:53:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:45.367 04:53:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # return 0 00:05:45.367 04:53:23 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:45.367 04:53:23 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:45.367 04:53:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # local es=0 00:05:45.367 04:53:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:45.367 04:53:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:45.367 04:53:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:45.367 04:53:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:45.367 04:53:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:45.367 04:53:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:45.367 04:53:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:45.367 04:53:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:45.367 04:53:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:05:45.367 04:53:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:45.367 [2024-12-06 04:53:23.580124] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:05:45.367 [2024-12-06 04:53:23.580234] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69918 ] 00:05:45.625 [2024-12-06 04:53:23.715274] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:45.625 [2024-12-06 04:53:23.745698] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:05:45.625 [2024-12-06 04:53:23.745783] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
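The error above is the point of exit_on_failed_rpc_init: the second spdk_tgt (core mask 0x2) tries to listen on the same default socket, /var/tmp/spdk.sock, that the first target (pid 69900) already holds, so RPC initialization fails and the app stops with a non-zero exit code, as the lines that follow confirm. A sketch of the collision under the default socket path (assumed layout, not from this run):

  # first target claims the default RPC socket
  $SPDK_DIR/build/bin/spdk_tgt -m 0x1 &
  pid=$!
  sleep 2   # crude wait for /var/tmp/spdk.sock; the real test uses waitforlisten
  # a second target on another core mask but the same socket must fail fast
  if $SPDK_DIR/build/bin/spdk_tgt -m 0x2; then
      echo "unexpected: second target started on a busy socket" >&2
      kill "$pid"; exit 1
  fi
  kill "$pid"; wait "$pid" 2>/dev/null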
00:05:45.625 [2024-12-06 04:53:23.745800] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:05:45.625 [2024-12-06 04:53:23.745812] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:45.625 04:53:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # es=234 00:05:45.625 04:53:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:45.625 04:53:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@662 -- # es=106 00:05:45.625 04:53:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # case "$es" in 00:05:45.625 04:53:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@670 -- # es=1 00:05:45.625 04:53:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:45.625 04:53:23 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:05:45.625 04:53:23 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 69900 00:05:45.625 04:53:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@950 -- # '[' -z 69900 ']' 00:05:45.625 04:53:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # kill -0 69900 00:05:45.625 04:53:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # uname 00:05:45.625 04:53:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:45.625 04:53:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 69900 00:05:45.885 killing process with pid 69900 00:05:45.885 04:53:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:45.885 04:53:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:45.885 04:53:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@968 -- # echo 'killing process with pid 69900' 00:05:45.885 04:53:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@969 -- # kill 69900 00:05:45.885 04:53:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@974 -- # wait 69900 00:05:45.885 00:05:45.885 real 0m1.496s 00:05:45.885 user 0m1.662s 00:05:45.885 sys 0m0.366s 00:05:45.885 04:53:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:45.885 04:53:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:45.885 ************************************ 00:05:45.885 END TEST exit_on_failed_rpc_init 00:05:45.885 ************************************ 00:05:46.145 04:53:24 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:46.145 00:05:46.145 real 0m13.874s 00:05:46.145 user 0m13.156s 00:05:46.145 sys 0m1.374s 00:05:46.145 ************************************ 00:05:46.145 END TEST skip_rpc 00:05:46.145 ************************************ 00:05:46.145 04:53:24 skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:46.145 04:53:24 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:46.145 04:53:24 -- spdk/autotest.sh@158 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:05:46.145 04:53:24 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:46.145 04:53:24 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:46.145 04:53:24 -- common/autotest_common.sh@10 -- # set +x 00:05:46.145 
************************************ 00:05:46.145 START TEST rpc_client 00:05:46.145 ************************************ 00:05:46.145 04:53:24 rpc_client -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:05:46.145 * Looking for test storage... 00:05:46.145 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:05:46.145 04:53:24 rpc_client -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:46.145 04:53:24 rpc_client -- common/autotest_common.sh@1681 -- # lcov --version 00:05:46.145 04:53:24 rpc_client -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:46.145 04:53:24 rpc_client -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:46.145 04:53:24 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:46.145 04:53:24 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:46.145 04:53:24 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:46.145 04:53:24 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:05:46.145 04:53:24 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:05:46.145 04:53:24 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:05:46.145 04:53:24 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:05:46.145 04:53:24 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:05:46.145 04:53:24 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:05:46.145 04:53:24 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:05:46.145 04:53:24 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:46.145 04:53:24 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:05:46.145 04:53:24 rpc_client -- scripts/common.sh@345 -- # : 1 00:05:46.145 04:53:24 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:46.145 04:53:24 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:46.145 04:53:24 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:05:46.145 04:53:24 rpc_client -- scripts/common.sh@353 -- # local d=1 00:05:46.145 04:53:24 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:46.146 04:53:24 rpc_client -- scripts/common.sh@355 -- # echo 1 00:05:46.146 04:53:24 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:05:46.146 04:53:24 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:05:46.146 04:53:24 rpc_client -- scripts/common.sh@353 -- # local d=2 00:05:46.146 04:53:24 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:46.146 04:53:24 rpc_client -- scripts/common.sh@355 -- # echo 2 00:05:46.146 04:53:24 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:05:46.146 04:53:24 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:46.146 04:53:24 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:46.146 04:53:24 rpc_client -- scripts/common.sh@368 -- # return 0 00:05:46.146 04:53:24 rpc_client -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:46.146 04:53:24 rpc_client -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:46.146 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:46.146 --rc genhtml_branch_coverage=1 00:05:46.146 --rc genhtml_function_coverage=1 00:05:46.146 --rc genhtml_legend=1 00:05:46.146 --rc geninfo_all_blocks=1 00:05:46.146 --rc geninfo_unexecuted_blocks=1 00:05:46.146 00:05:46.146 ' 00:05:46.146 04:53:24 rpc_client -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:46.146 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:46.146 --rc genhtml_branch_coverage=1 00:05:46.146 --rc genhtml_function_coverage=1 00:05:46.146 --rc genhtml_legend=1 00:05:46.146 --rc geninfo_all_blocks=1 00:05:46.146 --rc geninfo_unexecuted_blocks=1 00:05:46.146 00:05:46.146 ' 00:05:46.146 04:53:24 rpc_client -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:46.146 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:46.146 --rc genhtml_branch_coverage=1 00:05:46.146 --rc genhtml_function_coverage=1 00:05:46.146 --rc genhtml_legend=1 00:05:46.146 --rc geninfo_all_blocks=1 00:05:46.146 --rc geninfo_unexecuted_blocks=1 00:05:46.146 00:05:46.146 ' 00:05:46.146 04:53:24 rpc_client -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:46.146 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:46.146 --rc genhtml_branch_coverage=1 00:05:46.146 --rc genhtml_function_coverage=1 00:05:46.146 --rc genhtml_legend=1 00:05:46.146 --rc geninfo_all_blocks=1 00:05:46.146 --rc geninfo_unexecuted_blocks=1 00:05:46.146 00:05:46.146 ' 00:05:46.146 04:53:24 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:05:46.146 OK 00:05:46.407 04:53:24 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:05:46.408 00:05:46.408 real 0m0.192s 00:05:46.408 user 0m0.110s 00:05:46.408 sys 0m0.088s 00:05:46.408 ************************************ 00:05:46.408 END TEST rpc_client 00:05:46.408 ************************************ 00:05:46.408 04:53:24 rpc_client -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:46.408 04:53:24 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:05:46.408 04:53:24 -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:05:46.408 04:53:24 -- 
common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:46.408 04:53:24 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:46.408 04:53:24 -- common/autotest_common.sh@10 -- # set +x 00:05:46.408 ************************************ 00:05:46.408 START TEST json_config 00:05:46.408 ************************************ 00:05:46.408 04:53:24 json_config -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:05:46.408 04:53:24 json_config -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:46.408 04:53:24 json_config -- common/autotest_common.sh@1681 -- # lcov --version 00:05:46.408 04:53:24 json_config -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:46.408 04:53:24 json_config -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:46.408 04:53:24 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:46.408 04:53:24 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:46.408 04:53:24 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:46.408 04:53:24 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:05:46.408 04:53:24 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:05:46.408 04:53:24 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:05:46.408 04:53:24 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:05:46.408 04:53:24 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:05:46.408 04:53:24 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:05:46.408 04:53:24 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:05:46.408 04:53:24 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:46.408 04:53:24 json_config -- scripts/common.sh@344 -- # case "$op" in 00:05:46.408 04:53:24 json_config -- scripts/common.sh@345 -- # : 1 00:05:46.408 04:53:24 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:46.408 04:53:24 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:46.408 04:53:24 json_config -- scripts/common.sh@365 -- # decimal 1 00:05:46.408 04:53:24 json_config -- scripts/common.sh@353 -- # local d=1 00:05:46.408 04:53:24 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:46.408 04:53:24 json_config -- scripts/common.sh@355 -- # echo 1 00:05:46.408 04:53:24 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:05:46.408 04:53:24 json_config -- scripts/common.sh@366 -- # decimal 2 00:05:46.408 04:53:24 json_config -- scripts/common.sh@353 -- # local d=2 00:05:46.408 04:53:24 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:46.408 04:53:24 json_config -- scripts/common.sh@355 -- # echo 2 00:05:46.408 04:53:24 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:05:46.408 04:53:24 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:46.408 04:53:24 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:46.408 04:53:24 json_config -- scripts/common.sh@368 -- # return 0 00:05:46.408 04:53:24 json_config -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:46.408 04:53:24 json_config -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:46.408 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:46.408 --rc genhtml_branch_coverage=1 00:05:46.408 --rc genhtml_function_coverage=1 00:05:46.408 --rc genhtml_legend=1 00:05:46.408 --rc geninfo_all_blocks=1 00:05:46.408 --rc geninfo_unexecuted_blocks=1 00:05:46.408 00:05:46.408 ' 00:05:46.408 04:53:24 json_config -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:46.408 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:46.408 --rc genhtml_branch_coverage=1 00:05:46.408 --rc genhtml_function_coverage=1 00:05:46.408 --rc genhtml_legend=1 00:05:46.408 --rc geninfo_all_blocks=1 00:05:46.408 --rc geninfo_unexecuted_blocks=1 00:05:46.408 00:05:46.408 ' 00:05:46.408 04:53:24 json_config -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:46.408 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:46.408 --rc genhtml_branch_coverage=1 00:05:46.408 --rc genhtml_function_coverage=1 00:05:46.408 --rc genhtml_legend=1 00:05:46.408 --rc geninfo_all_blocks=1 00:05:46.408 --rc geninfo_unexecuted_blocks=1 00:05:46.408 00:05:46.408 ' 00:05:46.408 04:53:24 json_config -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:46.408 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:46.408 --rc genhtml_branch_coverage=1 00:05:46.408 --rc genhtml_function_coverage=1 00:05:46.408 --rc genhtml_legend=1 00:05:46.408 --rc geninfo_all_blocks=1 00:05:46.408 --rc geninfo_unexecuted_blocks=1 00:05:46.408 00:05:46.408 ' 00:05:46.408 04:53:24 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:46.408 04:53:24 json_config -- nvmf/common.sh@7 -- # uname -s 00:05:46.408 04:53:24 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:46.408 04:53:24 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:46.408 04:53:24 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:46.408 04:53:24 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:46.408 04:53:24 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:46.408 04:53:24 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:46.408 04:53:24 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:46.408 04:53:24 
json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:46.408 04:53:24 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:46.408 04:53:24 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:46.408 04:53:24 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:87da07b1-043b-42fc-9015-b8e07d74ef22 00:05:46.408 04:53:24 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=87da07b1-043b-42fc-9015-b8e07d74ef22 00:05:46.408 04:53:24 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:46.408 04:53:24 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:46.408 04:53:24 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:46.408 04:53:24 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:46.408 04:53:24 json_config -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:46.408 04:53:24 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:05:46.408 04:53:24 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:46.408 04:53:24 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:46.408 04:53:24 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:46.408 04:53:24 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:46.408 04:53:24 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:46.408 04:53:24 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:46.408 04:53:24 json_config -- paths/export.sh@5 -- # export PATH 00:05:46.408 04:53:24 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:46.408 04:53:24 json_config -- nvmf/common.sh@51 -- # : 0 00:05:46.408 04:53:24 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:46.408 04:53:24 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:46.408 04:53:24 json_config -- 
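[Annotation] The nvmf/common.sh trace above builds the NVMe host identity once per source: `nvme gen-hostnqn` produces the NQN, the host ID reuses its UUID suffix, and both are packed into an argument array for later nvme-cli calls. A sketch with the variable names from the trace (the UUID extraction is my reading of the traced values, not the verbatim script):

# Sketch of the host identity setup traced above (test/nvmf/common.sh).
NVME_HOSTNQN=$(nvme gen-hostnqn)        # e.g. nqn.2014-08.org.nvmexpress:uuid:87da07b1-...
NVME_HOSTID=${NVME_HOSTNQN##*uuid:}     # assumption: host ID is the NQN's UUID suffix
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
# Later connect commands can splice the pair in: nvme connect "${NVME_HOST[@]}" ...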
nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:46.408 04:53:24 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:46.408 04:53:24 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:46.408 04:53:24 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:46.408 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:46.408 04:53:24 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:46.408 04:53:24 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:46.408 04:53:24 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:46.408 04:53:24 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:05:46.408 04:53:24 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:05:46.408 04:53:24 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:05:46.408 04:53:24 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:05:46.408 04:53:24 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:05:46.408 WARNING: No tests are enabled so not running JSON configuration tests 00:05:46.408 04:53:24 json_config -- json_config/json_config.sh@27 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests' 00:05:46.408 04:53:24 json_config -- json_config/json_config.sh@28 -- # exit 0 00:05:46.408 00:05:46.408 real 0m0.139s 00:05:46.408 user 0m0.075s 00:05:46.408 sys 0m0.063s 00:05:46.408 04:53:24 json_config -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:46.408 04:53:24 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:46.409 ************************************ 00:05:46.409 END TEST json_config 00:05:46.409 ************************************ 00:05:46.409 04:53:24 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:05:46.409 04:53:24 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:46.409 04:53:24 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:46.409 04:53:24 -- common/autotest_common.sh@10 -- # set +x 00:05:46.670 ************************************ 00:05:46.670 START TEST json_config_extra_key 00:05:46.670 ************************************ 00:05:46.670 04:53:24 json_config_extra_key -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:05:46.670 04:53:24 json_config_extra_key -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:46.670 04:53:24 json_config_extra_key -- common/autotest_common.sh@1681 -- # lcov --version 00:05:46.670 04:53:24 json_config_extra_key -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:46.670 04:53:24 json_config_extra_key -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:46.670 04:53:24 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:46.670 04:53:24 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:46.670 04:53:24 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:46.670 04:53:24 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:05:46.670 04:53:24 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:05:46.670 04:53:24 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:05:46.670 04:53:24 
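[Annotation] The "[: : integer expression expected" message above is a real captured bug, not log corruption: the trace shows `'[' '' -eq 1 ']'` reaching nvmf/common.sh line 33, and `-eq` requires integer operands, so an empty/unset variable makes `[` complain and the test falls through as false. A defensive sketch (the variable name is hypothetical; the log does not reveal which one was empty):

# The failing pattern, with a hypothetical variable standing in:
SOME_FLAG=''
[ "$SOME_FLAG" -eq 1 ]          # -> "[: : integer expression expected"
# Defensive rewrite: default empty/unset to 0 before the arithmetic test.
[ "${SOME_FLAG:-0}" -eq 1 ] && echo "flag set"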
json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:05:46.670 04:53:24 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:05:46.670 04:53:24 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:05:46.670 04:53:24 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:05:46.670 04:53:24 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:46.670 04:53:24 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:05:46.670 04:53:24 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:05:46.670 04:53:24 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:46.670 04:53:24 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:46.670 04:53:24 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:05:46.670 04:53:24 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:05:46.670 04:53:24 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:46.670 04:53:24 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:05:46.670 04:53:24 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:05:46.670 04:53:24 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:05:46.670 04:53:24 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:05:46.670 04:53:24 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:46.670 04:53:24 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:05:46.670 04:53:24 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:05:46.670 04:53:24 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:46.670 04:53:24 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:46.670 04:53:24 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:05:46.670 04:53:24 json_config_extra_key -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:46.670 04:53:24 json_config_extra_key -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:46.670 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:46.670 --rc genhtml_branch_coverage=1 00:05:46.670 --rc genhtml_function_coverage=1 00:05:46.670 --rc genhtml_legend=1 00:05:46.670 --rc geninfo_all_blocks=1 00:05:46.670 --rc geninfo_unexecuted_blocks=1 00:05:46.670 00:05:46.670 ' 00:05:46.670 04:53:24 json_config_extra_key -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:46.670 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:46.670 --rc genhtml_branch_coverage=1 00:05:46.670 --rc genhtml_function_coverage=1 00:05:46.670 --rc genhtml_legend=1 00:05:46.670 --rc geninfo_all_blocks=1 00:05:46.670 --rc geninfo_unexecuted_blocks=1 00:05:46.670 00:05:46.670 ' 00:05:46.670 04:53:24 json_config_extra_key -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:46.670 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:46.670 --rc genhtml_branch_coverage=1 00:05:46.670 --rc genhtml_function_coverage=1 00:05:46.670 --rc genhtml_legend=1 00:05:46.670 --rc geninfo_all_blocks=1 00:05:46.670 --rc geninfo_unexecuted_blocks=1 00:05:46.670 00:05:46.670 ' 00:05:46.670 04:53:24 json_config_extra_key -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:46.670 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:46.670 --rc genhtml_branch_coverage=1 00:05:46.670 --rc 
genhtml_function_coverage=1 00:05:46.670 --rc genhtml_legend=1 00:05:46.670 --rc geninfo_all_blocks=1 00:05:46.670 --rc geninfo_unexecuted_blocks=1 00:05:46.670 00:05:46.670 ' 00:05:46.670 04:53:24 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:46.670 04:53:24 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:05:46.670 04:53:24 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:46.670 04:53:24 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:46.670 04:53:24 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:46.670 04:53:24 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:46.670 04:53:24 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:46.670 04:53:24 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:46.670 04:53:24 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:46.670 04:53:24 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:46.670 04:53:24 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:46.670 04:53:24 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:46.670 04:53:24 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:87da07b1-043b-42fc-9015-b8e07d74ef22 00:05:46.670 04:53:24 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=87da07b1-043b-42fc-9015-b8e07d74ef22 00:05:46.670 04:53:24 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:46.670 04:53:24 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:46.670 04:53:24 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:46.670 04:53:24 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:46.670 04:53:24 json_config_extra_key -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:46.670 04:53:24 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:05:46.670 04:53:24 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:46.670 04:53:24 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:46.670 04:53:24 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:46.670 04:53:24 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:46.670 04:53:24 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:46.670 04:53:24 json_config_extra_key -- paths/export.sh@4 
-- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:46.670 04:53:24 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:05:46.671 04:53:24 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:46.671 04:53:24 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:05:46.671 04:53:24 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:46.671 04:53:24 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:46.671 04:53:24 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:46.671 04:53:24 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:46.671 04:53:24 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:46.671 04:53:24 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:46.671 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:46.671 04:53:24 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:46.671 04:53:24 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:46.671 04:53:24 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:46.671 04:53:24 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:05:46.671 04:53:24 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:05:46.671 04:53:24 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:05:46.671 04:53:24 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:05:46.671 04:53:24 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:05:46.671 04:53:24 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:05:46.671 04:53:24 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:05:46.671 04:53:24 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:05:46.671 04:53:24 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:05:46.671 04:53:24 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:46.671 INFO: launching applications... 00:05:46.671 04:53:24 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 
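[Annotation] The PATH lines above grow on every re-source of /etc/opt/spdk-pkgdep/paths/export.sh, which prepends the Go/protoc/golangci directories again each time — by this point the traced PATH carries four copies of the same prefixes. Harmless, but a dedupe sketch for reference (my addition, not part of the traced script):

# Sketch: drop duplicate PATH entries while keeping first-seen order.
dedupe_path() {
    PATH=$(printf '%s' "$PATH" | tr ':' '\n' | awk '!seen[$0]++' | paste -sd: -)
    export PATH
}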
00:05:46.671 04:53:24 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:05:46.671 04:53:24 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:05:46.671 04:53:24 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:05:46.671 04:53:24 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:46.671 04:53:24 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:46.671 04:53:24 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:05:46.671 04:53:24 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:46.671 04:53:24 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:46.671 04:53:24 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=70095 00:05:46.671 04:53:24 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:46.671 Waiting for target to run... 00:05:46.671 04:53:24 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 70095 /var/tmp/spdk_tgt.sock 00:05:46.671 04:53:24 json_config_extra_key -- common/autotest_common.sh@831 -- # '[' -z 70095 ']' 00:05:46.671 04:53:24 json_config_extra_key -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:46.671 04:53:24 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:05:46.671 04:53:24 json_config_extra_key -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:46.671 04:53:24 json_config_extra_key -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:46.671 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:46.671 04:53:24 json_config_extra_key -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:46.671 04:53:24 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:46.671 [2024-12-06 04:53:24.865269] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:05:46.671 [2024-12-06 04:53:24.865396] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70095 ] 00:05:47.243 [2024-12-06 04:53:25.167285] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:47.243 [2024-12-06 04:53:25.187550] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:47.503 00:05:47.503 INFO: shutting down applications... 00:05:47.503 04:53:25 json_config_extra_key -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:47.503 04:53:25 json_config_extra_key -- common/autotest_common.sh@864 -- # return 0 00:05:47.503 04:53:25 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:05:47.503 04:53:25 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
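[Annotation] json_config_test_start_app, traced above, launches spdk_tgt with `--json extra_key.json` on /var/tmp/spdk_tgt.sock and then blocks in waitforlisten (max_retries=100 in the trace) until the RPC socket answers. A stripped-down sketch of that wait loop — the real helper lives in autotest_common.sh; probing with rpc_get_methods is my simplification:

# Sketch of the waitforlisten pattern traced above (not the verbatim helper).
waitforlisten_sketch() {
    local pid=$1 rpc_addr=${2:-/var/tmp/spdk_tgt.sock} i
    for (( i = 0; i < 100; i++ )); do
        kill -0 "$pid" 2>/dev/null || return 1                          # target died early
        scripts/rpc.py -s "$rpc_addr" rpc_get_methods &>/dev/null && return 0
        sleep 0.1
    done
    return 1                                                            # never came up
}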
00:05:47.503 04:53:25 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:05:47.503 04:53:25 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:05:47.503 04:53:25 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:47.503 04:53:25 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 70095 ]] 00:05:47.503 04:53:25 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 70095 00:05:47.503 04:53:25 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:47.503 04:53:25 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:47.503 04:53:25 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 70095 00:05:47.503 04:53:25 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:48.074 04:53:26 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:48.074 04:53:26 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:48.074 04:53:26 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 70095 00:05:48.074 04:53:26 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:48.074 04:53:26 json_config_extra_key -- json_config/common.sh@43 -- # break 00:05:48.074 04:53:26 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:48.074 04:53:26 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:48.074 SPDK target shutdown done 00:05:48.074 04:53:26 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:05:48.074 Success 00:05:48.074 00:05:48.074 real 0m1.565s 00:05:48.074 user 0m1.276s 00:05:48.074 sys 0m0.351s 00:05:48.074 04:53:26 json_config_extra_key -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:48.074 04:53:26 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:48.074 ************************************ 00:05:48.074 END TEST json_config_extra_key 00:05:48.074 ************************************ 00:05:48.074 04:53:26 -- spdk/autotest.sh@161 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:48.074 04:53:26 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:48.074 04:53:26 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:48.074 04:53:26 -- common/autotest_common.sh@10 -- # set +x 00:05:48.074 ************************************ 00:05:48.074 START TEST alias_rpc 00:05:48.074 ************************************ 00:05:48.074 04:53:26 alias_rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:48.334 * Looking for test storage... 
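[Annotation] The shutdown path traced above sends SIGINT to the target pid, then polls `kill -0` up to 30 times with 0.5 s sleeps before declaring "SPDK target shutdown done". Reconstructed from that trace (json_config/common.sh):

# Sketch of the shutdown loop traced above.
json_config_test_shutdown_sketch() {
    local pid=$1 i
    kill -SIGINT "$pid"
    for (( i = 0; i < 30; i++ )); do
        kill -0 "$pid" 2>/dev/null || break   # process gone: clean shutdown
        sleep 0.5
    done
    echo 'SPDK target shutdown done'
}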
00:05:48.334 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:05:48.334 04:53:26 alias_rpc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:48.334 04:53:26 alias_rpc -- common/autotest_common.sh@1681 -- # lcov --version 00:05:48.334 04:53:26 alias_rpc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:48.334 04:53:26 alias_rpc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:48.334 04:53:26 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:48.334 04:53:26 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:48.334 04:53:26 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:48.334 04:53:26 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:48.334 04:53:26 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:48.334 04:53:26 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:48.334 04:53:26 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:48.334 04:53:26 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:48.334 04:53:26 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:48.334 04:53:26 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:48.334 04:53:26 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:48.334 04:53:26 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:48.334 04:53:26 alias_rpc -- scripts/common.sh@345 -- # : 1 00:05:48.334 04:53:26 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:48.334 04:53:26 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:48.334 04:53:26 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:05:48.334 04:53:26 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:05:48.334 04:53:26 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:48.334 04:53:26 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:05:48.334 04:53:26 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:48.334 04:53:26 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:05:48.334 04:53:26 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:05:48.334 04:53:26 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:48.334 04:53:26 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:05:48.334 04:53:26 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:48.334 04:53:26 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:48.334 04:53:26 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:48.334 04:53:26 alias_rpc -- scripts/common.sh@368 -- # return 0 00:05:48.334 04:53:26 alias_rpc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:48.334 04:53:26 alias_rpc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:48.334 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:48.334 --rc genhtml_branch_coverage=1 00:05:48.334 --rc genhtml_function_coverage=1 00:05:48.334 --rc genhtml_legend=1 00:05:48.334 --rc geninfo_all_blocks=1 00:05:48.334 --rc geninfo_unexecuted_blocks=1 00:05:48.334 00:05:48.334 ' 00:05:48.334 04:53:26 alias_rpc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:48.334 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:48.334 --rc genhtml_branch_coverage=1 00:05:48.334 --rc genhtml_function_coverage=1 00:05:48.334 --rc genhtml_legend=1 00:05:48.334 --rc geninfo_all_blocks=1 00:05:48.334 --rc geninfo_unexecuted_blocks=1 00:05:48.334 00:05:48.334 ' 00:05:48.335 04:53:26 alias_rpc -- 
common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:48.335 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:48.335 --rc genhtml_branch_coverage=1 00:05:48.335 --rc genhtml_function_coverage=1 00:05:48.335 --rc genhtml_legend=1 00:05:48.335 --rc geninfo_all_blocks=1 00:05:48.335 --rc geninfo_unexecuted_blocks=1 00:05:48.335 00:05:48.335 ' 00:05:48.335 04:53:26 alias_rpc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:48.335 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:48.335 --rc genhtml_branch_coverage=1 00:05:48.335 --rc genhtml_function_coverage=1 00:05:48.335 --rc genhtml_legend=1 00:05:48.335 --rc geninfo_all_blocks=1 00:05:48.335 --rc geninfo_unexecuted_blocks=1 00:05:48.335 00:05:48.335 ' 00:05:48.335 04:53:26 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:48.335 04:53:26 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=70169 00:05:48.335 04:53:26 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 70169 00:05:48.335 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:48.335 04:53:26 alias_rpc -- common/autotest_common.sh@831 -- # '[' -z 70169 ']' 00:05:48.335 04:53:26 alias_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:48.335 04:53:26 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:48.335 04:53:26 alias_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:48.335 04:53:26 alias_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:48.335 04:53:26 alias_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:48.335 04:53:26 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:48.335 [2024-12-06 04:53:26.504969] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:05:48.335 [2024-12-06 04:53:26.505090] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70169 ] 00:05:48.595 [2024-12-06 04:53:26.640891] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:48.595 [2024-12-06 04:53:26.673030] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:49.167 04:53:27 alias_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:49.167 04:53:27 alias_rpc -- common/autotest_common.sh@864 -- # return 0 00:05:49.167 04:53:27 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:05:49.427 04:53:27 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 70169 00:05:49.427 04:53:27 alias_rpc -- common/autotest_common.sh@950 -- # '[' -z 70169 ']' 00:05:49.427 04:53:27 alias_rpc -- common/autotest_common.sh@954 -- # kill -0 70169 00:05:49.427 04:53:27 alias_rpc -- common/autotest_common.sh@955 -- # uname 00:05:49.427 04:53:27 alias_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:49.427 04:53:27 alias_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 70169 00:05:49.427 killing process with pid 70169 00:05:49.427 04:53:27 alias_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:49.427 04:53:27 alias_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:49.427 04:53:27 alias_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 70169' 00:05:49.427 04:53:27 alias_rpc -- common/autotest_common.sh@969 -- # kill 70169 00:05:49.427 04:53:27 alias_rpc -- common/autotest_common.sh@974 -- # wait 70169 00:05:49.687 ************************************ 00:05:49.687 END TEST alias_rpc 00:05:49.687 ************************************ 00:05:49.687 00:05:49.687 real 0m1.576s 00:05:49.687 user 0m1.722s 00:05:49.687 sys 0m0.354s 00:05:49.687 04:53:27 alias_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:49.687 04:53:27 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:49.687 04:53:27 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:05:49.687 04:53:27 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:05:49.687 04:53:27 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:49.687 04:53:27 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:49.687 04:53:27 -- common/autotest_common.sh@10 -- # set +x 00:05:49.687 ************************************ 00:05:49.687 START TEST spdkcli_tcp 00:05:49.687 ************************************ 00:05:49.687 04:53:27 spdkcli_tcp -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:05:49.947 * Looking for test storage... 
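[Annotation] killprocess, traced above for pid 70169, guards before killing: it reads the process name with `ps --no-headers -o comm=` (here reactor_0), refuses if it is sudo, then kills and reaps the pid. A condensed sketch of that guard (the traced helper also branches on `uname`, which is dropped here):

# Sketch of the killprocess guard traced above (autotest_common.sh).
killprocess_sketch() {
    local pid=$1 process_name
    [ -z "$pid" ] && return 1
    process_name=$(ps --no-headers -o comm= "$pid")
    [ "$process_name" = "sudo" ] && return 1   # same refusal the trace checks for
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid"
}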
00:05:49.947 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:05:49.947 04:53:27 spdkcli_tcp -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:49.947 04:53:27 spdkcli_tcp -- common/autotest_common.sh@1681 -- # lcov --version 00:05:49.947 04:53:27 spdkcli_tcp -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:49.947 04:53:28 spdkcli_tcp -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:49.947 04:53:28 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:49.947 04:53:28 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:49.947 04:53:28 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:49.947 04:53:28 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:05:49.947 04:53:28 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:05:49.947 04:53:28 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:05:49.947 04:53:28 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:05:49.947 04:53:28 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:05:49.947 04:53:28 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:05:49.947 04:53:28 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:05:49.947 04:53:28 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:49.947 04:53:28 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:05:49.948 04:53:28 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:05:49.948 04:53:28 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:49.948 04:53:28 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:49.948 04:53:28 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:05:49.948 04:53:28 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:05:49.948 04:53:28 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:49.948 04:53:28 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:05:49.948 04:53:28 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:05:49.948 04:53:28 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:05:49.948 04:53:28 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:05:49.948 04:53:28 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:49.948 04:53:28 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:05:49.948 04:53:28 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:05:49.948 04:53:28 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:49.948 04:53:28 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:49.948 04:53:28 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:05:49.948 04:53:28 spdkcli_tcp -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:49.948 04:53:28 spdkcli_tcp -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:49.948 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:49.948 --rc genhtml_branch_coverage=1 00:05:49.948 --rc genhtml_function_coverage=1 00:05:49.948 --rc genhtml_legend=1 00:05:49.948 --rc geninfo_all_blocks=1 00:05:49.948 --rc geninfo_unexecuted_blocks=1 00:05:49.948 00:05:49.948 ' 00:05:49.948 04:53:28 spdkcli_tcp -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:49.948 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:49.948 --rc genhtml_branch_coverage=1 00:05:49.948 --rc genhtml_function_coverage=1 00:05:49.948 --rc genhtml_legend=1 00:05:49.948 --rc geninfo_all_blocks=1 00:05:49.948 --rc geninfo_unexecuted_blocks=1 00:05:49.948 
00:05:49.948 ' 00:05:49.948 04:53:28 spdkcli_tcp -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:49.948 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:49.948 --rc genhtml_branch_coverage=1 00:05:49.948 --rc genhtml_function_coverage=1 00:05:49.948 --rc genhtml_legend=1 00:05:49.948 --rc geninfo_all_blocks=1 00:05:49.948 --rc geninfo_unexecuted_blocks=1 00:05:49.948 00:05:49.948 ' 00:05:49.948 04:53:28 spdkcli_tcp -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:49.948 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:49.948 --rc genhtml_branch_coverage=1 00:05:49.948 --rc genhtml_function_coverage=1 00:05:49.948 --rc genhtml_legend=1 00:05:49.948 --rc geninfo_all_blocks=1 00:05:49.948 --rc geninfo_unexecuted_blocks=1 00:05:49.948 00:05:49.948 ' 00:05:49.948 04:53:28 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:05:49.948 04:53:28 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:05:49.948 04:53:28 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:05:49.948 04:53:28 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:05:49.948 04:53:28 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:05:49.948 04:53:28 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:05:49.948 04:53:28 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:05:49.948 04:53:28 spdkcli_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:49.948 04:53:28 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:49.948 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:49.948 04:53:28 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=70248 00:05:49.948 04:53:28 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 70248 00:05:49.948 04:53:28 spdkcli_tcp -- common/autotest_common.sh@831 -- # '[' -z 70248 ']' 00:05:49.948 04:53:28 spdkcli_tcp -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:49.948 04:53:28 spdkcli_tcp -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:49.948 04:53:28 spdkcli_tcp -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:49.948 04:53:28 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:05:49.948 04:53:28 spdkcli_tcp -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:49.948 04:53:28 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:49.948 [2024-12-06 04:53:28.125930] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:05:49.948 [2024-12-06 04:53:28.126163] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70248 ] 00:05:50.206 [2024-12-06 04:53:28.257145] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:50.206 [2024-12-06 04:53:28.290456] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:05:50.206 [2024-12-06 04:53:28.290537] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:50.771 04:53:28 spdkcli_tcp -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:50.771 04:53:28 spdkcli_tcp -- common/autotest_common.sh@864 -- # return 0 00:05:50.771 04:53:28 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:05:50.771 04:53:28 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=70265 00:05:50.771 04:53:28 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:05:51.031 [ 00:05:51.031 "bdev_malloc_delete", 00:05:51.031 "bdev_malloc_create", 00:05:51.031 "bdev_null_resize", 00:05:51.031 "bdev_null_delete", 00:05:51.031 "bdev_null_create", 00:05:51.031 "bdev_nvme_cuse_unregister", 00:05:51.031 "bdev_nvme_cuse_register", 00:05:51.031 "bdev_opal_new_user", 00:05:51.031 "bdev_opal_set_lock_state", 00:05:51.031 "bdev_opal_delete", 00:05:51.031 "bdev_opal_get_info", 00:05:51.031 "bdev_opal_create", 00:05:51.031 "bdev_nvme_opal_revert", 00:05:51.031 "bdev_nvme_opal_init", 00:05:51.031 "bdev_nvme_send_cmd", 00:05:51.031 "bdev_nvme_set_keys", 00:05:51.031 "bdev_nvme_get_path_iostat", 00:05:51.031 "bdev_nvme_get_mdns_discovery_info", 00:05:51.031 "bdev_nvme_stop_mdns_discovery", 00:05:51.031 "bdev_nvme_start_mdns_discovery", 00:05:51.031 "bdev_nvme_set_multipath_policy", 00:05:51.031 "bdev_nvme_set_preferred_path", 00:05:51.031 "bdev_nvme_get_io_paths", 00:05:51.031 "bdev_nvme_remove_error_injection", 00:05:51.031 "bdev_nvme_add_error_injection", 00:05:51.031 "bdev_nvme_get_discovery_info", 00:05:51.031 "bdev_nvme_stop_discovery", 00:05:51.031 "bdev_nvme_start_discovery", 00:05:51.031 "bdev_nvme_get_controller_health_info", 00:05:51.031 "bdev_nvme_disable_controller", 00:05:51.031 "bdev_nvme_enable_controller", 00:05:51.031 "bdev_nvme_reset_controller", 00:05:51.031 "bdev_nvme_get_transport_statistics", 00:05:51.031 "bdev_nvme_apply_firmware", 00:05:51.031 "bdev_nvme_detach_controller", 00:05:51.031 "bdev_nvme_get_controllers", 00:05:51.031 "bdev_nvme_attach_controller", 00:05:51.031 "bdev_nvme_set_hotplug", 00:05:51.031 "bdev_nvme_set_options", 00:05:51.031 "bdev_passthru_delete", 00:05:51.031 "bdev_passthru_create", 00:05:51.031 "bdev_lvol_set_parent_bdev", 00:05:51.031 "bdev_lvol_set_parent", 00:05:51.031 "bdev_lvol_check_shallow_copy", 00:05:51.031 "bdev_lvol_start_shallow_copy", 00:05:51.031 "bdev_lvol_grow_lvstore", 00:05:51.031 "bdev_lvol_get_lvols", 00:05:51.031 "bdev_lvol_get_lvstores", 00:05:51.031 "bdev_lvol_delete", 00:05:51.031 "bdev_lvol_set_read_only", 00:05:51.031 "bdev_lvol_resize", 00:05:51.031 "bdev_lvol_decouple_parent", 00:05:51.031 "bdev_lvol_inflate", 00:05:51.031 "bdev_lvol_rename", 00:05:51.031 "bdev_lvol_clone_bdev", 00:05:51.031 "bdev_lvol_clone", 00:05:51.031 "bdev_lvol_snapshot", 00:05:51.031 "bdev_lvol_create", 00:05:51.031 "bdev_lvol_delete_lvstore", 00:05:51.031 "bdev_lvol_rename_lvstore", 00:05:51.031 
"bdev_lvol_create_lvstore", 00:05:51.031 "bdev_raid_set_options", 00:05:51.031 "bdev_raid_remove_base_bdev", 00:05:51.031 "bdev_raid_add_base_bdev", 00:05:51.031 "bdev_raid_delete", 00:05:51.031 "bdev_raid_create", 00:05:51.031 "bdev_raid_get_bdevs", 00:05:51.031 "bdev_error_inject_error", 00:05:51.031 "bdev_error_delete", 00:05:51.031 "bdev_error_create", 00:05:51.031 "bdev_split_delete", 00:05:51.031 "bdev_split_create", 00:05:51.031 "bdev_delay_delete", 00:05:51.031 "bdev_delay_create", 00:05:51.031 "bdev_delay_update_latency", 00:05:51.031 "bdev_zone_block_delete", 00:05:51.031 "bdev_zone_block_create", 00:05:51.031 "blobfs_create", 00:05:51.031 "blobfs_detect", 00:05:51.031 "blobfs_set_cache_size", 00:05:51.031 "bdev_xnvme_delete", 00:05:51.031 "bdev_xnvme_create", 00:05:51.031 "bdev_aio_delete", 00:05:51.031 "bdev_aio_rescan", 00:05:51.031 "bdev_aio_create", 00:05:51.031 "bdev_ftl_set_property", 00:05:51.031 "bdev_ftl_get_properties", 00:05:51.031 "bdev_ftl_get_stats", 00:05:51.031 "bdev_ftl_unmap", 00:05:51.031 "bdev_ftl_unload", 00:05:51.031 "bdev_ftl_delete", 00:05:51.031 "bdev_ftl_load", 00:05:51.031 "bdev_ftl_create", 00:05:51.031 "bdev_virtio_attach_controller", 00:05:51.031 "bdev_virtio_scsi_get_devices", 00:05:51.031 "bdev_virtio_detach_controller", 00:05:51.031 "bdev_virtio_blk_set_hotplug", 00:05:51.031 "bdev_iscsi_delete", 00:05:51.031 "bdev_iscsi_create", 00:05:51.032 "bdev_iscsi_set_options", 00:05:51.032 "accel_error_inject_error", 00:05:51.032 "ioat_scan_accel_module", 00:05:51.032 "dsa_scan_accel_module", 00:05:51.032 "iaa_scan_accel_module", 00:05:51.032 "keyring_file_remove_key", 00:05:51.032 "keyring_file_add_key", 00:05:51.032 "keyring_linux_set_options", 00:05:51.032 "fsdev_aio_delete", 00:05:51.032 "fsdev_aio_create", 00:05:51.032 "iscsi_get_histogram", 00:05:51.032 "iscsi_enable_histogram", 00:05:51.032 "iscsi_set_options", 00:05:51.032 "iscsi_get_auth_groups", 00:05:51.032 "iscsi_auth_group_remove_secret", 00:05:51.032 "iscsi_auth_group_add_secret", 00:05:51.032 "iscsi_delete_auth_group", 00:05:51.032 "iscsi_create_auth_group", 00:05:51.032 "iscsi_set_discovery_auth", 00:05:51.032 "iscsi_get_options", 00:05:51.032 "iscsi_target_node_request_logout", 00:05:51.032 "iscsi_target_node_set_redirect", 00:05:51.032 "iscsi_target_node_set_auth", 00:05:51.032 "iscsi_target_node_add_lun", 00:05:51.032 "iscsi_get_stats", 00:05:51.032 "iscsi_get_connections", 00:05:51.032 "iscsi_portal_group_set_auth", 00:05:51.032 "iscsi_start_portal_group", 00:05:51.032 "iscsi_delete_portal_group", 00:05:51.032 "iscsi_create_portal_group", 00:05:51.032 "iscsi_get_portal_groups", 00:05:51.032 "iscsi_delete_target_node", 00:05:51.032 "iscsi_target_node_remove_pg_ig_maps", 00:05:51.032 "iscsi_target_node_add_pg_ig_maps", 00:05:51.032 "iscsi_create_target_node", 00:05:51.032 "iscsi_get_target_nodes", 00:05:51.032 "iscsi_delete_initiator_group", 00:05:51.032 "iscsi_initiator_group_remove_initiators", 00:05:51.032 "iscsi_initiator_group_add_initiators", 00:05:51.032 "iscsi_create_initiator_group", 00:05:51.032 "iscsi_get_initiator_groups", 00:05:51.032 "nvmf_set_crdt", 00:05:51.032 "nvmf_set_config", 00:05:51.032 "nvmf_set_max_subsystems", 00:05:51.032 "nvmf_stop_mdns_prr", 00:05:51.032 "nvmf_publish_mdns_prr", 00:05:51.032 "nvmf_subsystem_get_listeners", 00:05:51.032 "nvmf_subsystem_get_qpairs", 00:05:51.032 "nvmf_subsystem_get_controllers", 00:05:51.032 "nvmf_get_stats", 00:05:51.032 "nvmf_get_transports", 00:05:51.032 "nvmf_create_transport", 00:05:51.032 "nvmf_get_targets", 00:05:51.032 
"nvmf_delete_target", 00:05:51.032 "nvmf_create_target", 00:05:51.032 "nvmf_subsystem_allow_any_host", 00:05:51.032 "nvmf_subsystem_set_keys", 00:05:51.032 "nvmf_subsystem_remove_host", 00:05:51.032 "nvmf_subsystem_add_host", 00:05:51.032 "nvmf_ns_remove_host", 00:05:51.032 "nvmf_ns_add_host", 00:05:51.032 "nvmf_subsystem_remove_ns", 00:05:51.032 "nvmf_subsystem_set_ns_ana_group", 00:05:51.032 "nvmf_subsystem_add_ns", 00:05:51.032 "nvmf_subsystem_listener_set_ana_state", 00:05:51.032 "nvmf_discovery_get_referrals", 00:05:51.032 "nvmf_discovery_remove_referral", 00:05:51.032 "nvmf_discovery_add_referral", 00:05:51.032 "nvmf_subsystem_remove_listener", 00:05:51.032 "nvmf_subsystem_add_listener", 00:05:51.032 "nvmf_delete_subsystem", 00:05:51.032 "nvmf_create_subsystem", 00:05:51.032 "nvmf_get_subsystems", 00:05:51.032 "env_dpdk_get_mem_stats", 00:05:51.032 "nbd_get_disks", 00:05:51.032 "nbd_stop_disk", 00:05:51.032 "nbd_start_disk", 00:05:51.032 "ublk_recover_disk", 00:05:51.032 "ublk_get_disks", 00:05:51.032 "ublk_stop_disk", 00:05:51.032 "ublk_start_disk", 00:05:51.032 "ublk_destroy_target", 00:05:51.032 "ublk_create_target", 00:05:51.032 "virtio_blk_create_transport", 00:05:51.032 "virtio_blk_get_transports", 00:05:51.032 "vhost_controller_set_coalescing", 00:05:51.032 "vhost_get_controllers", 00:05:51.032 "vhost_delete_controller", 00:05:51.032 "vhost_create_blk_controller", 00:05:51.032 "vhost_scsi_controller_remove_target", 00:05:51.032 "vhost_scsi_controller_add_target", 00:05:51.032 "vhost_start_scsi_controller", 00:05:51.032 "vhost_create_scsi_controller", 00:05:51.032 "thread_set_cpumask", 00:05:51.032 "scheduler_set_options", 00:05:51.032 "framework_get_governor", 00:05:51.032 "framework_get_scheduler", 00:05:51.032 "framework_set_scheduler", 00:05:51.032 "framework_get_reactors", 00:05:51.032 "thread_get_io_channels", 00:05:51.032 "thread_get_pollers", 00:05:51.032 "thread_get_stats", 00:05:51.032 "framework_monitor_context_switch", 00:05:51.032 "spdk_kill_instance", 00:05:51.032 "log_enable_timestamps", 00:05:51.032 "log_get_flags", 00:05:51.032 "log_clear_flag", 00:05:51.032 "log_set_flag", 00:05:51.032 "log_get_level", 00:05:51.032 "log_set_level", 00:05:51.032 "log_get_print_level", 00:05:51.032 "log_set_print_level", 00:05:51.032 "framework_enable_cpumask_locks", 00:05:51.032 "framework_disable_cpumask_locks", 00:05:51.032 "framework_wait_init", 00:05:51.032 "framework_start_init", 00:05:51.032 "scsi_get_devices", 00:05:51.032 "bdev_get_histogram", 00:05:51.032 "bdev_enable_histogram", 00:05:51.032 "bdev_set_qos_limit", 00:05:51.032 "bdev_set_qd_sampling_period", 00:05:51.032 "bdev_get_bdevs", 00:05:51.032 "bdev_reset_iostat", 00:05:51.032 "bdev_get_iostat", 00:05:51.032 "bdev_examine", 00:05:51.032 "bdev_wait_for_examine", 00:05:51.032 "bdev_set_options", 00:05:51.032 "accel_get_stats", 00:05:51.032 "accel_set_options", 00:05:51.032 "accel_set_driver", 00:05:51.032 "accel_crypto_key_destroy", 00:05:51.032 "accel_crypto_keys_get", 00:05:51.032 "accel_crypto_key_create", 00:05:51.032 "accel_assign_opc", 00:05:51.032 "accel_get_module_info", 00:05:51.032 "accel_get_opc_assignments", 00:05:51.032 "vmd_rescan", 00:05:51.032 "vmd_remove_device", 00:05:51.032 "vmd_enable", 00:05:51.032 "sock_get_default_impl", 00:05:51.032 "sock_set_default_impl", 00:05:51.032 "sock_impl_set_options", 00:05:51.032 "sock_impl_get_options", 00:05:51.032 "iobuf_get_stats", 00:05:51.032 "iobuf_set_options", 00:05:51.032 "keyring_get_keys", 00:05:51.032 "framework_get_pci_devices", 00:05:51.032 
"framework_get_config", 00:05:51.032 "framework_get_subsystems", 00:05:51.032 "fsdev_set_opts", 00:05:51.032 "fsdev_get_opts", 00:05:51.032 "trace_get_info", 00:05:51.032 "trace_get_tpoint_group_mask", 00:05:51.032 "trace_disable_tpoint_group", 00:05:51.032 "trace_enable_tpoint_group", 00:05:51.032 "trace_clear_tpoint_mask", 00:05:51.032 "trace_set_tpoint_mask", 00:05:51.032 "notify_get_notifications", 00:05:51.032 "notify_get_types", 00:05:51.032 "spdk_get_version", 00:05:51.032 "rpc_get_methods" 00:05:51.032 ] 00:05:51.032 04:53:29 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:05:51.032 04:53:29 spdkcli_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:51.032 04:53:29 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:51.032 04:53:29 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:05:51.032 04:53:29 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 70248 00:05:51.032 04:53:29 spdkcli_tcp -- common/autotest_common.sh@950 -- # '[' -z 70248 ']' 00:05:51.032 04:53:29 spdkcli_tcp -- common/autotest_common.sh@954 -- # kill -0 70248 00:05:51.032 04:53:29 spdkcli_tcp -- common/autotest_common.sh@955 -- # uname 00:05:51.032 04:53:29 spdkcli_tcp -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:51.032 04:53:29 spdkcli_tcp -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 70248 00:05:51.032 killing process with pid 70248 00:05:51.032 04:53:29 spdkcli_tcp -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:51.032 04:53:29 spdkcli_tcp -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:51.032 04:53:29 spdkcli_tcp -- common/autotest_common.sh@968 -- # echo 'killing process with pid 70248' 00:05:51.032 04:53:29 spdkcli_tcp -- common/autotest_common.sh@969 -- # kill 70248 00:05:51.032 04:53:29 spdkcli_tcp -- common/autotest_common.sh@974 -- # wait 70248 00:05:51.292 ************************************ 00:05:51.292 END TEST spdkcli_tcp 00:05:51.292 ************************************ 00:05:51.292 00:05:51.292 real 0m1.539s 00:05:51.292 user 0m2.706s 00:05:51.292 sys 0m0.377s 00:05:51.292 04:53:29 spdkcli_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:51.292 04:53:29 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:51.292 04:53:29 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:51.292 04:53:29 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:51.292 04:53:29 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:51.292 04:53:29 -- common/autotest_common.sh@10 -- # set +x 00:05:51.292 ************************************ 00:05:51.292 START TEST dpdk_mem_utility 00:05:51.292 ************************************ 00:05:51.292 04:53:29 dpdk_mem_utility -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:51.550 * Looking for test storage... 
00:05:51.550 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:05:51.550 04:53:29 dpdk_mem_utility -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:51.550 04:53:29 dpdk_mem_utility -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:51.550 04:53:29 dpdk_mem_utility -- common/autotest_common.sh@1681 -- # lcov --version 00:05:51.550 04:53:29 dpdk_mem_utility -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:51.550 04:53:29 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:51.550 04:53:29 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:51.550 04:53:29 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:51.550 04:53:29 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:05:51.550 04:53:29 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:05:51.550 04:53:29 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:05:51.550 04:53:29 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:05:51.550 04:53:29 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:05:51.550 04:53:29 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:05:51.550 04:53:29 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:05:51.550 04:53:29 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:51.550 04:53:29 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:05:51.550 04:53:29 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:05:51.550 04:53:29 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:51.550 04:53:29 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:51.550 04:53:29 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:05:51.550 04:53:29 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:05:51.550 04:53:29 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:51.550 04:53:29 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:05:51.550 04:53:29 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:05:51.550 04:53:29 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:05:51.550 04:53:29 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:05:51.550 04:53:29 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:51.550 04:53:29 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:05:51.550 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:51.550 04:53:29 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:05:51.550 04:53:29 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:51.550 04:53:29 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:51.550 04:53:29 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:05:51.550 04:53:29 dpdk_mem_utility -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:51.550 04:53:29 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:51.550 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:51.550 --rc genhtml_branch_coverage=1 00:05:51.550 --rc genhtml_function_coverage=1 00:05:51.550 --rc genhtml_legend=1 00:05:51.550 --rc geninfo_all_blocks=1 00:05:51.550 --rc geninfo_unexecuted_blocks=1 00:05:51.550 00:05:51.550 ' 00:05:51.550 04:53:29 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:51.550 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:51.550 --rc genhtml_branch_coverage=1 00:05:51.550 --rc genhtml_function_coverage=1 00:05:51.550 --rc genhtml_legend=1 00:05:51.550 --rc geninfo_all_blocks=1 00:05:51.550 --rc geninfo_unexecuted_blocks=1 00:05:51.550 00:05:51.550 ' 00:05:51.550 04:53:29 dpdk_mem_utility -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:51.550 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:51.550 --rc genhtml_branch_coverage=1 00:05:51.550 --rc genhtml_function_coverage=1 00:05:51.550 --rc genhtml_legend=1 00:05:51.550 --rc geninfo_all_blocks=1 00:05:51.550 --rc geninfo_unexecuted_blocks=1 00:05:51.550 00:05:51.550 ' 00:05:51.550 04:53:29 dpdk_mem_utility -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:51.550 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:51.550 --rc genhtml_branch_coverage=1 00:05:51.550 --rc genhtml_function_coverage=1 00:05:51.550 --rc genhtml_legend=1 00:05:51.550 --rc geninfo_all_blocks=1 00:05:51.550 --rc geninfo_unexecuted_blocks=1 00:05:51.550 00:05:51.550 ' 00:05:51.550 04:53:29 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:05:51.550 04:53:29 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=70348 00:05:51.550 04:53:29 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 70348 00:05:51.550 04:53:29 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:51.550 04:53:29 dpdk_mem_utility -- common/autotest_common.sh@831 -- # '[' -z 70348 ']' 00:05:51.550 04:53:29 dpdk_mem_utility -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:51.550 04:53:29 dpdk_mem_utility -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:51.550 04:53:29 dpdk_mem_utility -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:51.550 04:53:29 dpdk_mem_utility -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:51.550 04:53:29 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:51.550 [2024-12-06 04:53:29.707755] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:05:51.550 [2024-12-06 04:53:29.708048] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70348 ] 00:05:51.808 [2024-12-06 04:53:29.842091] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:51.808 [2024-12-06 04:53:29.875149] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:52.374 04:53:30 dpdk_mem_utility -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:52.374 04:53:30 dpdk_mem_utility -- common/autotest_common.sh@864 -- # return 0 00:05:52.374 04:53:30 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:52.374 04:53:30 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:52.374 04:53:30 dpdk_mem_utility -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:52.374 04:53:30 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:52.374 { 00:05:52.374 "filename": "/tmp/spdk_mem_dump.txt" 00:05:52.375 } 00:05:52.375 04:53:30 dpdk_mem_utility -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:52.375 04:53:30 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:05:52.375 DPDK memory size 860.000000 MiB in 1 heap(s) 00:05:52.375 1 heaps totaling size 860.000000 MiB 00:05:52.375 size: 860.000000 MiB heap id: 0 00:05:52.375 end heaps---------- 00:05:52.375 9 mempools totaling size 642.649841 MiB 00:05:52.375 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:05:52.375 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:52.375 size: 92.545471 MiB name: bdev_io_70348 00:05:52.375 size: 51.011292 MiB name: evtpool_70348 00:05:52.375 size: 50.003479 MiB name: msgpool_70348 00:05:52.375 size: 36.509338 MiB name: fsdev_io_70348 00:05:52.375 size: 21.763794 MiB name: PDU_Pool 00:05:52.375 size: 19.513306 MiB name: SCSI_TASK_Pool 00:05:52.375 size: 0.026123 MiB name: Session_Pool 00:05:52.375 end mempools------- 00:05:52.375 6 memzones totaling size 4.142822 MiB 00:05:52.375 size: 1.000366 MiB name: RG_ring_0_70348 00:05:52.375 size: 1.000366 MiB name: RG_ring_1_70348 00:05:52.375 size: 1.000366 MiB name: RG_ring_4_70348 00:05:52.375 size: 1.000366 MiB name: RG_ring_5_70348 00:05:52.375 size: 0.125366 MiB name: RG_ring_2_70348 00:05:52.375 size: 0.015991 MiB name: RG_ring_3_70348 00:05:52.375 end memzones------- 00:05:52.375 04:53:30 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:05:52.637 heap id: 0 total size: 860.000000 MiB number of busy elements: 305 number of free elements: 16 00:05:52.637 list of free elements. 
size: 13.936890 MiB 00:05:52.637 element at address: 0x200000400000 with size: 1.999512 MiB 00:05:52.637 element at address: 0x200000800000 with size: 1.996948 MiB 00:05:52.637 element at address: 0x20001bc00000 with size: 0.999878 MiB 00:05:52.637 element at address: 0x20001be00000 with size: 0.999878 MiB 00:05:52.637 element at address: 0x200034a00000 with size: 0.994446 MiB 00:05:52.637 element at address: 0x200009600000 with size: 0.959839 MiB 00:05:52.637 element at address: 0x200015e00000 with size: 0.954285 MiB 00:05:52.637 element at address: 0x20001c000000 with size: 0.936584 MiB 00:05:52.637 element at address: 0x200000200000 with size: 0.834839 MiB 00:05:52.637 element at address: 0x20001d800000 with size: 0.568237 MiB 00:05:52.637 element at address: 0x20000d800000 with size: 0.489258 MiB 00:05:52.637 element at address: 0x200003e00000 with size: 0.488281 MiB 00:05:52.637 element at address: 0x20001c200000 with size: 0.485657 MiB 00:05:52.637 element at address: 0x200007000000 with size: 0.480469 MiB 00:05:52.637 element at address: 0x20002ac00000 with size: 0.395752 MiB 00:05:52.637 element at address: 0x200003a00000 with size: 0.353027 MiB 00:05:52.637 list of standard malloc elements. size: 199.266418 MiB 00:05:52.637 element at address: 0x20000d9fff80 with size: 132.000122 MiB 00:05:52.637 element at address: 0x2000097fff80 with size: 64.000122 MiB 00:05:52.637 element at address: 0x20001bcfff80 with size: 1.000122 MiB 00:05:52.637 element at address: 0x20001befff80 with size: 1.000122 MiB 00:05:52.637 element at address: 0x20001c0fff80 with size: 1.000122 MiB 00:05:52.637 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:05:52.637 element at address: 0x20001c0eff00 with size: 0.062622 MiB 00:05:52.637 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:05:52.637 element at address: 0x20001c0efdc0 with size: 0.000305 MiB 00:05:52.637 element at address: 0x2000002d5b80 with size: 0.000183 MiB 00:05:52.637 element at address: 0x2000002d5c40 with size: 0.000183 MiB 00:05:52.637 element at address: 0x2000002d5d00 with size: 0.000183 MiB 00:05:52.637 element at address: 0x2000002d5dc0 with size: 0.000183 MiB 00:05:52.637 element at address: 0x2000002d5e80 with size: 0.000183 MiB 00:05:52.637 element at address: 0x2000002d5f40 with size: 0.000183 MiB 00:05:52.637 element at address: 0x2000002d6000 with size: 0.000183 MiB 00:05:52.637 element at address: 0x2000002d60c0 with size: 0.000183 MiB 00:05:52.637 element at address: 0x2000002d6180 with size: 0.000183 MiB 00:05:52.637 element at address: 0x2000002d6240 with size: 0.000183 MiB 00:05:52.637 element at address: 0x2000002d6300 with size: 0.000183 MiB 00:05:52.637 element at address: 0x2000002d63c0 with size: 0.000183 MiB 00:05:52.637 element at address: 0x2000002d6480 with size: 0.000183 MiB 00:05:52.637 element at address: 0x2000002d6540 with size: 0.000183 MiB 00:05:52.637 element at address: 0x2000002d6600 with size: 0.000183 MiB 00:05:52.637 element at address: 0x2000002d66c0 with size: 0.000183 MiB 00:05:52.637 element at address: 0x2000002d68c0 with size: 0.000183 MiB 00:05:52.637 element at address: 0x2000002d6980 with size: 0.000183 MiB 00:05:52.637 element at address: 0x2000002d6a40 with size: 0.000183 MiB 00:05:52.637 element at address: 0x2000002d6b00 with size: 0.000183 MiB 00:05:52.637 element at address: 0x2000002d6bc0 with size: 0.000183 MiB 00:05:52.637 element at address: 0x2000002d6c80 with size: 0.000183 MiB 00:05:52.638 element at address: 0x2000002d6d40 with size: 0.000183 MiB 
00:05:52.638 element at address: 0x2000002d6e00 with size: 0.000183 MiB 00:05:52.638 element at address: 0x2000002d6ec0 with size: 0.000183 MiB 00:05:52.638 element at address: 0x2000002d6f80 with size: 0.000183 MiB 00:05:52.638 element at address: 0x2000002d7040 with size: 0.000183 MiB 00:05:52.638 element at address: 0x2000002d7100 with size: 0.000183 MiB 00:05:52.638 element at address: 0x2000002d71c0 with size: 0.000183 MiB 00:05:52.638 element at address: 0x2000002d7280 with size: 0.000183 MiB 00:05:52.638 element at address: 0x2000002d7340 with size: 0.000183 MiB 00:05:52.638 element at address: 0x2000002d7400 with size: 0.000183 MiB 00:05:52.638 element at address: 0x2000002d74c0 with size: 0.000183 MiB 00:05:52.638 element at address: 0x2000002d7580 with size: 0.000183 MiB 00:05:52.638 element at address: 0x2000002d7640 with size: 0.000183 MiB 00:05:52.638 element at address: 0x2000002d7700 with size: 0.000183 MiB 00:05:52.638 element at address: 0x2000002d77c0 with size: 0.000183 MiB 00:05:52.638 element at address: 0x2000002d7880 with size: 0.000183 MiB 00:05:52.638 element at address: 0x2000002d7940 with size: 0.000183 MiB 00:05:52.638 element at address: 0x2000002d7a00 with size: 0.000183 MiB 00:05:52.638 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:05:52.638 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:05:52.638 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:05:52.638 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:05:52.638 element at address: 0x200003a5a600 with size: 0.000183 MiB 00:05:52.638 element at address: 0x200003a5a800 with size: 0.000183 MiB 00:05:52.638 element at address: 0x200003a5eac0 with size: 0.000183 MiB 00:05:52.638 element at address: 0x200003a7ed80 with size: 0.000183 MiB 00:05:52.638 element at address: 0x200003a7ee40 with size: 0.000183 MiB 00:05:52.638 element at address: 0x200003a7ef00 with size: 0.000183 MiB 00:05:52.638 element at address: 0x200003a7efc0 with size: 0.000183 MiB 00:05:52.638 element at address: 0x200003a7f080 with size: 0.000183 MiB 00:05:52.638 element at address: 0x200003a7f140 with size: 0.000183 MiB 00:05:52.638 element at address: 0x200003a7f200 with size: 0.000183 MiB 00:05:52.638 element at address: 0x200003a7f2c0 with size: 0.000183 MiB 00:05:52.638 element at address: 0x200003a7f380 with size: 0.000183 MiB 00:05:52.638 element at address: 0x200003a7f440 with size: 0.000183 MiB 00:05:52.638 element at address: 0x200003a7f500 with size: 0.000183 MiB 00:05:52.638 element at address: 0x200003a7f5c0 with size: 0.000183 MiB 00:05:52.638 element at address: 0x200003aff880 with size: 0.000183 MiB 00:05:52.638 element at address: 0x200003affa80 with size: 0.000183 MiB 00:05:52.638 element at address: 0x200003affb40 with size: 0.000183 MiB 00:05:52.638 element at address: 0x200003e7d000 with size: 0.000183 MiB 00:05:52.638 element at address: 0x200003e7d0c0 with size: 0.000183 MiB 00:05:52.638 element at address: 0x200003e7d180 with size: 0.000183 MiB 00:05:52.638 element at address: 0x200003e7d240 with size: 0.000183 MiB 00:05:52.638 element at address: 0x200003e7d300 with size: 0.000183 MiB 00:05:52.638 element at address: 0x200003e7d3c0 with size: 0.000183 MiB 00:05:52.638 element at address: 0x200003e7d480 with size: 0.000183 MiB 00:05:52.638 element at address: 0x200003e7d540 with size: 0.000183 MiB 00:05:52.638 element at address: 0x200003e7d600 with size: 0.000183 MiB 00:05:52.638 element at address: 0x200003e7d6c0 with size: 0.000183 MiB 00:05:52.638 element at 
address: 0x200003e7d780 with size: 0.000183 MiB 00:05:52.638 element at address: 0x200003e7d840 with size: 0.000183 MiB 00:05:52.638 element at address: 0x200003e7d900 with size: 0.000183 MiB 00:05:52.638 element at address: 0x200003e7d9c0 with size: 0.000183 MiB 00:05:52.638 element at address: 0x200003e7da80 with size: 0.000183 MiB 00:05:52.638 element at address: 0x200003e7db40 with size: 0.000183 MiB 00:05:52.638 element at address: 0x200003e7dc00 with size: 0.000183 MiB 00:05:52.638 element at address: 0x200003e7dcc0 with size: 0.000183 MiB 00:05:52.638 element at address: 0x200003e7dd80 with size: 0.000183 MiB 00:05:52.638 element at address: 0x200003e7de40 with size: 0.000183 MiB 00:05:52.638 element at address: 0x200003e7df00 with size: 0.000183 MiB 00:05:52.638 element at address: 0x200003e7dfc0 with size: 0.000183 MiB 00:05:52.638 element at address: 0x200003e7e080 with size: 0.000183 MiB 00:05:52.638 element at address: 0x200003e7e140 with size: 0.000183 MiB 00:05:52.638 element at address: 0x200003e7e200 with size: 0.000183 MiB 00:05:52.638 element at address: 0x200003e7e2c0 with size: 0.000183 MiB 00:05:52.638 element at address: 0x200003e7e380 with size: 0.000183 MiB 00:05:52.638 element at address: 0x200003e7e440 with size: 0.000183 MiB 00:05:52.638 element at address: 0x200003e7e500 with size: 0.000183 MiB 00:05:52.638 element at address: 0x200003e7e5c0 with size: 0.000183 MiB 00:05:52.638 element at address: 0x200003e7e680 with size: 0.000183 MiB 00:05:52.638 element at address: 0x200003e7e740 with size: 0.000183 MiB 00:05:52.638 element at address: 0x200003e7e800 with size: 0.000183 MiB 00:05:52.638 element at address: 0x200003e7e8c0 with size: 0.000183 MiB 00:05:52.638 element at address: 0x200003e7e980 with size: 0.000183 MiB 00:05:52.638 element at address: 0x200003e7ea40 with size: 0.000183 MiB 00:05:52.638 element at address: 0x200003e7eb00 with size: 0.000183 MiB 00:05:52.638 element at address: 0x200003e7ebc0 with size: 0.000183 MiB 00:05:52.638 element at address: 0x200003e7ec80 with size: 0.000183 MiB 00:05:52.638 element at address: 0x200003e7ed40 with size: 0.000183 MiB 00:05:52.638 element at address: 0x200003e7ee00 with size: 0.000183 MiB 00:05:52.638 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:05:52.638 element at address: 0x20000707b000 with size: 0.000183 MiB 00:05:52.638 element at address: 0x20000707b0c0 with size: 0.000183 MiB 00:05:52.638 element at address: 0x20000707b180 with size: 0.000183 MiB 00:05:52.638 element at address: 0x20000707b240 with size: 0.000183 MiB 00:05:52.638 element at address: 0x20000707b300 with size: 0.000183 MiB 00:05:52.638 element at address: 0x20000707b3c0 with size: 0.000183 MiB 00:05:52.638 element at address: 0x20000707b480 with size: 0.000183 MiB 00:05:52.638 element at address: 0x20000707b540 with size: 0.000183 MiB 00:05:52.638 element at address: 0x20000707b600 with size: 0.000183 MiB 00:05:52.638 element at address: 0x20000707b6c0 with size: 0.000183 MiB 00:05:52.638 element at address: 0x2000070fb980 with size: 0.000183 MiB 00:05:52.638 element at address: 0x2000096fdd80 with size: 0.000183 MiB 00:05:52.638 element at address: 0x20000d87d400 with size: 0.000183 MiB 00:05:52.638 element at address: 0x20000d87d4c0 with size: 0.000183 MiB 00:05:52.638 element at address: 0x20000d87d580 with size: 0.000183 MiB 00:05:52.638 element at address: 0x20000d87d640 with size: 0.000183 MiB 00:05:52.638 element at address: 0x20000d87d700 with size: 0.000183 MiB 00:05:52.638 element at address: 0x20000d87d7c0 
with size: 0.000183 MiB 00:05:52.638 element at address: 0x20000d87d880 with size: 0.000183 MiB 00:05:52.638 element at address: 0x20000d87d940 with size: 0.000183 MiB 00:05:52.638 element at address: 0x20000d87da00 with size: 0.000183 MiB 00:05:52.638 element at address: 0x20000d87dac0 with size: 0.000183 MiB 00:05:52.638 element at address: 0x20000d8fdd80 with size: 0.000183 MiB 00:05:52.638 element at address: 0x200015ef44c0 with size: 0.000183 MiB 00:05:52.638 element at address: 0x20001c0efc40 with size: 0.000183 MiB 00:05:52.638 element at address: 0x20001c0efd00 with size: 0.000183 MiB 00:05:52.638 element at address: 0x20001c2bc740 with size: 0.000183 MiB 00:05:52.638 element at address: 0x20001d891780 with size: 0.000183 MiB 00:05:52.638 element at address: 0x20001d891840 with size: 0.000183 MiB 00:05:52.638 element at address: 0x20001d891900 with size: 0.000183 MiB 00:05:52.638 element at address: 0x20001d8919c0 with size: 0.000183 MiB 00:05:52.638 element at address: 0x20001d891a80 with size: 0.000183 MiB 00:05:52.638 element at address: 0x20001d891b40 with size: 0.000183 MiB 00:05:52.638 element at address: 0x20001d891c00 with size: 0.000183 MiB 00:05:52.638 element at address: 0x20001d891cc0 with size: 0.000183 MiB 00:05:52.638 element at address: 0x20001d891d80 with size: 0.000183 MiB 00:05:52.638 element at address: 0x20001d891e40 with size: 0.000183 MiB 00:05:52.638 element at address: 0x20001d891f00 with size: 0.000183 MiB 00:05:52.638 element at address: 0x20001d891fc0 with size: 0.000183 MiB 00:05:52.638 element at address: 0x20001d892080 with size: 0.000183 MiB 00:05:52.638 element at address: 0x20001d892140 with size: 0.000183 MiB 00:05:52.638 element at address: 0x20001d892200 with size: 0.000183 MiB 00:05:52.638 element at address: 0x20001d8922c0 with size: 0.000183 MiB 00:05:52.638 element at address: 0x20001d892380 with size: 0.000183 MiB 00:05:52.638 element at address: 0x20001d892440 with size: 0.000183 MiB 00:05:52.638 element at address: 0x20001d892500 with size: 0.000183 MiB 00:05:52.638 element at address: 0x20001d8925c0 with size: 0.000183 MiB 00:05:52.638 element at address: 0x20001d892680 with size: 0.000183 MiB 00:05:52.638 element at address: 0x20001d892740 with size: 0.000183 MiB 00:05:52.638 element at address: 0x20001d892800 with size: 0.000183 MiB 00:05:52.638 element at address: 0x20001d8928c0 with size: 0.000183 MiB 00:05:52.638 element at address: 0x20001d892980 with size: 0.000183 MiB 00:05:52.638 element at address: 0x20001d892a40 with size: 0.000183 MiB 00:05:52.638 element at address: 0x20001d892b00 with size: 0.000183 MiB 00:05:52.638 element at address: 0x20001d892bc0 with size: 0.000183 MiB 00:05:52.638 element at address: 0x20001d892c80 with size: 0.000183 MiB 00:05:52.638 element at address: 0x20001d892d40 with size: 0.000183 MiB 00:05:52.638 element at address: 0x20001d892e00 with size: 0.000183 MiB 00:05:52.638 element at address: 0x20001d892ec0 with size: 0.000183 MiB 00:05:52.638 element at address: 0x20001d892f80 with size: 0.000183 MiB 00:05:52.638 element at address: 0x20001d893040 with size: 0.000183 MiB 00:05:52.638 element at address: 0x20001d893100 with size: 0.000183 MiB 00:05:52.638 element at address: 0x20001d8931c0 with size: 0.000183 MiB 00:05:52.638 element at address: 0x20001d893280 with size: 0.000183 MiB 00:05:52.638 element at address: 0x20001d893340 with size: 0.000183 MiB 00:05:52.638 element at address: 0x20001d893400 with size: 0.000183 MiB 00:05:52.638 element at address: 0x20001d8934c0 with size: 0.000183 MiB 
00:05:52.638 element at address: 0x20001d893580 with size: 0.000183 MiB 00:05:52.638 element at address: 0x20001d893640 with size: 0.000183 MiB 00:05:52.638 element at address: 0x20001d893700 with size: 0.000183 MiB 00:05:52.639 element at address: 0x20001d8937c0 with size: 0.000183 MiB 00:05:52.639 element at address: 0x20001d893880 with size: 0.000183 MiB 00:05:52.639 element at address: 0x20001d893940 with size: 0.000183 MiB 00:05:52.639 element at address: 0x20001d893a00 with size: 0.000183 MiB 00:05:52.639 element at address: 0x20001d893ac0 with size: 0.000183 MiB 00:05:52.639 element at address: 0x20001d893b80 with size: 0.000183 MiB 00:05:52.639 element at address: 0x20001d893c40 with size: 0.000183 MiB 00:05:52.639 element at address: 0x20001d893d00 with size: 0.000183 MiB 00:05:52.639 element at address: 0x20001d893dc0 with size: 0.000183 MiB 00:05:52.639 element at address: 0x20001d893e80 with size: 0.000183 MiB 00:05:52.639 element at address: 0x20001d893f40 with size: 0.000183 MiB 00:05:52.639 element at address: 0x20001d894000 with size: 0.000183 MiB 00:05:52.639 element at address: 0x20001d8940c0 with size: 0.000183 MiB 00:05:52.639 element at address: 0x20001d894180 with size: 0.000183 MiB 00:05:52.639 element at address: 0x20001d894240 with size: 0.000183 MiB 00:05:52.639 element at address: 0x20001d894300 with size: 0.000183 MiB 00:05:52.639 element at address: 0x20001d8943c0 with size: 0.000183 MiB 00:05:52.639 element at address: 0x20001d894480 with size: 0.000183 MiB 00:05:52.639 element at address: 0x20001d894540 with size: 0.000183 MiB 00:05:52.639 element at address: 0x20001d894600 with size: 0.000183 MiB 00:05:52.639 element at address: 0x20001d8946c0 with size: 0.000183 MiB 00:05:52.639 element at address: 0x20001d894780 with size: 0.000183 MiB 00:05:52.639 element at address: 0x20001d894840 with size: 0.000183 MiB 00:05:52.639 element at address: 0x20001d894900 with size: 0.000183 MiB 00:05:52.639 element at address: 0x20001d8949c0 with size: 0.000183 MiB 00:05:52.639 element at address: 0x20001d894a80 with size: 0.000183 MiB 00:05:52.639 element at address: 0x20001d894b40 with size: 0.000183 MiB 00:05:52.639 element at address: 0x20001d894c00 with size: 0.000183 MiB 00:05:52.639 element at address: 0x20001d894cc0 with size: 0.000183 MiB 00:05:52.639 element at address: 0x20001d894d80 with size: 0.000183 MiB 00:05:52.639 element at address: 0x20001d894e40 with size: 0.000183 MiB 00:05:52.639 element at address: 0x20001d894f00 with size: 0.000183 MiB 00:05:52.639 element at address: 0x20001d894fc0 with size: 0.000183 MiB 00:05:52.639 element at address: 0x20001d895080 with size: 0.000183 MiB 00:05:52.639 element at address: 0x20001d895140 with size: 0.000183 MiB 00:05:52.639 element at address: 0x20001d895200 with size: 0.000183 MiB 00:05:52.639 element at address: 0x20001d8952c0 with size: 0.000183 MiB 00:05:52.639 element at address: 0x20001d895380 with size: 0.000183 MiB 00:05:52.639 element at address: 0x20001d895440 with size: 0.000183 MiB 00:05:52.639 element at address: 0x20002ac65500 with size: 0.000183 MiB 00:05:52.639 element at address: 0x20002ac655c0 with size: 0.000183 MiB 00:05:52.639 element at address: 0x20002ac6c1c0 with size: 0.000183 MiB 00:05:52.639 element at address: 0x20002ac6c3c0 with size: 0.000183 MiB 00:05:52.639 element at address: 0x20002ac6c480 with size: 0.000183 MiB 00:05:52.639 element at address: 0x20002ac6c540 with size: 0.000183 MiB 00:05:52.639 element at address: 0x20002ac6c600 with size: 0.000183 MiB 00:05:52.639 element at 
address: 0x20002ac6c6c0 with size: 0.000183 MiB 00:05:52.639 element at address: 0x20002ac6c780 with size: 0.000183 MiB 00:05:52.639 element at address: 0x20002ac6c840 with size: 0.000183 MiB 00:05:52.639 element at address: 0x20002ac6c900 with size: 0.000183 MiB 00:05:52.639 element at address: 0x20002ac6c9c0 with size: 0.000183 MiB 00:05:52.639 element at address: 0x20002ac6ca80 with size: 0.000183 MiB 00:05:52.639 element at address: 0x20002ac6cb40 with size: 0.000183 MiB 00:05:52.639 element at address: 0x20002ac6cc00 with size: 0.000183 MiB 00:05:52.639 element at address: 0x20002ac6ccc0 with size: 0.000183 MiB 00:05:52.639 element at address: 0x20002ac6cd80 with size: 0.000183 MiB 00:05:52.639 element at address: 0x20002ac6ce40 with size: 0.000183 MiB 00:05:52.639 element at address: 0x20002ac6cf00 with size: 0.000183 MiB 00:05:52.639 element at address: 0x20002ac6cfc0 with size: 0.000183 MiB 00:05:52.639 element at address: 0x20002ac6d080 with size: 0.000183 MiB 00:05:52.639 element at address: 0x20002ac6d140 with size: 0.000183 MiB 00:05:52.639 element at address: 0x20002ac6d200 with size: 0.000183 MiB 00:05:52.639 element at address: 0x20002ac6d2c0 with size: 0.000183 MiB 00:05:52.639 element at address: 0x20002ac6d380 with size: 0.000183 MiB 00:05:52.639 element at address: 0x20002ac6d440 with size: 0.000183 MiB 00:05:52.639 element at address: 0x20002ac6d500 with size: 0.000183 MiB 00:05:52.639 element at address: 0x20002ac6d5c0 with size: 0.000183 MiB 00:05:52.639 element at address: 0x20002ac6d680 with size: 0.000183 MiB 00:05:52.639 element at address: 0x20002ac6d740 with size: 0.000183 MiB 00:05:52.639 element at address: 0x20002ac6d800 with size: 0.000183 MiB 00:05:52.639 element at address: 0x20002ac6d8c0 with size: 0.000183 MiB 00:05:52.639 element at address: 0x20002ac6d980 with size: 0.000183 MiB 00:05:52.639 element at address: 0x20002ac6da40 with size: 0.000183 MiB 00:05:52.639 element at address: 0x20002ac6db00 with size: 0.000183 MiB 00:05:52.639 element at address: 0x20002ac6dbc0 with size: 0.000183 MiB 00:05:52.639 element at address: 0x20002ac6dc80 with size: 0.000183 MiB 00:05:52.639 element at address: 0x20002ac6dd40 with size: 0.000183 MiB 00:05:52.639 element at address: 0x20002ac6de00 with size: 0.000183 MiB 00:05:52.639 element at address: 0x20002ac6dec0 with size: 0.000183 MiB 00:05:52.639 element at address: 0x20002ac6df80 with size: 0.000183 MiB 00:05:52.639 element at address: 0x20002ac6e040 with size: 0.000183 MiB 00:05:52.639 element at address: 0x20002ac6e100 with size: 0.000183 MiB 00:05:52.639 element at address: 0x20002ac6e1c0 with size: 0.000183 MiB 00:05:52.639 element at address: 0x20002ac6e280 with size: 0.000183 MiB 00:05:52.639 element at address: 0x20002ac6e340 with size: 0.000183 MiB 00:05:52.639 element at address: 0x20002ac6e400 with size: 0.000183 MiB 00:05:52.639 element at address: 0x20002ac6e4c0 with size: 0.000183 MiB 00:05:52.639 element at address: 0x20002ac6e580 with size: 0.000183 MiB 00:05:52.639 element at address: 0x20002ac6e640 with size: 0.000183 MiB 00:05:52.639 element at address: 0x20002ac6e700 with size: 0.000183 MiB 00:05:52.639 element at address: 0x20002ac6e7c0 with size: 0.000183 MiB 00:05:52.639 element at address: 0x20002ac6e880 with size: 0.000183 MiB 00:05:52.639 element at address: 0x20002ac6e940 with size: 0.000183 MiB 00:05:52.639 element at address: 0x20002ac6ea00 with size: 0.000183 MiB 00:05:52.639 element at address: 0x20002ac6eac0 with size: 0.000183 MiB 00:05:52.639 element at address: 0x20002ac6eb80 
with size: 0.000183 MiB 00:05:52.639 element at address: 0x20002ac6ec40 with size: 0.000183 MiB 00:05:52.639 element at address: 0x20002ac6ed00 with size: 0.000183 MiB 00:05:52.639 element at address: 0x20002ac6edc0 with size: 0.000183 MiB 00:05:52.639 element at address: 0x20002ac6ee80 with size: 0.000183 MiB 00:05:52.639 element at address: 0x20002ac6ef40 with size: 0.000183 MiB 00:05:52.639 element at address: 0x20002ac6f000 with size: 0.000183 MiB 00:05:52.639 element at address: 0x20002ac6f0c0 with size: 0.000183 MiB 00:05:52.639 element at address: 0x20002ac6f180 with size: 0.000183 MiB 00:05:52.639 element at address: 0x20002ac6f240 with size: 0.000183 MiB 00:05:52.639 element at address: 0x20002ac6f300 with size: 0.000183 MiB 00:05:52.639 element at address: 0x20002ac6f3c0 with size: 0.000183 MiB 00:05:52.639 element at address: 0x20002ac6f480 with size: 0.000183 MiB 00:05:52.639 element at address: 0x20002ac6f540 with size: 0.000183 MiB 00:05:52.639 element at address: 0x20002ac6f600 with size: 0.000183 MiB 00:05:52.639 element at address: 0x20002ac6f6c0 with size: 0.000183 MiB 00:05:52.639 element at address: 0x20002ac6f780 with size: 0.000183 MiB 00:05:52.639 element at address: 0x20002ac6f840 with size: 0.000183 MiB 00:05:52.639 element at address: 0x20002ac6f900 with size: 0.000183 MiB 00:05:52.639 element at address: 0x20002ac6f9c0 with size: 0.000183 MiB 00:05:52.639 element at address: 0x20002ac6fa80 with size: 0.000183 MiB 00:05:52.639 element at address: 0x20002ac6fb40 with size: 0.000183 MiB 00:05:52.639 element at address: 0x20002ac6fc00 with size: 0.000183 MiB 00:05:52.639 element at address: 0x20002ac6fcc0 with size: 0.000183 MiB 00:05:52.639 element at address: 0x20002ac6fd80 with size: 0.000183 MiB 00:05:52.639 element at address: 0x20002ac6fe40 with size: 0.000183 MiB 00:05:52.639 element at address: 0x20002ac6ff00 with size: 0.000183 MiB 00:05:52.639 list of memzone associated elements. 
size: 646.796692 MiB 00:05:52.639 element at address: 0x20001d895500 with size: 211.416748 MiB 00:05:52.639 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:05:52.639 element at address: 0x20002ac6ffc0 with size: 157.562561 MiB 00:05:52.639 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:05:52.639 element at address: 0x200015ff4780 with size: 92.045044 MiB 00:05:52.639 associated memzone info: size: 92.044922 MiB name: MP_bdev_io_70348_0 00:05:52.639 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:05:52.639 associated memzone info: size: 48.002930 MiB name: MP_evtpool_70348_0 00:05:52.639 element at address: 0x200003fff380 with size: 48.003052 MiB 00:05:52.639 associated memzone info: size: 48.002930 MiB name: MP_msgpool_70348_0 00:05:52.639 element at address: 0x2000071fdb80 with size: 36.008911 MiB 00:05:52.639 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_70348_0 00:05:52.639 element at address: 0x20001c3be940 with size: 20.255554 MiB 00:05:52.639 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:05:52.639 element at address: 0x200034bfeb40 with size: 18.005066 MiB 00:05:52.639 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:05:52.639 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:05:52.639 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_70348 00:05:52.639 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:05:52.639 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_70348 00:05:52.639 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:05:52.639 associated memzone info: size: 1.007996 MiB name: MP_evtpool_70348 00:05:52.639 element at address: 0x20000d8fde40 with size: 1.008118 MiB 00:05:52.639 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:05:52.639 element at address: 0x20001c2bc800 with size: 1.008118 MiB 00:05:52.639 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:05:52.639 element at address: 0x2000096fde40 with size: 1.008118 MiB 00:05:52.639 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:05:52.639 element at address: 0x2000070fba40 with size: 1.008118 MiB 00:05:52.639 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:05:52.640 element at address: 0x200003eff180 with size: 1.000488 MiB 00:05:52.640 associated memzone info: size: 1.000366 MiB name: RG_ring_0_70348 00:05:52.640 element at address: 0x200003affc00 with size: 1.000488 MiB 00:05:52.640 associated memzone info: size: 1.000366 MiB name: RG_ring_1_70348 00:05:52.640 element at address: 0x200015ef4580 with size: 1.000488 MiB 00:05:52.640 associated memzone info: size: 1.000366 MiB name: RG_ring_4_70348 00:05:52.640 element at address: 0x200034afe940 with size: 1.000488 MiB 00:05:52.640 associated memzone info: size: 1.000366 MiB name: RG_ring_5_70348 00:05:52.640 element at address: 0x200003a7f680 with size: 0.500488 MiB 00:05:52.640 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_70348 00:05:52.640 element at address: 0x200003e7eec0 with size: 0.500488 MiB 00:05:52.640 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_70348 00:05:52.640 element at address: 0x20000d87db80 with size: 0.500488 MiB 00:05:52.640 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:05:52.640 element at address: 0x20000707b780 with size: 0.500488 MiB 00:05:52.640 associated memzone info: size: 0.500366 
MiB name: RG_MP_SCSI_TASK_Pool 00:05:52.640 element at address: 0x20001c27c540 with size: 0.250488 MiB 00:05:52.640 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:05:52.640 element at address: 0x200003a5eb80 with size: 0.125488 MiB 00:05:52.640 associated memzone info: size: 0.125366 MiB name: RG_ring_2_70348 00:05:52.640 element at address: 0x2000096f5b80 with size: 0.031738 MiB 00:05:52.640 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:05:52.640 element at address: 0x20002ac65680 with size: 0.023743 MiB 00:05:52.640 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:05:52.640 element at address: 0x200003a5a8c0 with size: 0.016113 MiB 00:05:52.640 associated memzone info: size: 0.015991 MiB name: RG_ring_3_70348 00:05:52.640 element at address: 0x20002ac6b7c0 with size: 0.002441 MiB 00:05:52.640 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:05:52.640 element at address: 0x2000002d6780 with size: 0.000305 MiB 00:05:52.640 associated memzone info: size: 0.000183 MiB name: MP_msgpool_70348 00:05:52.640 element at address: 0x200003aff940 with size: 0.000305 MiB 00:05:52.640 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_70348 00:05:52.640 element at address: 0x200003a5a6c0 with size: 0.000305 MiB 00:05:52.640 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_70348 00:05:52.640 element at address: 0x20002ac6c280 with size: 0.000305 MiB 00:05:52.640 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:05:52.640 04:53:30 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:05:52.640 04:53:30 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 70348 00:05:52.640 04:53:30 dpdk_mem_utility -- common/autotest_common.sh@950 -- # '[' -z 70348 ']' 00:05:52.640 04:53:30 dpdk_mem_utility -- common/autotest_common.sh@954 -- # kill -0 70348 00:05:52.640 04:53:30 dpdk_mem_utility -- common/autotest_common.sh@955 -- # uname 00:05:52.640 04:53:30 dpdk_mem_utility -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:52.640 04:53:30 dpdk_mem_utility -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 70348 00:05:52.640 04:53:30 dpdk_mem_utility -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:52.640 04:53:30 dpdk_mem_utility -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:52.640 04:53:30 dpdk_mem_utility -- common/autotest_common.sh@968 -- # echo 'killing process with pid 70348' 00:05:52.640 killing process with pid 70348 00:05:52.640 04:53:30 dpdk_mem_utility -- common/autotest_common.sh@969 -- # kill 70348 00:05:52.640 04:53:30 dpdk_mem_utility -- common/autotest_common.sh@974 -- # wait 70348 00:05:52.901 00:05:52.901 real 0m1.427s 00:05:52.901 user 0m1.464s 00:05:52.901 sys 0m0.355s 00:05:52.901 ************************************ 00:05:52.901 END TEST dpdk_mem_utility 00:05:52.901 ************************************ 00:05:52.901 04:53:30 dpdk_mem_utility -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:52.901 04:53:30 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:52.901 04:53:30 -- spdk/autotest.sh@168 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:05:52.901 04:53:30 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:52.901 04:53:30 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:52.901 04:53:30 -- common/autotest_common.sh@10 -- # set +x 
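Before the event suite starts, a recap of what the dpdk_mem_utility test above actually drove: spdk_tgt was started, env_dpdk_get_mem_stats dumped heap/mempool/memzone state to /tmp/spdk_mem_dump.txt (per the RPC's returned filename), and scripts/dpdk_mem_info.py rendered it, once as a summary and once with -m 0 for heap id 0. A hand-run sketch under the same checkout, assuming rpc_cmd wraps scripts/rpc.py as it does in autotest_common.sh:

./build/bin/spdk_tgt &                      # target under test; pid tracked as $spdkpid
./scripts/rpc.py env_dpdk_get_mem_stats     # writes /tmp/spdk_mem_dump.txt
./scripts/dpdk_mem_info.py                  # heap/mempool/memzone summary, as dumped above
./scripts/dpdk_mem_info.py -m 0             # per-element detail for heap id 0
kill "$spdkpid"                             # equivalent of the killprocess teardown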
00:05:52.901 ************************************ 00:05:52.901 START TEST event 00:05:52.901 ************************************ 00:05:52.901 04:53:30 event -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:05:52.901 * Looking for test storage... 00:05:52.901 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:05:52.901 04:53:31 event -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:52.901 04:53:31 event -- common/autotest_common.sh@1681 -- # lcov --version 00:05:52.901 04:53:31 event -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:52.901 04:53:31 event -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:52.901 04:53:31 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:52.901 04:53:31 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:52.901 04:53:31 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:52.901 04:53:31 event -- scripts/common.sh@336 -- # IFS=.-: 00:05:52.901 04:53:31 event -- scripts/common.sh@336 -- # read -ra ver1 00:05:52.901 04:53:31 event -- scripts/common.sh@337 -- # IFS=.-: 00:05:52.901 04:53:31 event -- scripts/common.sh@337 -- # read -ra ver2 00:05:52.901 04:53:31 event -- scripts/common.sh@338 -- # local 'op=<' 00:05:52.901 04:53:31 event -- scripts/common.sh@340 -- # ver1_l=2 00:05:52.901 04:53:31 event -- scripts/common.sh@341 -- # ver2_l=1 00:05:52.901 04:53:31 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:52.901 04:53:31 event -- scripts/common.sh@344 -- # case "$op" in 00:05:52.901 04:53:31 event -- scripts/common.sh@345 -- # : 1 00:05:52.901 04:53:31 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:52.901 04:53:31 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:52.901 04:53:31 event -- scripts/common.sh@365 -- # decimal 1 00:05:52.901 04:53:31 event -- scripts/common.sh@353 -- # local d=1 00:05:52.901 04:53:31 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:52.901 04:53:31 event -- scripts/common.sh@355 -- # echo 1 00:05:52.901 04:53:31 event -- scripts/common.sh@365 -- # ver1[v]=1 00:05:52.901 04:53:31 event -- scripts/common.sh@366 -- # decimal 2 00:05:52.901 04:53:31 event -- scripts/common.sh@353 -- # local d=2 00:05:52.901 04:53:31 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:52.901 04:53:31 event -- scripts/common.sh@355 -- # echo 2 00:05:52.901 04:53:31 event -- scripts/common.sh@366 -- # ver2[v]=2 00:05:52.901 04:53:31 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:52.901 04:53:31 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:52.901 04:53:31 event -- scripts/common.sh@368 -- # return 0 00:05:52.901 04:53:31 event -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:52.901 04:53:31 event -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:52.901 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:52.901 --rc genhtml_branch_coverage=1 00:05:52.901 --rc genhtml_function_coverage=1 00:05:52.901 --rc genhtml_legend=1 00:05:52.901 --rc geninfo_all_blocks=1 00:05:52.901 --rc geninfo_unexecuted_blocks=1 00:05:52.901 00:05:52.901 ' 00:05:52.901 04:53:31 event -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:52.901 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:52.901 --rc genhtml_branch_coverage=1 00:05:52.901 --rc genhtml_function_coverage=1 00:05:52.901 --rc genhtml_legend=1 00:05:52.901 --rc 
geninfo_all_blocks=1 00:05:52.901 --rc geninfo_unexecuted_blocks=1 00:05:52.901 00:05:52.901 ' 00:05:52.901 04:53:31 event -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:52.901 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:52.901 --rc genhtml_branch_coverage=1 00:05:52.901 --rc genhtml_function_coverage=1 00:05:52.901 --rc genhtml_legend=1 00:05:52.901 --rc geninfo_all_blocks=1 00:05:52.901 --rc geninfo_unexecuted_blocks=1 00:05:52.901 00:05:52.901 ' 00:05:52.901 04:53:31 event -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:52.901 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:52.901 --rc genhtml_branch_coverage=1 00:05:52.901 --rc genhtml_function_coverage=1 00:05:52.901 --rc genhtml_legend=1 00:05:52.901 --rc geninfo_all_blocks=1 00:05:52.901 --rc geninfo_unexecuted_blocks=1 00:05:52.901 00:05:52.901 ' 00:05:52.901 04:53:31 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:05:52.901 04:53:31 event -- bdev/nbd_common.sh@6 -- # set -e 00:05:52.901 04:53:31 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:52.901 04:53:31 event -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']' 00:05:52.901 04:53:31 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:52.901 04:53:31 event -- common/autotest_common.sh@10 -- # set +x 00:05:53.162 ************************************ 00:05:53.162 START TEST event_perf 00:05:53.162 ************************************ 00:05:53.162 04:53:31 event.event_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:53.162 Running I/O for 1 seconds...[2024-12-06 04:53:31.162325] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:05:53.162 [2024-12-06 04:53:31.162523] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70423 ] 00:05:53.162 [2024-12-06 04:53:31.295816] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:53.162 [2024-12-06 04:53:31.331297] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:05:53.162 [2024-12-06 04:53:31.331569] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:05:53.162 [2024-12-06 04:53:31.331899] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:05:53.162 [2024-12-06 04:53:31.331990] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:54.547 Running I/O for 1 seconds... 00:05:54.547 lcore 0: 149477 00:05:54.547 lcore 1: 149479 00:05:54.547 lcore 2: 149478 00:05:54.547 lcore 3: 149477 00:05:54.547 done. 
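The four lcore counters above sum to 597,911 events in the one-second window (-t 1), roughly 149.5k events per reactor under the 0xF core mask. Re-running the measurement by hand is just the command line from the run_test trace:

/home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1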
00:05:54.547 ************************************ 00:05:54.547 END TEST event_perf 00:05:54.547 ************************************ 00:05:54.547 00:05:54.547 real 0m1.259s 00:05:54.547 user 0m4.054s 00:05:54.547 sys 0m0.084s 00:05:54.547 04:53:32 event.event_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:54.547 04:53:32 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:05:54.547 04:53:32 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:05:54.547 04:53:32 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:05:54.547 04:53:32 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:54.547 04:53:32 event -- common/autotest_common.sh@10 -- # set +x 00:05:54.547 ************************************ 00:05:54.547 START TEST event_reactor 00:05:54.547 ************************************ 00:05:54.547 04:53:32 event.event_reactor -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:05:54.547 [2024-12-06 04:53:32.472770] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:05:54.547 [2024-12-06 04:53:32.473017] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70463 ] 00:05:54.547 [2024-12-06 04:53:32.605787] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:54.547 [2024-12-06 04:53:32.657307] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:55.524 test_start 00:05:55.524 oneshot 00:05:55.524 tick 100 00:05:55.524 tick 100 00:05:55.524 tick 250 00:05:55.524 tick 100 00:05:55.524 tick 100 00:05:55.524 tick 100 00:05:55.524 tick 250 00:05:55.524 tick 500 00:05:55.524 tick 100 00:05:55.524 tick 100 00:05:55.524 tick 250 00:05:55.524 tick 100 00:05:55.524 tick 100 00:05:55.524 test_end 00:05:55.524 00:05:55.524 real 0m1.297s 00:05:55.524 user 0m1.111s 00:05:55.524 sys 0m0.076s 00:05:55.524 04:53:33 event.event_reactor -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:55.524 04:53:33 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:05:55.524 ************************************ 00:05:55.524 END TEST event_reactor 00:05:55.524 ************************************ 00:05:55.786 04:53:33 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:55.786 04:53:33 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:05:55.786 04:53:33 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:55.786 04:53:33 event -- common/autotest_common.sh@10 -- # set +x 00:05:55.786 ************************************ 00:05:55.786 START TEST event_reactor_perf 00:05:55.786 ************************************ 00:05:55.786 04:53:33 event.event_reactor_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:55.786 [2024-12-06 04:53:33.834352] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:05:55.786 [2024-12-06 04:53:33.834649] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70499 ] 00:05:55.786 [2024-12-06 04:53:33.962827] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:55.786 [2024-12-06 04:53:34.015655] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:57.169 test_start 00:05:57.169 test_end 00:05:57.169 Performance: 308117 events per second 00:05:57.169 00:05:57.169 real 0m1.262s 00:05:57.169 user 0m1.086s 00:05:57.169 sys 0m0.067s 00:05:57.169 04:53:35 event.event_reactor_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:57.169 ************************************ 00:05:57.169 04:53:35 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:05:57.169 END TEST event_reactor_perf 00:05:57.169 ************************************ 00:05:57.169 04:53:35 event -- event/event.sh@49 -- # uname -s 00:05:57.169 04:53:35 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:05:57.169 04:53:35 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:05:57.169 04:53:35 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:57.169 04:53:35 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:57.169 04:53:35 event -- common/autotest_common.sh@10 -- # set +x 00:05:57.170 ************************************ 00:05:57.170 START TEST event_scheduler 00:05:57.170 ************************************ 00:05:57.170 04:53:35 event.event_scheduler -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:05:57.170 * Looking for test storage... 
00:05:57.170 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:05:57.170 04:53:35 event.event_scheduler -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:57.170 04:53:35 event.event_scheduler -- common/autotest_common.sh@1681 -- # lcov --version 00:05:57.170 04:53:35 event.event_scheduler -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:57.170 04:53:35 event.event_scheduler -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:57.170 04:53:35 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:57.170 04:53:35 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:57.170 04:53:35 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:57.170 04:53:35 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:05:57.170 04:53:35 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:05:57.170 04:53:35 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:05:57.170 04:53:35 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:05:57.170 04:53:35 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:05:57.170 04:53:35 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:05:57.170 04:53:35 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:05:57.170 04:53:35 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:57.170 04:53:35 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:05:57.170 04:53:35 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:05:57.170 04:53:35 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:57.170 04:53:35 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:57.170 04:53:35 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:05:57.170 04:53:35 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:05:57.170 04:53:35 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:57.170 04:53:35 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:05:57.170 04:53:35 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:05:57.170 04:53:35 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:05:57.170 04:53:35 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:05:57.170 04:53:35 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:57.170 04:53:35 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:05:57.170 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:57.170 04:53:35 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:05:57.170 04:53:35 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:57.170 04:53:35 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:57.170 04:53:35 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:05:57.170 04:53:35 event.event_scheduler -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:57.170 04:53:35 event.event_scheduler -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:57.170 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:57.170 --rc genhtml_branch_coverage=1 00:05:57.170 --rc genhtml_function_coverage=1 00:05:57.170 --rc genhtml_legend=1 00:05:57.170 --rc geninfo_all_blocks=1 00:05:57.170 --rc geninfo_unexecuted_blocks=1 00:05:57.170 00:05:57.170 ' 00:05:57.170 04:53:35 event.event_scheduler -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:57.170 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:57.170 --rc genhtml_branch_coverage=1 00:05:57.170 --rc genhtml_function_coverage=1 00:05:57.170 --rc genhtml_legend=1 00:05:57.170 --rc geninfo_all_blocks=1 00:05:57.170 --rc geninfo_unexecuted_blocks=1 00:05:57.170 00:05:57.170 ' 00:05:57.170 04:53:35 event.event_scheduler -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:57.170 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:57.170 --rc genhtml_branch_coverage=1 00:05:57.170 --rc genhtml_function_coverage=1 00:05:57.170 --rc genhtml_legend=1 00:05:57.170 --rc geninfo_all_blocks=1 00:05:57.170 --rc geninfo_unexecuted_blocks=1 00:05:57.170 00:05:57.170 ' 00:05:57.170 04:53:35 event.event_scheduler -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:57.170 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:57.170 --rc genhtml_branch_coverage=1 00:05:57.170 --rc genhtml_function_coverage=1 00:05:57.170 --rc genhtml_legend=1 00:05:57.170 --rc geninfo_all_blocks=1 00:05:57.170 --rc geninfo_unexecuted_blocks=1 00:05:57.170 00:05:57.170 ' 00:05:57.170 04:53:35 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:57.170 04:53:35 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=70569 00:05:57.170 04:53:35 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:05:57.170 04:53:35 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:57.170 04:53:35 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 70569 00:05:57.170 04:53:35 event.event_scheduler -- common/autotest_common.sh@831 -- # '[' -z 70569 ']' 00:05:57.170 04:53:35 event.event_scheduler -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:57.170 04:53:35 event.event_scheduler -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:57.170 04:53:35 event.event_scheduler -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:57.170 04:53:35 event.event_scheduler -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:57.170 04:53:35 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:57.170 [2024-12-06 04:53:35.313493] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:05:57.170 [2024-12-06 04:53:35.313728] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70569 ] 00:05:57.427 [2024-12-06 04:53:35.446663] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:57.427 [2024-12-06 04:53:35.480391] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:57.427 [2024-12-06 04:53:35.480642] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:05:57.427 [2024-12-06 04:53:35.480891] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:05:57.427 [2024-12-06 04:53:35.480978] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:05:57.993 04:53:36 event.event_scheduler -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:57.993 04:53:36 event.event_scheduler -- common/autotest_common.sh@864 -- # return 0 00:05:57.993 04:53:36 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:05:57.993 04:53:36 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:57.993 04:53:36 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:57.993 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:57.993 POWER: Cannot set governor of lcore 0 to userspace 00:05:57.993 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:57.993 POWER: Cannot set governor of lcore 0 to performance 00:05:57.993 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:57.993 POWER: Cannot set governor of lcore 0 to userspace 00:05:57.993 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:05:57.993 POWER: Unable to set Power Management Environment for lcore 0 00:05:57.993 [2024-12-06 04:53:36.130618] dpdk_governor.c: 130:_init_core: *ERROR*: Failed to initialize on core0 00:05:57.993 [2024-12-06 04:53:36.130727] dpdk_governor.c: 191:_init: *ERROR*: Failed to initialize on core0 00:05:57.993 [2024-12-06 04:53:36.130765] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:05:57.993 [2024-12-06 04:53:36.130795] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:05:57.993 [2024-12-06 04:53:36.130815] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:05:57.993 [2024-12-06 04:53:36.130944] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:05:57.993 04:53:36 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:57.993 04:53:36 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:05:57.993 04:53:36 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:57.993 04:53:36 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:58.251 [2024-12-06 04:53:36.190722] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started.
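The POWER and dpdk_governor errors above are expected in this VM: the cpufreq scaling_governor files do not exist, so scheduler_dynamic gives up on the governor and falls back to its thresholds (load limit 20, core limit 80, core busy 95) before framework_start_init releases the reactors. A minimal sketch of driving the same two RPCs by hand against the test app's socket; the --load-limit/--core-limit/--core-busy flag names are an assumption, since the trace only records the resulting set_opts values:

  # Talk to the scheduler test app over its default RPC socket.
  rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock"
  # Select the dynamic scheduler with the thresholds reported in the trace
  # (flag names assumed; only the values 20/80/95 appear in the log).
  $rpc framework_set_scheduler dynamic --load-limit 20 --core-limit 80 --core-busy 95
  # Leave --wait-for-rpc mode so the subsystems initialize and the reactors run.
  $rpc framework_start_init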
00:05:57.993 04:53:36 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:57.993 04:53:36 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:05:57.993 04:53:36 event.event_scheduler -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:57.993 04:53:36 event.event_scheduler -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:57.993 04:53:36 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:57.993 ************************************ 00:05:57.993 START TEST scheduler_create_thread 00:05:57.993 ************************************ 00:05:57.993 04:53:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1125 -- # scheduler_create_thread 00:05:57.993 04:53:36 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:05:57.993 04:53:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:57.993 04:53:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:58.251 2 00:05:58.251 04:53:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:58.251 04:53:36 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:05:58.251 04:53:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:58.251 04:53:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:58.251 3 00:05:58.251 04:53:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:58.251 04:53:36 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:05:58.251 04:53:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:58.251 04:53:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:58.251 4 00:05:58.251 04:53:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:58.251 04:53:36 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:05:58.251 04:53:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:58.251 04:53:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:58.251 5 00:05:58.251 04:53:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:58.251 04:53:36 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:05:58.251 04:53:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:58.251 04:53:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:58.251 6 00:05:58.251 04:53:36 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:58.251 04:53:36 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:05:58.251 04:53:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:58.251 04:53:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:58.251 7 00:05:58.251 04:53:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:58.252 04:53:36 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:05:58.252 04:53:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:58.252 04:53:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:58.252 8 00:05:58.252 04:53:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:58.252 04:53:36 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:05:58.252 04:53:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:58.252 04:53:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:58.252 9 00:05:58.252 04:53:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:58.252 04:53:36 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:05:58.252 04:53:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:58.252 04:53:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:58.252 10 00:05:58.252 04:53:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:58.252 04:53:36 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:05:58.252 04:53:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:58.252 04:53:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:58.252 04:53:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:58.252 04:53:36 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:05:58.252 04:53:36 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:05:58.252 04:53:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:58.252 04:53:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:58.818 04:53:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:58.818 04:53:36 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:05:58.818 04:53:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:58.818 04:53:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:00.192 04:53:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:00.192 04:53:38 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:06:00.192 04:53:38 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:06:00.192 04:53:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:00.192 04:53:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:01.136 ************************************ 00:06:01.136 END TEST scheduler_create_thread 00:06:01.136 ************************************ 00:06:01.136 04:53:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:01.136 00:06:01.136 real 0m3.095s 00:06:01.136 user 0m0.014s 00:06:01.136 sys 0m0.007s 00:06:01.136 04:53:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:01.136 04:53:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:01.136 04:53:39 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:06:01.136 04:53:39 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 70569 00:06:01.136 04:53:39 event.event_scheduler -- common/autotest_common.sh@950 -- # '[' -z 70569 ']' 00:06:01.136 04:53:39 event.event_scheduler -- common/autotest_common.sh@954 -- # kill -0 70569 00:06:01.136 04:53:39 event.event_scheduler -- common/autotest_common.sh@955 -- # uname 00:06:01.136 04:53:39 event.event_scheduler -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:01.136 04:53:39 event.event_scheduler -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 70569 00:06:01.136 killing process with pid 70569 00:06:01.136 04:53:39 event.event_scheduler -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:06:01.136 04:53:39 event.event_scheduler -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:06:01.136 04:53:39 event.event_scheduler -- common/autotest_common.sh@968 -- # echo 'killing process with pid 70569' 00:06:01.136 04:53:39 event.event_scheduler -- common/autotest_common.sh@969 -- # kill 70569 00:06:01.136 04:53:39 event.event_scheduler -- common/autotest_common.sh@974 -- # wait 70569 00:06:01.737 [2024-12-06 04:53:39.682129] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
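Every step of scheduler_create_thread above goes through rpc_cmd, the harness wrapper that forwards its arguments to scripts/rpc.py with the plugin from test/event/scheduler loaded. A condensed sketch of the recorded sequence, assuming the plugin module is importable on PYTHONPATH; the thread ids 11 and 12 are simply the values this run returned:

  rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock --plugin scheduler_plugin"
  # Active thread pinned to core 0 at 100% load; the call returns a thread id.
  $rpc scheduler_thread_create -n active_pinned -m 0x1 -a 100
  # Idle thread pinned to the same core at 0% load.
  $rpc scheduler_thread_create -n idle_pinned -m 0x1 -a 0
  # Unpinned thread at 30% load.
  $rpc scheduler_thread_create -n one_third_active -a 30
  # Re-weight the half_active thread (id 11 in this run) to 50%.
  $rpc scheduler_thread_set_active 11 50
  # Delete the throwaway thread (id 12 in this run).
  $rpc scheduler_thread_delete 12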
00:06:01.737 ************************************ 00:06:01.737 END TEST event_scheduler 00:06:01.737 ************************************ 00:06:01.737 00:06:01.737 real 0m4.746s 00:06:01.737 user 0m8.944s 00:06:01.737 sys 0m0.304s 00:06:01.737 04:53:39 event.event_scheduler -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:01.737 04:53:39 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:01.737 04:53:39 event -- event/event.sh@51 -- # modprobe -n nbd 00:06:01.737 04:53:39 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:06:01.737 04:53:39 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:01.737 04:53:39 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:01.737 04:53:39 event -- common/autotest_common.sh@10 -- # set +x 00:06:01.737 ************************************ 00:06:01.737 START TEST app_repeat 00:06:01.737 ************************************ 00:06:01.737 04:53:39 event.app_repeat -- common/autotest_common.sh@1125 -- # app_repeat_test 00:06:01.737 04:53:39 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:01.737 04:53:39 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:01.737 04:53:39 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:06:01.737 04:53:39 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:01.737 04:53:39 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:06:01.737 04:53:39 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:06:01.737 04:53:39 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:06:01.737 Process app_repeat pid: 70669 00:06:01.737 spdk_app_start Round 0 00:06:01.737 04:53:39 event.app_repeat -- event/event.sh@19 -- # repeat_pid=70669 00:06:01.738 04:53:39 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:06:01.738 04:53:39 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 70669' 00:06:01.738 04:53:39 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:01.738 04:53:39 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:06:01.738 04:53:39 event.app_repeat -- event/event.sh@25 -- # waitforlisten 70669 /var/tmp/spdk-nbd.sock 00:06:01.738 04:53:39 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 70669 ']' 00:06:01.738 04:53:39 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:01.738 04:53:39 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:06:01.738 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:01.738 04:53:39 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:01.738 04:53:39 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:01.738 04:53:39 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:01.738 04:53:39 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:01.738 [2024-12-06 04:53:39.942454] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:06:01.738 [2024-12-06 04:53:39.942575] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70669 ] 00:06:01.996 [2024-12-06 04:53:40.076155] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:01.996 [2024-12-06 04:53:40.106978] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:06:01.996 [2024-12-06 04:53:40.107035] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:02.563 04:53:40 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:02.563 04:53:40 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:06:02.563 04:53:40 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:02.821 Malloc0 00:06:02.821 04:53:40 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:03.079 Malloc1 00:06:03.079 04:53:41 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:03.079 04:53:41 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:03.079 04:53:41 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:03.079 04:53:41 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:03.079 04:53:41 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:03.079 04:53:41 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:03.079 04:53:41 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:03.079 04:53:41 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:03.079 04:53:41 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:03.079 04:53:41 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:03.079 04:53:41 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:03.079 04:53:41 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:03.079 04:53:41 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:03.079 04:53:41 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:03.079 04:53:41 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:03.079 04:53:41 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:03.337 /dev/nbd0 00:06:03.337 04:53:41 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:03.337 04:53:41 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:03.337 04:53:41 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:06:03.337 04:53:41 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:06:03.337 04:53:41 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:03.337 04:53:41 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:03.337 04:53:41 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:06:03.337 04:53:41 event.app_repeat -- 
common/autotest_common.sh@873 -- # break 00:06:03.337 04:53:41 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:03.337 04:53:41 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:03.337 04:53:41 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:03.337 1+0 records in 00:06:03.337 1+0 records out 00:06:03.337 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000629712 s, 6.5 MB/s 00:06:03.338 04:53:41 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:03.338 04:53:41 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:06:03.338 04:53:41 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:03.338 04:53:41 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:03.338 04:53:41 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:06:03.338 04:53:41 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:03.338 04:53:41 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:03.338 04:53:41 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:03.596 /dev/nbd1 00:06:03.596 04:53:41 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:03.596 04:53:41 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:03.596 04:53:41 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:06:03.596 04:53:41 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:06:03.596 04:53:41 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:03.596 04:53:41 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:03.596 04:53:41 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:06:03.596 04:53:41 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:06:03.596 04:53:41 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:03.596 04:53:41 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:03.596 04:53:41 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:03.596 1+0 records in 00:06:03.596 1+0 records out 00:06:03.596 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000191923 s, 21.3 MB/s 00:06:03.596 04:53:41 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:03.596 04:53:41 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:06:03.596 04:53:41 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:03.596 04:53:41 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:03.596 04:53:41 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:06:03.596 04:53:41 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:03.596 04:53:41 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:03.596 04:53:41 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:03.596 04:53:41 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:03.596 
04:53:41 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:03.596 04:53:41 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:03.596 { 00:06:03.596 "nbd_device": "/dev/nbd0", 00:06:03.596 "bdev_name": "Malloc0" 00:06:03.596 }, 00:06:03.596 { 00:06:03.596 "nbd_device": "/dev/nbd1", 00:06:03.596 "bdev_name": "Malloc1" 00:06:03.596 } 00:06:03.596 ]' 00:06:03.596 04:53:41 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:03.596 { 00:06:03.596 "nbd_device": "/dev/nbd0", 00:06:03.596 "bdev_name": "Malloc0" 00:06:03.596 }, 00:06:03.596 { 00:06:03.596 "nbd_device": "/dev/nbd1", 00:06:03.596 "bdev_name": "Malloc1" 00:06:03.596 } 00:06:03.596 ]' 00:06:03.596 04:53:41 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:03.596 04:53:41 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:03.596 /dev/nbd1' 00:06:03.596 04:53:41 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:03.596 04:53:41 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:03.596 /dev/nbd1' 00:06:03.855 04:53:41 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:03.855 04:53:41 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:03.855 04:53:41 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:03.855 04:53:41 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:03.855 04:53:41 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:03.855 04:53:41 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:03.855 04:53:41 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:03.855 04:53:41 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:03.855 04:53:41 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:03.855 04:53:41 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:03.855 04:53:41 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:03.855 256+0 records in 00:06:03.855 256+0 records out 00:06:03.855 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00431526 s, 243 MB/s 00:06:03.855 04:53:41 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:03.855 04:53:41 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:03.855 256+0 records in 00:06:03.855 256+0 records out 00:06:03.855 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0202898 s, 51.7 MB/s 00:06:03.855 04:53:41 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:03.855 04:53:41 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:03.855 256+0 records in 00:06:03.855 256+0 records out 00:06:03.855 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0159095 s, 65.9 MB/s 00:06:03.855 04:53:41 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:03.855 04:53:41 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:03.855 04:53:41 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:03.855 04:53:41 
event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:03.855 04:53:41 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:03.855 04:53:41 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:03.855 04:53:41 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:03.855 04:53:41 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:03.855 04:53:41 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:03.855 04:53:41 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:03.855 04:53:41 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:03.855 04:53:41 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:03.855 04:53:41 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:03.855 04:53:41 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:03.855 04:53:41 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:03.855 04:53:41 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:03.855 04:53:41 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:03.855 04:53:41 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:03.855 04:53:41 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:04.113 04:53:42 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:04.113 04:53:42 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:04.113 04:53:42 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:04.113 04:53:42 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:04.113 04:53:42 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:04.113 04:53:42 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:04.113 04:53:42 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:04.113 04:53:42 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:04.113 04:53:42 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:04.113 04:53:42 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:04.113 04:53:42 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:04.113 04:53:42 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:04.113 04:53:42 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:04.113 04:53:42 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:04.113 04:53:42 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:04.113 04:53:42 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:04.113 04:53:42 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:04.113 04:53:42 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:04.113 04:53:42 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:04.113 04:53:42 event.app_repeat -- 
bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:04.113 04:53:42 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:04.370 04:53:42 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:04.370 04:53:42 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:04.370 04:53:42 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:04.370 04:53:42 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:04.370 04:53:42 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:04.370 04:53:42 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:04.370 04:53:42 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:04.370 04:53:42 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:04.370 04:53:42 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:04.370 04:53:42 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:04.370 04:53:42 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:04.371 04:53:42 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:04.371 04:53:42 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:04.630 04:53:42 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:04.887 [2024-12-06 04:53:42.869489] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:04.887 [2024-12-06 04:53:42.901224] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:06:04.887 [2024-12-06 04:53:42.901331] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:04.887 [2024-12-06 04:53:42.932294] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:04.887 [2024-12-06 04:53:42.932365] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:08.167 spdk_app_start Round 1 00:06:08.167 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:08.167 04:53:45 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:08.167 04:53:45 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:06:08.167 04:53:45 event.app_repeat -- event/event.sh@25 -- # waitforlisten 70669 /var/tmp/spdk-nbd.sock 00:06:08.167 04:53:45 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 70669 ']' 00:06:08.167 04:53:45 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:08.167 04:53:45 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:08.167 04:53:45 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
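Each app_repeat round above runs one fixed write/verify pass: create two 64 MiB malloc bdevs with 4 KiB blocks, export them over NBD, copy 1 MiB of random data onto each device with direct I/O, then cmp it back before detaching. A standalone sketch of that pass for a single device, using the same RPCs and socket as the trace (the temp-file path is illustrative; the harness writes under test/event/nbdrandtest):

  sock=/var/tmp/spdk-nbd.sock
  rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s $sock"
  tmp=/tmp/nbdrandtest
  # 64 MiB malloc bdev with a 4096-byte block size; prints the bdev name (Malloc0).
  $rpc bdev_malloc_create 64 4096
  $rpc nbd_start_disk Malloc0 /dev/nbd0
  # Write 1 MiB of random data through the NBD device with direct I/O...
  dd if=/dev/urandom of=$tmp bs=4096 count=256
  dd if=$tmp of=/dev/nbd0 bs=4096 count=256 oflag=direct
  # ...then read it back; a non-zero exit from cmp means data corruption.
  cmp -b -n 1M $tmp /dev/nbd0
  $rpc nbd_stop_disk /dev/nbd0
  rm -f $tmp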
00:06:08.167 04:53:45 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:08.167 04:53:45 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:08.167 04:53:45 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:08.167 04:53:45 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:06:08.167 04:53:45 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:08.167 Malloc0 00:06:08.167 04:53:46 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:08.167 Malloc1 00:06:08.425 04:53:46 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:08.425 04:53:46 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:08.425 04:53:46 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:08.425 04:53:46 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:08.425 04:53:46 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:08.425 04:53:46 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:08.425 04:53:46 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:08.425 04:53:46 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:08.425 04:53:46 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:08.425 04:53:46 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:08.425 04:53:46 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:08.425 04:53:46 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:08.425 04:53:46 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:08.425 04:53:46 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:08.425 04:53:46 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:08.425 04:53:46 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:08.425 /dev/nbd0 00:06:08.425 04:53:46 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:08.425 04:53:46 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:08.425 04:53:46 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:06:08.425 04:53:46 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:06:08.425 04:53:46 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:08.425 04:53:46 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:08.425 04:53:46 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:06:08.425 04:53:46 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:06:08.425 04:53:46 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:08.425 04:53:46 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:08.425 04:53:46 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:08.425 1+0 records in 00:06:08.425 1+0 records out 
00:06:08.425 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000234408 s, 17.5 MB/s 00:06:08.425 04:53:46 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:08.425 04:53:46 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:06:08.425 04:53:46 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:08.425 04:53:46 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:08.425 04:53:46 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:06:08.425 04:53:46 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:08.425 04:53:46 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:08.425 04:53:46 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:08.683 /dev/nbd1 00:06:08.683 04:53:46 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:08.683 04:53:46 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:08.683 04:53:46 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:06:08.683 04:53:46 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:06:08.683 04:53:46 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:08.683 04:53:46 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:08.683 04:53:46 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:06:08.683 04:53:46 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:06:08.683 04:53:46 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:08.683 04:53:46 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:08.683 04:53:46 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:08.683 1+0 records in 00:06:08.683 1+0 records out 00:06:08.683 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000271488 s, 15.1 MB/s 00:06:08.683 04:53:46 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:08.683 04:53:46 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:06:08.683 04:53:46 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:08.683 04:53:46 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:08.683 04:53:46 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:06:08.683 04:53:46 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:08.683 04:53:46 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:08.683 04:53:46 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:08.683 04:53:46 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:08.683 04:53:46 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:08.941 04:53:47 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:08.941 { 00:06:08.941 "nbd_device": "/dev/nbd0", 00:06:08.941 "bdev_name": "Malloc0" 00:06:08.941 }, 00:06:08.941 { 00:06:08.941 "nbd_device": "/dev/nbd1", 00:06:08.941 "bdev_name": "Malloc1" 00:06:08.941 } 
00:06:08.941 ]' 00:06:08.941 04:53:47 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:08.941 { 00:06:08.941 "nbd_device": "/dev/nbd0", 00:06:08.941 "bdev_name": "Malloc0" 00:06:08.941 }, 00:06:08.941 { 00:06:08.941 "nbd_device": "/dev/nbd1", 00:06:08.941 "bdev_name": "Malloc1" 00:06:08.941 } 00:06:08.941 ]' 00:06:08.941 04:53:47 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:08.941 04:53:47 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:08.941 /dev/nbd1' 00:06:08.941 04:53:47 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:08.941 04:53:47 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:08.941 /dev/nbd1' 00:06:08.941 04:53:47 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:08.941 04:53:47 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:08.941 04:53:47 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:08.941 04:53:47 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:08.941 04:53:47 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:08.941 04:53:47 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:08.941 04:53:47 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:08.941 04:53:47 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:08.941 04:53:47 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:08.941 04:53:47 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:08.941 04:53:47 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:08.941 256+0 records in 00:06:08.941 256+0 records out 00:06:08.941 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00753929 s, 139 MB/s 00:06:08.941 04:53:47 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:08.941 04:53:47 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:08.941 256+0 records in 00:06:08.941 256+0 records out 00:06:08.941 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0166313 s, 63.0 MB/s 00:06:08.941 04:53:47 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:08.941 04:53:47 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:08.941 256+0 records in 00:06:08.941 256+0 records out 00:06:08.941 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0153555 s, 68.3 MB/s 00:06:08.941 04:53:47 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:08.941 04:53:47 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:08.941 04:53:47 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:08.941 04:53:47 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:08.941 04:53:47 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:08.941 04:53:47 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:08.941 04:53:47 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:08.941 04:53:47 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:08.941 04:53:47 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:08.941 04:53:47 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:08.941 04:53:47 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:08.941 04:53:47 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:08.941 04:53:47 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:08.941 04:53:47 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:08.941 04:53:47 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:08.941 04:53:47 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:08.941 04:53:47 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:08.941 04:53:47 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:08.941 04:53:47 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:09.199 04:53:47 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:09.199 04:53:47 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:09.199 04:53:47 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:09.199 04:53:47 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:09.199 04:53:47 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:09.199 04:53:47 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:09.199 04:53:47 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:09.199 04:53:47 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:09.199 04:53:47 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:09.199 04:53:47 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:09.457 04:53:47 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:09.457 04:53:47 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:09.457 04:53:47 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:09.457 04:53:47 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:09.457 04:53:47 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:09.457 04:53:47 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:09.457 04:53:47 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:09.457 04:53:47 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:09.457 04:53:47 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:09.457 04:53:47 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:09.457 04:53:47 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:09.740 04:53:47 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:09.740 04:53:47 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:09.740 04:53:47 event.app_repeat -- bdev/nbd_common.sh@64 
-- # jq -r '.[] | .nbd_device' 00:06:09.740 04:53:47 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:09.740 04:53:47 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:09.740 04:53:47 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:09.740 04:53:47 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:09.740 04:53:47 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:09.740 04:53:47 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:09.740 04:53:47 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:09.740 04:53:47 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:09.740 04:53:47 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:09.740 04:53:47 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:09.998 04:53:48 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:09.998 [2024-12-06 04:53:48.133722] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:09.998 [2024-12-06 04:53:48.161099] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:06:09.998 [2024-12-06 04:53:48.161216] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:09.998 [2024-12-06 04:53:48.191701] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:09.998 [2024-12-06 04:53:48.191739] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:13.275 spdk_app_start Round 2 00:06:13.275 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:13.275 04:53:51 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:13.275 04:53:51 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:06:13.275 04:53:51 event.app_repeat -- event/event.sh@25 -- # waitforlisten 70669 /var/tmp/spdk-nbd.sock 00:06:13.275 04:53:51 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 70669 ']' 00:06:13.275 04:53:51 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:13.275 04:53:51 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:13.275 04:53:51 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
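Before each SIGTERM the harness confirms no NBD export was left behind: nbd_get_disks must return an empty JSON array, so the jq pipe yields no /dev/nbd names and grep -c prints 0. A sketch of that teardown check; note that grep -c exits non-zero when the count is zero, hence the || true:

  sock=/var/tmp/spdk-nbd.sock
  rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s $sock"
  # Count NBD devices still exported by the target.
  count=$($rpc nbd_get_disks | jq -r '.[] | .nbd_device' | grep -c /dev/nbd || true)
  if [ "$count" -ne 0 ]; then
      echo "teardown check failed: $count NBD device(s) still attached" >&2
      exit 1
  fi
  # Nothing attached, safe to stop the target.
  $rpc spdk_kill_instance SIGTERM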
00:06:13.275 04:53:51 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:13.275 04:53:51 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:13.275 04:53:51 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:13.275 04:53:51 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:06:13.275 04:53:51 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:13.275 Malloc0 00:06:13.275 04:53:51 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:13.534 Malloc1 00:06:13.534 04:53:51 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:13.534 04:53:51 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:13.534 04:53:51 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:13.534 04:53:51 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:13.534 04:53:51 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:13.534 04:53:51 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:13.534 04:53:51 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:13.534 04:53:51 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:13.534 04:53:51 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:13.534 04:53:51 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:13.534 04:53:51 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:13.534 04:53:51 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:13.534 04:53:51 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:13.534 04:53:51 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:13.534 04:53:51 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:13.534 04:53:51 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:13.791 /dev/nbd0 00:06:13.791 04:53:51 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:13.791 04:53:51 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:13.791 04:53:51 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:06:13.791 04:53:51 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:06:13.791 04:53:51 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:13.791 04:53:51 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:13.791 04:53:51 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:06:13.791 04:53:51 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:06:13.791 04:53:51 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:13.791 04:53:51 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:13.791 04:53:51 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:13.791 1+0 records in 00:06:13.791 1+0 records out 
00:06:13.791 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000170422 s, 24.0 MB/s 00:06:13.791 04:53:51 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:13.791 04:53:51 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:06:13.791 04:53:51 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:13.791 04:53:51 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:13.791 04:53:51 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:06:13.791 04:53:51 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:13.791 04:53:51 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:13.791 04:53:51 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:14.051 /dev/nbd1 00:06:14.051 04:53:52 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:14.051 04:53:52 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:14.051 04:53:52 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:06:14.051 04:53:52 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:06:14.051 04:53:52 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:14.051 04:53:52 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:14.051 04:53:52 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:06:14.051 04:53:52 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:06:14.051 04:53:52 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:14.051 04:53:52 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:14.051 04:53:52 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:14.051 1+0 records in 00:06:14.051 1+0 records out 00:06:14.051 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000160246 s, 25.6 MB/s 00:06:14.051 04:53:52 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:14.051 04:53:52 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:06:14.051 04:53:52 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:14.051 04:53:52 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:14.051 04:53:52 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:06:14.051 04:53:52 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:14.051 04:53:52 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:14.051 04:53:52 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:14.051 04:53:52 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:14.051 04:53:52 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:14.311 04:53:52 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:14.311 { 00:06:14.311 "nbd_device": "/dev/nbd0", 00:06:14.311 "bdev_name": "Malloc0" 00:06:14.311 }, 00:06:14.311 { 00:06:14.311 "nbd_device": "/dev/nbd1", 00:06:14.311 "bdev_name": "Malloc1" 00:06:14.311 } 
00:06:14.311 ]' 00:06:14.311 04:53:52 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:14.311 04:53:52 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:14.311 { 00:06:14.311 "nbd_device": "/dev/nbd0", 00:06:14.311 "bdev_name": "Malloc0" 00:06:14.311 }, 00:06:14.311 { 00:06:14.311 "nbd_device": "/dev/nbd1", 00:06:14.311 "bdev_name": "Malloc1" 00:06:14.311 } 00:06:14.311 ]' 00:06:14.311 04:53:52 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:14.311 /dev/nbd1' 00:06:14.311 04:53:52 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:14.311 /dev/nbd1' 00:06:14.311 04:53:52 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:14.311 04:53:52 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:14.311 04:53:52 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:14.311 04:53:52 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:14.311 04:53:52 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:14.311 04:53:52 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:14.311 04:53:52 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:14.311 04:53:52 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:14.311 04:53:52 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:14.311 04:53:52 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:14.311 04:53:52 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:14.311 04:53:52 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:14.311 256+0 records in 00:06:14.311 256+0 records out 00:06:14.311 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00870333 s, 120 MB/s 00:06:14.311 04:53:52 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:14.311 04:53:52 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:14.311 256+0 records in 00:06:14.311 256+0 records out 00:06:14.311 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0203337 s, 51.6 MB/s 00:06:14.311 04:53:52 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:14.311 04:53:52 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:14.311 256+0 records in 00:06:14.311 256+0 records out 00:06:14.311 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0344545 s, 30.4 MB/s 00:06:14.311 04:53:52 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:14.311 04:53:52 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:14.311 04:53:52 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:14.311 04:53:52 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:14.311 04:53:52 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:14.311 04:53:52 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:14.311 04:53:52 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:14.311 04:53:52 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:14.311 04:53:52 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:14.311 04:53:52 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:14.311 04:53:52 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:14.311 04:53:52 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:14.311 04:53:52 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:14.311 04:53:52 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:14.311 04:53:52 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:14.311 04:53:52 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:14.311 04:53:52 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:14.311 04:53:52 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:14.311 04:53:52 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:14.572 04:53:52 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:14.572 04:53:52 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:14.572 04:53:52 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:14.572 04:53:52 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:14.572 04:53:52 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:14.572 04:53:52 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:14.572 04:53:52 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:14.572 04:53:52 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:14.572 04:53:52 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:14.572 04:53:52 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:14.861 04:53:52 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:14.861 04:53:52 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:14.861 04:53:52 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:14.861 04:53:52 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:14.861 04:53:52 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:14.861 04:53:52 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:14.861 04:53:52 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:14.861 04:53:52 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:14.861 04:53:52 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:14.861 04:53:52 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:14.861 04:53:52 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:15.122 04:53:53 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:15.122 04:53:53 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:15.122 04:53:53 event.app_repeat -- 
bdev/nbd_common.sh@64 -- # echo '[]' 00:06:15.122 04:53:53 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:15.122 04:53:53 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:15.122 04:53:53 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:15.122 04:53:53 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:15.122 04:53:53 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:15.122 04:53:53 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:15.122 04:53:53 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:15.122 04:53:53 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:15.122 04:53:53 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:15.122 04:53:53 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:15.382 04:53:53 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:15.382 [2024-12-06 04:53:53.465460] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:15.382 [2024-12-06 04:53:53.499494] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:06:15.382 [2024-12-06 04:53:53.499577] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:15.382 [2024-12-06 04:53:53.534029] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:15.382 [2024-12-06 04:53:53.534081] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:18.674 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:18.674 04:53:56 event.app_repeat -- event/event.sh@38 -- # waitforlisten 70669 /var/tmp/spdk-nbd.sock 00:06:18.674 04:53:56 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 70669 ']' 00:06:18.674 04:53:56 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:18.674 04:53:56 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:18.674 04:53:56 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
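Each app_repeat round traced above runs the same nbd verify pass: export the two malloc bdevs as /dev/nbd0 and /dev/nbd1, write 1 MiB of random data through each with direct I/O, cmp the devices back against the reference file, then detach and assert nbd_get_disks reports nothing. A condensed sketch of that pass, reusing the rpc.py path and socket from the log (the real helper extracts device names with jq before counting; error handling is elided here):

SOCK=/var/tmp/spdk-nbd.sock
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
TMP=$(mktemp)

# Build 1 MiB of random reference data, then push it through each device.
dd if=/dev/urandom of="$TMP" bs=4096 count=256
for dev in /dev/nbd0 /dev/nbd1; do
    dd if="$TMP" of="$dev" bs=4096 count=256 oflag=direct
done

# Byte-for-byte comparison of each device against the reference file.
for dev in /dev/nbd0 /dev/nbd1; do
    cmp -b -n 1M "$TMP" "$dev"
done
rm "$TMP"

# Detach both devices and confirm nothing is left exported.
for dev in /dev/nbd0 /dev/nbd1; do
    "$RPC" -s "$SOCK" nbd_stop_disk "$dev"
done
count=$("$RPC" -s "$SOCK" nbd_get_disks | grep -c /dev/nbd || true)
[ "$count" -eq 0 ] && echo "all nbd devices detached"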
00:06:18.674 04:53:56 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:18.674 04:53:56 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:18.674 04:53:56 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:18.674 04:53:56 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:06:18.674 04:53:56 event.app_repeat -- event/event.sh@39 -- # killprocess 70669 00:06:18.674 04:53:56 event.app_repeat -- common/autotest_common.sh@950 -- # '[' -z 70669 ']' 00:06:18.674 04:53:56 event.app_repeat -- common/autotest_common.sh@954 -- # kill -0 70669 00:06:18.674 04:53:56 event.app_repeat -- common/autotest_common.sh@955 -- # uname 00:06:18.675 04:53:56 event.app_repeat -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:18.675 04:53:56 event.app_repeat -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 70669 00:06:18.675 killing process with pid 70669 00:06:18.675 04:53:56 event.app_repeat -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:18.675 04:53:56 event.app_repeat -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:18.675 04:53:56 event.app_repeat -- common/autotest_common.sh@968 -- # echo 'killing process with pid 70669' 00:06:18.675 04:53:56 event.app_repeat -- common/autotest_common.sh@969 -- # kill 70669 00:06:18.675 04:53:56 event.app_repeat -- common/autotest_common.sh@974 -- # wait 70669 00:06:18.675 spdk_app_start is called in Round 0. 00:06:18.675 Shutdown signal received, stop current app iteration 00:06:18.675 Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 reinitialization... 00:06:18.675 spdk_app_start is called in Round 1. 00:06:18.675 Shutdown signal received, stop current app iteration 00:06:18.675 Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 reinitialization... 00:06:18.675 spdk_app_start is called in Round 2. 00:06:18.675 Shutdown signal received, stop current app iteration 00:06:18.675 Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 reinitialization... 00:06:18.675 spdk_app_start is called in Round 3. 00:06:18.675 Shutdown signal received, stop current app iteration 00:06:18.675 04:53:56 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:06:18.675 04:53:56 event.app_repeat -- event/event.sh@42 -- # return 0 00:06:18.675 00:06:18.675 real 0m16.808s 00:06:18.675 user 0m37.503s 00:06:18.675 sys 0m1.996s 00:06:18.675 04:53:56 event.app_repeat -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:18.675 04:53:56 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:18.675 ************************************ 00:06:18.675 END TEST app_repeat 00:06:18.675 ************************************ 00:06:18.675 04:53:56 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:06:18.675 04:53:56 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:06:18.675 04:53:56 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:18.675 04:53:56 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:18.675 04:53:56 event -- common/autotest_common.sh@10 -- # set +x 00:06:18.675 ************************************ 00:06:18.675 START TEST cpu_locks 00:06:18.675 ************************************ 00:06:18.675 04:53:56 event.cpu_locks -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:06:18.675 * Looking for test storage... 
00:06:18.675 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:06:18.675 04:53:56 event.cpu_locks -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:18.675 04:53:56 event.cpu_locks -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:18.675 04:53:56 event.cpu_locks -- common/autotest_common.sh@1681 -- # lcov --version 00:06:18.675 04:53:56 event.cpu_locks -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:18.675 04:53:56 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:18.675 04:53:56 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:18.675 04:53:56 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:18.675 04:53:56 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:06:18.675 04:53:56 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:06:18.675 04:53:56 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:06:18.675 04:53:56 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:06:18.675 04:53:56 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:06:18.675 04:53:56 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:06:18.675 04:53:56 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:06:18.675 04:53:56 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:18.675 04:53:56 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:06:18.675 04:53:56 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:06:18.675 04:53:56 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:18.675 04:53:56 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:18.675 04:53:56 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:06:18.933 04:53:56 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:06:18.933 04:53:56 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:18.933 04:53:56 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:06:18.933 04:53:56 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:06:18.933 04:53:56 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:06:18.933 04:53:56 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:06:18.933 04:53:56 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:18.933 04:53:56 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:06:18.933 04:53:56 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:06:18.933 04:53:56 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:18.933 04:53:56 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:18.933 04:53:56 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:06:18.933 04:53:56 event.cpu_locks -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:18.933 04:53:56 event.cpu_locks -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:18.933 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:18.933 --rc genhtml_branch_coverage=1 00:06:18.933 --rc genhtml_function_coverage=1 00:06:18.933 --rc genhtml_legend=1 00:06:18.933 --rc geninfo_all_blocks=1 00:06:18.933 --rc geninfo_unexecuted_blocks=1 00:06:18.933 00:06:18.933 ' 00:06:18.933 04:53:56 event.cpu_locks -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:18.933 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:18.933 --rc genhtml_branch_coverage=1 00:06:18.933 --rc genhtml_function_coverage=1 
00:06:18.933 --rc genhtml_legend=1 00:06:18.933 --rc geninfo_all_blocks=1 00:06:18.933 --rc geninfo_unexecuted_blocks=1 00:06:18.933 00:06:18.933 ' 00:06:18.933 04:53:56 event.cpu_locks -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:18.933 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:18.933 --rc genhtml_branch_coverage=1 00:06:18.933 --rc genhtml_function_coverage=1 00:06:18.933 --rc genhtml_legend=1 00:06:18.933 --rc geninfo_all_blocks=1 00:06:18.933 --rc geninfo_unexecuted_blocks=1 00:06:18.933 00:06:18.933 ' 00:06:18.933 04:53:56 event.cpu_locks -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:18.933 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:18.933 --rc genhtml_branch_coverage=1 00:06:18.933 --rc genhtml_function_coverage=1 00:06:18.933 --rc genhtml_legend=1 00:06:18.933 --rc geninfo_all_blocks=1 00:06:18.933 --rc geninfo_unexecuted_blocks=1 00:06:18.933 00:06:18.933 ' 00:06:18.933 04:53:56 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:06:18.933 04:53:56 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:06:18.933 04:53:56 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:06:18.933 04:53:56 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:06:18.933 04:53:56 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:18.933 04:53:56 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:18.933 04:53:56 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:18.933 ************************************ 00:06:18.933 START TEST default_locks 00:06:18.933 ************************************ 00:06:18.933 04:53:56 event.cpu_locks.default_locks -- common/autotest_common.sh@1125 -- # default_locks 00:06:18.933 04:53:56 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=71090 00:06:18.933 04:53:56 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 71090 00:06:18.933 04:53:56 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 71090 ']' 00:06:18.933 04:53:56 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:18.933 04:53:56 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:18.933 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:18.933 04:53:56 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:18.933 04:53:56 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:18.933 04:53:56 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:18.933 04:53:56 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:18.933 [2024-12-06 04:53:56.990851] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:06:18.933 [2024-12-06 04:53:56.990964] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71090 ] 00:06:18.933 [2024-12-06 04:53:57.123593] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:18.933 [2024-12-06 04:53:57.156226] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:19.895 04:53:57 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:19.895 04:53:57 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 0 00:06:19.895 04:53:57 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 71090 00:06:19.895 04:53:57 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 71090 00:06:19.895 04:53:57 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:19.895 04:53:58 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 71090 00:06:19.895 04:53:58 event.cpu_locks.default_locks -- common/autotest_common.sh@950 -- # '[' -z 71090 ']' 00:06:19.895 04:53:58 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # kill -0 71090 00:06:19.895 04:53:58 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # uname 00:06:19.895 04:53:58 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:19.895 04:53:58 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71090 00:06:19.895 killing process with pid 71090 00:06:19.895 04:53:58 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:19.895 04:53:58 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:19.895 04:53:58 event.cpu_locks.default_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71090' 00:06:19.895 04:53:58 event.cpu_locks.default_locks -- common/autotest_common.sh@969 -- # kill 71090 00:06:19.895 04:53:58 event.cpu_locks.default_locks -- common/autotest_common.sh@974 -- # wait 71090 00:06:20.152 04:53:58 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 71090 00:06:20.152 04:53:58 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # local es=0 00:06:20.152 04:53:58 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 71090 00:06:20.152 04:53:58 event.cpu_locks.default_locks -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:06:20.152 04:53:58 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:20.152 04:53:58 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:06:20.152 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:20.152 04:53:58 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:20.152 04:53:58 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # waitforlisten 71090 00:06:20.152 04:53:58 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 71090 ']' 00:06:20.152 04:53:58 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:20.152 04:53:58 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:20.152 04:53:58 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:20.152 04:53:58 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:20.152 04:53:58 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:20.152 ERROR: process (pid: 71090) is no longer running 00:06:20.152 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 846: kill: (71090) - No such process 00:06:20.152 04:53:58 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:20.152 04:53:58 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 1 00:06:20.152 04:53:58 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # es=1 00:06:20.152 04:53:58 event.cpu_locks.default_locks -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:20.152 04:53:58 event.cpu_locks.default_locks -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:20.152 04:53:58 event.cpu_locks.default_locks -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:20.152 04:53:58 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:06:20.152 04:53:58 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:20.152 04:53:58 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:06:20.152 04:53:58 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:20.152 00:06:20.152 real 0m1.433s 00:06:20.152 user 0m1.486s 00:06:20.152 sys 0m0.406s 00:06:20.152 04:53:58 event.cpu_locks.default_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:20.152 04:53:58 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:20.152 ************************************ 00:06:20.152 END TEST default_locks 00:06:20.152 ************************************ 00:06:20.409 04:53:58 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:06:20.409 04:53:58 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:20.409 04:53:58 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:20.409 04:53:58 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:20.409 ************************************ 00:06:20.409 START TEST default_locks_via_rpc 00:06:20.409 ************************************ 00:06:20.409 04:53:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1125 -- # default_locks_via_rpc 00:06:20.409 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
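The default_locks teardown above leans on the NOT wrapper: waitforlisten on pid 71090 is expected to fail once the process is gone ("No such process", es=1), and NOT inverts that failure into a pass. A stripped-down sketch of the inversion logic; the real helper also screens exit codes above 128 (signal deaths) against an allowed list, which this version merely propagates:

NOT() {
    local es=0
    "$@" || es=$?
    (( es > 128 )) && return "$es"   # signal-style exits still count as real failures
    (( es != 0 ))                    # succeed only when the wrapped command failed
}

NOT waitforlisten 71090 /var/tmp/spdk.sock && echo "failed as expected"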
00:06:20.409 04:53:58 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=71132 00:06:20.409 04:53:58 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 71132 00:06:20.409 04:53:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 71132 ']' 00:06:20.409 04:53:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:20.409 04:53:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:20.409 04:53:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:20.409 04:53:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:20.409 04:53:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:20.409 04:53:58 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:20.409 [2024-12-06 04:53:58.478104] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:06:20.409 [2024-12-06 04:53:58.478218] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71132 ] 00:06:20.409 [2024-12-06 04:53:58.611854] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:20.668 [2024-12-06 04:53:58.642727] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:21.233 04:53:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:21.233 04:53:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:21.233 04:53:59 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:06:21.233 04:53:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:21.233 04:53:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:21.233 04:53:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:21.233 04:53:59 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:06:21.233 04:53:59 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:21.233 04:53:59 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:06:21.233 04:53:59 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:21.233 04:53:59 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:06:21.233 04:53:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:21.233 04:53:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:21.233 04:53:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:21.233 04:53:59 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 71132 00:06:21.233 04:53:59 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 71132 
00:06:21.233 04:53:59 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:21.490 04:53:59 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 71132 00:06:21.490 04:53:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@950 -- # '[' -z 71132 ']' 00:06:21.490 04:53:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # kill -0 71132 00:06:21.490 04:53:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # uname 00:06:21.490 04:53:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:21.490 04:53:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71132 00:06:21.490 killing process with pid 71132 00:06:21.490 04:53:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:21.490 04:53:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:21.490 04:53:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71132' 00:06:21.490 04:53:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@969 -- # kill 71132 00:06:21.490 04:53:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@974 -- # wait 71132 00:06:21.746 ************************************ 00:06:21.746 END TEST default_locks_via_rpc 00:06:21.746 ************************************ 00:06:21.746 00:06:21.746 real 0m1.369s 00:06:21.746 user 0m1.366s 00:06:21.746 sys 0m0.416s 00:06:21.746 04:53:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:21.746 04:53:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:21.746 04:53:59 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:06:21.747 04:53:59 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:21.747 04:53:59 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:21.747 04:53:59 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:21.747 ************************************ 00:06:21.747 START TEST non_locking_app_on_locked_coremask 00:06:21.747 ************************************ 00:06:21.747 04:53:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # non_locking_app_on_locked_coremask 00:06:21.747 04:53:59 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=71184 00:06:21.747 04:53:59 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 71184 /var/tmp/spdk.sock 00:06:21.747 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
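default_locks_via_rpc, wrapped up above, is the runtime counterpart of the --disable-cpumask-locks flag: the same per-core flocks can be dropped and retaken over RPC while the target keeps running, with lslocks confirming each transition. A hedged sketch of that round-trip; the rpc.py path matches the log, the pid is the one from this run, and spdk_cpu_lock is the conventional name of the per-core lock file the grep keys on:

RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
PID=71132   # spdk_tgt started with -m 0x1, as in the trace

locks_exist() { lslocks -p "$1" | grep -q spdk_cpu_lock; }

"$RPC" framework_disable_cpumask_locks           # release the core-0 flock
locks_exist "$PID" && echo "unexpected: lock still held"

"$RPC" framework_enable_cpumask_locks            # take it back
locks_exist "$PID" && echo "core lock re-acquired"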
00:06:21.747 04:53:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 71184 ']' 00:06:21.747 04:53:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:21.747 04:53:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:21.747 04:53:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:21.747 04:53:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:21.747 04:53:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:21.747 04:53:59 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:21.747 [2024-12-06 04:53:59.887563] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:06:21.747 [2024-12-06 04:53:59.887704] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71184 ] 00:06:22.005 [2024-12-06 04:54:00.019783] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:22.005 [2024-12-06 04:54:00.055719] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:22.576 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:22.576 04:54:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:22.576 04:54:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:22.576 04:54:00 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=71194 00:06:22.576 04:54:00 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 71194 /var/tmp/spdk2.sock 00:06:22.576 04:54:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 71194 ']' 00:06:22.577 04:54:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:22.577 04:54:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:22.577 04:54:00 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:06:22.577 04:54:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:22.577 04:54:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:22.577 04:54:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:22.577 [2024-12-06 04:54:00.807819] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:06:22.838 [2024-12-06 04:54:00.808216] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71194 ] 00:06:22.838 [2024-12-06 04:54:00.948073] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:22.838 [2024-12-06 04:54:00.948128] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:22.838 [2024-12-06 04:54:01.013470] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:23.781 04:54:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:23.781 04:54:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:23.782 04:54:01 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 71184 00:06:23.782 04:54:01 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:23.782 04:54:01 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 71184 00:06:24.043 04:54:02 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 71184 00:06:24.043 04:54:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 71184 ']' 00:06:24.043 04:54:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 71184 00:06:24.043 04:54:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:06:24.043 04:54:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:24.043 04:54:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71184 00:06:24.043 04:54:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:24.043 04:54:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:24.044 killing process with pid 71184 00:06:24.044 04:54:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71184' 00:06:24.044 04:54:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 71184 00:06:24.044 04:54:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 71184 00:06:24.617 04:54:02 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 71194 00:06:24.617 04:54:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 71194 ']' 00:06:24.617 04:54:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 71194 00:06:24.617 04:54:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:06:24.617 04:54:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:24.617 04:54:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71194 00:06:24.617 killing process with pid 71194 00:06:24.617 04:54:02 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:24.617 04:54:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:24.617 04:54:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71194' 00:06:24.617 04:54:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 71194 00:06:24.617 04:54:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 71194 00:06:24.878 ************************************ 00:06:24.878 END TEST non_locking_app_on_locked_coremask 00:06:24.878 ************************************ 00:06:24.878 00:06:24.878 real 0m3.082s 00:06:24.878 user 0m3.394s 00:06:24.878 sys 0m0.822s 00:06:24.878 04:54:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:24.878 04:54:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:24.878 04:54:02 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:06:24.878 04:54:02 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:24.878 04:54:02 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:24.878 04:54:02 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:24.878 ************************************ 00:06:24.878 START TEST locking_app_on_unlocked_coremask 00:06:24.878 ************************************ 00:06:24.878 04:54:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_unlocked_coremask 00:06:24.878 04:54:02 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=71258 00:06:24.878 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:24.878 04:54:02 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 71258 /var/tmp/spdk.sock 00:06:24.878 04:54:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 71258 ']' 00:06:24.878 04:54:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:24.878 04:54:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:24.878 04:54:02 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:06:24.878 04:54:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:24.878 04:54:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:24.879 04:54:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:24.879 [2024-12-06 04:54:03.031279] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:06:24.879 [2024-12-06 04:54:03.031408] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71258 ] 00:06:25.140 [2024-12-06 04:54:03.169798] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:25.140 [2024-12-06 04:54:03.169843] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:25.140 [2024-12-06 04:54:03.202732] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:25.713 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:25.713 04:54:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:25.713 04:54:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:25.713 04:54:03 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:25.713 04:54:03 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=71274 00:06:25.713 04:54:03 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 71274 /var/tmp/spdk2.sock 00:06:25.713 04:54:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 71274 ']' 00:06:25.713 04:54:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:25.713 04:54:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:25.713 04:54:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:25.713 04:54:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:25.713 04:54:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:25.713 [2024-12-06 04:54:03.926612] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:06:25.713 [2024-12-06 04:54:03.926934] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71274 ] 00:06:25.972 [2024-12-06 04:54:04.068446] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:25.972 [2024-12-06 04:54:04.139256] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:26.544 04:54:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:26.544 04:54:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:26.544 04:54:04 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 71274 00:06:26.545 04:54:04 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:26.545 04:54:04 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 71274 00:06:27.115 04:54:05 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 71258 00:06:27.115 04:54:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 71258 ']' 00:06:27.115 04:54:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 71258 00:06:27.115 04:54:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname 00:06:27.115 04:54:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:27.115 04:54:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71258 00:06:27.115 killing process with pid 71258 00:06:27.115 04:54:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:27.115 04:54:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:27.115 04:54:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71258' 00:06:27.115 04:54:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 71258 00:06:27.115 04:54:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 71258 00:06:27.683 04:54:05 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 71274 00:06:27.683 04:54:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 71274 ']' 00:06:27.683 04:54:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 71274 00:06:27.683 04:54:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname 00:06:27.683 04:54:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:27.683 04:54:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71274 00:06:27.684 killing process with pid 71274 00:06:27.684 04:54:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:27.684 04:54:05 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:27.684 04:54:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71274' 00:06:27.684 04:54:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 71274 00:06:27.684 04:54:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 71274 00:06:28.000 ************************************ 00:06:28.000 END TEST locking_app_on_unlocked_coremask 00:06:28.000 ************************************ 00:06:28.000 00:06:28.000 real 0m3.085s 00:06:28.000 user 0m3.331s 00:06:28.000 sys 0m0.844s 00:06:28.000 04:54:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:28.000 04:54:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:28.000 04:54:06 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:06:28.000 04:54:06 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:28.000 04:54:06 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:28.000 04:54:06 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:28.000 ************************************ 00:06:28.000 START TEST locking_app_on_locked_coremask 00:06:28.000 ************************************ 00:06:28.000 04:54:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_locked_coremask 00:06:28.000 04:54:06 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=71332 00:06:28.000 04:54:06 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 71332 /var/tmp/spdk.sock 00:06:28.000 04:54:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 71332 ']' 00:06:28.000 04:54:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:28.000 04:54:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:28.000 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:28.000 04:54:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:28.000 04:54:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:28.000 04:54:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:28.000 04:54:06 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:28.000 [2024-12-06 04:54:06.172988] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:06:28.000 [2024-12-06 04:54:06.173575] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71332 ] 00:06:28.289 [2024-12-06 04:54:06.309305] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:28.289 [2024-12-06 04:54:06.343086] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:28.859 04:54:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:28.859 04:54:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:28.859 04:54:07 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=71348 00:06:28.859 04:54:07 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 71348 /var/tmp/spdk2.sock 00:06:28.859 04:54:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # local es=0 00:06:28.859 04:54:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 71348 /var/tmp/spdk2.sock 00:06:28.859 04:54:07 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:28.859 04:54:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:06:28.859 04:54:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:28.859 04:54:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:06:28.859 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:28.859 04:54:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:28.859 04:54:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # waitforlisten 71348 /var/tmp/spdk2.sock 00:06:28.859 04:54:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 71348 ']' 00:06:28.859 04:54:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:28.859 04:54:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:28.859 04:54:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:28.859 04:54:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:28.859 04:54:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:29.120 [2024-12-06 04:54:07.144615] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:06:29.120 [2024-12-06 04:54:07.144770] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71348 ] 00:06:29.120 [2024-12-06 04:54:07.284552] app.c: 779:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 71332 has claimed it. 00:06:29.120 [2024-12-06 04:54:07.284614] app.c: 910:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:29.691 ERROR: process (pid: 71348) is no longer running 00:06:29.691 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 846: kill: (71348) - No such process 00:06:29.691 04:54:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:29.691 04:54:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 1 00:06:29.691 04:54:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # es=1 00:06:29.691 04:54:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:29.691 04:54:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:29.691 04:54:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:29.691 04:54:07 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 71332 00:06:29.691 04:54:07 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:29.691 04:54:07 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 71332 00:06:29.950 04:54:08 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 71332 00:06:29.950 04:54:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 71332 ']' 00:06:29.950 04:54:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 71332 00:06:29.950 04:54:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:06:29.950 04:54:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:29.950 04:54:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71332 00:06:29.950 killing process with pid 71332 00:06:29.950 04:54:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:29.950 04:54:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:29.950 04:54:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71332' 00:06:29.950 04:54:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 71332 00:06:29.950 04:54:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 71332 00:06:30.209 ************************************ 00:06:30.209 END TEST locking_app_on_locked_coremask 00:06:30.209 ************************************ 00:06:30.209 00:06:30.209 real 0m2.201s 00:06:30.209 user 0m2.506s 00:06:30.209 sys 0m0.518s 00:06:30.209 04:54:08 
event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:30.209 04:54:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:30.209 04:54:08 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:06:30.209 04:54:08 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:30.209 04:54:08 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:30.209 04:54:08 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:30.209 ************************************ 00:06:30.209 START TEST locking_overlapped_coremask 00:06:30.209 ************************************ 00:06:30.209 04:54:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask 00:06:30.209 04:54:08 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=71390 00:06:30.209 04:54:08 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:06:30.209 04:54:08 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 71390 /var/tmp/spdk.sock 00:06:30.209 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:30.209 04:54:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 71390 ']' 00:06:30.209 04:54:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:30.209 04:54:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:30.209 04:54:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:30.209 04:54:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:30.209 04:54:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:30.209 [2024-12-06 04:54:08.414737] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:06:30.209 [2024-12-06 04:54:08.414866] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71390 ] 00:06:30.468 [2024-12-06 04:54:08.551890] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:30.468 [2024-12-06 04:54:08.590118] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:06:30.468 [2024-12-06 04:54:08.590439] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:30.468 [2024-12-06 04:54:08.590480] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:06:31.036 04:54:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:31.036 04:54:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:31.036 04:54:09 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=71408 00:06:31.036 04:54:09 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 71408 /var/tmp/spdk2.sock 00:06:31.036 04:54:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # local es=0 00:06:31.036 04:54:09 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:06:31.036 04:54:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 71408 /var/tmp/spdk2.sock 00:06:31.036 04:54:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:06:31.036 04:54:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:31.036 04:54:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:06:31.036 04:54:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:31.036 04:54:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # waitforlisten 71408 /var/tmp/spdk2.sock 00:06:31.036 04:54:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 71408 ']' 00:06:31.036 04:54:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:31.036 04:54:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:31.036 04:54:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:31.036 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:31.036 04:54:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:31.036 04:54:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:31.295 [2024-12-06 04:54:09.333578] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
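The two targets in this test are launched with deliberately overlapping core masks: the first with -m 0x7 (cores 0-2), the second with -m 0x1c (cores 2-4). They intersect on core 2, which is exactly the core the claim error below names. A small sketch to make the collision visible (overlap_cores is a hypothetical helper written for this note, not part of the test tree):

    # list the cores two hex cpumasks have in common
    overlap_cores() {
        local a=$(( $1 )) b=$(( $2 )) core=0
        local both=$(( a & b ))
        while (( both )); do
            (( both & 1 )) && echo "core $core"
            both=$(( both >> 1 ))
            core=$(( core + 1 ))
        done
    }

    overlap_cores 0x7 0x1c   # prints "core 2" -- the lock both targets need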
00:06:31.295 [2024-12-06 04:54:09.333887] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71408 ] 00:06:31.295 [2024-12-06 04:54:09.475922] app.c: 779:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 71390 has claimed it. 00:06:31.295 [2024-12-06 04:54:09.475990] app.c: 910:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:31.862 ERROR: process (pid: 71408) is no longer running 00:06:31.862 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 846: kill: (71408) - No such process 00:06:31.862 04:54:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:31.862 04:54:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 1 00:06:31.862 04:54:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # es=1 00:06:31.862 04:54:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:31.862 04:54:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:31.862 04:54:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:31.862 04:54:09 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:06:31.862 04:54:09 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:31.862 04:54:09 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:31.862 04:54:09 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:31.862 04:54:09 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 71390 00:06:31.862 04:54:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@950 -- # '[' -z 71390 ']' 00:06:31.862 04:54:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # kill -0 71390 00:06:31.862 04:54:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # uname 00:06:31.862 04:54:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:31.862 04:54:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71390 00:06:31.862 killing process with pid 71390 00:06:31.862 04:54:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:31.862 04:54:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:31.862 04:54:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71390' 00:06:31.862 04:54:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@969 -- # kill 71390 00:06:31.862 04:54:09 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@974 -- # wait 71390 00:06:32.121 00:06:32.121 real 0m1.916s 00:06:32.121 user 0m5.276s 00:06:32.121 sys 0m0.391s 00:06:32.121 04:54:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:32.121 04:54:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:32.121 ************************************ 00:06:32.121 END TEST locking_overlapped_coremask 00:06:32.121 ************************************ 00:06:32.121 04:54:10 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:06:32.121 04:54:10 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:32.121 04:54:10 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:32.121 04:54:10 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:32.121 ************************************ 00:06:32.121 START TEST locking_overlapped_coremask_via_rpc 00:06:32.121 ************************************ 00:06:32.121 04:54:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask_via_rpc 00:06:32.121 04:54:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=71450 00:06:32.121 04:54:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 71450 /var/tmp/spdk.sock 00:06:32.121 04:54:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 71450 ']' 00:06:32.121 04:54:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:32.121 04:54:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:06:32.121 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:32.121 04:54:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:32.121 04:54:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:32.121 04:54:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:32.121 04:54:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:32.380 [2024-12-06 04:54:10.379395] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:06:32.380 [2024-12-06 04:54:10.379809] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71450 ] 00:06:32.380 [2024-12-06 04:54:10.515273] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
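This via_rpc variant differs from the test above in one key way: both targets start with --disable-cpumask-locks, so spdk_app_start reports "CPU core locks deactivated" instead of claiming /var/tmp/spdk_cpu_lock_* files at boot, and the locks are only taken later through the framework_enable_cpumask_locks RPC. The check_remaining_locks verification traced earlier reduces to a comparison along these lines (a condensed sketch, not the event/cpu_locks.sh source):

    # a target holding locks for mask 0x7 should leave exactly these files
    locks=(/var/tmp/spdk_cpu_lock_*)
    expected=(/var/tmp/spdk_cpu_lock_{000..002})
    [[ "${locks[*]}" == "${expected[*]}" ]] && echo "lock files match mask 0x7"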
00:06:32.380 [2024-12-06 04:54:10.515314] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:32.380 [2024-12-06 04:54:10.548037] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:06:32.380 [2024-12-06 04:54:10.548188] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:32.380 [2024-12-06 04:54:10.548259] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:06:32.949 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:32.949 04:54:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:32.949 04:54:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:32.949 04:54:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=71468 00:06:32.949 04:54:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 71468 /var/tmp/spdk2.sock 00:06:32.949 04:54:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 71468 ']' 00:06:32.949 04:54:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:06:32.949 04:54:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:32.949 04:54:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:32.949 04:54:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:32.949 04:54:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:32.949 04:54:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:33.209 [2024-12-06 04:54:11.240319] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:06:33.209 [2024-12-06 04:54:11.240601] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71468 ] 00:06:33.209 [2024-12-06 04:54:11.382688] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:33.209 [2024-12-06 04:54:11.382737] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:33.471 [2024-12-06 04:54:11.454803] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:06:33.471 [2024-12-06 04:54:11.454831] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:06:33.471 [2024-12-06 04:54:11.454910] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 4 00:06:34.043 04:54:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:34.043 04:54:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:34.043 04:54:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:34.043 04:54:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:34.043 04:54:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:34.043 04:54:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:34.043 04:54:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:34.043 04:54:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # local es=0 00:06:34.043 04:54:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:34.043 04:54:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:06:34.043 04:54:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:34.043 04:54:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:06:34.043 04:54:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:34.043 04:54:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:34.043 04:54:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:34.043 04:54:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:34.043 [2024-12-06 04:54:12.098828] app.c: 779:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 71450 has claimed it. 00:06:34.043 request: 00:06:34.043 { 00:06:34.043 "method": "framework_enable_cpumask_locks", 00:06:34.043 "req_id": 1 00:06:34.043 } 00:06:34.043 Got JSON-RPC error response 00:06:34.043 response: 00:06:34.043 { 00:06:34.043 "code": -32603, 00:06:34.043 "message": "Failed to claim CPU core: 2" 00:06:34.043 } 00:06:34.043 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
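The -32603 "Failed to claim CPU core: 2" response above is the intended outcome: pid 71450 (mask 0x7) already holds the core 2 lock, so the second target (mask 0x1c) cannot take its locks over RPC. Stripped of the NOT and rpc_cmd wrappers, the failing call is roughly the following, expected to exit non-zero for as long as the first target keeps its locks:

    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk2.sock \
        framework_enable_cpumask_locks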
00:06:34.043 04:54:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:06:34.043 04:54:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # es=1 00:06:34.043 04:54:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:34.043 04:54:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:34.043 04:54:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:34.043 04:54:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 71450 /var/tmp/spdk.sock 00:06:34.043 04:54:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 71450 ']' 00:06:34.043 04:54:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:34.043 04:54:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:34.043 04:54:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:34.043 04:54:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:34.043 04:54:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:34.302 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:34.302 04:54:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:34.302 04:54:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:34.302 04:54:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 71468 /var/tmp/spdk2.sock 00:06:34.302 04:54:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 71468 ']' 00:06:34.302 04:54:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:34.302 04:54:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:34.302 04:54:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
00:06:34.302 04:54:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:34.302 04:54:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:34.302 ************************************ 00:06:34.302 END TEST locking_overlapped_coremask_via_rpc 00:06:34.302 ************************************ 00:06:34.302 04:54:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:34.302 04:54:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:34.302 04:54:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:34.302 04:54:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:34.302 04:54:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:34.302 04:54:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:34.302 00:06:34.302 real 0m2.225s 00:06:34.302 user 0m1.007s 00:06:34.302 sys 0m0.145s 00:06:34.302 04:54:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:34.302 04:54:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:34.560 04:54:12 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:06:34.560 04:54:12 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 71450 ]] 00:06:34.560 04:54:12 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 71450 00:06:34.560 04:54:12 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 71450 ']' 00:06:34.560 04:54:12 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 71450 00:06:34.560 04:54:12 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:06:34.560 04:54:12 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:34.560 04:54:12 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71450 00:06:34.560 killing process with pid 71450 00:06:34.560 04:54:12 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:34.560 04:54:12 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:34.560 04:54:12 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71450' 00:06:34.560 04:54:12 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 71450 00:06:34.560 04:54:12 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 71450 00:06:34.819 04:54:12 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 71468 ]] 00:06:34.819 04:54:12 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 71468 00:06:34.819 04:54:12 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 71468 ']' 00:06:34.819 04:54:12 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 71468 00:06:34.819 04:54:12 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:06:34.819 04:54:12 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:34.819 
04:54:12 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71468 00:06:34.819 killing process with pid 71468 00:06:34.819 04:54:12 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:06:34.819 04:54:12 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:06:34.819 04:54:12 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71468' 00:06:34.819 04:54:12 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 71468 00:06:34.819 04:54:12 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 71468 00:06:35.078 04:54:13 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:35.078 04:54:13 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:06:35.078 04:54:13 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 71450 ]] 00:06:35.078 04:54:13 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 71450 00:06:35.078 04:54:13 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 71450 ']' 00:06:35.078 Process with pid 71450 is not found 00:06:35.078 Process with pid 71468 is not found 00:06:35.078 04:54:13 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 71450 00:06:35.078 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (71450) - No such process 00:06:35.078 04:54:13 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 71450 is not found' 00:06:35.078 04:54:13 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 71468 ]] 00:06:35.078 04:54:13 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 71468 00:06:35.078 04:54:13 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 71468 ']' 00:06:35.078 04:54:13 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 71468 00:06:35.078 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (71468) - No such process 00:06:35.078 04:54:13 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 71468 is not found' 00:06:35.078 04:54:13 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:35.078 ************************************ 00:06:35.078 END TEST cpu_locks 00:06:35.078 ************************************ 00:06:35.078 00:06:35.078 real 0m16.395s 00:06:35.078 user 0m28.544s 00:06:35.078 sys 0m4.304s 00:06:35.078 04:54:13 event.cpu_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:35.078 04:54:13 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:35.078 ************************************ 00:06:35.078 END TEST event 00:06:35.078 ************************************ 00:06:35.078 00:06:35.078 real 0m42.224s 00:06:35.078 user 1m21.406s 00:06:35.078 sys 0m7.070s 00:06:35.078 04:54:13 event -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:35.078 04:54:13 event -- common/autotest_common.sh@10 -- # set +x 00:06:35.078 04:54:13 -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:35.078 04:54:13 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:35.078 04:54:13 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:35.078 04:54:13 -- common/autotest_common.sh@10 -- # set +x 00:06:35.078 ************************************ 00:06:35.078 START TEST thread 00:06:35.078 ************************************ 00:06:35.078 04:54:13 thread -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:35.078 * Looking for test storage... 
00:06:35.078 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:06:35.078 04:54:13 thread -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:35.078 04:54:13 thread -- common/autotest_common.sh@1681 -- # lcov --version 00:06:35.078 04:54:13 thread -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:35.339 04:54:13 thread -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:35.339 04:54:13 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:35.339 04:54:13 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:35.339 04:54:13 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:35.339 04:54:13 thread -- scripts/common.sh@336 -- # IFS=.-: 00:06:35.339 04:54:13 thread -- scripts/common.sh@336 -- # read -ra ver1 00:06:35.339 04:54:13 thread -- scripts/common.sh@337 -- # IFS=.-: 00:06:35.339 04:54:13 thread -- scripts/common.sh@337 -- # read -ra ver2 00:06:35.339 04:54:13 thread -- scripts/common.sh@338 -- # local 'op=<' 00:06:35.339 04:54:13 thread -- scripts/common.sh@340 -- # ver1_l=2 00:06:35.339 04:54:13 thread -- scripts/common.sh@341 -- # ver2_l=1 00:06:35.339 04:54:13 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:35.339 04:54:13 thread -- scripts/common.sh@344 -- # case "$op" in 00:06:35.339 04:54:13 thread -- scripts/common.sh@345 -- # : 1 00:06:35.339 04:54:13 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:35.339 04:54:13 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:35.339 04:54:13 thread -- scripts/common.sh@365 -- # decimal 1 00:06:35.339 04:54:13 thread -- scripts/common.sh@353 -- # local d=1 00:06:35.339 04:54:13 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:35.339 04:54:13 thread -- scripts/common.sh@355 -- # echo 1 00:06:35.339 04:54:13 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:06:35.339 04:54:13 thread -- scripts/common.sh@366 -- # decimal 2 00:06:35.339 04:54:13 thread -- scripts/common.sh@353 -- # local d=2 00:06:35.339 04:54:13 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:35.339 04:54:13 thread -- scripts/common.sh@355 -- # echo 2 00:06:35.339 04:54:13 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:06:35.339 04:54:13 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:35.339 04:54:13 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:35.339 04:54:13 thread -- scripts/common.sh@368 -- # return 0 00:06:35.339 04:54:13 thread -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:35.339 04:54:13 thread -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:35.339 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:35.339 --rc genhtml_branch_coverage=1 00:06:35.339 --rc genhtml_function_coverage=1 00:06:35.339 --rc genhtml_legend=1 00:06:35.339 --rc geninfo_all_blocks=1 00:06:35.339 --rc geninfo_unexecuted_blocks=1 00:06:35.339 00:06:35.339 ' 00:06:35.339 04:54:13 thread -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:35.339 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:35.339 --rc genhtml_branch_coverage=1 00:06:35.339 --rc genhtml_function_coverage=1 00:06:35.339 --rc genhtml_legend=1 00:06:35.339 --rc geninfo_all_blocks=1 00:06:35.339 --rc geninfo_unexecuted_blocks=1 00:06:35.339 00:06:35.339 ' 00:06:35.339 04:54:13 thread -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:35.339 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:06:35.339 --rc genhtml_branch_coverage=1 00:06:35.339 --rc genhtml_function_coverage=1 00:06:35.339 --rc genhtml_legend=1 00:06:35.339 --rc geninfo_all_blocks=1 00:06:35.339 --rc geninfo_unexecuted_blocks=1 00:06:35.339 00:06:35.339 ' 00:06:35.339 04:54:13 thread -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:35.339 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:35.339 --rc genhtml_branch_coverage=1 00:06:35.339 --rc genhtml_function_coverage=1 00:06:35.339 --rc genhtml_legend=1 00:06:35.339 --rc geninfo_all_blocks=1 00:06:35.339 --rc geninfo_unexecuted_blocks=1 00:06:35.339 00:06:35.339 ' 00:06:35.339 04:54:13 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:35.339 04:54:13 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:06:35.339 04:54:13 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:35.339 04:54:13 thread -- common/autotest_common.sh@10 -- # set +x 00:06:35.339 ************************************ 00:06:35.339 START TEST thread_poller_perf 00:06:35.339 ************************************ 00:06:35.339 04:54:13 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:35.339 [2024-12-06 04:54:13.403876] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:06:35.339 [2024-12-06 04:54:13.404115] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71595 ] 00:06:35.339 [2024-12-06 04:54:13.539359] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:35.601 [2024-12-06 04:54:13.571394] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:35.601 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:06:36.545 [2024-12-06T04:54:14.777Z] ====================================== 00:06:36.545 [2024-12-06T04:54:14.777Z] busy:2616680996 (cyc) 00:06:36.545 [2024-12-06T04:54:14.777Z] total_run_count: 304000 00:06:36.545 [2024-12-06T04:54:14.777Z] tsc_hz: 2600000000 (cyc) 00:06:36.545 [2024-12-06T04:54:14.777Z] ====================================== 00:06:36.545 [2024-12-06T04:54:14.777Z] poller_cost: 8607 (cyc), 3310 (nsec) 00:06:36.545 ************************************ 00:06:36.545 END TEST thread_poller_perf 00:06:36.545 ************************************ 00:06:36.545 00:06:36.545 real 0m1.264s 00:06:36.545 user 0m1.093s 00:06:36.545 sys 0m0.063s 00:06:36.545 04:54:14 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:36.545 04:54:14 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:36.545 04:54:14 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:36.545 04:54:14 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:06:36.545 04:54:14 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:36.545 04:54:14 thread -- common/autotest_common.sh@10 -- # set +x 00:06:36.545 ************************************ 00:06:36.545 START TEST thread_poller_perf 00:06:36.545 ************************************ 00:06:36.545 04:54:14 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:36.545 [2024-12-06 04:54:14.719510] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:06:36.545 [2024-12-06 04:54:14.720215] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71626 ] 00:06:36.824 [2024-12-06 04:54:14.854163] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:36.824 Running 1000 pollers for 1 seconds with 0 microseconds period. 
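The first run's summary is internally consistent: poller_cost looks like busy cycles divided by total_run_count, converted to nanoseconds via tsc_hz (formulas inferred from the numbers shown, not read from the poller_perf source):

    echo $(( 2616680996 / 304000 ))              # 8607 cyc per poller pass
    echo $(( 8607 * 1000000000 / 2600000000 ))   # 3310 nsec at tsc_hz 2.6 GHz

The same arithmetic applies to the 0-microsecond-period run whose results follow; its much lower per-pass cost reflects pollers with no timer period to track.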
00:06:36.824 [2024-12-06 04:54:14.887951] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:37.758 [2024-12-06T04:54:15.990Z] ====================================== 00:06:37.758 [2024-12-06T04:54:15.990Z] busy:2603438514 (cyc) 00:06:37.758 [2024-12-06T04:54:15.990Z] total_run_count: 3936000 00:06:37.758 [2024-12-06T04:54:15.990Z] tsc_hz: 2600000000 (cyc) 00:06:37.758 [2024-12-06T04:54:15.990Z] ====================================== 00:06:37.758 [2024-12-06T04:54:15.990Z] poller_cost: 661 (cyc), 254 (nsec) 00:06:37.758 ************************************ 00:06:37.758 00:06:37.758 real 0m1.249s 00:06:37.758 user 0m1.080s 00:06:37.758 sys 0m0.063s 00:06:37.758 04:54:15 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:37.758 04:54:15 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:37.758 END TEST thread_poller_perf 00:06:37.758 ************************************ 00:06:37.758 04:54:15 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:06:38.016 00:06:38.016 real 0m2.757s 00:06:38.016 user 0m2.275s 00:06:38.016 sys 0m0.253s 00:06:38.016 ************************************ 00:06:38.016 END TEST thread 00:06:38.016 ************************************ 00:06:38.016 04:54:15 thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:38.016 04:54:15 thread -- common/autotest_common.sh@10 -- # set +x 00:06:38.016 04:54:16 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:06:38.016 04:54:16 -- spdk/autotest.sh@176 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:06:38.016 04:54:16 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:38.016 04:54:16 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:38.016 04:54:16 -- common/autotest_common.sh@10 -- # set +x 00:06:38.016 ************************************ 00:06:38.016 START TEST app_cmdline 00:06:38.016 ************************************ 00:06:38.016 04:54:16 app_cmdline -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:06:38.016 * Looking for test storage... 
00:06:38.016 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:06:38.016 04:54:16 app_cmdline -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:38.016 04:54:16 app_cmdline -- common/autotest_common.sh@1681 -- # lcov --version 00:06:38.016 04:54:16 app_cmdline -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:38.016 04:54:16 app_cmdline -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:38.016 04:54:16 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:38.016 04:54:16 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:38.016 04:54:16 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:38.016 04:54:16 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:06:38.017 04:54:16 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:06:38.017 04:54:16 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:06:38.017 04:54:16 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:06:38.017 04:54:16 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:06:38.017 04:54:16 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:06:38.017 04:54:16 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:06:38.017 04:54:16 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:38.017 04:54:16 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:06:38.017 04:54:16 app_cmdline -- scripts/common.sh@345 -- # : 1 00:06:38.017 04:54:16 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:38.017 04:54:16 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:38.017 04:54:16 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:06:38.017 04:54:16 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:06:38.017 04:54:16 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:38.017 04:54:16 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:06:38.017 04:54:16 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:06:38.017 04:54:16 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:06:38.017 04:54:16 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:06:38.017 04:54:16 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:38.017 04:54:16 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:06:38.017 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
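The scripts/common.sh trace running through here is the harness checking whether the installed lcov (1.15) predates major version 2 before choosing coverage flags: lt calls cmp_versions, which splits both versions into numeric fields and compares them left to right. A condensed standalone equivalent (my own paraphrase, not the scripts/common.sh source, ignoring its separator and pre-release handling):

    lt() {
        local IFS=.
        local -a a=($1) b=($2)
        local i
        for i in 0 1 2; do
            (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
            (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
        done
        return 1   # equal versions are not less-than
    }

    lt 1.15 2 && echo "1.15 < 2"   # the branch this trace takes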
00:06:38.017 04:54:16 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:06:38.017 04:54:16 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:38.017 04:54:16 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:38.017 04:54:16 app_cmdline -- scripts/common.sh@368 -- # return 0 00:06:38.017 04:54:16 app_cmdline -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:38.017 04:54:16 app_cmdline -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:38.017 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:38.017 --rc genhtml_branch_coverage=1 00:06:38.017 --rc genhtml_function_coverage=1 00:06:38.017 --rc genhtml_legend=1 00:06:38.017 --rc geninfo_all_blocks=1 00:06:38.017 --rc geninfo_unexecuted_blocks=1 00:06:38.017 00:06:38.017 ' 00:06:38.017 04:54:16 app_cmdline -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:38.017 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:38.017 --rc genhtml_branch_coverage=1 00:06:38.017 --rc genhtml_function_coverage=1 00:06:38.017 --rc genhtml_legend=1 00:06:38.017 --rc geninfo_all_blocks=1 00:06:38.017 --rc geninfo_unexecuted_blocks=1 00:06:38.017 00:06:38.017 ' 00:06:38.017 04:54:16 app_cmdline -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:38.017 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:38.017 --rc genhtml_branch_coverage=1 00:06:38.017 --rc genhtml_function_coverage=1 00:06:38.017 --rc genhtml_legend=1 00:06:38.017 --rc geninfo_all_blocks=1 00:06:38.017 --rc geninfo_unexecuted_blocks=1 00:06:38.017 00:06:38.017 ' 00:06:38.017 04:54:16 app_cmdline -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:38.017 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:38.017 --rc genhtml_branch_coverage=1 00:06:38.017 --rc genhtml_function_coverage=1 00:06:38.017 --rc genhtml_legend=1 00:06:38.017 --rc geninfo_all_blocks=1 00:06:38.017 --rc geninfo_unexecuted_blocks=1 00:06:38.017 00:06:38.017 ' 00:06:38.017 04:54:16 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:06:38.017 04:54:16 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=71715 00:06:38.017 04:54:16 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 71715 00:06:38.017 04:54:16 app_cmdline -- common/autotest_common.sh@831 -- # '[' -z 71715 ']' 00:06:38.017 04:54:16 app_cmdline -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:38.017 04:54:16 app_cmdline -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:38.017 04:54:16 app_cmdline -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:38.017 04:54:16 app_cmdline -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:38.017 04:54:16 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:38.017 04:54:16 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:06:38.017 [2024-12-06 04:54:16.225469] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:06:38.017 [2024-12-06 04:54:16.225585] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71715 ] 00:06:38.280 [2024-12-06 04:54:16.361452] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:38.280 [2024-12-06 04:54:16.395644] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:38.850 04:54:17 app_cmdline -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:38.850 04:54:17 app_cmdline -- common/autotest_common.sh@864 -- # return 0 00:06:38.850 04:54:17 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:06:39.111 { 00:06:39.111 "version": "SPDK v24.09.1-pre git sha1 b18e1bd62", 00:06:39.111 "fields": { 00:06:39.111 "major": 24, 00:06:39.111 "minor": 9, 00:06:39.111 "patch": 1, 00:06:39.111 "suffix": "-pre", 00:06:39.111 "commit": "b18e1bd62" 00:06:39.111 } 00:06:39.111 } 00:06:39.111 04:54:17 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:06:39.111 04:54:17 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:06:39.111 04:54:17 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:06:39.111 04:54:17 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:06:39.111 04:54:17 app_cmdline -- app/cmdline.sh@26 -- # sort 00:06:39.111 04:54:17 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:06:39.111 04:54:17 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:06:39.111 04:54:17 app_cmdline -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:39.111 04:54:17 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:39.111 04:54:17 app_cmdline -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:39.111 04:54:17 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:06:39.111 04:54:17 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:06:39.111 04:54:17 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:39.111 04:54:17 app_cmdline -- common/autotest_common.sh@650 -- # local es=0 00:06:39.111 04:54:17 app_cmdline -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:39.111 04:54:17 app_cmdline -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:39.111 04:54:17 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:39.111 04:54:17 app_cmdline -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:39.111 04:54:17 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:39.111 04:54:17 app_cmdline -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:39.111 04:54:17 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:39.111 04:54:17 app_cmdline -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:39.111 04:54:17 app_cmdline -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:06:39.111 04:54:17 app_cmdline -- common/autotest_common.sh@653 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:39.370 request: 00:06:39.370 { 00:06:39.370 "method": "env_dpdk_get_mem_stats", 00:06:39.370 "req_id": 1 00:06:39.370 } 00:06:39.370 Got JSON-RPC error response 00:06:39.370 response: 00:06:39.370 { 00:06:39.370 "code": -32601, 00:06:39.370 "message": "Method not found" 00:06:39.370 } 00:06:39.370 04:54:17 app_cmdline -- common/autotest_common.sh@653 -- # es=1 00:06:39.370 04:54:17 app_cmdline -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:39.371 04:54:17 app_cmdline -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:39.371 04:54:17 app_cmdline -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:39.371 04:54:17 app_cmdline -- app/cmdline.sh@1 -- # killprocess 71715 00:06:39.371 04:54:17 app_cmdline -- common/autotest_common.sh@950 -- # '[' -z 71715 ']' 00:06:39.371 04:54:17 app_cmdline -- common/autotest_common.sh@954 -- # kill -0 71715 00:06:39.371 04:54:17 app_cmdline -- common/autotest_common.sh@955 -- # uname 00:06:39.371 04:54:17 app_cmdline -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:39.371 04:54:17 app_cmdline -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71715 00:06:39.371 killing process with pid 71715 00:06:39.371 04:54:17 app_cmdline -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:39.371 04:54:17 app_cmdline -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:39.371 04:54:17 app_cmdline -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71715' 00:06:39.371 04:54:17 app_cmdline -- common/autotest_common.sh@969 -- # kill 71715 00:06:39.371 04:54:17 app_cmdline -- common/autotest_common.sh@974 -- # wait 71715 00:06:39.627 00:06:39.627 real 0m1.789s 00:06:39.627 user 0m2.058s 00:06:39.627 sys 0m0.384s 00:06:39.627 ************************************ 00:06:39.627 END TEST app_cmdline 00:06:39.627 ************************************ 00:06:39.627 04:54:17 app_cmdline -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:39.627 04:54:17 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:39.885 04:54:17 -- spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:06:39.885 04:54:17 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:39.885 04:54:17 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:39.885 04:54:17 -- common/autotest_common.sh@10 -- # set +x 00:06:39.885 ************************************ 00:06:39.885 START TEST version 00:06:39.885 ************************************ 00:06:39.885 04:54:17 version -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:06:39.885 * Looking for test storage... 
00:06:39.885 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:06:39.885 04:54:17 version -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:39.885 04:54:17 version -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:39.885 04:54:17 version -- common/autotest_common.sh@1681 -- # lcov --version 00:06:39.885 04:54:17 version -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:39.885 04:54:17 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:39.885 04:54:17 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:39.885 04:54:17 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:39.885 04:54:17 version -- scripts/common.sh@336 -- # IFS=.-: 00:06:39.885 04:54:17 version -- scripts/common.sh@336 -- # read -ra ver1 00:06:39.885 04:54:17 version -- scripts/common.sh@337 -- # IFS=.-: 00:06:39.885 04:54:17 version -- scripts/common.sh@337 -- # read -ra ver2 00:06:39.885 04:54:17 version -- scripts/common.sh@338 -- # local 'op=<' 00:06:39.885 04:54:17 version -- scripts/common.sh@340 -- # ver1_l=2 00:06:39.885 04:54:17 version -- scripts/common.sh@341 -- # ver2_l=1 00:06:39.885 04:54:17 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:39.885 04:54:17 version -- scripts/common.sh@344 -- # case "$op" in 00:06:39.885 04:54:17 version -- scripts/common.sh@345 -- # : 1 00:06:39.885 04:54:17 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:39.885 04:54:17 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:39.885 04:54:18 version -- scripts/common.sh@365 -- # decimal 1 00:06:39.885 04:54:18 version -- scripts/common.sh@353 -- # local d=1 00:06:39.885 04:54:18 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:39.885 04:54:18 version -- scripts/common.sh@355 -- # echo 1 00:06:39.885 04:54:18 version -- scripts/common.sh@365 -- # ver1[v]=1 00:06:39.885 04:54:18 version -- scripts/common.sh@366 -- # decimal 2 00:06:39.885 04:54:18 version -- scripts/common.sh@353 -- # local d=2 00:06:39.885 04:54:18 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:39.885 04:54:18 version -- scripts/common.sh@355 -- # echo 2 00:06:39.885 04:54:18 version -- scripts/common.sh@366 -- # ver2[v]=2 00:06:39.885 04:54:18 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:39.885 04:54:18 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:39.885 04:54:18 version -- scripts/common.sh@368 -- # return 0 00:06:39.885 04:54:18 version -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:39.885 04:54:18 version -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:39.885 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:39.885 --rc genhtml_branch_coverage=1 00:06:39.885 --rc genhtml_function_coverage=1 00:06:39.885 --rc genhtml_legend=1 00:06:39.885 --rc geninfo_all_blocks=1 00:06:39.885 --rc geninfo_unexecuted_blocks=1 00:06:39.885 00:06:39.885 ' 00:06:39.885 04:54:18 version -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:39.885 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:39.885 --rc genhtml_branch_coverage=1 00:06:39.885 --rc genhtml_function_coverage=1 00:06:39.885 --rc genhtml_legend=1 00:06:39.885 --rc geninfo_all_blocks=1 00:06:39.885 --rc geninfo_unexecuted_blocks=1 00:06:39.885 00:06:39.885 ' 00:06:39.885 04:54:18 version -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:39.885 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:06:39.885 --rc genhtml_branch_coverage=1 00:06:39.885 --rc genhtml_function_coverage=1 00:06:39.885 --rc genhtml_legend=1 00:06:39.885 --rc geninfo_all_blocks=1 00:06:39.885 --rc geninfo_unexecuted_blocks=1 00:06:39.885 00:06:39.885 ' 00:06:39.885 04:54:18 version -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:39.885 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:39.885 --rc genhtml_branch_coverage=1 00:06:39.885 --rc genhtml_function_coverage=1 00:06:39.885 --rc genhtml_legend=1 00:06:39.885 --rc geninfo_all_blocks=1 00:06:39.885 --rc geninfo_unexecuted_blocks=1 00:06:39.885 00:06:39.885 ' 00:06:39.885 04:54:18 version -- app/version.sh@17 -- # get_header_version major 00:06:39.885 04:54:18 version -- app/version.sh@14 -- # cut -f2 00:06:39.886 04:54:18 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:39.886 04:54:18 version -- app/version.sh@14 -- # tr -d '"' 00:06:39.886 04:54:18 version -- app/version.sh@17 -- # major=24 00:06:39.886 04:54:18 version -- app/version.sh@18 -- # get_header_version minor 00:06:39.886 04:54:18 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:39.886 04:54:18 version -- app/version.sh@14 -- # cut -f2 00:06:39.886 04:54:18 version -- app/version.sh@14 -- # tr -d '"' 00:06:39.886 04:54:18 version -- app/version.sh@18 -- # minor=9 00:06:39.886 04:54:18 version -- app/version.sh@19 -- # get_header_version patch 00:06:39.886 04:54:18 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:39.886 04:54:18 version -- app/version.sh@14 -- # cut -f2 00:06:39.886 04:54:18 version -- app/version.sh@14 -- # tr -d '"' 00:06:39.886 04:54:18 version -- app/version.sh@19 -- # patch=1 00:06:39.886 04:54:18 version -- app/version.sh@20 -- # get_header_version suffix 00:06:39.886 04:54:18 version -- app/version.sh@14 -- # cut -f2 00:06:39.886 04:54:18 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:39.886 04:54:18 version -- app/version.sh@14 -- # tr -d '"' 00:06:39.886 04:54:18 version -- app/version.sh@20 -- # suffix=-pre 00:06:39.886 04:54:18 version -- app/version.sh@22 -- # version=24.9 00:06:39.886 04:54:18 version -- app/version.sh@25 -- # (( patch != 0 )) 00:06:39.886 04:54:18 version -- app/version.sh@25 -- # version=24.9.1 00:06:39.886 04:54:18 version -- app/version.sh@28 -- # version=24.9.1rc0 00:06:39.886 04:54:18 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:06:39.886 04:54:18 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:06:39.886 04:54:18 version -- app/version.sh@30 -- # py_version=24.9.1rc0 00:06:39.886 04:54:18 version -- app/version.sh@31 -- # [[ 24.9.1rc0 == \2\4\.\9\.\1\r\c\0 ]] 00:06:39.886 ************************************ 00:06:39.886 END TEST version 00:06:39.886 ************************************ 00:06:39.886 00:06:39.886 real 0m0.202s 00:06:39.886 user 0m0.124s 00:06:39.886 sys 0m0.106s 00:06:39.886 04:54:18 version -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:39.886 04:54:18 
version -- common/autotest_common.sh@10 -- # set +x 00:06:39.886 04:54:18 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:06:39.886 04:54:18 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:06:39.886 04:54:18 -- spdk/autotest.sh@194 -- # uname -s 00:06:39.886 04:54:18 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:06:39.886 04:54:18 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:06:39.886 04:54:18 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:06:39.886 04:54:18 -- spdk/autotest.sh@207 -- # '[' 1 -eq 1 ']' 00:06:39.886 04:54:18 -- spdk/autotest.sh@208 -- # run_test blockdev_nvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:06:39.886 04:54:18 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:06:39.886 04:54:18 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:39.886 04:54:18 -- common/autotest_common.sh@10 -- # set +x 00:06:39.886 ************************************ 00:06:39.886 START TEST blockdev_nvme 00:06:39.886 ************************************ 00:06:39.886 04:54:18 blockdev_nvme -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:06:40.145 * Looking for test storage... 00:06:40.145 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:06:40.145 04:54:18 blockdev_nvme -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:40.145 04:54:18 blockdev_nvme -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:40.145 04:54:18 blockdev_nvme -- common/autotest_common.sh@1681 -- # lcov --version 00:06:40.145 04:54:18 blockdev_nvme -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:40.145 04:54:18 blockdev_nvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:40.145 04:54:18 blockdev_nvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:40.145 04:54:18 blockdev_nvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:40.145 04:54:18 blockdev_nvme -- scripts/common.sh@336 -- # IFS=.-: 00:06:40.145 04:54:18 blockdev_nvme -- scripts/common.sh@336 -- # read -ra ver1 00:06:40.145 04:54:18 blockdev_nvme -- scripts/common.sh@337 -- # IFS=.-: 00:06:40.145 04:54:18 blockdev_nvme -- scripts/common.sh@337 -- # read -ra ver2 00:06:40.145 04:54:18 blockdev_nvme -- scripts/common.sh@338 -- # local 'op=<' 00:06:40.145 04:54:18 blockdev_nvme -- scripts/common.sh@340 -- # ver1_l=2 00:06:40.145 04:54:18 blockdev_nvme -- scripts/common.sh@341 -- # ver2_l=1 00:06:40.145 04:54:18 blockdev_nvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:40.145 04:54:18 blockdev_nvme -- scripts/common.sh@344 -- # case "$op" in 00:06:40.145 04:54:18 blockdev_nvme -- scripts/common.sh@345 -- # : 1 00:06:40.145 04:54:18 blockdev_nvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:40.145 04:54:18 blockdev_nvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:40.145 04:54:18 blockdev_nvme -- scripts/common.sh@365 -- # decimal 1 00:06:40.145 04:54:18 blockdev_nvme -- scripts/common.sh@353 -- # local d=1 00:06:40.145 04:54:18 blockdev_nvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:40.145 04:54:18 blockdev_nvme -- scripts/common.sh@355 -- # echo 1 00:06:40.145 04:54:18 blockdev_nvme -- scripts/common.sh@365 -- # ver1[v]=1 00:06:40.145 04:54:18 blockdev_nvme -- scripts/common.sh@366 -- # decimal 2 00:06:40.145 04:54:18 blockdev_nvme -- scripts/common.sh@353 -- # local d=2 00:06:40.145 04:54:18 blockdev_nvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:40.145 04:54:18 blockdev_nvme -- scripts/common.sh@355 -- # echo 2 00:06:40.145 04:54:18 blockdev_nvme -- scripts/common.sh@366 -- # ver2[v]=2 00:06:40.145 04:54:18 blockdev_nvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:40.145 04:54:18 blockdev_nvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:40.145 04:54:18 blockdev_nvme -- scripts/common.sh@368 -- # return 0 00:06:40.145 04:54:18 blockdev_nvme -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:40.145 04:54:18 blockdev_nvme -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:40.145 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:40.145 --rc genhtml_branch_coverage=1 00:06:40.145 --rc genhtml_function_coverage=1 00:06:40.145 --rc genhtml_legend=1 00:06:40.145 --rc geninfo_all_blocks=1 00:06:40.145 --rc geninfo_unexecuted_blocks=1 00:06:40.145 00:06:40.145 ' 00:06:40.145 04:54:18 blockdev_nvme -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:40.145 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:40.145 --rc genhtml_branch_coverage=1 00:06:40.145 --rc genhtml_function_coverage=1 00:06:40.145 --rc genhtml_legend=1 00:06:40.145 --rc geninfo_all_blocks=1 00:06:40.145 --rc geninfo_unexecuted_blocks=1 00:06:40.145 00:06:40.145 ' 00:06:40.145 04:54:18 blockdev_nvme -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:40.145 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:40.145 --rc genhtml_branch_coverage=1 00:06:40.145 --rc genhtml_function_coverage=1 00:06:40.145 --rc genhtml_legend=1 00:06:40.145 --rc geninfo_all_blocks=1 00:06:40.145 --rc geninfo_unexecuted_blocks=1 00:06:40.145 00:06:40.145 ' 00:06:40.145 04:54:18 blockdev_nvme -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:40.145 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:40.145 --rc genhtml_branch_coverage=1 00:06:40.145 --rc genhtml_function_coverage=1 00:06:40.145 --rc genhtml_legend=1 00:06:40.145 --rc geninfo_all_blocks=1 00:06:40.145 --rc geninfo_unexecuted_blocks=1 00:06:40.145 00:06:40.145 ' 00:06:40.145 04:54:18 blockdev_nvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:06:40.145 04:54:18 blockdev_nvme -- bdev/nbd_common.sh@6 -- # set -e 00:06:40.145 04:54:18 blockdev_nvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:06:40.145 04:54:18 blockdev_nvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:06:40.145 04:54:18 blockdev_nvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:06:40.145 04:54:18 blockdev_nvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:06:40.145 04:54:18 blockdev_nvme -- bdev/blockdev.sh@17 -- # export 
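Both lt 1.15 2 expansions above walk the same loop from scripts/common.sh: split each version string on '.', '-' and ':', then compare numerically field by field, treating missing fields as zero. A condensed sketch covering just the '<' case exercised here (the traced cmp_versions also handles '>', '==' and digit validation via its decimal helper):

    lt() {   # lt 1.15 2 -> true
        local -a ver1 ver2
        IFS=.-: read -ra ver1 <<< "$1"
        IFS=.-: read -ra ver2 <<< "$2"
        local v len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < len; v++ )); do
            # absent fields compare as 0, so "2" vs "1.15" is 2.0 vs 1.15
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
        done
        return 1   # equal is not less-than
    }
    lt 1.15 2 && echo 'old lcov: fall back to --rc lcov_* option names'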
RPC_PIPE_TIMEOUT=30 00:06:40.145 04:54:18 blockdev_nvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:06:40.145 04:54:18 blockdev_nvme -- bdev/blockdev.sh@20 -- # : 00:06:40.145 04:54:18 blockdev_nvme -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:06:40.145 04:54:18 blockdev_nvme -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:06:40.145 04:54:18 blockdev_nvme -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:06:40.145 04:54:18 blockdev_nvme -- bdev/blockdev.sh@673 -- # uname -s 00:06:40.145 04:54:18 blockdev_nvme -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:06:40.145 04:54:18 blockdev_nvme -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:06:40.145 04:54:18 blockdev_nvme -- bdev/blockdev.sh@681 -- # test_type=nvme 00:06:40.145 04:54:18 blockdev_nvme -- bdev/blockdev.sh@682 -- # crypto_device= 00:06:40.145 04:54:18 blockdev_nvme -- bdev/blockdev.sh@683 -- # dek= 00:06:40.145 04:54:18 blockdev_nvme -- bdev/blockdev.sh@684 -- # env_ctx= 00:06:40.145 04:54:18 blockdev_nvme -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:06:40.145 04:54:18 blockdev_nvme -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:06:40.145 04:54:18 blockdev_nvme -- bdev/blockdev.sh@689 -- # [[ nvme == bdev ]] 00:06:40.145 04:54:18 blockdev_nvme -- bdev/blockdev.sh@689 -- # [[ nvme == crypto_* ]] 00:06:40.145 04:54:18 blockdev_nvme -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:06:40.145 04:54:18 blockdev_nvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=71876 00:06:40.145 04:54:18 blockdev_nvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:06:40.145 04:54:18 blockdev_nvme -- bdev/blockdev.sh@49 -- # waitforlisten 71876 00:06:40.145 04:54:18 blockdev_nvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:06:40.145 04:54:18 blockdev_nvme -- common/autotest_common.sh@831 -- # '[' -z 71876 ']' 00:06:40.145 04:54:18 blockdev_nvme -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:40.145 04:54:18 blockdev_nvme -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:40.145 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:40.145 04:54:18 blockdev_nvme -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:40.145 04:54:18 blockdev_nvme -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:40.145 04:54:18 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:40.145 [2024-12-06 04:54:18.348409] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:06:40.145 [2024-12-06 04:54:18.348703] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71876 ] 00:06:40.404 [2024-12-06 04:54:18.483946] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:40.404 [2024-12-06 04:54:18.552526] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:41.342 04:54:19 blockdev_nvme -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:41.342 04:54:19 blockdev_nvme -- common/autotest_common.sh@864 -- # return 0 00:06:41.342 04:54:19 blockdev_nvme -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:06:41.342 04:54:19 blockdev_nvme -- bdev/blockdev.sh@698 -- # setup_nvme_conf 00:06:41.342 04:54:19 blockdev_nvme -- bdev/blockdev.sh@81 -- # local json 00:06:41.342 04:54:19 blockdev_nvme -- bdev/blockdev.sh@82 -- # mapfile -t json 00:06:41.342 04:54:19 blockdev_nvme -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:06:41.342 04:54:19 blockdev_nvme -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\''' 00:06:41.342 04:54:19 blockdev_nvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:41.342 04:54:19 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:41.342 04:54:19 blockdev_nvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:41.342 04:54:19 blockdev_nvme -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:06:41.342 04:54:19 blockdev_nvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:41.342 04:54:19 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:41.602 04:54:19 blockdev_nvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:41.602 04:54:19 blockdev_nvme -- bdev/blockdev.sh@739 -- # cat 00:06:41.602 04:54:19 blockdev_nvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:06:41.602 04:54:19 blockdev_nvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:41.602 04:54:19 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:41.602 04:54:19 blockdev_nvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:41.602 04:54:19 blockdev_nvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:06:41.602 04:54:19 blockdev_nvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:41.602 04:54:19 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:41.602 04:54:19 blockdev_nvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:41.602 04:54:19 blockdev_nvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:06:41.602 04:54:19 blockdev_nvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:41.602 04:54:19 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:41.602 04:54:19 blockdev_nvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:41.602 04:54:19 blockdev_nvme -- 
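The single-quoted blob handed to load_subsystem_config at step @83 above is the entire bdev subsystem config for this run, generated by scripts/gen_nvme.sh. The same payload re-wrapped for readability and fed through the trace's own rpc_cmd wrapper (content copied from the trace; only the whitespace is new):

    rpc_cmd load_subsystem_config -j '{
      "subsystem": "bdev",
      "config": [
        { "method": "bdev_nvme_attach_controller",
          "params": { "trtype": "PCIe", "name": "Nvme0", "traddr": "0000:00:10.0" } },
        { "method": "bdev_nvme_attach_controller",
          "params": { "trtype": "PCIe", "name": "Nvme1", "traddr": "0000:00:11.0" } },
        { "method": "bdev_nvme_attach_controller",
          "params": { "trtype": "PCIe", "name": "Nvme2", "traddr": "0000:00:12.0" } },
        { "method": "bdev_nvme_attach_controller",
          "params": { "trtype": "PCIe", "name": "Nvme3", "traddr": "0000:00:13.0" } }
      ]
    }'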
bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:06:41.602 04:54:19 blockdev_nvme -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:06:41.602 04:54:19 blockdev_nvme -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:06:41.602 04:54:19 blockdev_nvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:41.602 04:54:19 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:41.602 04:54:19 blockdev_nvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:41.602 04:54:19 blockdev_nvme -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:06:41.603 04:54:19 blockdev_nvme -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "c6932d09-ea28-487a-8cdc-639f40b32760"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "c6932d09-ea28-487a-8cdc-639f40b32760",' ' "numa_id": -1,' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": true,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme1n1",' ' "aliases": [' ' "bec3c0ea-199b-41ad-9a1a-d72d1bd664b9"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "bec3c0ea-199b-41ad-9a1a-d72d1bd664b9",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:11.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:11.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12341",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12341",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' 
"ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "bf1bcb6b-591a-4ae2-9853-6ea3f569acac"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "bf1bcb6b-591a-4ae2-9853-6ea3f569acac",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "02674d30-9d03-4c4c-9140-280393e811db"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "02674d30-9d03-4c4c-9140-280393e811db",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "3bb7953d-fce8-46a0-9b78-73c99293bf22"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "3bb7953d-fce8-46a0-9b78-73c99293bf22",' ' "numa_id": -1,' ' "assigned_rate_limits": 
{' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "b9655707-bc1a-46e7-a2d3-43949a0f53ea"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "b9655707-bc1a-46e7-a2d3-43949a0f53ea",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:13.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:13.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:06:41.603 04:54:19 blockdev_nvme -- bdev/blockdev.sh@748 -- # jq -r .name 00:06:41.603 04:54:19 blockdev_nvme -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:06:41.603 04:54:19 blockdev_nvme -- bdev/blockdev.sh@751 -- # hello_world_bdev=Nvme0n1 00:06:41.603 04:54:19 blockdev_nvme -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:06:41.603 04:54:19 blockdev_nvme -- bdev/blockdev.sh@753 -- # killprocess 71876 00:06:41.603 04:54:19 blockdev_nvme -- common/autotest_common.sh@950 -- # '[' -z 71876 ']' 00:06:41.603 04:54:19 blockdev_nvme -- common/autotest_common.sh@954 -- # kill -0 71876 00:06:41.603 04:54:19 blockdev_nvme -- common/autotest_common.sh@955 -- # uname 00:06:41.603 04:54:19 blockdev_nvme -- 
common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:41.603 04:54:19 blockdev_nvme -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71876 00:06:41.603 killing process with pid 71876 00:06:41.603 04:54:19 blockdev_nvme -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:41.603 04:54:19 blockdev_nvme -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:41.603 04:54:19 blockdev_nvme -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71876' 00:06:41.603 04:54:19 blockdev_nvme -- common/autotest_common.sh@969 -- # kill 71876 00:06:41.603 04:54:19 blockdev_nvme -- common/autotest_common.sh@974 -- # wait 71876 00:06:41.864 04:54:20 blockdev_nvme -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:06:41.864 04:54:20 blockdev_nvme -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:06:41.864 04:54:20 blockdev_nvme -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:06:41.864 04:54:20 blockdev_nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:41.864 04:54:20 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:41.864 ************************************ 00:06:41.864 START TEST bdev_hello_world 00:06:41.864 ************************************ 00:06:41.864 04:54:20 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:06:42.122 [2024-12-06 04:54:20.159078] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:06:42.122 [2024-12-06 04:54:20.159208] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71938 ] 00:06:42.122 [2024-12-06 04:54:20.295093] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:42.122 [2024-12-06 04:54:20.329480] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:42.695 [2024-12-06 04:54:20.702833] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:06:42.695 [2024-12-06 04:54:20.702902] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:06:42.695 [2024-12-06 04:54:20.702932] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:06:42.695 [2024-12-06 04:54:20.705210] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:06:42.695 [2024-12-06 04:54:20.706063] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:06:42.695 [2024-12-06 04:54:20.706107] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:06:42.695 [2024-12-06 04:54:20.706686] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
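The hello_bdev example traced above needs only the generated bdev config and a target bdev name; stripped of the run_test harness, the invocation reduces to (paths exactly as in the trace; the trailing '' is the harness's empty extra-args slot, dropped here):

    /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
        -b Nvme0n1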
00:06:42.695 00:06:42.695 [2024-12-06 04:54:20.706720] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:06:42.695 00:06:42.695 real 0m0.823s 00:06:42.695 user 0m0.556s 00:06:42.695 sys 0m0.161s 00:06:42.695 04:54:20 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:42.695 ************************************ 00:06:42.695 END TEST bdev_hello_world 00:06:42.695 ************************************ 00:06:42.695 04:54:20 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:06:42.956 04:54:20 blockdev_nvme -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:06:42.956 04:54:20 blockdev_nvme -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:06:42.956 04:54:20 blockdev_nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:42.956 04:54:20 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:42.956 ************************************ 00:06:42.956 START TEST bdev_bounds 00:06:42.956 ************************************ 00:06:42.956 04:54:20 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1125 -- # bdev_bounds '' 00:06:42.956 04:54:20 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=71969 00:06:42.956 04:54:20 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:06:42.956 Process bdevio pid: 71969 00:06:42.956 04:54:20 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 71969' 00:06:42.956 04:54:20 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:06:42.956 04:54:20 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 71969 00:06:42.956 04:54:20 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@831 -- # '[' -z 71969 ']' 00:06:42.956 04:54:20 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:42.956 04:54:20 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:42.956 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:42.956 04:54:20 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:42.956 04:54:20 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:42.956 04:54:20 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:06:42.956 [2024-12-06 04:54:21.062512] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:06:42.956 [2024-12-06 04:54:21.062680] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71969 ] 00:06:43.216 [2024-12-06 04:54:21.197795] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:43.216 [2024-12-06 04:54:21.254593] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:06:43.216 [2024-12-06 04:54:21.254932] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:06:43.216 [2024-12-06 04:54:21.254977] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:43.788 04:54:21 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:43.788 04:54:21 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@864 -- # return 0 00:06:43.788 04:54:21 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:06:44.049 I/O targets: 00:06:44.049 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:06:44.049 Nvme1n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:06:44.049 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:06:44.049 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:06:44.049 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:06:44.049 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:06:44.049 00:06:44.049 00:06:44.049 CUnit - A unit testing framework for C - Version 2.1-3 00:06:44.049 http://cunit.sourceforge.net/ 00:06:44.049 00:06:44.049 00:06:44.049 Suite: bdevio tests on: Nvme3n1 00:06:44.049 Test: blockdev write read block ...passed 00:06:44.049 Test: blockdev write zeroes read block ...passed 00:06:44.049 Test: blockdev write zeroes read no split ...passed 00:06:44.049 Test: blockdev write zeroes read split ...passed 00:06:44.049 Test: blockdev write zeroes read split partial ...passed 00:06:44.049 Test: blockdev reset ...[2024-12-06 04:54:22.074854] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0] resetting controller 00:06:44.049 passed 00:06:44.049 Test: blockdev write read 8 blocks ...[2024-12-06 04:54:22.078127] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:06:44.049 passed 00:06:44.049 Test: blockdev write read size > 128k ...passed 00:06:44.049 Test: blockdev write read invalid size ...passed 00:06:44.049 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:44.049 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:44.049 Test: blockdev write read max offset ...passed 00:06:44.049 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:44.049 Test: blockdev writev readv 8 blocks ...passed 00:06:44.049 Test: blockdev writev readv 30 x 1block ...passed 00:06:44.049 Test: blockdev writev readv block ...passed 00:06:44.049 Test: blockdev writev readv size > 128k ...passed 00:06:44.049 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:44.049 Test: blockdev comparev and writev ...[2024-12-06 04:54:22.092484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2c000a000 len:0x1000 00:06:44.049 [2024-12-06 04:54:22.092543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:44.049 passed 00:06:44.049 Test: blockdev nvme passthru rw ...passed 00:06:44.049 Test: blockdev nvme passthru vendor specific ...passed 00:06:44.049 Test: blockdev nvme admin passthru ...[2024-12-06 04:54:22.094226] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:44.049 [2024-12-06 04:54:22.094261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:44.049 passed 00:06:44.049 Test: blockdev copy ...passed 00:06:44.049 Suite: bdevio tests on: Nvme2n3 00:06:44.049 Test: blockdev write read block ...passed 00:06:44.049 Test: blockdev write zeroes read block ...passed 00:06:44.049 Test: blockdev write zeroes read no split ...passed 00:06:44.049 Test: blockdev write zeroes read split ...passed 00:06:44.049 Test: blockdev write zeroes read split partial ...passed 00:06:44.049 Test: blockdev reset ...[2024-12-06 04:54:22.112873] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller 00:06:44.049 passed 00:06:44.049 Test: blockdev write read 8 blocks ...[2024-12-06 04:54:22.117314] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:06:44.049 passed 00:06:44.049 Test: blockdev write read size > 128k ...passed 00:06:44.049 Test: blockdev write read invalid size ...passed 00:06:44.049 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:44.049 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:44.049 Test: blockdev write read max offset ...passed 00:06:44.049 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:44.049 Test: blockdev writev readv 8 blocks ...passed 00:06:44.049 Test: blockdev writev readv 30 x 1block ...passed 00:06:44.049 Test: blockdev writev readv block ...passed 00:06:44.049 Test: blockdev writev readv size > 128k ...passed 00:06:44.049 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:44.049 Test: blockdev comparev and writev ...[2024-12-06 04:54:22.129694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2c0003000 len:0x1000 00:06:44.049 [2024-12-06 04:54:22.129741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:44.049 passed 00:06:44.049 Test: blockdev nvme passthru rw ...passed 00:06:44.049 Test: blockdev nvme passthru vendor specific ...[2024-12-06 04:54:22.131051] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:44.049 [2024-12-06 04:54:22.131084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:44.049 passed 00:06:44.049 Test: blockdev nvme admin passthru ...passed 00:06:44.049 Test: blockdev copy ...passed 00:06:44.049 Suite: bdevio tests on: Nvme2n2 00:06:44.049 Test: blockdev write read block ...passed 00:06:44.049 Test: blockdev write zeroes read block ...passed 00:06:44.049 Test: blockdev write zeroes read no split ...passed 00:06:44.049 Test: blockdev write zeroes read split ...passed 00:06:44.049 Test: blockdev write zeroes read split partial ...passed 00:06:44.049 Test: blockdev reset ...[2024-12-06 04:54:22.149439] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller 00:06:44.049 passed 00:06:44.049 Test: blockdev write read 8 blocks ...[2024-12-06 04:54:22.152192] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:06:44.049 passed 00:06:44.049 Test: blockdev write read size > 128k ...passed 00:06:44.049 Test: blockdev write read invalid size ...passed 00:06:44.049 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:44.049 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:44.049 Test: blockdev write read max offset ...passed 00:06:44.049 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:44.049 Test: blockdev writev readv 8 blocks ...passed 00:06:44.049 Test: blockdev writev readv 30 x 1block ...passed 00:06:44.049 Test: blockdev writev readv block ...passed 00:06:44.049 Test: blockdev writev readv size > 128k ...passed 00:06:44.049 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:44.049 Test: blockdev comparev and writev ...[2024-12-06 04:54:22.163068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2c0003000 len:0x1000 00:06:44.049 [2024-12-06 04:54:22.163109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:44.049 passed 00:06:44.049 Test: blockdev nvme passthru rw ...passed 00:06:44.049 Test: blockdev nvme passthru vendor specific ...passed 00:06:44.050 Test: blockdev nvme admin passthru ...[2024-12-06 04:54:22.164564] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:44.050 [2024-12-06 04:54:22.164595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:44.050 passed 00:06:44.050 Test: blockdev copy ...passed 00:06:44.050 Suite: bdevio tests on: Nvme2n1 00:06:44.050 Test: blockdev write read block ...passed 00:06:44.050 Test: blockdev write zeroes read block ...passed 00:06:44.050 Test: blockdev write zeroes read no split ...passed 00:06:44.050 Test: blockdev write zeroes read split ...passed 00:06:44.050 Test: blockdev write zeroes read split partial ...passed 00:06:44.050 Test: blockdev reset ...[2024-12-06 04:54:22.184303] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller 00:06:44.050 [2024-12-06 04:54:22.186104] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:06:44.050 passed 00:06:44.050 Test: blockdev write read 8 blocks ...passed 00:06:44.050 Test: blockdev write read size > 128k ...passed 00:06:44.050 Test: blockdev write read invalid size ...passed 00:06:44.050 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:44.050 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:44.050 Test: blockdev write read max offset ...passed 00:06:44.050 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:44.050 Test: blockdev writev readv 8 blocks ...passed 00:06:44.050 Test: blockdev writev readv 30 x 1block ...passed 00:06:44.050 Test: blockdev writev readv block ...passed 00:06:44.050 Test: blockdev writev readv size > 128k ...passed 00:06:44.050 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:44.050 Test: blockdev comparev and writev ...[2024-12-06 04:54:22.200132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2c0003000 len:0x1000 00:06:44.050 [2024-12-06 04:54:22.200320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:44.050 passed 00:06:44.050 Test: blockdev nvme passthru rw ...passed 00:06:44.050 Test: blockdev nvme passthru vendor specific ...[2024-12-06 04:54:22.201970] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:44.050 [2024-12-06 04:54:22.202093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0passed sqhd:001c p:1 m:0 dnr:1 00:06:44.050 00:06:44.050 Test: blockdev nvme admin passthru ...passed 00:06:44.050 Test: blockdev copy ...passed 00:06:44.050 Suite: bdevio tests on: Nvme1n1 00:06:44.050 Test: blockdev write read block ...passed 00:06:44.050 Test: blockdev write zeroes read block ...passed 00:06:44.050 Test: blockdev write zeroes read no split ...passed 00:06:44.050 Test: blockdev write zeroes read split ...passed 00:06:44.050 Test: blockdev write zeroes read split partial ...passed 00:06:44.050 Test: blockdev reset ...[2024-12-06 04:54:22.217603] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0] resetting controller 00:06:44.050 passed 00:06:44.050 Test: blockdev write read 8 blocks ...[2024-12-06 04:54:22.219228] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:06:44.050 passed 00:06:44.050 Test: blockdev write read size > 128k ...passed 00:06:44.050 Test: blockdev write read invalid size ...passed 00:06:44.050 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:44.050 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:44.050 Test: blockdev write read max offset ...passed 00:06:44.050 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:44.050 Test: blockdev writev readv 8 blocks ...passed 00:06:44.050 Test: blockdev writev readv 30 x 1block ...passed 00:06:44.050 Test: blockdev writev readv block ...passed 00:06:44.050 Test: blockdev writev readv size > 128k ...passed 00:06:44.050 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:44.050 Test: blockdev comparev and writev ...[2024-12-06 04:54:22.225554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2c0436000 len:0x1000 00:06:44.050 [2024-12-06 04:54:22.225593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:44.050 passed 00:06:44.050 Test: blockdev nvme passthru rw ...passed 00:06:44.050 Test: blockdev nvme passthru vendor specific ...passed 00:06:44.050 Test: blockdev nvme admin passthru ...[2024-12-06 04:54:22.226231] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:44.050 [2024-12-06 04:54:22.226263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:44.050 passed 00:06:44.050 Test: blockdev copy ...passed 00:06:44.050 Suite: bdevio tests on: Nvme0n1 00:06:44.050 Test: blockdev write read block ...passed 00:06:44.050 Test: blockdev write zeroes read block ...passed 00:06:44.050 Test: blockdev write zeroes read no split ...passed 00:06:44.050 Test: blockdev write zeroes read split ...passed 00:06:44.050 Test: blockdev write zeroes read split partial ...passed 00:06:44.050 Test: blockdev reset ...[2024-12-06 04:54:22.243331] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0] resetting controller 00:06:44.050 [2024-12-06 04:54:22.244997] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:06:44.050 passed 00:06:44.050 Test: blockdev write read 8 blocks ...passed 00:06:44.050 Test: blockdev write read size > 128k ...passed 00:06:44.050 Test: blockdev write read invalid size ...passed 00:06:44.050 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:44.050 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:44.050 Test: blockdev write read max offset ...passed 00:06:44.050 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:44.050 Test: blockdev writev readv 8 blocks ...passed 00:06:44.050 Test: blockdev writev readv 30 x 1block ...passed 00:06:44.050 Test: blockdev writev readv block ...passed 00:06:44.050 Test: blockdev writev readv size > 128k ...passed 00:06:44.050 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:44.050 Test: blockdev comparev and writev ...passed 00:06:44.050 Test: blockdev nvme passthru rw ...[2024-12-06 04:54:22.255324] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has 00:06:44.050 separate metadata which is not supported yet. 
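Every suite above follows the same client/server split: blockdev.sh starts one bdevio process over the shared bdev.json, then tests.py drives the whole matrix through its RPC socket. Condensed from steps @288 and @293 as traced (the backgrounding and explicit kill here stand in for the harness's waitforlisten and trap cleanup):

    /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' &
    bdevio_pid=$!
    /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests
    kill "$bdevio_pid"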
00:06:44.050 passed 00:06:44.050 Test: blockdev nvme passthru vendor specific ...passed 00:06:44.050 Test: blockdev nvme admin passthru ...[2024-12-06 04:54:22.256723] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 PRP2 0x0 00:06:44.050 [2024-12-06 04:54:22.256770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0 sqhd:0017 p:1 m:0 dnr:1 00:06:44.050 passed 00:06:44.050 Test: blockdev copy ...passed 00:06:44.050 00:06:44.050 Run Summary: Type Total Ran Passed Failed Inactive 00:06:44.050 suites 6 6 n/a 0 0 00:06:44.050 tests 138 138 138 0 0 00:06:44.050 asserts 893 893 893 0 n/a 00:06:44.050 00:06:44.050 Elapsed time = 0.476 seconds 00:06:44.050 0 00:06:44.311 04:54:22 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 71969 00:06:44.311 04:54:22 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@950 -- # '[' -z 71969 ']' 00:06:44.311 04:54:22 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@954 -- # kill -0 71969 00:06:44.311 04:54:22 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@955 -- # uname 00:06:44.311 04:54:22 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:44.311 04:54:22 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71969 00:06:44.311 04:54:22 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:44.311 killing process with pid 71969 00:06:44.311 04:54:22 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:44.311 04:54:22 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71969' 00:06:44.311 04:54:22 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@969 -- # kill 71969 00:06:44.311 04:54:22 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@974 -- # wait 71969 00:06:44.311 ************************************ 00:06:44.311 END TEST bdev_bounds 00:06:44.311 ************************************ 00:06:44.311 04:54:22 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:06:44.311 00:06:44.311 real 0m1.461s 00:06:44.311 user 0m3.630s 00:06:44.311 sys 0m0.321s 00:06:44.311 04:54:22 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:44.311 04:54:22 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:06:44.311 04:54:22 blockdev_nvme -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:06:44.311 04:54:22 blockdev_nvme -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:06:44.311 04:54:22 blockdev_nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:44.311 04:54:22 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:44.311 ************************************ 00:06:44.311 START TEST bdev_nbd 00:06:44.311 ************************************ 00:06:44.311 04:54:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1125 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:06:44.311 04:54:22 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:06:44.311 04:54:22 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:06:44.311 04:54:22 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@301 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:06:44.311 04:54:22 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:06:44.311 04:54:22 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:44.311 04:54:22 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:06:44.312 04:54:22 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=6 00:06:44.312 04:54:22 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:06:44.312 04:54:22 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:06:44.312 04:54:22 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:06:44.312 04:54:22 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=6 00:06:44.312 04:54:22 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:06:44.312 04:54:22 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:06:44.312 04:54:22 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:44.312 04:54:22 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:06:44.312 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:44.312 04:54:22 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=72023 00:06:44.312 04:54:22 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:06:44.312 04:54:22 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 72023 /var/tmp/spdk-nbd.sock 00:06:44.312 04:54:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@831 -- # '[' -z 72023 ']' 00:06:44.312 04:54:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:44.312 04:54:22 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:06:44.312 04:54:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:44.312 04:54:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:44.312 04:54:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:44.312 04:54:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:06:44.573 [2024-12-06 04:54:22.573283] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
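Each nbd_start_disk below is followed by the same readiness-and-read probe. A condensed sketch of that waitfornbd/dd pattern (the loop bound, grep test, dd flags and the size check all come from the traces below; the retry delay is assumed, since the helper's sleep is not echoed):

    waitfornbd() {
        local nbd_name=$1 i
        for (( i = 1; i <= 20; i++ )); do
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1   # assumed pacing between /proc/partitions polls
        done
        # pull one 4 KiB block through the device, bypassing the page cache
        dd if=/dev/$nbd_name of=/tmp/nbdtest bs=4096 count=1 iflag=direct
        local size=$(stat -c %s /tmp/nbdtest)
        rm -f /tmp/nbdtest
        [ "$size" != 0 ]   # the trace only asserts a non-empty read
    }
    waitfornbd nbd0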
00:06:44.573 [2024-12-06 04:54:22.573568] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:44.573 [2024-12-06 04:54:22.705909] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:44.573 [2024-12-06 04:54:22.739731] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:45.515 04:54:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:45.515 04:54:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@864 -- # return 0 00:06:45.515 04:54:23 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:06:45.515 04:54:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:45.515 04:54:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:45.515 04:54:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:06:45.515 04:54:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:06:45.515 04:54:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:45.515 04:54:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:45.515 04:54:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:06:45.515 04:54:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:06:45.515 04:54:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:06:45.515 04:54:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:06:45.515 04:54:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:45.515 04:54:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:06:45.515 04:54:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:06:45.515 04:54:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:06:45.515 04:54:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:06:45.515 04:54:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:06:45.515 04:54:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:06:45.515 04:54:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:45.515 04:54:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:45.515 04:54:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:06:45.515 04:54:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:06:45.515 04:54:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:45.515 04:54:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:45.515 04:54:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:45.515 1+0 records in 
00:06:45.515 1+0 records out 00:06:45.515 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000856567 s, 4.8 MB/s 00:06:45.515 04:54:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:45.515 04:54:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:06:45.515 04:54:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:45.515 04:54:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:45.515 04:54:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:06:45.515 04:54:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:45.515 04:54:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:45.515 04:54:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 00:06:45.821 04:54:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:06:45.821 04:54:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:06:45.821 04:54:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:06:45.821 04:54:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:06:45.821 04:54:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:06:45.821 04:54:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:45.821 04:54:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:45.821 04:54:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:06:45.821 04:54:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:06:45.821 04:54:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:45.821 04:54:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:45.821 04:54:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:45.821 1+0 records in 00:06:45.821 1+0 records out 00:06:45.821 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00131818 s, 3.1 MB/s 00:06:45.821 04:54:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:45.821 04:54:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:06:45.821 04:54:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:45.821 04:54:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:45.821 04:54:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:06:45.821 04:54:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:45.821 04:54:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:45.821 04:54:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:06:46.083 04:54:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:06:46.083 04:54:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:06:46.083 04:54:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # 
waitfornbd nbd2 00:06:46.083 04:54:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd2 00:06:46.083 04:54:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:06:46.083 04:54:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:46.083 04:54:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:46.083 04:54:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd2 /proc/partitions 00:06:46.083 04:54:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:06:46.083 04:54:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:46.083 04:54:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:46.083 04:54:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:46.083 1+0 records in 00:06:46.083 1+0 records out 00:06:46.083 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000682789 s, 6.0 MB/s 00:06:46.083 04:54:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:46.083 04:54:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:06:46.083 04:54:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:46.083 04:54:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:46.083 04:54:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:06:46.083 04:54:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:46.083 04:54:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:46.083 04:54:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:06:46.344 04:54:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:06:46.344 04:54:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:06:46.344 04:54:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:06:46.344 04:54:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd3 00:06:46.344 04:54:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:06:46.344 04:54:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:46.344 04:54:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:46.344 04:54:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd3 /proc/partitions 00:06:46.344 04:54:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:06:46.344 04:54:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:46.344 04:54:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:46.344 04:54:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:46.344 1+0 records in 00:06:46.344 1+0 records out 00:06:46.344 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000526951 s, 7.8 MB/s 00:06:46.344 04:54:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:46.344 04:54:24 
blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:06:46.344 04:54:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:46.344 04:54:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:46.344 04:54:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:06:46.344 04:54:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:46.344 04:54:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:46.344 04:54:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 00:06:46.605 04:54:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:06:46.605 04:54:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:06:46.605 04:54:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:06:46.605 04:54:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd4 00:06:46.605 04:54:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:06:46.605 04:54:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:46.605 04:54:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:46.605 04:54:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd4 /proc/partitions 00:06:46.605 04:54:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:06:46.605 04:54:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:46.605 04:54:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:46.605 04:54:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:46.605 1+0 records in 00:06:46.605 1+0 records out 00:06:46.605 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000928074 s, 4.4 MB/s 00:06:46.605 04:54:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:46.605 04:54:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:06:46.605 04:54:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:46.605 04:54:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:46.605 04:54:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:06:46.605 04:54:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:46.605 04:54:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:46.605 04:54:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:06:46.866 04:54:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:06:46.866 04:54:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:06:46.866 04:54:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:06:46.866 04:54:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd5 00:06:46.866 04:54:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:06:46.866 04:54:24 blockdev_nvme.bdev_nbd -- 
common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:46.866 04:54:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:46.866 04:54:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd5 /proc/partitions 00:06:46.866 04:54:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:06:46.866 04:54:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:46.866 04:54:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:46.866 04:54:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:46.866 1+0 records in 00:06:46.866 1+0 records out 00:06:46.866 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00106993 s, 3.8 MB/s 00:06:46.866 04:54:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:46.866 04:54:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:06:46.866 04:54:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:46.866 04:54:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:46.866 04:54:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:06:46.866 04:54:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:46.866 04:54:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:46.866 04:54:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:46.866 04:54:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:06:46.866 { 00:06:46.866 "nbd_device": "/dev/nbd0", 00:06:46.866 "bdev_name": "Nvme0n1" 00:06:46.866 }, 00:06:46.866 { 00:06:46.866 "nbd_device": "/dev/nbd1", 00:06:46.866 "bdev_name": "Nvme1n1" 00:06:46.866 }, 00:06:46.866 { 00:06:46.866 "nbd_device": "/dev/nbd2", 00:06:46.866 "bdev_name": "Nvme2n1" 00:06:46.866 }, 00:06:46.866 { 00:06:46.866 "nbd_device": "/dev/nbd3", 00:06:46.866 "bdev_name": "Nvme2n2" 00:06:46.866 }, 00:06:46.866 { 00:06:46.866 "nbd_device": "/dev/nbd4", 00:06:46.866 "bdev_name": "Nvme2n3" 00:06:46.866 }, 00:06:46.866 { 00:06:46.866 "nbd_device": "/dev/nbd5", 00:06:46.866 "bdev_name": "Nvme3n1" 00:06:46.866 } 00:06:46.866 ]' 00:06:46.866 04:54:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:06:46.866 04:54:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:06:46.866 { 00:06:46.866 "nbd_device": "/dev/nbd0", 00:06:46.866 "bdev_name": "Nvme0n1" 00:06:46.866 }, 00:06:46.866 { 00:06:46.866 "nbd_device": "/dev/nbd1", 00:06:46.866 "bdev_name": "Nvme1n1" 00:06:46.866 }, 00:06:46.866 { 00:06:46.866 "nbd_device": "/dev/nbd2", 00:06:46.866 "bdev_name": "Nvme2n1" 00:06:46.866 }, 00:06:46.866 { 00:06:46.866 "nbd_device": "/dev/nbd3", 00:06:46.866 "bdev_name": "Nvme2n2" 00:06:46.866 }, 00:06:46.866 { 00:06:46.866 "nbd_device": "/dev/nbd4", 00:06:46.866 "bdev_name": "Nvme2n3" 00:06:46.866 }, 00:06:46.866 { 00:06:46.866 "nbd_device": "/dev/nbd5", 00:06:46.866 "bdev_name": "Nvme3n1" 00:06:46.866 } 00:06:46.866 ]' 00:06:46.866 04:54:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:06:47.126 04:54:25 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:06:47.126 04:54:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:47.126 04:54:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:06:47.126 04:54:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:47.126 04:54:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:06:47.126 04:54:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:47.126 04:54:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:47.126 04:54:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:47.126 04:54:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:47.126 04:54:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:47.126 04:54:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:47.126 04:54:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:47.126 04:54:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:47.126 04:54:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:47.126 04:54:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:47.126 04:54:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:47.126 04:54:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:47.384 04:54:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:47.384 04:54:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:47.384 04:54:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:47.384 04:54:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:47.384 04:54:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:47.384 04:54:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:47.384 04:54:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:47.384 04:54:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:47.384 04:54:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:47.384 04:54:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:06:47.642 04:54:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:06:47.642 04:54:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:06:47.642 04:54:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:06:47.642 04:54:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:47.642 04:54:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:47.642 04:54:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:06:47.642 04:54:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:47.642 04:54:25 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@45 -- # return 0 00:06:47.642 04:54:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:47.642 04:54:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:06:47.903 04:54:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:06:47.903 04:54:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:06:47.903 04:54:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:06:47.903 04:54:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:47.903 04:54:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:47.903 04:54:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:06:47.903 04:54:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:47.903 04:54:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:47.903 04:54:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:47.903 04:54:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:06:48.163 04:54:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:06:48.164 04:54:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:06:48.164 04:54:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:06:48.164 04:54:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:48.164 04:54:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:48.164 04:54:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:06:48.164 04:54:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:48.164 04:54:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:48.164 04:54:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:48.164 04:54:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:06:48.425 04:54:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:06:48.425 04:54:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:06:48.425 04:54:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:06:48.425 04:54:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:48.425 04:54:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:48.425 04:54:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:06:48.425 04:54:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:48.425 04:54:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:48.425 04:54:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:48.425 04:54:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:48.425 04:54:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:48.686 04:54:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:48.687 04:54:26 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:48.687 04:54:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:48.687 04:54:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:48.687 04:54:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:48.687 04:54:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:06:48.687 04:54:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:06:48.687 04:54:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:06:48.687 04:54:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:06:48.687 04:54:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:06:48.687 04:54:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:06:48.687 04:54:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:06:48.687 04:54:26 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:06:48.687 04:54:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:48.687 04:54:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:48.687 04:54:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:48.687 04:54:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:06:48.687 04:54:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:48.687 04:54:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:06:48.687 04:54:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:48.687 04:54:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:48.687 04:54:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:48.687 04:54:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:06:48.687 04:54:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:48.687 04:54:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:06:48.687 04:54:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:48.687 04:54:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:48.687 04:54:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:06:48.948 /dev/nbd0 00:06:48.948 04:54:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:48.948 04:54:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:48.948 04:54:26 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:06:48.948 04:54:26 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:06:48.948 04:54:26 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:48.948 
04:54:26 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:48.948 04:54:26 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:06:48.948 04:54:26 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:06:48.948 04:54:26 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:48.948 04:54:26 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:48.948 04:54:26 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:48.948 1+0 records in 00:06:48.948 1+0 records out 00:06:48.949 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000865126 s, 4.7 MB/s 00:06:48.949 04:54:26 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:48.949 04:54:26 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:06:48.949 04:54:26 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:48.949 04:54:26 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:48.949 04:54:26 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:06:48.949 04:54:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:48.949 04:54:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:48.949 04:54:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 /dev/nbd1 00:06:48.949 /dev/nbd1 00:06:49.211 04:54:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:49.211 04:54:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:49.211 04:54:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:06:49.211 04:54:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:06:49.211 04:54:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:49.211 04:54:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:49.211 04:54:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:06:49.211 04:54:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:06:49.211 04:54:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:49.211 04:54:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:49.211 04:54:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:49.211 1+0 records in 00:06:49.211 1+0 records out 00:06:49.211 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000907631 s, 4.5 MB/s 00:06:49.211 04:54:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:49.211 04:54:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:06:49.211 04:54:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:49.211 04:54:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:49.211 04:54:27 blockdev_nvme.bdev_nbd -- 
common/autotest_common.sh@889 -- # return 0 00:06:49.211 04:54:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:49.211 04:54:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:49.211 04:54:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd10 00:06:49.211 /dev/nbd10 00:06:49.211 04:54:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:06:49.211 04:54:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:06:49.211 04:54:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd10 00:06:49.211 04:54:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:06:49.211 04:54:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:49.211 04:54:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:49.211 04:54:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd10 /proc/partitions 00:06:49.211 04:54:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:06:49.211 04:54:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:49.211 04:54:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:49.211 04:54:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:49.472 1+0 records in 00:06:49.473 1+0 records out 00:06:49.473 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00063413 s, 6.5 MB/s 00:06:49.473 04:54:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:49.473 04:54:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:06:49.473 04:54:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:49.473 04:54:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:49.473 04:54:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:06:49.473 04:54:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:49.473 04:54:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:49.473 04:54:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd11 00:06:49.473 /dev/nbd11 00:06:49.473 04:54:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:06:49.473 04:54:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:06:49.473 04:54:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd11 00:06:49.473 04:54:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:06:49.473 04:54:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:49.473 04:54:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:49.473 04:54:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd11 /proc/partitions 00:06:49.473 04:54:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:06:49.473 04:54:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:49.473 04:54:27 
blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:49.473 04:54:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:49.473 1+0 records in 00:06:49.473 1+0 records out 00:06:49.473 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000760009 s, 5.4 MB/s 00:06:49.473 04:54:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:49.473 04:54:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:06:49.473 04:54:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:49.473 04:54:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:49.473 04:54:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:06:49.473 04:54:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:49.473 04:54:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:49.473 04:54:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd12 00:06:49.733 /dev/nbd12 00:06:49.733 04:54:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:06:49.733 04:54:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:06:49.733 04:54:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd12 00:06:49.733 04:54:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:06:49.733 04:54:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:49.733 04:54:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:49.733 04:54:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd12 /proc/partitions 00:06:49.733 04:54:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:06:49.733 04:54:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:49.733 04:54:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:49.733 04:54:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:49.733 1+0 records in 00:06:49.733 1+0 records out 00:06:49.733 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00115153 s, 3.6 MB/s 00:06:49.733 04:54:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:49.733 04:54:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:06:49.733 04:54:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:49.733 04:54:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:49.733 04:54:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:06:49.733 04:54:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:49.733 04:54:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:49.733 04:54:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd13 00:06:49.994 /dev/nbd13 
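[editor's note] The trace above repeats one pattern per device: `rpc.py nbd_start_disk` attaches a bdev to an explicit `/dev/nbdX` node, then `waitfornbd` polls `/proc/partitions` until the kernel lists the device and proves it readable with a single 4 KiB O_DIRECT read. Below is a minimal sketch of that pattern, assuming the repo layout and RPC socket shown in the log; `waitfornbd` here is a local re-creation of the autotest helper (not the helper itself), and `/tmp/nbdtest` stands in for the test's scratch file.

```bash
#!/usr/bin/env bash
# Sketch of the NBD readiness check traced above. Assumptions: SPDK repo
# at /home/vagrant/spdk_repo/spdk, RPC socket at /var/tmp/spdk-nbd.sock.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/spdk-nbd.sock

waitfornbd() {
    local nbd_name=$1 i size
    for ((i = 1; i <= 20; i++)); do
        # The kernel adds a live NBD device to /proc/partitions.
        grep -q -w "$nbd_name" /proc/partitions && break
        sleep 0.1   # retry interval is an assumption, not from the log
    done
    # Confirm the device actually serves reads, as the test does; if the
    # device never appeared, this dd fails and surfaces the error.
    dd if=/dev/"$nbd_name" of=/tmp/nbdtest bs=4096 count=1 iflag=direct
    size=$(stat -c %s /tmp/nbdtest)
    rm -f /tmp/nbdtest
    [ "$size" != 0 ]
}

"$rpc" -s "$sock" nbd_start_disk Nvme0n1 /dev/nbd0
waitfornbd nbd0
```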
00:06:49.994 04:54:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:06:49.994 04:54:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:06:49.995 04:54:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd13 00:06:49.995 04:54:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:06:49.995 04:54:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:49.995 04:54:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:49.995 04:54:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd13 /proc/partitions 00:06:49.995 04:54:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:06:49.995 04:54:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:49.995 04:54:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:49.995 04:54:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:49.995 1+0 records in 00:06:49.995 1+0 records out 00:06:49.995 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00137322 s, 3.0 MB/s 00:06:49.995 04:54:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:49.995 04:54:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:06:49.995 04:54:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:49.995 04:54:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:49.995 04:54:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:06:49.995 04:54:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:49.995 04:54:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:49.995 04:54:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:49.995 04:54:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:49.995 04:54:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:50.256 04:54:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:50.256 { 00:06:50.256 "nbd_device": "/dev/nbd0", 00:06:50.256 "bdev_name": "Nvme0n1" 00:06:50.256 }, 00:06:50.256 { 00:06:50.256 "nbd_device": "/dev/nbd1", 00:06:50.256 "bdev_name": "Nvme1n1" 00:06:50.256 }, 00:06:50.256 { 00:06:50.256 "nbd_device": "/dev/nbd10", 00:06:50.256 "bdev_name": "Nvme2n1" 00:06:50.256 }, 00:06:50.256 { 00:06:50.256 "nbd_device": "/dev/nbd11", 00:06:50.256 "bdev_name": "Nvme2n2" 00:06:50.256 }, 00:06:50.256 { 00:06:50.256 "nbd_device": "/dev/nbd12", 00:06:50.256 "bdev_name": "Nvme2n3" 00:06:50.256 }, 00:06:50.256 { 00:06:50.256 "nbd_device": "/dev/nbd13", 00:06:50.256 "bdev_name": "Nvme3n1" 00:06:50.256 } 00:06:50.256 ]' 00:06:50.256 04:54:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:50.256 { 00:06:50.256 "nbd_device": "/dev/nbd0", 00:06:50.256 "bdev_name": "Nvme0n1" 00:06:50.256 }, 00:06:50.256 { 00:06:50.256 "nbd_device": "/dev/nbd1", 00:06:50.256 "bdev_name": "Nvme1n1" 00:06:50.256 }, 00:06:50.256 { 00:06:50.256 "nbd_device": "/dev/nbd10", 00:06:50.256 "bdev_name": "Nvme2n1" 
00:06:50.256 }, 00:06:50.256 { 00:06:50.256 "nbd_device": "/dev/nbd11", 00:06:50.256 "bdev_name": "Nvme2n2" 00:06:50.256 }, 00:06:50.256 { 00:06:50.256 "nbd_device": "/dev/nbd12", 00:06:50.256 "bdev_name": "Nvme2n3" 00:06:50.256 }, 00:06:50.256 { 00:06:50.256 "nbd_device": "/dev/nbd13", 00:06:50.256 "bdev_name": "Nvme3n1" 00:06:50.256 } 00:06:50.256 ]' 00:06:50.256 04:54:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:50.256 04:54:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:50.256 /dev/nbd1 00:06:50.256 /dev/nbd10 00:06:50.256 /dev/nbd11 00:06:50.256 /dev/nbd12 00:06:50.256 /dev/nbd13' 00:06:50.256 04:54:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:50.256 /dev/nbd1 00:06:50.256 /dev/nbd10 00:06:50.256 /dev/nbd11 00:06:50.256 /dev/nbd12 00:06:50.256 /dev/nbd13' 00:06:50.256 04:54:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:50.256 04:54:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=6 00:06:50.256 04:54:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 6 00:06:50.256 04:54:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=6 00:06:50.256 04:54:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:06:50.256 04:54:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:06:50.256 04:54:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:06:50.256 04:54:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:50.256 04:54:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:50.256 04:54:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:06:50.256 04:54:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:50.256 04:54:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:06:50.256 256+0 records in 00:06:50.256 256+0 records out 00:06:50.256 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00796683 s, 132 MB/s 00:06:50.256 04:54:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:50.256 04:54:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:50.517 256+0 records in 00:06:50.517 256+0 records out 00:06:50.517 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.182476 s, 5.7 MB/s 00:06:50.518 04:54:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:50.518 04:54:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:50.779 256+0 records in 00:06:50.779 256+0 records out 00:06:50.779 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.161444 s, 6.5 MB/s 00:06:50.779 04:54:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:50.779 04:54:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:06:50.779 256+0 records in 00:06:50.779 256+0 records out 00:06:50.779 
1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.181168 s, 5.8 MB/s 00:06:50.779 04:54:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:50.779 04:54:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:06:51.038 256+0 records in 00:06:51.038 256+0 records out 00:06:51.038 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.215323 s, 4.9 MB/s 00:06:51.038 04:54:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:51.038 04:54:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:06:51.038 256+0 records in 00:06:51.038 256+0 records out 00:06:51.038 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.100536 s, 10.4 MB/s 00:06:51.038 04:54:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:51.038 04:54:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:06:51.300 256+0 records in 00:06:51.300 256+0 records out 00:06:51.300 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.125596 s, 8.3 MB/s 00:06:51.300 04:54:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:06:51.300 04:54:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:06:51.300 04:54:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:51.300 04:54:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:51.300 04:54:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:06:51.300 04:54:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:51.300 04:54:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:51.300 04:54:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:51.300 04:54:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:06:51.300 04:54:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:51.300 04:54:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:06:51.300 04:54:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:51.300 04:54:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:06:51.300 04:54:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:51.300 04:54:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:06:51.300 04:54:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:51.300 04:54:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:06:51.300 04:54:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:51.300 04:54:29 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:06:51.300 04:54:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:06:51.300 04:54:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:06:51.300 04:54:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:51.300 04:54:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:06:51.300 04:54:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:51.300 04:54:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:06:51.300 04:54:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:51.300 04:54:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:51.562 04:54:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:51.562 04:54:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:51.562 04:54:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:51.562 04:54:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:51.562 04:54:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:51.562 04:54:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:51.562 04:54:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:51.562 04:54:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:51.562 04:54:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:51.562 04:54:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:51.823 04:54:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:51.823 04:54:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:51.823 04:54:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:51.823 04:54:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:51.823 04:54:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:51.823 04:54:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:51.823 04:54:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:51.823 04:54:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:51.823 04:54:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:51.823 04:54:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:06:52.084 04:54:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:06:52.084 04:54:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:06:52.084 04:54:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:06:52.084 04:54:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:52.084 
04:54:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:52.084 04:54:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:06:52.084 04:54:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:52.084 04:54:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:52.084 04:54:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:52.084 04:54:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:06:52.084 04:54:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:06:52.084 04:54:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:06:52.084 04:54:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:06:52.084 04:54:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:52.084 04:54:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:52.084 04:54:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:06:52.345 04:54:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:52.345 04:54:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:52.345 04:54:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:52.345 04:54:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:06:52.345 04:54:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:06:52.345 04:54:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:06:52.345 04:54:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:06:52.345 04:54:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:52.345 04:54:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:52.345 04:54:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:06:52.345 04:54:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:52.345 04:54:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:52.345 04:54:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:52.345 04:54:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:06:52.606 04:54:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:06:52.606 04:54:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:06:52.606 04:54:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:06:52.606 04:54:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:52.606 04:54:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:52.606 04:54:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:06:52.606 04:54:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:52.606 04:54:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:52.606 04:54:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:52.606 04:54:30 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:52.606 04:54:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:52.868 04:54:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:52.868 04:54:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:52.868 04:54:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:52.868 04:54:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:52.868 04:54:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:06:52.868 04:54:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:52.868 04:54:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:06:52.868 04:54:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:06:52.868 04:54:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:06:52.868 04:54:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:06:52.868 04:54:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:52.868 04:54:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:06:52.868 04:54:30 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:06:52.868 04:54:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:52.868 04:54:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:06:52.868 04:54:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:06:53.128 malloc_lvol_verify 00:06:53.128 04:54:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:06:53.389 7dc93ead-de24-4c78-8791-d4a23649ab63 00:06:53.389 04:54:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:06:53.389 89c3f277-f6fb-4c8c-a28b-1a707339a395 00:06:53.389 04:54:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:06:53.649 /dev/nbd0 00:06:53.649 04:54:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:06:53.649 04:54:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:06:53.649 04:54:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:06:53.649 04:54:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:06:53.649 04:54:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:06:53.649 mke2fs 1.47.0 (5-Feb-2023) 00:06:53.649 Discarding device blocks: 0/4096 done 00:06:53.649 Creating filesystem with 4096 1k blocks and 1024 inodes 00:06:53.649 00:06:53.649 Allocating group tables: 0/1 done 00:06:53.649 Writing inode tables: 0/1 done 00:06:53.649 Creating journal (1024 blocks): done 00:06:53.649 Writing superblocks and filesystem accounting information: 0/1 done 00:06:53.649 00:06:53.649 04:54:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 
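[editor's note] The tail of the trace above is the lvol variant of the test: it chains four RPCs before touching the device (a 16 MiB malloc bdev with 512 B blocks, a logical-volume store on top of it, a 4 MiB lvol, and an NBD attachment for `lvs/lvol`), waits for `/sys/block/nbd0/size` to report a non-zero capacity, then formats the device with `mkfs.ext4`. A condensed sketch follows, with all RPC names, bdev names, and sizes taken verbatim from the log; the polling loop is an assumption, as the autotest helper `wait_for_nbd_set_capacity` checks the sysfs file directly.

```bash
#!/usr/bin/env bash
# Sketch of the lvol-over-NBD round trip traced above.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/spdk-nbd.sock

"$rpc" -s "$sock" bdev_malloc_create -b malloc_lvol_verify 16 512
"$rpc" -s "$sock" bdev_lvol_create_lvstore malloc_lvol_verify lvs
"$rpc" -s "$sock" bdev_lvol_create lvol 4 -l lvs
"$rpc" -s "$sock" nbd_start_disk lvs/lvol /dev/nbd0

# Capacity appears asynchronously; 0 sectors means the device is not
# ready to be formatted yet (the log shows 8192 sectors once set).
while (( $(cat /sys/block/nbd0/size) == 0 )); do sleep 0.1; done

mkfs.ext4 /dev/nbd0
"$rpc" -s "$sock" nbd_stop_disk /dev/nbd0
```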
00:06:53.649 04:54:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:53.649 04:54:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:06:53.649 04:54:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:53.649 04:54:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:06:53.649 04:54:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:53.649 04:54:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:53.952 04:54:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:53.952 04:54:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:53.952 04:54:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:53.952 04:54:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:53.952 04:54:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:53.952 04:54:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:53.952 04:54:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:53.952 04:54:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:53.952 04:54:32 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 72023 00:06:53.952 04:54:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@950 -- # '[' -z 72023 ']' 00:06:53.952 04:54:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@954 -- # kill -0 72023 00:06:53.952 04:54:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@955 -- # uname 00:06:53.952 04:54:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:53.952 04:54:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72023 00:06:53.952 04:54:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:53.952 04:54:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:53.952 killing process with pid 72023 00:06:53.952 04:54:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72023' 00:06:53.952 04:54:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@969 -- # kill 72023 00:06:53.952 04:54:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@974 -- # wait 72023 00:06:54.219 04:54:32 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:06:54.219 00:06:54.219 real 0m9.779s 00:06:54.219 user 0m13.878s 00:06:54.219 sys 0m3.294s 00:06:54.219 04:54:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:54.219 ************************************ 00:06:54.219 04:54:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:06:54.219 END TEST bdev_nbd 00:06:54.219 ************************************ 00:06:54.219 skipping fio tests on NVMe due to multi-ns failures. 00:06:54.219 04:54:32 blockdev_nvme -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:06:54.219 04:54:32 blockdev_nvme -- bdev/blockdev.sh@763 -- # '[' nvme = nvme ']' 00:06:54.219 04:54:32 blockdev_nvme -- bdev/blockdev.sh@765 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 
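[editor's note] The verify stage that starts below drives bdevperf directly: queue depth 128, 4 KiB I/O, a 5-second verify workload, and core mask 0x3, which matches the "Total cores available: 2" and two reactor-start notices in the log. A sketch of the invocation, with every flag copied from the log; only the repo path is an assumption for a local checkout.

```bash
#!/usr/bin/env bash
# Sketch of the bdevperf verify run started below; bdev.json describes
# the NVMe bdevs under test.
SPDK=/home/vagrant/spdk_repo/spdk

"$SPDK/build/examples/bdevperf" \
    --json "$SPDK/test/bdev/bdev.json" \
    -q 128 -o 4096 -w verify -t 5 -C -m 0x3
```

The later bdev_verify_big_io stage in this same log reuses the identical command line with `-o 65536`, trading small-block IOPS for large-block verify bandwidth.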
00:06:54.219 04:54:32 blockdev_nvme -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:06:54.219 04:54:32 blockdev_nvme -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:06:54.219 04:54:32 blockdev_nvme -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']' 00:06:54.219 04:54:32 blockdev_nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:54.219 04:54:32 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:54.219 ************************************ 00:06:54.219 START TEST bdev_verify 00:06:54.219 ************************************ 00:06:54.219 04:54:32 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:06:54.219 [2024-12-06 04:54:32.413288] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:06:54.219 [2024-12-06 04:54:32.413454] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72391 ] 00:06:54.480 [2024-12-06 04:54:32.550416] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:54.480 [2024-12-06 04:54:32.585834] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:54.480 [2024-12-06 04:54:32.585908] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:06:55.051 Running I/O for 5 seconds... 00:06:56.937 19200.00 IOPS, 75.00 MiB/s [2024-12-06T04:54:36.539Z] 19808.00 IOPS, 77.38 MiB/s [2024-12-06T04:54:37.476Z] 20821.33 IOPS, 81.33 MiB/s [2024-12-06T04:54:38.413Z] 20800.00 IOPS, 81.25 MiB/s [2024-12-06T04:54:38.413Z] 20876.80 IOPS, 81.55 MiB/s 00:07:00.181 Latency(us) 00:07:00.181 [2024-12-06T04:54:38.413Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:00.181 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:00.181 Verification LBA range: start 0x0 length 0xbd0bd 00:07:00.181 Nvme0n1 : 5.08 1713.24 6.69 0.00 0.00 74529.51 13913.80 79046.50 00:07:00.181 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:00.181 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:07:00.181 Nvme0n1 : 5.04 1728.01 6.75 0.00 0.00 73751.09 14317.10 78239.90 00:07:00.181 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:00.181 Verification LBA range: start 0x0 length 0xa0000 00:07:00.181 Nvme1n1 : 5.08 1712.76 6.69 0.00 0.00 74274.07 15123.69 64931.05 00:07:00.181 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:00.181 Verification LBA range: start 0xa0000 length 0xa0000 00:07:00.181 Nvme1n1 : 5.07 1728.58 6.75 0.00 0.00 73578.53 9477.51 70173.93 00:07:00.181 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:00.181 Verification LBA range: start 0x0 length 0x80000 00:07:00.181 Nvme2n1 : 5.08 1712.28 6.69 0.00 0.00 74103.05 13812.97 65737.65 00:07:00.181 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:00.181 Verification LBA range: start 0x80000 length 0x80000 00:07:00.181 Nvme2n1 : 5.08 1737.25 6.79 0.00 0.00 73276.13 8469.27 66544.25 00:07:00.181 Job: Nvme2n2 
(Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:00.181 Verification LBA range: start 0x0 length 0x80000 00:07:00.181 Nvme2n2 : 5.09 1710.83 6.68 0.00 0.00 73980.73 16434.41 68157.44 00:07:00.181 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:00.181 Verification LBA range: start 0x80000 length 0x80000 00:07:00.181 Nvme2n2 : 5.09 1736.58 6.78 0.00 0.00 73139.52 9427.10 64931.05 00:07:00.181 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:00.181 Verification LBA range: start 0x0 length 0x80000 00:07:00.181 Nvme2n3 : 5.09 1709.61 6.68 0.00 0.00 73885.59 13913.80 72593.72 00:07:00.181 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:00.181 Verification LBA range: start 0x80000 length 0x80000 00:07:00.181 Nvme2n3 : 5.09 1736.05 6.78 0.00 0.00 73005.20 9628.75 65737.65 00:07:00.181 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:00.181 Verification LBA range: start 0x0 length 0x20000 00:07:00.181 Nvme3n1 : 5.09 1708.77 6.67 0.00 0.00 73822.60 11090.71 71383.83 00:07:00.181 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:00.181 Verification LBA range: start 0x20000 length 0x20000 00:07:00.181 Nvme3n1 : 5.09 1734.89 6.78 0.00 0.00 72895.71 12149.37 66544.25 00:07:00.181 [2024-12-06T04:54:38.413Z] =================================================================================================================== 00:07:00.181 [2024-12-06T04:54:38.413Z] Total : 20668.84 80.74 0.00 0.00 73683.78 8469.27 79046.50 00:07:00.749 00:07:00.749 real 0m6.371s 00:07:00.749 user 0m12.025s 00:07:00.749 sys 0m0.197s 00:07:00.749 ************************************ 00:07:00.749 END TEST bdev_verify 00:07:00.749 ************************************ 00:07:00.749 04:54:38 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:00.749 04:54:38 blockdev_nvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:07:00.749 04:54:38 blockdev_nvme -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:07:00.749 04:54:38 blockdev_nvme -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']' 00:07:00.749 04:54:38 blockdev_nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:00.749 04:54:38 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:00.749 ************************************ 00:07:00.749 START TEST bdev_verify_big_io 00:07:00.749 ************************************ 00:07:00.749 04:54:38 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:07:00.749 [2024-12-06 04:54:38.851851] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
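The bdev_verify stage that just finished above is a single bdevperf invocation whose command line is fully visible in the trace: queue depth 128, 4 KiB I/Os, a 5-second verify workload on core mask 0x3 (reactors 0 and 1), with -C fanning every core out to every bdev, which is why each Nvme device reports one job row per core mask in the table above. A minimal standalone restatement, using only paths and flags taken from the traced command (the per-flag glosses are my reading of the trace, not output of this run):

  BDEVPERF=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
  CONF=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json
  args=(
    --json "$CONF"   # bdev configuration to load at startup
    -q 128           # queue depth per job
    -o 4096          # I/O size in bytes (4 KiB)
    -w verify        # write, then read back and compare
    -t 5             # run time in seconds
    -C               # every core submits to every bdev (hence the paired 0x1/0x2 rows)
    -m 0x3           # reactor core mask: cores 0 and 1
  )
  "$BDEVPERF" "${args[@]}"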
00:07:00.749 [2024-12-06 04:54:38.851977] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72484 ] 00:07:01.009 [2024-12-06 04:54:38.981857] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:01.009 [2024-12-06 04:54:39.036790] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:07:01.009 [2024-12-06 04:54:39.036798] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:01.580 Running I/O for 5 seconds... 00:07:06.761 1941.00 IOPS, 121.31 MiB/s [2024-12-06T04:54:45.558Z] 2983.00 IOPS, 186.44 MiB/s [2024-12-06T04:54:45.558Z] 3597.33 IOPS, 224.83 MiB/s 00:07:07.326 Latency(us) 00:07:07.326 [2024-12-06T04:54:45.558Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:07.326 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:07.326 Verification LBA range: start 0x0 length 0xbd0b 00:07:07.326 Nvme0n1 : 5.60 132.66 8.29 0.00 0.00 928528.69 16333.59 942105.21 00:07:07.326 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:07.326 Verification LBA range: start 0xbd0b length 0xbd0b 00:07:07.326 Nvme0n1 : 5.67 157.24 9.83 0.00 0.00 782882.00 32667.18 922746.88 00:07:07.326 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:07.326 Verification LBA range: start 0x0 length 0xa000 00:07:07.326 Nvme1n1 : 5.60 137.08 8.57 0.00 0.00 882213.81 81869.59 774333.05 00:07:07.326 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:07.326 Verification LBA range: start 0xa000 length 0xa000 00:07:07.326 Nvme1n1 : 5.57 160.93 10.06 0.00 0.00 751001.15 81869.59 767880.27 00:07:07.326 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:07.326 Verification LBA range: start 0x0 length 0x8000 00:07:07.326 Nvme2n1 : 5.60 137.03 8.56 0.00 0.00 855087.26 82676.18 722710.84 00:07:07.326 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:07.326 Verification LBA range: start 0x8000 length 0x8000 00:07:07.326 Nvme2n1 : 5.67 161.95 10.12 0.00 0.00 721628.65 104857.60 674315.03 00:07:07.326 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:07.326 Verification LBA range: start 0x0 length 0x8000 00:07:07.326 Nvme2n2 : 5.75 144.69 9.04 0.00 0.00 785214.83 42749.64 764653.88 00:07:07.326 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:07.326 Verification LBA range: start 0x8000 length 0x8000 00:07:07.326 Nvme2n2 : 5.82 172.30 10.77 0.00 0.00 665493.68 22282.24 693673.35 00:07:07.326 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:07.326 Verification LBA range: start 0x0 length 0x8000 00:07:07.326 Nvme2n3 : 5.82 147.03 9.19 0.00 0.00 752716.53 28029.24 1819682.66 00:07:07.326 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:07.326 Verification LBA range: start 0x8000 length 0x8000 00:07:07.326 Nvme2n3 : 5.82 176.03 11.00 0.00 0.00 636348.75 49807.36 713031.68 00:07:07.326 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:07.326 Verification LBA range: start 0x0 length 0x2000 00:07:07.326 Nvme3n1 : 5.84 171.78 10.74 0.00 0.00 629354.58 1978.68 1871304.86 00:07:07.326 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 
128, IO size: 65536) 00:07:07.326 Verification LBA range: start 0x2000 length 0x2000 00:07:07.326 Nvme3n1 : 5.82 186.84 11.68 0.00 0.00 584757.89 1764.43 732390.01 00:07:07.326 [2024-12-06T04:54:45.558Z] =================================================================================================================== 00:07:07.326 [2024-12-06T04:54:45.558Z] Total : 1885.57 117.85 0.00 0.00 735981.01 1764.43 1871304.86 00:07:08.260 00:07:08.260 real 0m7.431s 00:07:08.260 user 0m14.097s 00:07:08.260 sys 0m0.262s 00:07:08.260 04:54:46 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:08.260 04:54:46 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:07:08.260 ************************************ 00:07:08.260 END TEST bdev_verify_big_io 00:07:08.260 ************************************ 00:07:08.260 04:54:46 blockdev_nvme -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:08.260 04:54:46 blockdev_nvme -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:07:08.260 04:54:46 blockdev_nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:08.260 04:54:46 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:08.260 ************************************ 00:07:08.260 START TEST bdev_write_zeroes 00:07:08.260 ************************************ 00:07:08.260 04:54:46 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:08.260 [2024-12-06 04:54:46.330160] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:07:08.260 [2024-12-06 04:54:46.330278] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72588 ] 00:07:08.260 [2024-12-06 04:54:46.464556] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:08.518 [2024-12-06 04:54:46.492386] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:08.775 Running I/O for 1 seconds... 
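The two verify stages above differ only in -o (4096 vs 65536 bytes), and their Total rows cross-check cleanly against MiB/s = IOPS * io_size / 2^20:

  bdev_verify        : 20668.84 *  4096 / 1048576 ~=  80.74 MiB/s  (matches its Total row)
  bdev_verify_big_io :  1885.57 * 65536 / 1048576 ~= 117.85 MiB/s  (matches its Total row)

So the large-I/O run trades roughly 11x fewer IOPS for about 1.5x the throughput, the expected shape when per-command overhead dominates at 4 KiB.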
00:07:09.707 79488.00 IOPS, 310.50 MiB/s 00:07:09.707 Latency(us) 00:07:09.707 [2024-12-06T04:54:47.939Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:09.707 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:09.707 Nvme0n1 : 1.02 13123.53 51.26 0.00 0.00 9736.46 6755.25 26819.35 00:07:09.707 Job: Nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:09.707 Nvme1n1 : 1.03 13112.04 51.22 0.00 0.00 9736.88 7007.31 26617.70 00:07:09.707 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:09.707 Nvme2n1 : 1.03 13100.51 51.17 0.00 0.00 9727.62 6805.66 25105.33 00:07:09.707 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:09.707 Nvme2n2 : 1.03 13089.23 51.13 0.00 0.00 9714.02 6755.25 25710.28 00:07:09.707 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:09.707 Nvme2n3 : 1.03 13077.99 51.09 0.00 0.00 9680.42 5116.85 26617.70 00:07:09.707 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:09.707 Nvme3n1 : 1.03 13066.76 51.04 0.00 0.00 9676.31 4789.17 27424.30 00:07:09.707 [2024-12-06T04:54:47.939Z] =================================================================================================================== 00:07:09.707 [2024-12-06T04:54:47.939Z] Total : 78570.07 306.91 0.00 0.00 9711.95 4789.17 27424.30 00:07:09.965 00:07:09.965 real 0m1.771s 00:07:09.965 user 0m1.522s 00:07:09.965 sys 0m0.140s 00:07:09.965 04:54:48 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:09.965 ************************************ 00:07:09.965 END TEST bdev_write_zeroes 00:07:09.965 ************************************ 00:07:09.965 04:54:48 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:07:09.966 04:54:48 blockdev_nvme -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:09.966 04:54:48 blockdev_nvme -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:07:09.966 04:54:48 blockdev_nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:09.966 04:54:48 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:09.966 ************************************ 00:07:09.966 START TEST bdev_json_nonenclosed 00:07:09.966 ************************************ 00:07:09.966 04:54:48 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:09.966 [2024-12-06 04:54:48.141900] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
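The write_zeroes stage above reuses the same bdevperf harness with -w write_zeroes, a 1-second run, and a single reactor (the EAL line shows -c 0x1), so each device gets exactly one job row. A hedged sketch of replaying both data-path workloads this section exercises, normalized to one second on one core and reusing $BDEVPERF/$CONF from the sketch further up; the loop is mine, only the individual invocations appear in the log:

  for wl in verify write_zeroes; do
    "$BDEVPERF" --json "$CONF" -q 128 -o 4096 -w "$wl" -t 1 -m 0x1
  done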
00:07:09.966 [2024-12-06 04:54:48.141998] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72620 ] 00:07:10.223 [2024-12-06 04:54:48.275778] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:10.223 [2024-12-06 04:54:48.304168] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:10.223 [2024-12-06 04:54:48.304241] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:07:10.223 [2024-12-06 04:54:48.304253] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:07:10.223 [2024-12-06 04:54:48.304261] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:10.223 00:07:10.223 real 0m0.279s 00:07:10.223 user 0m0.118s 00:07:10.223 sys 0m0.059s 00:07:10.223 04:54:48 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:10.223 04:54:48 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:07:10.223 ************************************ 00:07:10.224 END TEST bdev_json_nonenclosed 00:07:10.224 ************************************ 00:07:10.224 04:54:48 blockdev_nvme -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:10.224 04:54:48 blockdev_nvme -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:07:10.224 04:54:48 blockdev_nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:10.224 04:54:48 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:10.224 ************************************ 00:07:10.224 START TEST bdev_json_nonarray 00:07:10.224 ************************************ 00:07:10.224 04:54:48 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:10.482 [2024-12-06 04:54:48.456130] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:07:10.482 [2024-12-06 04:54:48.456323] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72651 ] 00:07:10.482 [2024-12-06 04:54:48.590246] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:10.482 [2024-12-06 04:54:48.618508] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:10.482 [2024-12-06 04:54:48.618587] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
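The two JSON stages around this point are negative tests: bdevperf is pointed at nonenclosed.json and then nonarray.json, and each run is expected to die in json_config_prepare_ctx, first with "not enclosed in {}" and then with "'subsystems' should be an array". The well-formed shape those two error paths jointly imply is sketched below; the actual contents of the two fixture files are not visible in this log, so the payload is an assumption:

  # Hypothetical minimal config that would satisfy both checks above:
  cat > /tmp/minimal_subsystems.json <<'EOF'
  {
    "subsystems": [
      { "subsystem": "bdev", "config": [] }
    ]
  }
  EOF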
00:07:10.482 [2024-12-06 04:54:48.618600] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:07:10.482 [2024-12-06 04:54:48.618609] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:10.482 00:07:10.482 real 0m0.284s 00:07:10.482 user 0m0.102s 00:07:10.482 sys 0m0.079s 00:07:10.482 04:54:48 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:10.482 04:54:48 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:07:10.482 ************************************ 00:07:10.482 END TEST bdev_json_nonarray 00:07:10.482 ************************************ 00:07:10.741 04:54:48 blockdev_nvme -- bdev/blockdev.sh@786 -- # [[ nvme == bdev ]] 00:07:10.741 04:54:48 blockdev_nvme -- bdev/blockdev.sh@793 -- # [[ nvme == gpt ]] 00:07:10.741 04:54:48 blockdev_nvme -- bdev/blockdev.sh@797 -- # [[ nvme == crypto_sw ]] 00:07:10.741 04:54:48 blockdev_nvme -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:07:10.741 04:54:48 blockdev_nvme -- bdev/blockdev.sh@810 -- # cleanup 00:07:10.741 04:54:48 blockdev_nvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:07:10.741 04:54:48 blockdev_nvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:07:10.741 04:54:48 blockdev_nvme -- bdev/blockdev.sh@26 -- # [[ nvme == rbd ]] 00:07:10.741 04:54:48 blockdev_nvme -- bdev/blockdev.sh@30 -- # [[ nvme == daos ]] 00:07:10.741 04:54:48 blockdev_nvme -- bdev/blockdev.sh@34 -- # [[ nvme = \g\p\t ]] 00:07:10.741 04:54:48 blockdev_nvme -- bdev/blockdev.sh@40 -- # [[ nvme == xnvme ]] 00:07:10.741 00:07:10.741 real 0m30.608s 00:07:10.741 user 0m47.993s 00:07:10.741 sys 0m5.317s 00:07:10.741 04:54:48 blockdev_nvme -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:10.741 04:54:48 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:10.741 ************************************ 00:07:10.741 END TEST blockdev_nvme 00:07:10.741 ************************************ 00:07:10.741 04:54:48 -- spdk/autotest.sh@209 -- # uname -s 00:07:10.741 04:54:48 -- spdk/autotest.sh@209 -- # [[ Linux == Linux ]] 00:07:10.741 04:54:48 -- spdk/autotest.sh@210 -- # run_test blockdev_nvme_gpt /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:07:10.741 04:54:48 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:10.741 04:54:48 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:10.741 04:54:48 -- common/autotest_common.sh@10 -- # set +x 00:07:10.741 ************************************ 00:07:10.741 START TEST blockdev_nvme_gpt 00:07:10.741 ************************************ 00:07:10.741 04:54:48 blockdev_nvme_gpt -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:07:10.741 * Looking for test storage... 
00:07:10.741 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:07:10.741 04:54:48 blockdev_nvme_gpt -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:07:10.741 04:54:48 blockdev_nvme_gpt -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:07:10.741 04:54:48 blockdev_nvme_gpt -- common/autotest_common.sh@1681 -- # lcov --version 00:07:10.741 04:54:48 blockdev_nvme_gpt -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:07:10.741 04:54:48 blockdev_nvme_gpt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:10.741 04:54:48 blockdev_nvme_gpt -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:10.741 04:54:48 blockdev_nvme_gpt -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:10.741 04:54:48 blockdev_nvme_gpt -- scripts/common.sh@336 -- # IFS=.-: 00:07:10.741 04:54:48 blockdev_nvme_gpt -- scripts/common.sh@336 -- # read -ra ver1 00:07:10.741 04:54:48 blockdev_nvme_gpt -- scripts/common.sh@337 -- # IFS=.-: 00:07:10.741 04:54:48 blockdev_nvme_gpt -- scripts/common.sh@337 -- # read -ra ver2 00:07:10.741 04:54:48 blockdev_nvme_gpt -- scripts/common.sh@338 -- # local 'op=<' 00:07:10.741 04:54:48 blockdev_nvme_gpt -- scripts/common.sh@340 -- # ver1_l=2 00:07:10.741 04:54:48 blockdev_nvme_gpt -- scripts/common.sh@341 -- # ver2_l=1 00:07:10.741 04:54:48 blockdev_nvme_gpt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:10.741 04:54:48 blockdev_nvme_gpt -- scripts/common.sh@344 -- # case "$op" in 00:07:10.741 04:54:48 blockdev_nvme_gpt -- scripts/common.sh@345 -- # : 1 00:07:10.741 04:54:48 blockdev_nvme_gpt -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:10.741 04:54:48 blockdev_nvme_gpt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:10.741 04:54:48 blockdev_nvme_gpt -- scripts/common.sh@365 -- # decimal 1 00:07:10.741 04:54:48 blockdev_nvme_gpt -- scripts/common.sh@353 -- # local d=1 00:07:10.742 04:54:48 blockdev_nvme_gpt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:10.742 04:54:48 blockdev_nvme_gpt -- scripts/common.sh@355 -- # echo 1 00:07:10.742 04:54:48 blockdev_nvme_gpt -- scripts/common.sh@365 -- # ver1[v]=1 00:07:10.742 04:54:48 blockdev_nvme_gpt -- scripts/common.sh@366 -- # decimal 2 00:07:10.742 04:54:48 blockdev_nvme_gpt -- scripts/common.sh@353 -- # local d=2 00:07:10.742 04:54:48 blockdev_nvme_gpt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:10.742 04:54:48 blockdev_nvme_gpt -- scripts/common.sh@355 -- # echo 2 00:07:10.742 04:54:48 blockdev_nvme_gpt -- scripts/common.sh@366 -- # ver2[v]=2 00:07:10.742 04:54:48 blockdev_nvme_gpt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:10.742 04:54:48 blockdev_nvme_gpt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:10.742 04:54:48 blockdev_nvme_gpt -- scripts/common.sh@368 -- # return 0 00:07:10.742 04:54:48 blockdev_nvme_gpt -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:10.742 04:54:48 blockdev_nvme_gpt -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:07:10.742 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:10.742 --rc genhtml_branch_coverage=1 00:07:10.742 --rc genhtml_function_coverage=1 00:07:10.742 --rc genhtml_legend=1 00:07:10.742 --rc geninfo_all_blocks=1 00:07:10.742 --rc geninfo_unexecuted_blocks=1 00:07:10.742 00:07:10.742 ' 00:07:10.742 04:54:48 blockdev_nvme_gpt -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:07:10.742 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:10.742 --rc 
genhtml_branch_coverage=1 00:07:10.742 --rc genhtml_function_coverage=1 00:07:10.742 --rc genhtml_legend=1 00:07:10.742 --rc geninfo_all_blocks=1 00:07:10.742 --rc geninfo_unexecuted_blocks=1 00:07:10.742 00:07:10.742 ' 00:07:10.742 04:54:48 blockdev_nvme_gpt -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:07:10.742 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:10.742 --rc genhtml_branch_coverage=1 00:07:10.742 --rc genhtml_function_coverage=1 00:07:10.742 --rc genhtml_legend=1 00:07:10.742 --rc geninfo_all_blocks=1 00:07:10.742 --rc geninfo_unexecuted_blocks=1 00:07:10.742 00:07:10.742 ' 00:07:10.742 04:54:48 blockdev_nvme_gpt -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:07:10.742 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:10.742 --rc genhtml_branch_coverage=1 00:07:10.742 --rc genhtml_function_coverage=1 00:07:10.742 --rc genhtml_legend=1 00:07:10.742 --rc geninfo_all_blocks=1 00:07:10.742 --rc geninfo_unexecuted_blocks=1 00:07:10.742 00:07:10.742 ' 00:07:10.742 04:54:48 blockdev_nvme_gpt -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:07:10.742 04:54:48 blockdev_nvme_gpt -- bdev/nbd_common.sh@6 -- # set -e 00:07:10.742 04:54:48 blockdev_nvme_gpt -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:07:10.742 04:54:48 blockdev_nvme_gpt -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:07:10.742 04:54:48 blockdev_nvme_gpt -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:07:10.742 04:54:48 blockdev_nvme_gpt -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:07:10.742 04:54:48 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:07:10.742 04:54:48 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:07:10.742 04:54:48 blockdev_nvme_gpt -- bdev/blockdev.sh@20 -- # : 00:07:10.742 04:54:48 blockdev_nvme_gpt -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:07:10.742 04:54:48 blockdev_nvme_gpt -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:07:10.742 04:54:48 blockdev_nvme_gpt -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:07:10.742 04:54:48 blockdev_nvme_gpt -- bdev/blockdev.sh@673 -- # uname -s 00:07:10.742 04:54:48 blockdev_nvme_gpt -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:07:10.742 04:54:48 blockdev_nvme_gpt -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:07:10.742 04:54:48 blockdev_nvme_gpt -- bdev/blockdev.sh@681 -- # test_type=gpt 00:07:10.742 04:54:48 blockdev_nvme_gpt -- bdev/blockdev.sh@682 -- # crypto_device= 00:07:10.742 04:54:48 blockdev_nvme_gpt -- bdev/blockdev.sh@683 -- # dek= 00:07:10.742 04:54:48 blockdev_nvme_gpt -- bdev/blockdev.sh@684 -- # env_ctx= 00:07:10.742 04:54:48 blockdev_nvme_gpt -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:07:10.742 04:54:48 blockdev_nvme_gpt -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:07:10.742 04:54:48 blockdev_nvme_gpt -- bdev/blockdev.sh@689 -- # [[ gpt == bdev ]] 00:07:10.742 04:54:48 blockdev_nvme_gpt -- bdev/blockdev.sh@689 -- # [[ gpt == crypto_* ]] 00:07:10.742 04:54:48 blockdev_nvme_gpt -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:07:10.742 04:54:48 blockdev_nvme_gpt -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=72724 00:07:10.742 04:54:48 blockdev_nvme_gpt -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:07:10.742 04:54:48 blockdev_nvme_gpt -- bdev/blockdev.sh@49 -- # waitforlisten 72724 
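The lcov probe a few records above walks scripts/common.sh's cmp_versions helper: both version strings are split into fields, each field is compared numerically, and 'lt 1.15 2' succeeds on the very first field (1 < 2), which selects the legacy --rc lcov_* option spelling. A condensed restatement of that walk, not a verbatim copy of scripts/common.sh:

  version_lt() {
    local IFS=. i x y          # the real helper also splits on '-' and ':'
    local -a a b
    read -ra a <<<"$1"
    read -ra b <<<"$2"
    local n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
    for ((i = 0; i < n; i++)); do
      x=${a[i]:-0} y=${b[i]:-0}     # missing fields compare as 0
      (( x < y )) && return 0       # decided at the first differing field
      (( x > y )) && return 1
    done
    return 1                        # equal is not less-than
  }
  version_lt 1.15 2 && echo "lcov 1.15 predates 2: use legacy lcov options"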
00:07:10.742 04:54:48 blockdev_nvme_gpt -- common/autotest_common.sh@831 -- # '[' -z 72724 ']' 00:07:10.742 04:54:48 blockdev_nvme_gpt -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:07:10.742 04:54:48 blockdev_nvme_gpt -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:10.742 04:54:48 blockdev_nvme_gpt -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:10.742 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:10.742 04:54:48 blockdev_nvme_gpt -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:10.742 04:54:48 blockdev_nvme_gpt -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:10.742 04:54:48 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:11.001 [2024-12-06 04:54:48.994513] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:07:11.001 [2024-12-06 04:54:48.994702] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72724 ] 00:07:11.001 [2024-12-06 04:54:49.136746] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:11.001 [2024-12-06 04:54:49.165450] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:11.934 04:54:49 blockdev_nvme_gpt -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:11.934 04:54:49 blockdev_nvme_gpt -- common/autotest_common.sh@864 -- # return 0 00:07:11.934 04:54:49 blockdev_nvme_gpt -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:07:11.934 04:54:49 blockdev_nvme_gpt -- bdev/blockdev.sh@701 -- # setup_gpt_conf 00:07:11.934 04:54:49 blockdev_nvme_gpt -- bdev/blockdev.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:07:11.934 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:12.195 Waiting for block devices as requested 00:07:12.195 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:07:12.195 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:07:12.195 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:07:12.456 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:07:17.726 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:07:17.726 04:54:55 blockdev_nvme_gpt -- bdev/blockdev.sh@105 -- # get_zoned_devs 00:07:17.726 04:54:55 blockdev_nvme_gpt -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:07:17.726 04:54:55 blockdev_nvme_gpt -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:07:17.726 04:54:55 blockdev_nvme_gpt -- common/autotest_common.sh@1656 -- # local nvme bdf 00:07:17.726 04:54:55 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:07:17.726 04:54:55 blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:07:17.726 04:54:55 blockdev_nvme_gpt -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:07:17.726 04:54:55 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:07:17.726 04:54:55 blockdev_nvme_gpt -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:07:17.726 04:54:55 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:07:17.726 04:54:55 
blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n1 00:07:17.726 04:54:55 blockdev_nvme_gpt -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:07:17.726 04:54:55 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:07:17.726 04:54:55 blockdev_nvme_gpt -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:07:17.726 04:54:55 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:07:17.726 04:54:55 blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n1 00:07:17.726 04:54:55 blockdev_nvme_gpt -- common/autotest_common.sh@1648 -- # local device=nvme2n1 00:07:17.726 04:54:55 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:07:17.726 04:54:55 blockdev_nvme_gpt -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:07:17.726 04:54:55 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:07:17.726 04:54:55 blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n2 00:07:17.726 04:54:55 blockdev_nvme_gpt -- common/autotest_common.sh@1648 -- # local device=nvme2n2 00:07:17.726 04:54:55 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:07:17.726 04:54:55 blockdev_nvme_gpt -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:07:17.726 04:54:55 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:07:17.726 04:54:55 blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n3 00:07:17.726 04:54:55 blockdev_nvme_gpt -- common/autotest_common.sh@1648 -- # local device=nvme2n3 00:07:17.726 04:54:55 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:07:17.726 04:54:55 blockdev_nvme_gpt -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:07:17.726 04:54:55 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:07:17.726 04:54:55 blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # is_block_zoned nvme3c3n1 00:07:17.726 04:54:55 blockdev_nvme_gpt -- common/autotest_common.sh@1648 -- # local device=nvme3c3n1 00:07:17.726 04:54:55 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:07:17.726 04:54:55 blockdev_nvme_gpt -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:07:17.726 04:54:55 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:07:17.726 04:54:55 blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # is_block_zoned nvme3n1 00:07:17.726 04:54:55 blockdev_nvme_gpt -- common/autotest_common.sh@1648 -- # local device=nvme3n1 00:07:17.726 04:54:55 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:07:17.726 04:54:55 blockdev_nvme_gpt -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:07:17.726 04:54:55 blockdev_nvme_gpt -- bdev/blockdev.sh@106 -- # nvme_devs=('/sys/block/nvme0n1' '/sys/block/nvme1n1' '/sys/block/nvme2n1' '/sys/block/nvme2n2' '/sys/block/nvme2n3' '/sys/block/nvme3n1') 00:07:17.726 04:54:55 blockdev_nvme_gpt -- bdev/blockdev.sh@106 -- # local nvme_devs nvme_dev 00:07:17.726 04:54:55 blockdev_nvme_gpt -- bdev/blockdev.sh@107 -- # gpt_nvme= 00:07:17.726 04:54:55 blockdev_nvme_gpt -- bdev/blockdev.sh@109 -- # for nvme_dev in "${nvme_devs[@]}" 00:07:17.726 04:54:55 
blockdev_nvme_gpt -- bdev/blockdev.sh@110 -- # [[ -z '' ]] 00:07:17.726 04:54:55 blockdev_nvme_gpt -- bdev/blockdev.sh@111 -- # dev=/dev/nvme0n1 00:07:17.726 04:54:55 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # parted /dev/nvme0n1 -ms print 00:07:17.726 04:54:55 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # pt='Error: /dev/nvme0n1: unrecognised disk label 00:07:17.726 BYT; 00:07:17.726 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:;' 00:07:17.726 04:54:55 blockdev_nvme_gpt -- bdev/blockdev.sh@113 -- # [[ Error: /dev/nvme0n1: unrecognised disk label 00:07:17.726 BYT; 00:07:17.726 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:; == *\/\d\e\v\/\n\v\m\e\0\n\1\:\ \u\n\r\e\c\o\g\n\i\s\e\d\ \d\i\s\k\ \l\a\b\e\l* ]] 00:07:17.726 04:54:55 blockdev_nvme_gpt -- bdev/blockdev.sh@114 -- # gpt_nvme=/dev/nvme0n1 00:07:17.726 04:54:55 blockdev_nvme_gpt -- bdev/blockdev.sh@115 -- # break 00:07:17.726 04:54:55 blockdev_nvme_gpt -- bdev/blockdev.sh@118 -- # [[ -n /dev/nvme0n1 ]] 00:07:17.726 04:54:55 blockdev_nvme_gpt -- bdev/blockdev.sh@123 -- # typeset -g g_unique_partguid=6f89f330-603b-4116-ac73-2ca8eae53030 00:07:17.726 04:54:55 blockdev_nvme_gpt -- bdev/blockdev.sh@124 -- # typeset -g g_unique_partguid_old=abf1734f-66e5-4c0f-aa29-4021d4d307df 00:07:17.726 04:54:55 blockdev_nvme_gpt -- bdev/blockdev.sh@127 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST_first 0% 50% mkpart SPDK_TEST_second 50% 100% 00:07:17.726 04:54:55 blockdev_nvme_gpt -- bdev/blockdev.sh@129 -- # get_spdk_gpt_old 00:07:17.726 04:54:55 blockdev_nvme_gpt -- scripts/common.sh@411 -- # local spdk_guid 00:07:17.726 04:54:55 blockdev_nvme_gpt -- scripts/common.sh@413 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:07:17.726 04:54:55 blockdev_nvme_gpt -- scripts/common.sh@415 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:07:17.726 04:54:55 blockdev_nvme_gpt -- scripts/common.sh@416 -- # IFS='()' 00:07:17.726 04:54:55 blockdev_nvme_gpt -- scripts/common.sh@416 -- # read -r _ spdk_guid _ 00:07:17.726 04:54:55 blockdev_nvme_gpt -- scripts/common.sh@416 -- # grep -w SPDK_GPT_PART_TYPE_GUID_OLD /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:07:17.726 04:54:55 blockdev_nvme_gpt -- scripts/common.sh@417 -- # spdk_guid=0x7c5222bd-0x8f5d-0x4087-0x9c00-0xbf9843c7b58c 00:07:17.726 04:54:55 blockdev_nvme_gpt -- scripts/common.sh@417 -- # spdk_guid=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:07:17.726 04:54:55 blockdev_nvme_gpt -- scripts/common.sh@419 -- # echo 7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:07:17.726 04:54:55 blockdev_nvme_gpt -- bdev/blockdev.sh@129 -- # SPDK_GPT_OLD_GUID=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:07:17.726 04:54:55 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # get_spdk_gpt 00:07:17.726 04:54:55 blockdev_nvme_gpt -- scripts/common.sh@423 -- # local spdk_guid 00:07:17.726 04:54:55 blockdev_nvme_gpt -- scripts/common.sh@425 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:07:17.726 04:54:55 blockdev_nvme_gpt -- scripts/common.sh@427 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:07:17.726 04:54:55 blockdev_nvme_gpt -- scripts/common.sh@428 -- # IFS='()' 00:07:17.726 04:54:55 blockdev_nvme_gpt -- scripts/common.sh@428 -- # read -r _ spdk_guid _ 00:07:17.726 04:54:55 blockdev_nvme_gpt -- scripts/common.sh@428 -- # grep -w SPDK_GPT_PART_TYPE_GUID /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:07:17.726 04:54:55 blockdev_nvme_gpt -- scripts/common.sh@429 -- # 
spdk_guid=0x6527994e-0x2c5a-0x4eec-0x9613-0x8f5944074e8b 00:07:17.726 04:54:55 blockdev_nvme_gpt -- scripts/common.sh@429 -- # spdk_guid=6527994e-2c5a-4eec-9613-8f5944074e8b 00:07:17.726 04:54:55 blockdev_nvme_gpt -- scripts/common.sh@431 -- # echo 6527994e-2c5a-4eec-9613-8f5944074e8b 00:07:17.726 04:54:55 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # SPDK_GPT_GUID=6527994e-2c5a-4eec-9613-8f5944074e8b 00:07:17.726 04:54:55 blockdev_nvme_gpt -- bdev/blockdev.sh@131 -- # sgdisk -t 1:6527994e-2c5a-4eec-9613-8f5944074e8b -u 1:6f89f330-603b-4116-ac73-2ca8eae53030 /dev/nvme0n1 00:07:18.702 The operation has completed successfully. 00:07:18.702 04:54:56 blockdev_nvme_gpt -- bdev/blockdev.sh@132 -- # sgdisk -t 2:7c5222bd-8f5d-4087-9c00-bf9843c7b58c -u 2:abf1734f-66e5-4c0f-aa29-4021d4d307df /dev/nvme0n1 00:07:19.637 The operation has completed successfully. 00:07:19.637 04:54:57 blockdev_nvme_gpt -- bdev/blockdev.sh@133 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:07:19.895 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:20.153 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:07:20.412 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:07:20.412 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:07:20.412 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:07:20.412 04:54:58 blockdev_nvme_gpt -- bdev/blockdev.sh@134 -- # rpc_cmd bdev_get_bdevs 00:07:20.412 04:54:58 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:20.412 04:54:58 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:20.412 [] 00:07:20.412 04:54:58 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:20.412 04:54:58 blockdev_nvme_gpt -- bdev/blockdev.sh@135 -- # setup_nvme_conf 00:07:20.412 04:54:58 blockdev_nvme_gpt -- bdev/blockdev.sh@81 -- # local json 00:07:20.412 04:54:58 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # mapfile -t json 00:07:20.412 04:54:58 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:07:20.412 04:54:58 blockdev_nvme_gpt -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\''' 00:07:20.412 04:54:58 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:20.412 04:54:58 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:20.670 04:54:58 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:20.670 04:54:58 blockdev_nvme_gpt -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:07:20.670 04:54:58 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:20.670 04:54:58 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:20.670 04:54:58 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:20.670 04:54:58 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # cat 00:07:20.670 04:54:58 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:07:20.670 04:54:58 
blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:20.670 04:54:58 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:20.670 04:54:58 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:20.670 04:54:58 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:07:20.670 04:54:58 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:20.670 04:54:58 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:20.930 04:54:58 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:20.930 04:54:58 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:07:20.930 04:54:58 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:20.930 04:54:58 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:20.930 04:54:58 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:20.930 04:54:58 blockdev_nvme_gpt -- bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:07:20.930 04:54:58 blockdev_nvme_gpt -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:07:20.930 04:54:58 blockdev_nvme_gpt -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:07:20.930 04:54:58 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:20.930 04:54:58 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:20.930 04:54:58 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:20.930 04:54:58 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:07:20.930 04:54:58 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # jq -r .name 00:07:20.931 04:54:58 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "1e6c38c1-213a-4283-a62c-dffe6235df73"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "1e6c38c1-213a-4283-a62c-dffe6235df73",' ' "numa_id": -1,' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": true,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme1n1p1",' ' "aliases": [' ' "6f89f330-603b-4116-ac73-2ca8eae53030"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' 
"num_blocks": 655104,' ' "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme1n1",' ' "offset_blocks": 256,' ' "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b",' ' "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "partition_name": "SPDK_TEST_first"' ' }' ' }' '}' '{' ' "name": "Nvme1n1p2",' ' "aliases": [' ' "abf1734f-66e5-4c0f-aa29-4021d4d307df"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 655103,' ' "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme1n1",' ' "offset_blocks": 655360,' ' "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c",' ' "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "partition_name": "SPDK_TEST_second"' ' }' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "91d9a3cb-461e-43c8-8c38-40006724e72e"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "91d9a3cb-461e-43c8-8c38-40006724e72e",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' 
"nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "16d81d29-e98c-4a0e-93ca-dd6b3ee43dae"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "16d81d29-e98c-4a0e-93ca-dd6b3ee43dae",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "b1f0c6dc-46df-45dc-b63f-d9f0fa7ba120"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "b1f0c6dc-46df-45dc-b63f-d9f0fa7ba120",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "a6add9ac-5842-4853-a3e9-bdbaa0e4e3ab"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "a6add9ac-5842-4853-a3e9-bdbaa0e4e3ab",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' 
"w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:13.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:13.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:07:20.931 04:54:58 blockdev_nvme_gpt -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:07:20.931 04:54:58 blockdev_nvme_gpt -- bdev/blockdev.sh@751 -- # hello_world_bdev=Nvme0n1 00:07:20.931 04:54:58 blockdev_nvme_gpt -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:07:20.931 04:54:58 blockdev_nvme_gpt -- bdev/blockdev.sh@753 -- # killprocess 72724 00:07:20.931 04:54:58 blockdev_nvme_gpt -- common/autotest_common.sh@950 -- # '[' -z 72724 ']' 00:07:20.931 04:54:58 blockdev_nvme_gpt -- common/autotest_common.sh@954 -- # kill -0 72724 00:07:20.931 04:54:58 blockdev_nvme_gpt -- common/autotest_common.sh@955 -- # uname 00:07:20.931 04:54:59 blockdev_nvme_gpt -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:20.931 04:54:59 blockdev_nvme_gpt -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72724 00:07:20.931 killing process with pid 72724 00:07:20.931 04:54:59 blockdev_nvme_gpt -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:20.931 04:54:59 blockdev_nvme_gpt -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:20.931 04:54:59 blockdev_nvme_gpt -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72724' 00:07:20.931 04:54:59 blockdev_nvme_gpt -- common/autotest_common.sh@969 -- # kill 72724 00:07:20.931 04:54:59 blockdev_nvme_gpt -- common/autotest_common.sh@974 -- # wait 72724 00:07:21.192 04:54:59 blockdev_nvme_gpt -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:07:21.192 04:54:59 blockdev_nvme_gpt -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:07:21.192 04:54:59 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:07:21.192 04:54:59 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:21.192 04:54:59 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:21.192 ************************************ 00:07:21.192 START TEST bdev_hello_world 00:07:21.192 ************************************ 00:07:21.192 04:54:59 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:07:21.192 
[2024-12-06 04:54:59.355433] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:07:21.192 [2024-12-06 04:54:59.355542] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73331 ] 00:07:21.454 [2024-12-06 04:54:59.490535] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:21.454 [2024-12-06 04:54:59.523906] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:21.716 [2024-12-06 04:54:59.894471] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:07:21.716 [2024-12-06 04:54:59.894515] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:07:21.716 [2024-12-06 04:54:59.894544] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:07:21.716 [2024-12-06 04:54:59.896657] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:07:21.716 [2024-12-06 04:54:59.897875] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:07:21.716 [2024-12-06 04:54:59.897906] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:07:21.716 [2024-12-06 04:54:59.898621] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 00:07:21.716 00:07:21.716 [2024-12-06 04:54:59.898681] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:07:21.976 00:07:21.976 real 0m0.861s 00:07:21.976 user 0m0.594s 00:07:21.976 sys 0m0.164s 00:07:21.976 04:55:00 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:21.976 04:55:00 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:07:21.976 ************************************ 00:07:21.976 END TEST bdev_hello_world 00:07:21.976 ************************************ 00:07:21.976 04:55:00 blockdev_nvme_gpt -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:07:21.976 04:55:00 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:21.976 04:55:00 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:21.976 04:55:00 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:22.237 ************************************ 00:07:22.237 START TEST bdev_bounds 00:07:22.237 ************************************ 00:07:22.238 04:55:00 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1125 -- # bdev_bounds '' 00:07:22.238 04:55:00 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=73362 00:07:22.238 Process bdevio pid: 73362 00:07:22.238 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
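The hello_world stage that just completed opens Nvme0n1 through the bdev layer, writes the string "Hello World!", reads it back, and exits, with the whole round trip under a second of wall time (real 0m0.861s). Its command line, as traced; pointing -b at another unclaimed bdev name from the dump above is my extrapolation, this log only runs it against Nvme0n1:

  HELLO=/home/vagrant/spdk_repo/spdk/build/examples/hello_bdev
  "$HELLO" --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1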
00:07:22.238 04:55:00 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:07:22.238 04:55:00 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 73362' 00:07:22.238 04:55:00 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 73362 00:07:22.238 04:55:00 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@831 -- # '[' -z 73362 ']' 00:07:22.238 04:55:00 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:22.238 04:55:00 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:22.238 04:55:00 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:22.238 04:55:00 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:22.238 04:55:00 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:07:22.238 04:55:00 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:07:22.238 [2024-12-06 04:55:00.270992] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:07:22.238 [2024-12-06 04:55:00.271241] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73362 ] 00:07:22.238 [2024-12-06 04:55:00.410628] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:22.238 [2024-12-06 04:55:00.445877] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:22.238 [2024-12-06 04:55:00.445800] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:07:22.238 [2024-12-06 04:55:00.445947] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:07:23.181 04:55:01 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:23.181 04:55:01 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@864 -- # return 0 00:07:23.181 04:55:01 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:07:23.181 I/O targets: 00:07:23.181 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:07:23.181 Nvme1n1p1: 655104 blocks of 4096 bytes (2559 MiB) 00:07:23.181 Nvme1n1p2: 655103 blocks of 4096 bytes (2559 MiB) 00:07:23.181 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:07:23.181 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:07:23.181 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:07:23.181 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:07:23.181 00:07:23.181 00:07:23.181 CUnit - A unit testing framework for C - Version 2.1-3 00:07:23.181 http://cunit.sourceforge.net/ 00:07:23.181 00:07:23.181 00:07:23.181 Suite: bdevio tests on: Nvme3n1 00:07:23.181 Test: blockdev write read block ...passed 00:07:23.181 Test: blockdev write zeroes read block ...passed 00:07:23.181 Test: blockdev write zeroes read no split ...passed 00:07:23.181 Test: blockdev write zeroes read split ...passed 00:07:23.181 Test: blockdev write zeroes read split partial ...passed 00:07:23.181 Test: blockdev reset ...[2024-12-06 04:55:01.226817] 
nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0] resetting controller 00:07:23.181 [2024-12-06 04:55:01.230246] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:07:23.181 passed 00:07:23.181 Test: blockdev write read 8 blocks ...passed 00:07:23.181 Test: blockdev write read size > 128k ...passed 00:07:23.181 Test: blockdev write read invalid size ...passed 00:07:23.181 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:23.181 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:23.181 Test: blockdev write read max offset ...passed 00:07:23.181 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:23.181 Test: blockdev writev readv 8 blocks ...passed 00:07:23.181 Test: blockdev writev readv 30 x 1block ...passed 00:07:23.181 Test: blockdev writev readv block ...passed 00:07:23.181 Test: blockdev writev readv size > 128k ...passed 00:07:23.181 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:23.181 Test: blockdev comparev and writev ...[2024-12-06 04:55:01.246417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2bb40a000 len:0x1000 00:07:23.181 [2024-12-06 04:55:01.246470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:23.181 passed 00:07:23.181 Test: blockdev nvme passthru rw ...passed 00:07:23.181 Test: blockdev nvme passthru vendor specific ...passed 00:07:23.181 Test: blockdev nvme admin passthru ...[2024-12-06 04:55:01.248662] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:23.181 [2024-12-06 04:55:01.248704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:23.181 passed 00:07:23.181 Test: blockdev copy ...passed 00:07:23.181 Suite: bdevio tests on: Nvme2n3 00:07:23.181 Test: blockdev write read block ...passed 00:07:23.181 Test: blockdev write zeroes read block ...passed 00:07:23.181 Test: blockdev write zeroes read no split ...passed 00:07:23.181 Test: blockdev write zeroes read split ...passed 00:07:23.181 Test: blockdev write zeroes read split partial ...passed 00:07:23.181 Test: blockdev reset ...[2024-12-06 04:55:01.277242] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller 00:07:23.181 [2024-12-06 04:55:01.280358] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:07:23.181 passed 00:07:23.181 Test: blockdev write read 8 blocks ...passed 00:07:23.181 Test: blockdev write read size > 128k ...passed 00:07:23.181 Test: blockdev write read invalid size ...passed 00:07:23.181 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:23.181 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:23.181 Test: blockdev write read max offset ...passed 00:07:23.181 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:23.181 Test: blockdev writev readv 8 blocks ...passed 00:07:23.181 Test: blockdev writev readv 30 x 1block ...passed 00:07:23.181 Test: blockdev writev readv block ...passed 00:07:23.181 Test: blockdev writev readv size > 128k ...passed 00:07:23.181 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:23.181 Test: blockdev comparev and writev ...[2024-12-06 04:55:01.290036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2a7404000 len:0x1000 00:07:23.181 [2024-12-06 04:55:01.290154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:23.181 passed 00:07:23.181 Test: blockdev nvme passthru rw ...passed 00:07:23.181 Test: blockdev nvme passthru vendor specific ...[2024-12-06 04:55:01.291524] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:23.181 [2024-12-06 04:55:01.291557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:23.181 passed 00:07:23.181 Test: blockdev nvme admin passthru ...passed 00:07:23.181 Test: blockdev copy ...passed 00:07:23.181 Suite: bdevio tests on: Nvme2n2 00:07:23.181 Test: blockdev write read block ...passed 00:07:23.181 Test: blockdev write zeroes read block ...passed 00:07:23.181 Test: blockdev write zeroes read no split ...passed 00:07:23.181 Test: blockdev write zeroes read split ...passed 00:07:23.181 Test: blockdev write zeroes read split partial ...passed 00:07:23.181 Test: blockdev reset ...[2024-12-06 04:55:01.315054] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller 00:07:23.181 [2024-12-06 04:55:01.317033] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:07:23.182 passed 00:07:23.182 Test: blockdev write read 8 blocks ...passed 00:07:23.182 Test: blockdev write read size > 128k ...passed 00:07:23.182 Test: blockdev write read invalid size ...passed 00:07:23.182 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:23.182 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:23.182 Test: blockdev write read max offset ...passed 00:07:23.182 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:23.182 Test: blockdev writev readv 8 blocks ...passed 00:07:23.182 Test: blockdev writev readv 30 x 1block ...passed 00:07:23.182 Test: blockdev writev readv block ...passed 00:07:23.182 Test: blockdev writev readv size > 128k ...passed 00:07:23.182 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:23.182 Test: blockdev comparev and writev ...[2024-12-06 04:55:01.326856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2a7404000 len:0x1000 00:07:23.182 [2024-12-06 04:55:01.326894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:23.182 passed 00:07:23.182 Test: blockdev nvme passthru rw ...passed 00:07:23.182 Test: blockdev nvme passthru vendor specific ...[2024-12-06 04:55:01.328241] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:23.182 [2024-12-06 04:55:01.328274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:23.182 passed 00:07:23.182 Test: blockdev nvme admin passthru ...passed 00:07:23.182 Test: blockdev copy ...passed 00:07:23.182 Suite: bdevio tests on: Nvme2n1 00:07:23.182 Test: blockdev write read block ...passed 00:07:23.182 Test: blockdev write zeroes read block ...passed 00:07:23.182 Test: blockdev write zeroes read no split ...passed 00:07:23.182 Test: blockdev write zeroes read split ...passed 00:07:23.182 Test: blockdev write zeroes read split partial ...passed 00:07:23.182 Test: blockdev reset ...[2024-12-06 04:55:01.350256] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller 00:07:23.182 [2024-12-06 04:55:01.352164] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:07:23.182 passed 00:07:23.182 Test: blockdev write read 8 blocks ...passed 00:07:23.182 Test: blockdev write read size > 128k ...passed 00:07:23.182 Test: blockdev write read invalid size ...passed 00:07:23.182 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:23.182 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:23.182 Test: blockdev write read max offset ...passed 00:07:23.182 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:23.182 Test: blockdev writev readv 8 blocks ...passed 00:07:23.182 Test: blockdev writev readv 30 x 1block ...passed 00:07:23.182 Test: blockdev writev readv block ...passed 00:07:23.182 Test: blockdev writev readv size > 128k ...passed 00:07:23.182 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:23.182 Test: blockdev comparev and writev ...[2024-12-06 04:55:01.363277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2a7406000 len:0x1000 00:07:23.182 [2024-12-06 04:55:01.363323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:23.182 passed 00:07:23.182 Test: blockdev nvme passthru rw ...passed 00:07:23.182 Test: blockdev nvme passthru vendor specific ...[2024-12-06 04:55:01.365295] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:23.182 [2024-12-06 04:55:01.365338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:23.182 passed 00:07:23.182 Test: blockdev nvme admin passthru ...passed 00:07:23.182 Test: blockdev copy ...passed 00:07:23.182 Suite: bdevio tests on: Nvme1n1p2 00:07:23.182 Test: blockdev write read block ...passed 00:07:23.182 Test: blockdev write zeroes read block ...passed 00:07:23.182 Test: blockdev write zeroes read no split ...passed 00:07:23.182 Test: blockdev write zeroes read split ...passed 00:07:23.182 Test: blockdev write zeroes read split partial ...passed 00:07:23.182 Test: blockdev reset ...[2024-12-06 04:55:01.390656] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0] resetting controller 00:07:23.182 [2024-12-06 04:55:01.393257] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:07:23.182 passed 00:07:23.182 Test: blockdev write read 8 blocks ...passed 00:07:23.182 Test: blockdev write read size > 128k ...passed 00:07:23.182 Test: blockdev write read invalid size ...passed 00:07:23.182 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:23.182 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:23.182 Test: blockdev write read max offset ...passed 00:07:23.182 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:23.182 Test: blockdev writev readv 8 blocks ...passed 00:07:23.182 Test: blockdev writev readv 30 x 1block ...passed 00:07:23.182 Test: blockdev writev readv block ...passed 00:07:23.182 Test: blockdev writev readv size > 128k ...passed 00:07:23.182 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:23.182 Test: blockdev comparev and writev ...[2024-12-06 04:55:01.408153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:655360 len:1 SGL DATA BLOCK ADDRESS 0x2a7402000 len:0x1000 00:07:23.182 [2024-12-06 04:55:01.408198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:23.182 passed 00:07:23.182 Test: blockdev nvme passthru rw ...passed 00:07:23.182 Test: blockdev nvme passthru vendor specific ...passed 00:07:23.182 Test: blockdev nvme admin passthru ...passed 00:07:23.182 Test: blockdev copy ...passed 00:07:23.182 Suite: bdevio tests on: Nvme1n1p1 00:07:23.441 Test: blockdev write read block ...passed 00:07:23.441 Test: blockdev write zeroes read block ...passed 00:07:23.441 Test: blockdev write zeroes read no split ...passed 00:07:23.441 Test: blockdev write zeroes read split ...passed 00:07:23.441 Test: blockdev write zeroes read split partial ...passed 00:07:23.441 Test: blockdev reset ...[2024-12-06 04:55:01.426883] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0] resetting controller 00:07:23.441 [2024-12-06 04:55:01.428448] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:07:23.441 passed 00:07:23.441 Test: blockdev write read 8 blocks ...passed 00:07:23.441 Test: blockdev write read size > 128k ...passed 00:07:23.441 Test: blockdev write read invalid size ...passed 00:07:23.442 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:23.442 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:23.442 Test: blockdev write read max offset ...passed 00:07:23.442 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:23.442 Test: blockdev writev readv 8 blocks ...passed 00:07:23.442 Test: blockdev writev readv 30 x 1block ...passed 00:07:23.442 Test: blockdev writev readv block ...passed 00:07:23.442 Test: blockdev writev readv size > 128k ...passed 00:07:23.442 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:23.442 Test: blockdev comparev and writev ...[2024-12-06 04:55:01.436929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:256 len:1 SGL DATA BLOCK ADDRESS 0x2bee3b000 len:0x1000 00:07:23.442 [2024-12-06 04:55:01.436968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:23.442 passed 00:07:23.442 Test: blockdev nvme passthru rw ...passed 00:07:23.442 Test: blockdev nvme passthru vendor specific ...passed 00:07:23.442 Test: blockdev nvme admin passthru ...passed 00:07:23.442 Test: blockdev copy ...passed 00:07:23.442 Suite: bdevio tests on: Nvme0n1 00:07:23.442 Test: blockdev write read block ...passed 00:07:23.442 Test: blockdev write zeroes read block ...passed 00:07:23.442 Test: blockdev write zeroes read no split ...passed 00:07:23.442 Test: blockdev write zeroes read split ...passed 00:07:23.442 Test: blockdev write zeroes read split partial ...passed 00:07:23.442 Test: blockdev reset ...[2024-12-06 04:55:01.452463] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0] resetting controller 00:07:23.442 [2024-12-06 04:55:01.455036] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:07:23.442 passed 00:07:23.442 Test: blockdev write read 8 blocks ...passed 00:07:23.442 Test: blockdev write read size > 128k ...passed 00:07:23.442 Test: blockdev write read invalid size ...passed 00:07:23.442 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:23.442 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:23.442 Test: blockdev write read max offset ...passed 00:07:23.442 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:23.442 Test: blockdev writev readv 8 blocks ...passed 00:07:23.442 Test: blockdev writev readv 30 x 1block ...passed 00:07:23.442 Test: blockdev writev readv block ...passed 00:07:23.442 Test: blockdev writev readv size > 128k ...passed 00:07:23.442 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:23.442 Test: blockdev comparev and writev ...passed 00:07:23.442 Test: blockdev nvme passthru rw ...[2024-12-06 04:55:01.467915] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has 00:07:23.442 separate metadata which is not supported yet. 
00:07:23.442 passed 00:07:23.442 Test: blockdev nvme passthru vendor specific ...[2024-12-06 04:55:01.469342] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 PRP2 0x0 00:07:23.442 [2024-12-06 04:55:01.469381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0 sqhd:0017 p:1 m:0 dnr:1 00:07:23.442 passed 00:07:23.442 Test: blockdev nvme admin passthru ...passed 00:07:23.442 Test: blockdev copy ...passed 00:07:23.442 00:07:23.442 Run Summary: Type Total Ran Passed Failed Inactive 00:07:23.442 suites 7 7 n/a 0 0 00:07:23.442 tests 161 161 161 0 0 00:07:23.442 asserts 1025 1025 1025 0 n/a 00:07:23.442 00:07:23.442 Elapsed time = 0.597 seconds 00:07:23.442 0 00:07:23.442 04:55:01 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 73362 00:07:23.442 04:55:01 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@950 -- # '[' -z 73362 ']' 00:07:23.442 04:55:01 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@954 -- # kill -0 73362 00:07:23.442 04:55:01 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@955 -- # uname 00:07:23.442 04:55:01 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:23.442 04:55:01 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 73362 00:07:23.442 04:55:01 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:23.442 04:55:01 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:23.442 killing process with pid 73362 00:07:23.442 04:55:01 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@968 -- # echo 'killing process with pid 73362' 00:07:23.442 04:55:01 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@969 -- # kill 73362 00:07:23.442 04:55:01 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@974 -- # wait 73362 00:07:23.442 04:55:01 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:07:23.442 00:07:23.442 real 0m1.450s 00:07:23.442 user 0m3.638s 00:07:23.442 sys 0m0.256s 00:07:23.442 ************************************ 00:07:23.442 END TEST bdev_bounds 00:07:23.442 ************************************ 00:07:23.442 04:55:01 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:23.442 04:55:01 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:07:23.700 04:55:01 blockdev_nvme_gpt -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:07:23.700 04:55:01 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:07:23.700 04:55:01 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:23.700 04:55:01 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:23.700 ************************************ 00:07:23.700 START TEST bdev_nbd 00:07:23.700 ************************************ 00:07:23.700 04:55:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1125 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:07:23.700 04:55:01 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:07:23.700 04:55:01 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ 
Linux == Linux ]] 00:07:23.700 04:55:01 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:23.700 04:55:01 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:07:23.700 04:55:01 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:23.700 04:55:01 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:07:23.700 04:55:01 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=7 00:07:23.700 04:55:01 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:07:23.700 04:55:01 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:07:23.700 04:55:01 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:07:23.700 04:55:01 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=7 00:07:23.700 04:55:01 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:07:23.700 04:55:01 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:07:23.700 04:55:01 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:23.700 04:55:01 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:07:23.700 04:55:01 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=73411 00:07:23.700 04:55:01 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:07:23.700 04:55:01 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 73411 /var/tmp/spdk-nbd.sock 00:07:23.700 04:55:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@831 -- # '[' -z 73411 ']' 00:07:23.700 04:55:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:23.700 04:55:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:23.700 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:07:23.700 04:55:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:07:23.700 04:55:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:23.700 04:55:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:07:23.700 04:55:01 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:07:23.700 [2024-12-06 04:55:01.768174] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:07:23.700 [2024-12-06 04:55:01.768284] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:23.700 [2024-12-06 04:55:01.903191] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:23.958 [2024-12-06 04:55:01.933917] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:24.523 04:55:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:24.524 04:55:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@864 -- # return 0 00:07:24.524 04:55:02 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:07:24.524 04:55:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:24.524 04:55:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:24.524 04:55:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:07:24.524 04:55:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:07:24.524 04:55:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:24.524 04:55:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:24.524 04:55:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:07:24.524 04:55:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:07:24.524 04:55:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:07:24.524 04:55:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:07:24.524 04:55:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:24.524 04:55:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:07:24.782 04:55:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:07:24.782 04:55:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:07:24.782 04:55:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:07:24.782 04:55:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:07:24.782 04:55:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:24.782 04:55:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:24.782 04:55:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:24.782 04:55:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:07:24.782 04:55:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:24.782 04:55:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:24.782 04:55:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:24.782 04:55:02 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:24.782 1+0 records in 00:07:24.782 1+0 records out 00:07:24.782 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000391436 s, 10.5 MB/s 00:07:24.782 04:55:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:24.782 04:55:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:24.782 04:55:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:24.782 04:55:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:24.782 04:55:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:24.782 04:55:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:24.782 04:55:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:24.782 04:55:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p1 00:07:24.782 04:55:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:07:24.782 04:55:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:07:24.782 04:55:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:07:24.782 04:55:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:07:24.782 04:55:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:24.782 04:55:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:24.782 04:55:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:24.782 04:55:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:07:25.039 04:55:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:25.039 04:55:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:25.039 04:55:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:25.039 04:55:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:25.039 1+0 records in 00:07:25.039 1+0 records out 00:07:25.039 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000547447 s, 7.5 MB/s 00:07:25.039 04:55:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:25.039 04:55:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:25.039 04:55:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:25.039 04:55:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:25.039 04:55:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:25.039 04:55:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:25.039 04:55:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:25.039 04:55:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk 
Nvme1n1p2 00:07:25.039 04:55:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:07:25.039 04:55:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:07:25.039 04:55:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:07:25.039 04:55:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd2 00:07:25.039 04:55:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:25.039 04:55:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:25.040 04:55:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:25.040 04:55:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd2 /proc/partitions 00:07:25.040 04:55:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:25.040 04:55:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:25.040 04:55:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:25.040 04:55:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:25.040 1+0 records in 00:07:25.040 1+0 records out 00:07:25.040 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000464481 s, 8.8 MB/s 00:07:25.040 04:55:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:25.040 04:55:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:25.040 04:55:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:25.040 04:55:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:25.040 04:55:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:25.040 04:55:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:25.040 04:55:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:25.040 04:55:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:07:25.296 04:55:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:07:25.296 04:55:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:07:25.296 04:55:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:07:25.296 04:55:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd3 00:07:25.296 04:55:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:25.296 04:55:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:25.296 04:55:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:25.296 04:55:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd3 /proc/partitions 00:07:25.296 04:55:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:25.296 04:55:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:25.296 04:55:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:25.296 04:55:03 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@885 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:25.296 1+0 records in 00:07:25.296 1+0 records out 00:07:25.296 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000510971 s, 8.0 MB/s 00:07:25.296 04:55:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:25.296 04:55:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:25.296 04:55:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:25.296 04:55:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:25.296 04:55:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:25.297 04:55:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:25.297 04:55:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:25.297 04:55:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:07:25.553 04:55:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:07:25.553 04:55:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:07:25.553 04:55:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:07:25.553 04:55:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd4 00:07:25.553 04:55:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:25.553 04:55:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:25.553 04:55:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:25.553 04:55:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd4 /proc/partitions 00:07:25.554 04:55:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:25.554 04:55:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:25.554 04:55:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:25.554 04:55:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:25.554 1+0 records in 00:07:25.554 1+0 records out 00:07:25.554 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000386957 s, 10.6 MB/s 00:07:25.554 04:55:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:25.554 04:55:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:25.554 04:55:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:25.554 04:55:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:25.554 04:55:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:25.554 04:55:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:25.554 04:55:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:25.554 04:55:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk 
Nvme2n3 00:07:25.811 04:55:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:07:25.811 04:55:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:07:25.811 04:55:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:07:25.811 04:55:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd5 00:07:25.811 04:55:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:25.811 04:55:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:25.811 04:55:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:25.811 04:55:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd5 /proc/partitions 00:07:25.811 04:55:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:25.811 04:55:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:25.811 04:55:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:25.811 04:55:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:25.811 1+0 records in 00:07:25.811 1+0 records out 00:07:25.811 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00054403 s, 7.5 MB/s 00:07:25.811 04:55:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:25.811 04:55:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:25.811 04:55:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:25.811 04:55:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:25.811 04:55:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:25.811 04:55:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:25.811 04:55:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:25.811 04:55:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:07:26.067 04:55:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd6 00:07:26.067 04:55:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd6 00:07:26.067 04:55:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd6 00:07:26.067 04:55:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd6 00:07:26.067 04:55:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:26.067 04:55:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:26.067 04:55:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:26.067 04:55:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd6 /proc/partitions 00:07:26.067 04:55:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:26.067 04:55:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:26.067 04:55:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:26.067 04:55:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 
-- # dd if=/dev/nbd6 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:26.067 1+0 records in 00:07:26.067 1+0 records out 00:07:26.067 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000366257 s, 11.2 MB/s 00:07:26.067 04:55:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:26.067 04:55:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:26.067 04:55:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:26.067 04:55:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:26.067 04:55:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:26.067 04:55:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:26.067 04:55:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:26.067 04:55:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:26.324 04:55:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:07:26.324 { 00:07:26.324 "nbd_device": "/dev/nbd0", 00:07:26.324 "bdev_name": "Nvme0n1" 00:07:26.324 }, 00:07:26.324 { 00:07:26.324 "nbd_device": "/dev/nbd1", 00:07:26.324 "bdev_name": "Nvme1n1p1" 00:07:26.324 }, 00:07:26.324 { 00:07:26.324 "nbd_device": "/dev/nbd2", 00:07:26.324 "bdev_name": "Nvme1n1p2" 00:07:26.324 }, 00:07:26.324 { 00:07:26.324 "nbd_device": "/dev/nbd3", 00:07:26.324 "bdev_name": "Nvme2n1" 00:07:26.324 }, 00:07:26.324 { 00:07:26.324 "nbd_device": "/dev/nbd4", 00:07:26.324 "bdev_name": "Nvme2n2" 00:07:26.324 }, 00:07:26.324 { 00:07:26.324 "nbd_device": "/dev/nbd5", 00:07:26.324 "bdev_name": "Nvme2n3" 00:07:26.324 }, 00:07:26.324 { 00:07:26.324 "nbd_device": "/dev/nbd6", 00:07:26.324 "bdev_name": "Nvme3n1" 00:07:26.324 } 00:07:26.324 ]' 00:07:26.324 04:55:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:07:26.324 04:55:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:07:26.324 { 00:07:26.324 "nbd_device": "/dev/nbd0", 00:07:26.324 "bdev_name": "Nvme0n1" 00:07:26.324 }, 00:07:26.324 { 00:07:26.324 "nbd_device": "/dev/nbd1", 00:07:26.324 "bdev_name": "Nvme1n1p1" 00:07:26.324 }, 00:07:26.324 { 00:07:26.324 "nbd_device": "/dev/nbd2", 00:07:26.324 "bdev_name": "Nvme1n1p2" 00:07:26.324 }, 00:07:26.324 { 00:07:26.324 "nbd_device": "/dev/nbd3", 00:07:26.324 "bdev_name": "Nvme2n1" 00:07:26.324 }, 00:07:26.324 { 00:07:26.324 "nbd_device": "/dev/nbd4", 00:07:26.324 "bdev_name": "Nvme2n2" 00:07:26.324 }, 00:07:26.324 { 00:07:26.324 "nbd_device": "/dev/nbd5", 00:07:26.324 "bdev_name": "Nvme2n3" 00:07:26.324 }, 00:07:26.324 { 00:07:26.324 "nbd_device": "/dev/nbd6", 00:07:26.324 "bdev_name": "Nvme3n1" 00:07:26.324 } 00:07:26.324 ]' 00:07:26.324 04:55:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:07:26.324 04:55:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6' 00:07:26.324 04:55:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:26.324 04:55:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' 
'/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6') 00:07:26.324 04:55:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:26.324 04:55:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:07:26.324 04:55:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:26.324 04:55:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:26.585 04:55:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:26.585 04:55:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:26.585 04:55:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:26.585 04:55:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:26.585 04:55:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:26.585 04:55:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:26.585 04:55:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:26.585 04:55:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:26.585 04:55:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:26.585 04:55:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:26.585 04:55:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:26.585 04:55:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:26.585 04:55:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:26.585 04:55:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:26.585 04:55:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:26.585 04:55:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:26.585 04:55:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:26.585 04:55:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:26.585 04:55:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:26.585 04:55:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:07:26.842 04:55:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:07:26.842 04:55:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:07:26.842 04:55:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:07:26.842 04:55:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:26.842 04:55:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:26.842 04:55:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:07:26.843 04:55:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:26.843 04:55:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:26.843 04:55:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:26.843 04:55:04 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:07:27.102 04:55:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:07:27.102 04:55:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:07:27.102 04:55:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:07:27.102 04:55:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:27.102 04:55:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:27.102 04:55:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:07:27.102 04:55:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:27.102 04:55:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:27.102 04:55:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:27.102 04:55:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:07:27.363 04:55:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:07:27.363 04:55:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:07:27.363 04:55:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:07:27.363 04:55:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:27.363 04:55:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:27.363 04:55:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:07:27.363 04:55:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:27.363 04:55:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:27.363 04:55:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:27.363 04:55:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:07:27.363 04:55:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:07:27.363 04:55:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:07:27.363 04:55:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:07:27.363 04:55:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:27.363 04:55:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:27.363 04:55:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:07:27.363 04:55:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:27.363 04:55:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:27.363 04:55:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:27.363 04:55:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd6 00:07:27.622 04:55:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd6 00:07:27.622 04:55:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd6 00:07:27.622 04:55:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd6 
00:07:27.622 04:55:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:27.622 04:55:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:27.622 04:55:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd6 /proc/partitions 00:07:27.622 04:55:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:27.622 04:55:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:27.622 04:55:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:27.622 04:55:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:27.622 04:55:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:27.881 04:55:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:27.881 04:55:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:27.881 04:55:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:27.881 04:55:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:27.881 04:55:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:27.881 04:55:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:07:27.881 04:55:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:07:27.881 04:55:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:07:27.881 04:55:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:07:27.881 04:55:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:07:27.881 04:55:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:07:27.881 04:55:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:07:27.881 04:55:06 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:07:27.881 04:55:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:27.881 04:55:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:27.881 04:55:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:27.881 04:55:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:07:27.881 04:55:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:27.881 04:55:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:07:27.881 04:55:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:27.881 04:55:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:27.881 04:55:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:27.881 04:55:06 
blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:07:27.881 04:55:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:27.881 04:55:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:07:27.881 04:55:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:27.881 04:55:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:27.881 04:55:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:07:28.139 /dev/nbd0 00:07:28.139 04:55:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:28.139 04:55:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:28.139 04:55:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:07:28.139 04:55:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:28.139 04:55:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:28.139 04:55:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:28.139 04:55:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:07:28.139 04:55:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:28.139 04:55:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:28.139 04:55:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:28.139 04:55:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:28.139 1+0 records in 00:07:28.139 1+0 records out 00:07:28.139 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00119732 s, 3.4 MB/s 00:07:28.139 04:55:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:28.139 04:55:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:28.139 04:55:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:28.139 04:55:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:28.139 04:55:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:28.139 04:55:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:28.139 04:55:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:28.139 04:55:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p1 /dev/nbd1 00:07:28.400 /dev/nbd1 00:07:28.400 04:55:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:28.400 04:55:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:28.400 04:55:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:07:28.400 04:55:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:28.400 04:55:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:28.400 04:55:06 
blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:28.400 04:55:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:07:28.400 04:55:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:28.400 04:55:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:28.400 04:55:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:28.400 04:55:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:28.400 1+0 records in 00:07:28.400 1+0 records out 00:07:28.400 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.0010227 s, 4.0 MB/s 00:07:28.400 04:55:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:28.400 04:55:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:28.400 04:55:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:28.400 04:55:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:28.400 04:55:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:28.400 04:55:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:28.400 04:55:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:28.400 04:55:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p2 /dev/nbd10 00:07:28.661 /dev/nbd10 00:07:28.661 04:55:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:07:28.661 04:55:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:07:28.661 04:55:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd10 00:07:28.661 04:55:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:28.661 04:55:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:28.661 04:55:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:28.661 04:55:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd10 /proc/partitions 00:07:28.661 04:55:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:28.661 04:55:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:28.661 04:55:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:28.661 04:55:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:28.661 1+0 records in 00:07:28.661 1+0 records out 00:07:28.661 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.0010259 s, 4.0 MB/s 00:07:28.661 04:55:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:28.661 04:55:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:28.661 04:55:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:28.661 04:55:06 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:28.661 04:55:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:28.661 04:55:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:28.661 04:55:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:28.661 04:55:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd11 00:07:28.921 /dev/nbd11 00:07:28.921 04:55:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:07:28.921 04:55:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:07:28.921 04:55:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd11 00:07:28.921 04:55:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:28.921 04:55:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:28.921 04:55:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:28.921 04:55:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd11 /proc/partitions 00:07:28.921 04:55:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:28.921 04:55:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:28.921 04:55:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:28.921 04:55:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:28.921 1+0 records in 00:07:28.921 1+0 records out 00:07:28.921 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000746594 s, 5.5 MB/s 00:07:28.921 04:55:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:28.921 04:55:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:28.921 04:55:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:28.921 04:55:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:28.921 04:55:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:28.921 04:55:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:28.921 04:55:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:28.921 04:55:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd12 00:07:29.181 /dev/nbd12 00:07:29.181 04:55:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:07:29.181 04:55:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:07:29.181 04:55:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd12 00:07:29.181 04:55:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:29.181 04:55:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:29.181 04:55:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:29.181 04:55:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd12 /proc/partitions 
00:07:29.181 04:55:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:29.181 04:55:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:29.181 04:55:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:29.181 04:55:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:29.181 1+0 records in 00:07:29.181 1+0 records out 00:07:29.181 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00174925 s, 2.3 MB/s 00:07:29.181 04:55:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:29.181 04:55:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:29.181 04:55:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:29.181 04:55:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:29.181 04:55:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:29.181 04:55:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:29.181 04:55:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:29.181 04:55:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd13 00:07:29.442 /dev/nbd13 00:07:29.442 04:55:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:07:29.442 04:55:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:07:29.442 04:55:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd13 00:07:29.442 04:55:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:29.442 04:55:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:29.442 04:55:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:29.442 04:55:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd13 /proc/partitions 00:07:29.442 04:55:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:29.442 04:55:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:29.442 04:55:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:29.442 04:55:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:29.442 1+0 records in 00:07:29.442 1+0 records out 00:07:29.442 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00126178 s, 3.2 MB/s 00:07:29.442 04:55:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:29.442 04:55:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:29.442 04:55:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:29.442 04:55:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:29.442 04:55:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:29.442 04:55:07 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:29.442 04:55:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:29.442 04:55:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd14 00:07:29.442 /dev/nbd14 00:07:29.442 04:55:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd14 00:07:29.442 04:55:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd14 00:07:29.442 04:55:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd14 00:07:29.442 04:55:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:29.442 04:55:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:29.442 04:55:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:29.442 04:55:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd14 /proc/partitions 00:07:29.442 04:55:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:29.442 04:55:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:29.442 04:55:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:29.442 04:55:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd14 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:29.701 1+0 records in 00:07:29.701 1+0 records out 00:07:29.701 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00105543 s, 3.9 MB/s 00:07:29.701 04:55:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:29.701 04:55:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:29.701 04:55:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:29.701 04:55:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:29.701 04:55:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:29.701 04:55:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:29.701 04:55:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:29.701 04:55:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:29.702 04:55:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:29.702 04:55:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:29.702 04:55:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:29.702 { 00:07:29.702 "nbd_device": "/dev/nbd0", 00:07:29.702 "bdev_name": "Nvme0n1" 00:07:29.702 }, 00:07:29.702 { 00:07:29.702 "nbd_device": "/dev/nbd1", 00:07:29.702 "bdev_name": "Nvme1n1p1" 00:07:29.702 }, 00:07:29.702 { 00:07:29.702 "nbd_device": "/dev/nbd10", 00:07:29.702 "bdev_name": "Nvme1n1p2" 00:07:29.702 }, 00:07:29.702 { 00:07:29.702 "nbd_device": "/dev/nbd11", 00:07:29.702 "bdev_name": "Nvme2n1" 00:07:29.702 }, 00:07:29.702 { 00:07:29.702 "nbd_device": "/dev/nbd12", 00:07:29.702 "bdev_name": "Nvme2n2" 00:07:29.702 }, 00:07:29.702 { 00:07:29.702 "nbd_device": "/dev/nbd13", 00:07:29.702 "bdev_name": "Nvme2n3" 
00:07:29.702 }, 00:07:29.702 { 00:07:29.702 "nbd_device": "/dev/nbd14", 00:07:29.702 "bdev_name": "Nvme3n1" 00:07:29.702 } 00:07:29.702 ]' 00:07:29.702 04:55:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:29.702 04:55:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:29.702 { 00:07:29.702 "nbd_device": "/dev/nbd0", 00:07:29.702 "bdev_name": "Nvme0n1" 00:07:29.702 }, 00:07:29.702 { 00:07:29.702 "nbd_device": "/dev/nbd1", 00:07:29.702 "bdev_name": "Nvme1n1p1" 00:07:29.702 }, 00:07:29.702 { 00:07:29.702 "nbd_device": "/dev/nbd10", 00:07:29.702 "bdev_name": "Nvme1n1p2" 00:07:29.702 }, 00:07:29.702 { 00:07:29.702 "nbd_device": "/dev/nbd11", 00:07:29.702 "bdev_name": "Nvme2n1" 00:07:29.702 }, 00:07:29.702 { 00:07:29.702 "nbd_device": "/dev/nbd12", 00:07:29.702 "bdev_name": "Nvme2n2" 00:07:29.702 }, 00:07:29.702 { 00:07:29.702 "nbd_device": "/dev/nbd13", 00:07:29.702 "bdev_name": "Nvme2n3" 00:07:29.702 }, 00:07:29.702 { 00:07:29.702 "nbd_device": "/dev/nbd14", 00:07:29.702 "bdev_name": "Nvme3n1" 00:07:29.702 } 00:07:29.702 ]' 00:07:29.702 04:55:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:29.702 /dev/nbd1 00:07:29.702 /dev/nbd10 00:07:29.702 /dev/nbd11 00:07:29.702 /dev/nbd12 00:07:29.702 /dev/nbd13 00:07:29.702 /dev/nbd14' 00:07:29.702 04:55:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:29.702 /dev/nbd1 00:07:29.702 /dev/nbd10 00:07:29.702 /dev/nbd11 00:07:29.702 /dev/nbd12 00:07:29.702 /dev/nbd13 00:07:29.702 /dev/nbd14' 00:07:29.702 04:55:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:29.702 04:55:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=7 00:07:29.702 04:55:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 7 00:07:29.702 04:55:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=7 00:07:29.702 04:55:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 7 -ne 7 ']' 00:07:29.702 04:55:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' write 00:07:29.702 04:55:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:07:29.702 04:55:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:29.702 04:55:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:29.702 04:55:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:07:29.702 04:55:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:29.702 04:55:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:07:29.962 256+0 records in 00:07:29.962 256+0 records out 00:07:29.962 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00760599 s, 138 MB/s 00:07:29.962 04:55:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:29.962 04:55:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:29.962 256+0 records in 00:07:29.962 256+0 records out 00:07:29.962 1048576 bytes (1.0 MB, 1.0 MiB) copied, 
0.190465 s, 5.5 MB/s 00:07:29.962 04:55:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:29.962 04:55:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:30.223 256+0 records in 00:07:30.223 256+0 records out 00:07:30.223 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.196714 s, 5.3 MB/s 00:07:30.223 04:55:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:30.223 04:55:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:07:30.223 256+0 records in 00:07:30.223 256+0 records out 00:07:30.223 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.117072 s, 9.0 MB/s 00:07:30.482 04:55:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:30.482 04:55:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:07:30.482 256+0 records in 00:07:30.482 256+0 records out 00:07:30.482 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.164833 s, 6.4 MB/s 00:07:30.482 04:55:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:30.482 04:55:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:07:30.742 256+0 records in 00:07:30.742 256+0 records out 00:07:30.742 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.131196 s, 8.0 MB/s 00:07:30.742 04:55:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:30.742 04:55:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:07:30.742 256+0 records in 00:07:30.742 256+0 records out 00:07:30.742 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.149812 s, 7.0 MB/s 00:07:30.742 04:55:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:30.743 04:55:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd14 bs=4096 count=256 oflag=direct 00:07:31.003 256+0 records in 00:07:31.003 256+0 records out 00:07:31.003 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0971941 s, 10.8 MB/s 00:07:31.003 04:55:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' verify 00:07:31.003 04:55:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:07:31.003 04:55:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:31.003 04:55:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:07:31.003 04:55:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:07:31.003 04:55:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:31.003 04:55:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:31.003 04:55:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in 
"${nbd_list[@]}" 00:07:31.003 04:55:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:07:31.003 04:55:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:31.003 04:55:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:07:31.003 04:55:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:31.003 04:55:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:07:31.003 04:55:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:31.003 04:55:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:07:31.003 04:55:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:31.003 04:55:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:07:31.003 04:55:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:31.003 04:55:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:07:31.003 04:55:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:31.003 04:55:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd14 00:07:31.003 04:55:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:07:31.003 04:55:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:07:31.003 04:55:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:31.003 04:55:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:07:31.003 04:55:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:31.003 04:55:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:07:31.003 04:55:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:31.003 04:55:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:31.264 04:55:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:31.264 04:55:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:31.264 04:55:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:31.264 04:55:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:31.264 04:55:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:31.264 04:55:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:31.265 04:55:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:31.265 04:55:09 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@45 -- # return 0 00:07:31.265 04:55:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:31.265 04:55:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:31.265 04:55:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:31.265 04:55:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:31.265 04:55:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:31.265 04:55:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:31.265 04:55:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:31.265 04:55:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:31.265 04:55:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:31.265 04:55:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:31.265 04:55:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:31.265 04:55:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:07:31.526 04:55:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:07:31.526 04:55:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:07:31.526 04:55:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:07:31.526 04:55:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:31.526 04:55:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:31.526 04:55:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:07:31.526 04:55:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:31.526 04:55:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:31.526 04:55:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:31.526 04:55:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:07:31.787 04:55:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:07:31.787 04:55:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:07:31.787 04:55:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:07:31.787 04:55:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:31.787 04:55:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:31.787 04:55:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:07:31.787 04:55:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:31.787 04:55:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:31.787 04:55:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:31.787 04:55:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:07:32.092 04:55:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename 
/dev/nbd12 00:07:32.092 04:55:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:07:32.092 04:55:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:07:32.092 04:55:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:32.092 04:55:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:32.092 04:55:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:07:32.092 04:55:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:32.092 04:55:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:32.092 04:55:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:32.092 04:55:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:07:32.355 04:55:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:07:32.355 04:55:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:07:32.355 04:55:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:07:32.355 04:55:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:32.355 04:55:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:32.355 04:55:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:07:32.355 04:55:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:32.355 04:55:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:32.355 04:55:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:32.355 04:55:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd14 00:07:32.355 04:55:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd14 00:07:32.355 04:55:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd14 00:07:32.355 04:55:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd14 00:07:32.355 04:55:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:32.355 04:55:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:32.355 04:55:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd14 /proc/partitions 00:07:32.355 04:55:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:32.355 04:55:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:32.355 04:55:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:32.355 04:55:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:32.355 04:55:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:32.615 04:55:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:32.615 04:55:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:32.615 04:55:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:32.615 04:55:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # 
nbd_disks_name= 00:07:32.615 04:55:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:07:32.615 04:55:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:32.615 04:55:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:07:32.615 04:55:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:07:32.615 04:55:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:07:32.615 04:55:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:07:32.615 04:55:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:32.615 04:55:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:07:32.615 04:55:10 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:07:32.615 04:55:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:32.615 04:55:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:07:32.615 04:55:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:07:32.875 malloc_lvol_verify 00:07:32.875 04:55:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:07:33.136 df3f84fb-856b-4657-8250-738b218936d9 00:07:33.136 04:55:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:07:33.395 1660aefe-611e-4faf-b920-e6259cb7d04c 00:07:33.396 04:55:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:07:33.396 /dev/nbd0 00:07:33.396 04:55:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:07:33.396 04:55:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:07:33.396 04:55:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:07:33.396 04:55:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:07:33.396 04:55:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:07:33.396 mke2fs 1.47.0 (5-Feb-2023) 00:07:33.396 Discarding device blocks: 0/4096 done 00:07:33.396 Creating filesystem with 4096 1k blocks and 1024 inodes 00:07:33.396 00:07:33.396 Allocating group tables: 0/1 done 00:07:33.396 Writing inode tables: 0/1 done 00:07:33.396 Creating journal (1024 blocks): done 00:07:33.396 Writing superblocks and filesystem accounting information: 0/1 done 00:07:33.396 00:07:33.396 04:55:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:07:33.396 04:55:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:33.396 04:55:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:07:33.396 04:55:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:33.396 04:55:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:07:33.396 04:55:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in 
"${nbd_list[@]}" 00:07:33.396 04:55:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:33.657 04:55:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:33.657 04:55:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:33.657 04:55:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:33.657 04:55:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:33.657 04:55:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:33.657 04:55:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:33.657 04:55:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:33.657 04:55:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:33.657 04:55:11 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 73411 00:07:33.657 04:55:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@950 -- # '[' -z 73411 ']' 00:07:33.657 04:55:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@954 -- # kill -0 73411 00:07:33.657 04:55:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@955 -- # uname 00:07:33.657 04:55:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:33.657 04:55:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 73411 00:07:33.657 04:55:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:33.657 killing process with pid 73411 00:07:33.657 04:55:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:33.657 04:55:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@968 -- # echo 'killing process with pid 73411' 00:07:33.657 04:55:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@969 -- # kill 73411 00:07:33.657 04:55:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@974 -- # wait 73411 00:07:33.919 04:55:12 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:07:33.919 00:07:33.919 real 0m10.355s 00:07:33.919 user 0m14.592s 00:07:33.919 sys 0m3.598s 00:07:33.919 04:55:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:33.919 04:55:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:07:33.919 ************************************ 00:07:33.919 END TEST bdev_nbd 00:07:33.919 ************************************ 00:07:33.919 04:55:12 blockdev_nvme_gpt -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:07:33.919 04:55:12 blockdev_nvme_gpt -- bdev/blockdev.sh@763 -- # '[' gpt = nvme ']' 00:07:33.919 skipping fio tests on NVMe due to multi-ns failures. 00:07:33.919 04:55:12 blockdev_nvme_gpt -- bdev/blockdev.sh@763 -- # '[' gpt = gpt ']' 00:07:33.919 04:55:12 blockdev_nvme_gpt -- bdev/blockdev.sh@765 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 
00:07:33.919 04:55:12 blockdev_nvme_gpt -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:07:33.919 04:55:12 blockdev_nvme_gpt -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:07:33.919 04:55:12 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']' 00:07:33.919 04:55:12 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:33.919 04:55:12 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:33.919 ************************************ 00:07:33.919 START TEST bdev_verify 00:07:33.919 ************************************ 00:07:33.919 04:55:12 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:07:34.178 [2024-12-06 04:55:12.170760] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:07:34.178 [2024-12-06 04:55:12.170874] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73818 ] 00:07:34.178 [2024-12-06 04:55:12.306713] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:34.178 [2024-12-06 04:55:12.339903] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:07:34.178 [2024-12-06 04:55:12.339972] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:34.745 Running I/O for 5 seconds... 
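The bdevperf invocation above uses -q 128 -o 4096 -w verify -t 5 -C -m 0x3: queue depth 128 per job, 4 KiB I/Os, a verify workload (each write is read back and its pattern checked) for 5 seconds on cores 0 and 1, with -C letting both cores drive every bdev, which is why each device reports two job rows (Core Mask 0x1 and 0x2) in the table that follows. The MiB/s column is simply IOPS times the 4096-byte I/O size; a quick check of the first row:

    # Sanity-check one row of the verify table below (IO size 4096 bytes):
    awk 'BEGIN { printf "%.2f MiB/s\n", 1410.59 * 4096 / 1048576 }'
    # prints 5.51 MiB/s, matching the Nvme0n1 (Core Mask 0x1) row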
00:07:37.071 20992.00 IOPS, 82.00 MiB/s
[2024-12-06T04:55:16.244Z] 20544.00 IOPS, 80.25 MiB/s
[2024-12-06T04:55:17.187Z] 19840.00 IOPS, 77.50 MiB/s
[2024-12-06T04:55:18.126Z] 19760.00 IOPS, 77.19 MiB/s
[2024-12-06T04:55:18.126Z] 19622.40 IOPS, 76.65 MiB/s
00:07:39.894 Latency(us)
00:07:39.894 [2024-12-06T04:55:18.126Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:07:39.894 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:07:39.894 Verification LBA range: start 0x0 length 0xbd0bd
00:07:39.894 Nvme0n1 : 5.08 1410.59 5.51 0.00 0.00 90408.75 16333.59 83886.08
00:07:39.894 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:07:39.894 Verification LBA range: start 0xbd0bd length 0xbd0bd
00:07:39.894 Nvme0n1 : 5.08 1360.78 5.32 0.00 0.00 92871.50 19660.80 86305.87
00:07:39.894 Job: Nvme1n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:07:39.894 Verification LBA range: start 0x0 length 0x4ff80
00:07:39.894 Nvme1n1p1 : 5.09 1409.55 5.51 0.00 0.00 90309.89 17140.18 78643.20
00:07:39.894 Job: Nvme1n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:07:39.894 Verification LBA range: start 0x4ff80 length 0x4ff80
00:07:39.894 Nvme1n1p1 : 5.09 1370.04 5.35 0.00 0.00 92073.22 3856.54 89532.26
00:07:39.894 Job: Nvme1n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:07:39.894 Verification LBA range: start 0x0 length 0x4ff7f
00:07:39.894 Nvme1n1p2 : 5.09 1408.68 5.50 0.00 0.00 90173.38 18551.73 78239.90
00:07:39.894 Job: Nvme1n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:07:39.894 Verification LBA range: start 0x4ff7f length 0x4ff7f
00:07:39.894 Nvme1n1p2 : 5.11 1377.83 5.38 0.00 0.00 91415.40 9779.99 87515.77
00:07:39.894 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:07:39.894 Verification LBA range: start 0x0 length 0x80000
00:07:39.894 Nvme2n1 : 5.09 1408.29 5.50 0.00 0.00 90022.01 20467.40 75013.51
00:07:39.894 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:07:39.894 Verification LBA range: start 0x80000 length 0x80000
00:07:39.894 Nvme2n1 : 5.07 1362.39 5.32 0.00 0.00 93655.53 15728.64 90338.86
00:07:39.894 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:07:39.894 Verification LBA range: start 0x0 length 0x80000
00:07:39.894 Nvme2n2 : 5.09 1407.91 5.50 0.00 0.00 89862.33 20971.52 74610.22
00:07:39.894 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:07:39.894 Verification LBA range: start 0x80000 length 0x80000
00:07:39.894 Nvme2n2 : 5.07 1362.01 5.32 0.00 0.00 93419.00 19559.98 84289.38
00:07:39.894 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:07:39.894 Verification LBA range: start 0x0 length 0x80000
00:07:39.894 Nvme2n3 : 5.09 1407.34 5.50 0.00 0.00 89702.34 17946.78 78239.90
00:07:39.894 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:07:39.894 Verification LBA range: start 0x80000 length 0x80000
00:07:39.894 Nvme2n3 : 5.08 1361.60 5.32 0.00 0.00 93249.30 21778.12 79853.10
00:07:39.894 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:07:39.894 Verification LBA range: start 0x0 length 0x20000
00:07:39.894 Nvme3n1 : 5.10 1417.97 5.54 0.00 0.00 88979.82 3604.48 81869.59
00:07:39.894 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:07:39.894 Verification LBA range: start 0x20000 length 0x20000
00:07:39.895 Nvme3n1 : 5.08 1361.19 5.32 0.00 0.00 93069.27 22080.59 82676.18
00:07:39.895 [2024-12-06T04:55:18.127Z] ===================================================================================================================
00:07:39.895 [2024-12-06T04:55:18.127Z] Total : 19426.16 75.88 0.00 0.00 91344.94 3604.48 90338.86
00:07:40.467
00:07:40.467 real 0m6.369s
00:07:40.467 user 0m12.021s
00:07:40.467 sys 0m0.191s
00:07:40.467 04:55:18 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1126 -- # xtrace_disable
00:07:40.467 ************************************
00:07:40.467 END TEST bdev_verify
00:07:40.467 ************************************
00:07:40.467 04:55:18 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@10 -- # set +x
00:07:40.467 04:55:18 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 ''
00:07:40.467 04:55:18 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']'
00:07:40.467 04:55:18 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable
00:07:40.467 04:55:18 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x
00:07:40.467 ************************************
00:07:40.467 START TEST bdev_verify_big_io
00:07:40.467 ************************************
00:07:40.467 04:55:18 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 ''
00:07:40.467 [2024-12-06 04:55:18.586399] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization...
00:07:40.467 [2024-12-06 04:55:18.586527] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73905 ]
00:07:40.727 [2024-12-06 04:55:18.718613] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2
00:07:40.727 [2024-12-06 04:55:18.753786] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:07:40.727 [2024-12-06 04:55:18.753812] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1
00:07:40.986 Running I/O for 5 seconds...
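The big-I/O pass repeats the same verify workload with -o 65536, so every operation moves 64 KiB: aggregate IOPS drop from roughly 19.4K in the previous table to about 1.5K, and the average latencies in the next table climb to around a second. Little's law ties the totals together, since 7 bdevs times 2 cores gives 14 jobs at queue depth 128, i.e. up to 1792 commands in flight:

    # In-flight I/O ~= IOPS x average latency (the latency column is in microseconds):
    awk 'BEGIN { printf "%.0f outstanding\n", 1509.63 * 1035432.92 / 1e6 }'
    # prints 1563, the same order as the 1792 queue slots the flags configure
    # (ramp-up and the final drain plausibly account for the shortfall)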
00:07:47.295 1723.00 IOPS, 107.69 MiB/s
[2024-12-06T04:55:25.527Z] 3190.50 IOPS, 199.41 MiB/s
[2024-12-06T04:55:25.527Z] 2939.33 IOPS, 183.71 MiB/s
00:07:47.295 Latency(us)
00:07:47.295 [2024-12-06T04:55:25.527Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:07:47.295 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:07:47.295 Verification LBA range: start 0x0 length 0xbd0b
00:07:47.295 Nvme0n1 : 5.79 95.40 5.96 0.00 0.00 1275955.51 10788.23 1832588.21
00:07:47.295 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:07:47.295 Verification LBA range: start 0xbd0b length 0xbd0b
00:07:47.295 Nvme0n1 : 6.10 68.18 4.26 0.00 0.00 1724831.90 70577.23 2193943.63
00:07:47.295 Job: Nvme1n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:07:47.295 Verification LBA range: start 0x0 length 0x4ff8
00:07:47.295 Nvme1n1p1 : 5.92 98.86 6.18 0.00 0.00 1199780.87 30852.33 1858399.31
00:07:47.295 Job: Nvme1n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:07:47.295 Verification LBA range: start 0x4ff8 length 0x4ff8
00:07:47.295 Nvme1n1p1 : 6.15 70.28 4.39 0.00 0.00 1633397.98 160512.79 2219754.73
00:07:47.295 Job: Nvme1n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:07:47.295 Verification LBA range: start 0x0 length 0x4ff7
00:07:47.295 Nvme1n1p2 : 5.92 99.50 6.22 0.00 0.00 1153754.73 47387.57 1897115.96
00:07:47.295 Job: Nvme1n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:07:47.295 Verification LBA range: start 0x4ff7 length 0x4ff7
00:07:47.295 Nvme1n1p2 : 6.03 111.20 6.95 0.00 0.00 1014034.48 82676.18 1187310.67
00:07:47.295 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:07:47.295 Verification LBA range: start 0x0 length 0x8000
00:07:47.295 Nvme2n1 : 6.04 110.30 6.89 0.00 0.00 1011145.24 69770.63 1303460.63
00:07:47.295 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:07:47.295 Verification LBA range: start 0x8000 length 0x8000
00:07:47.295 Nvme2n1 : 6.00 112.04 7.00 0.00 0.00 975782.73 82676.18 1213121.77
00:07:47.295 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:07:47.295 Verification LBA range: start 0x0 length 0x8000
00:07:47.295 Nvme2n2 : 6.07 108.18 6.76 0.00 0.00 994449.60 44362.83 1974549.27
00:07:47.295 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:07:47.295 Verification LBA range: start 0x8000 length 0x8000
00:07:47.295 Nvme2n2 : 6.11 121.36 7.58 0.00 0.00 872554.39 21072.34 1232480.10
00:07:47.295 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:07:47.295 Verification LBA range: start 0x0 length 0x8000
00:07:47.296 Nvme2n3 : 6.12 116.20 7.26 0.00 0.00 897300.51 43152.94 1987454.82
00:07:47.296 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:07:47.296 Verification LBA range: start 0x8000 length 0x8000
00:07:47.296 Nvme2n3 : 6.16 141.68 8.85 0.00 0.00 730784.61 6125.10 1258291.20
00:07:47.296 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:07:47.296 Verification LBA range: start 0x0 length 0x2000
00:07:47.296 Nvme3n1 : 6.21 152.37 9.52 0.00 0.00 669465.29 535.63 2013265.92
00:07:47.296 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:07:47.296 Verification LBA range: start 0x2000 length 0x2000
00:07:47.296 Nvme3n1 : 5.91 104.07 6.50 0.00 0.00 1173480.07 23391.31 1271196.75
00:07:47.296 [2024-12-06T04:55:25.528Z] ===================================================================================================================
00:07:47.296 [2024-12-06T04:55:25.528Z] Total : 1509.63 94.35 0.00 0.00 1035432.92 535.63 2219754.73
00:07:48.238
00:07:48.238 real 0m7.647s
00:07:48.238 user 0m14.580s
00:07:48.239 sys 0m0.209s
00:07:48.239 04:55:26 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1126 -- # xtrace_disable
00:07:48.239 ************************************
00:07:48.239 END TEST bdev_verify_big_io
00:07:48.239 ************************************
00:07:48.239 04:55:26 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x
00:07:48.239 04:55:26 blockdev_nvme_gpt -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:07:48.239 04:55:26 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']'
00:07:48.239 04:55:26 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable
00:07:48.239 04:55:26 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x
00:07:48.239 ************************************
00:07:48.239 START TEST bdev_write_zeroes
00:07:48.239 ************************************
00:07:48.239 04:55:26 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:07:48.239 [2024-12-06 04:55:26.289770] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization...
00:07:48.239 [2024-12-06 04:55:26.289898] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74014 ]
00:07:48.239 [2024-12-06 04:55:26.426135] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:48.239 [2024-12-06 04:55:26.459748] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:07:48.811 Running I/O for 1 seconds...
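The write_zeroes pass swaps verify for WRITE ZEROES commands at the same 4 KiB size and depth 128, on a single core for one second. Whether a bdev handles these natively shows up in its supported_io_types map; the bdev_get_bdevs dump at the end of this log reports "write_zeroes": true for the GPT partition bdev. With a running target the same field can be queried directly, e.g. (assumes jq is installed and the target listens on the default /var/tmp/spdk.sock):

    /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs \
        | jq -r '.[] | "\(.name)\t\(.supported_io_types.write_zeroes)"'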
00:07:49.752 57792.00 IOPS, 225.75 MiB/s
00:07:49.752 Latency(us)
00:07:49.752 [2024-12-06T04:55:27.984Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:07:49.752 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:07:49.752 Nvme0n1 : 1.03 8236.37 32.17 0.00 0.00 15506.33 11846.89 26012.75
00:07:49.752 Job: Nvme1n1p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:07:49.752 Nvme1n1p1 : 1.03 8226.31 32.13 0.00 0.00 15503.60 11544.42 25508.63
00:07:49.752 Job: Nvme1n1p2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:07:49.752 Nvme1n1p2 : 1.03 8216.24 32.09 0.00 0.00 15415.66 10989.88 24197.91
00:07:49.752 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:07:49.752 Nvme2n1 : 1.03 8207.02 32.06 0.00 0.00 15384.42 9376.69 24399.56
00:07:49.752 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:07:49.752 Nvme2n2 : 1.03 8197.77 32.02 0.00 0.00 15376.97 9074.22 24601.21
00:07:49.752 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:07:49.752 Nvme2n3 : 1.03 8188.55 31.99 0.00 0.00 15369.29 8620.50 24802.86
00:07:49.752 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:07:49.752 Nvme3n1 : 1.03 8179.40 31.95 0.00 0.00 15362.26 8368.44 26617.70
00:07:49.752 [2024-12-06T04:55:27.984Z] ===================================================================================================================
00:07:49.752 [2024-12-06T04:55:27.984Z] Total : 57451.65 224.42 0.00 0.00 15416.93 8368.44 26617.70
00:07:50.014
00:07:50.014 real 0m1.843s
00:07:50.014 user 0m1.568s
00:07:50.014 sys 0m0.162s
00:07:50.014 ************************************
00:07:50.014 END TEST bdev_write_zeroes
00:07:50.014 ************************************
00:07:50.014 04:55:28 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1126 -- # xtrace_disable
00:07:50.014 04:55:28 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x
00:07:50.014 04:55:28 blockdev_nvme_gpt -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:07:50.014 04:55:28 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']'
00:07:50.014 04:55:28 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable
00:07:50.014 04:55:28 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x
00:07:50.014 ************************************
00:07:50.014 START TEST bdev_json_nonenclosed
00:07:50.014 ************************************
00:07:50.014 04:55:28 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:07:50.014 [2024-12-06 04:55:28.195157] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization...
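bdev_json_nonenclosed, and the bdev_json_nonarray test after it, are negative tests: each hands bdevperf a deliberately malformed --json config and passes only if the app refuses to start, which is what the json_config_prepare_ctx errors and the "spdk_app_stop'd on non-zero" warnings below show. A sketch of the same check; the fixture here is an illustrative stand-in with the same defect as nonenclosed.json (a top-level body not enclosed in {}), not the actual file contents:

    cat > /tmp/bad.json <<'EOF'
    "subsystems": []
    EOF
    if /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
            --json /tmp/bad.json -q 128 -o 4096 -w write_zeroes -t 1 ''; then
        echo "FAIL: malformed config was accepted" >&2
        exit 1
    fi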
00:07:50.014 [2024-12-06 04:55:28.195270] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74056 ] 00:07:50.275 [2024-12-06 04:55:28.331483] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:50.275 [2024-12-06 04:55:28.365203] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:50.275 [2024-12-06 04:55:28.365292] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:07:50.275 [2024-12-06 04:55:28.365308] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:07:50.275 [2024-12-06 04:55:28.365319] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:50.275 00:07:50.275 real 0m0.309s 00:07:50.275 user 0m0.116s 00:07:50.275 sys 0m0.090s 00:07:50.275 04:55:28 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:50.275 ************************************ 00:07:50.275 END TEST bdev_json_nonenclosed 00:07:50.275 ************************************ 00:07:50.275 04:55:28 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:07:50.275 04:55:28 blockdev_nvme_gpt -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:50.275 04:55:28 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:07:50.275 04:55:28 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:50.275 04:55:28 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:50.537 ************************************ 00:07:50.537 START TEST bdev_json_nonarray 00:07:50.537 ************************************ 00:07:50.537 04:55:28 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:50.537 [2024-12-06 04:55:28.566780] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:07:50.537 [2024-12-06 04:55:28.566895] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74076 ] 00:07:50.537 [2024-12-06 04:55:28.699723] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:50.537 [2024-12-06 04:55:28.734328] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:50.537 [2024-12-06 04:55:28.734422] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
00:07:50.537 [2024-12-06 04:55:28.734439] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:07:50.537 [2024-12-06 04:55:28.734449] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:50.797 00:07:50.797 real 0m0.309s 00:07:50.797 user 0m0.128s 00:07:50.797 sys 0m0.078s 00:07:50.797 04:55:28 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:50.797 04:55:28 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:07:50.797 ************************************ 00:07:50.797 END TEST bdev_json_nonarray 00:07:50.797 ************************************ 00:07:50.797 04:55:28 blockdev_nvme_gpt -- bdev/blockdev.sh@786 -- # [[ gpt == bdev ]] 00:07:50.797 04:55:28 blockdev_nvme_gpt -- bdev/blockdev.sh@793 -- # [[ gpt == gpt ]] 00:07:50.797 04:55:28 blockdev_nvme_gpt -- bdev/blockdev.sh@794 -- # run_test bdev_gpt_uuid bdev_gpt_uuid 00:07:50.797 04:55:28 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:50.797 04:55:28 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:50.797 04:55:28 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:50.797 ************************************ 00:07:50.797 START TEST bdev_gpt_uuid 00:07:50.797 ************************************ 00:07:50.797 04:55:28 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1125 -- # bdev_gpt_uuid 00:07:50.797 04:55:28 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@613 -- # local bdev 00:07:50.797 04:55:28 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@615 -- # start_spdk_tgt 00:07:50.797 04:55:28 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=74096 00:07:50.797 04:55:28 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:07:50.797 04:55:28 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@49 -- # waitforlisten 74096 00:07:50.798 04:55:28 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@831 -- # '[' -z 74096 ']' 00:07:50.798 04:55:28 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:50.798 04:55:28 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:50.798 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:50.798 04:55:28 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:50.798 04:55:28 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:07:50.798 04:55:28 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:50.798 04:55:28 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:07:50.798 [2024-12-06 04:55:28.951151] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
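Note: the bdev_gpt_uuid test starting here drives a bare spdk_tgt over /var/tmp/spdk.sock. Reduced to plain scripts/rpc.py calls, the sequence the rpc_cmd wrapper performs in the lines below is roughly (a sketch; paths shown relative to the spdk repo):

  scripts/rpc.py load_config -j test/bdev/bdev.json    # register bdevs, incl. the GPT partitions
  scripts/rpc.py bdev_wait_for_examine                 # let the GPT examine pass finish
  scripts/rpc.py bdev_get_bdevs -b 6f89f330-603b-4116-ac73-2ca8eae53030 \
      | jq -r '.[0].driver_specific.gpt.unique_partition_guid'
  # the test asserts the GUID returned matches the UUID it queried by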
00:07:50.798 [2024-12-06 04:55:28.951276] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74096 ] 00:07:51.059 [2024-12-06 04:55:29.081824] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:51.059 [2024-12-06 04:55:29.132325] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:51.631 04:55:29 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:51.631 04:55:29 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@864 -- # return 0 00:07:51.631 04:55:29 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@617 -- # rpc_cmd load_config -j /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:07:51.631 04:55:29 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:51.631 04:55:29 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:07:51.892 Some configs were skipped because the RPC state that can call them passed over. 00:07:51.892 04:55:30 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:51.892 04:55:30 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@618 -- # rpc_cmd bdev_wait_for_examine 00:07:51.892 04:55:30 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:51.892 04:55:30 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:07:51.892 04:55:30 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:51.892 04:55:30 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@620 -- # rpc_cmd bdev_get_bdevs -b 6f89f330-603b-4116-ac73-2ca8eae53030 00:07:51.892 04:55:30 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:51.892 04:55:30 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:07:52.153 04:55:30 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:52.153 04:55:30 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@620 -- # bdev='[ 00:07:52.153 { 00:07:52.153 "name": "Nvme1n1p1", 00:07:52.153 "aliases": [ 00:07:52.153 "6f89f330-603b-4116-ac73-2ca8eae53030" 00:07:52.153 ], 00:07:52.153 "product_name": "GPT Disk", 00:07:52.153 "block_size": 4096, 00:07:52.153 "num_blocks": 655104, 00:07:52.153 "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:07:52.153 "assigned_rate_limits": { 00:07:52.153 "rw_ios_per_sec": 0, 00:07:52.153 "rw_mbytes_per_sec": 0, 00:07:52.153 "r_mbytes_per_sec": 0, 00:07:52.153 "w_mbytes_per_sec": 0 00:07:52.153 }, 00:07:52.153 "claimed": false, 00:07:52.153 "zoned": false, 00:07:52.153 "supported_io_types": { 00:07:52.153 "read": true, 00:07:52.153 "write": true, 00:07:52.153 "unmap": true, 00:07:52.153 "flush": true, 00:07:52.153 "reset": true, 00:07:52.153 "nvme_admin": false, 00:07:52.153 "nvme_io": false, 00:07:52.153 "nvme_io_md": false, 00:07:52.153 "write_zeroes": true, 00:07:52.153 "zcopy": false, 00:07:52.153 "get_zone_info": false, 00:07:52.153 "zone_management": false, 00:07:52.153 "zone_append": false, 00:07:52.153 "compare": true, 00:07:52.153 "compare_and_write": false, 00:07:52.153 "abort": true, 00:07:52.153 "seek_hole": false, 00:07:52.153 "seek_data": false, 00:07:52.153 "copy": true, 00:07:52.153 "nvme_iov_md": false 00:07:52.153 }, 00:07:52.153 "driver_specific": { 
00:07:52.153 "gpt": { 00:07:52.153 "base_bdev": "Nvme1n1", 00:07:52.153 "offset_blocks": 256, 00:07:52.153 "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b", 00:07:52.153 "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:07:52.153 "partition_name": "SPDK_TEST_first" 00:07:52.153 } 00:07:52.153 } 00:07:52.153 } 00:07:52.153 ]' 00:07:52.153 04:55:30 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@621 -- # jq -r length 00:07:52.153 04:55:30 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@621 -- # [[ 1 == \1 ]] 00:07:52.153 04:55:30 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@622 -- # jq -r '.[0].aliases[0]' 00:07:52.153 04:55:30 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@622 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:07:52.153 04:55:30 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@623 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:07:52.153 04:55:30 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@623 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:07:52.153 04:55:30 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@625 -- # rpc_cmd bdev_get_bdevs -b abf1734f-66e5-4c0f-aa29-4021d4d307df 00:07:52.153 04:55:30 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:52.153 04:55:30 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:07:52.153 04:55:30 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:52.153 04:55:30 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@625 -- # bdev='[ 00:07:52.153 { 00:07:52.153 "name": "Nvme1n1p2", 00:07:52.153 "aliases": [ 00:07:52.153 "abf1734f-66e5-4c0f-aa29-4021d4d307df" 00:07:52.153 ], 00:07:52.153 "product_name": "GPT Disk", 00:07:52.153 "block_size": 4096, 00:07:52.153 "num_blocks": 655103, 00:07:52.153 "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:07:52.153 "assigned_rate_limits": { 00:07:52.153 "rw_ios_per_sec": 0, 00:07:52.153 "rw_mbytes_per_sec": 0, 00:07:52.153 "r_mbytes_per_sec": 0, 00:07:52.153 "w_mbytes_per_sec": 0 00:07:52.153 }, 00:07:52.153 "claimed": false, 00:07:52.153 "zoned": false, 00:07:52.153 "supported_io_types": { 00:07:52.153 "read": true, 00:07:52.153 "write": true, 00:07:52.153 "unmap": true, 00:07:52.153 "flush": true, 00:07:52.153 "reset": true, 00:07:52.153 "nvme_admin": false, 00:07:52.153 "nvme_io": false, 00:07:52.153 "nvme_io_md": false, 00:07:52.153 "write_zeroes": true, 00:07:52.153 "zcopy": false, 00:07:52.153 "get_zone_info": false, 00:07:52.153 "zone_management": false, 00:07:52.153 "zone_append": false, 00:07:52.154 "compare": true, 00:07:52.154 "compare_and_write": false, 00:07:52.154 "abort": true, 00:07:52.154 "seek_hole": false, 00:07:52.154 "seek_data": false, 00:07:52.154 "copy": true, 00:07:52.154 "nvme_iov_md": false 00:07:52.154 }, 00:07:52.154 "driver_specific": { 00:07:52.154 "gpt": { 00:07:52.154 "base_bdev": "Nvme1n1", 00:07:52.154 "offset_blocks": 655360, 00:07:52.154 "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c", 00:07:52.154 "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:07:52.154 "partition_name": "SPDK_TEST_second" 00:07:52.154 } 00:07:52.154 } 00:07:52.154 } 00:07:52.154 ]' 00:07:52.154 04:55:30 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@626 -- # jq -r length 00:07:52.154 04:55:30 blockdev_nvme_gpt.bdev_gpt_uuid 
-- bdev/blockdev.sh@626 -- # [[ 1 == \1 ]] 00:07:52.154 04:55:30 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@627 -- # jq -r '.[0].aliases[0]' 00:07:52.154 04:55:30 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@627 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:07:52.154 04:55:30 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@628 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:07:52.154 04:55:30 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@628 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:07:52.154 04:55:30 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@630 -- # killprocess 74096 00:07:52.154 04:55:30 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@950 -- # '[' -z 74096 ']' 00:07:52.154 04:55:30 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@954 -- # kill -0 74096 00:07:52.154 04:55:30 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@955 -- # uname 00:07:52.154 04:55:30 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:52.154 04:55:30 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 74096 00:07:52.154 04:55:30 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:52.154 killing process with pid 74096 00:07:52.154 04:55:30 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:52.154 04:55:30 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@968 -- # echo 'killing process with pid 74096' 00:07:52.154 04:55:30 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@969 -- # kill 74096 00:07:52.154 04:55:30 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@974 -- # wait 74096 00:07:52.415 00:07:52.415 real 0m1.743s 00:07:52.415 user 0m1.897s 00:07:52.415 sys 0m0.338s 00:07:52.415 04:55:30 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:52.415 ************************************ 00:07:52.415 END TEST bdev_gpt_uuid 00:07:52.415 ************************************ 00:07:52.415 04:55:30 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:07:52.676 04:55:30 blockdev_nvme_gpt -- bdev/blockdev.sh@797 -- # [[ gpt == crypto_sw ]] 00:07:52.676 04:55:30 blockdev_nvme_gpt -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:07:52.676 04:55:30 blockdev_nvme_gpt -- bdev/blockdev.sh@810 -- # cleanup 00:07:52.676 04:55:30 blockdev_nvme_gpt -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:07:52.676 04:55:30 blockdev_nvme_gpt -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:07:52.676 04:55:30 blockdev_nvme_gpt -- bdev/blockdev.sh@26 -- # [[ gpt == rbd ]] 00:07:52.676 04:55:30 blockdev_nvme_gpt -- bdev/blockdev.sh@30 -- # [[ gpt == daos ]] 00:07:52.676 04:55:30 blockdev_nvme_gpt -- bdev/blockdev.sh@34 -- # [[ gpt = \g\p\t ]] 00:07:52.676 04:55:30 blockdev_nvme_gpt -- bdev/blockdev.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:07:52.937 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:52.937 Waiting for block devices as requested 00:07:53.197 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:07:53.198 0000:00:10.0 (1b36 0010): 
uio_pci_generic -> nvme 00:07:53.198 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:07:53.198 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:07:58.519 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:07:58.519 04:55:36 blockdev_nvme_gpt -- bdev/blockdev.sh@36 -- # [[ -b /dev/nvme0n1 ]] 00:07:58.519 04:55:36 blockdev_nvme_gpt -- bdev/blockdev.sh@37 -- # wipefs --all /dev/nvme0n1 00:07:58.777 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:07:58.777 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:07:58.777 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:07:58.777 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:07:58.777 04:55:36 blockdev_nvme_gpt -- bdev/blockdev.sh@40 -- # [[ gpt == xnvme ]] 00:07:58.777 00:07:58.777 real 0m48.005s 00:07:58.777 user 1m0.608s 00:07:58.777 sys 0m7.520s 00:07:58.777 04:55:36 blockdev_nvme_gpt -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:58.777 04:55:36 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:58.777 ************************************ 00:07:58.777 END TEST blockdev_nvme_gpt 00:07:58.777 ************************************ 00:07:58.777 04:55:36 -- spdk/autotest.sh@212 -- # run_test nvme /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:07:58.777 04:55:36 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:58.777 04:55:36 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:58.777 04:55:36 -- common/autotest_common.sh@10 -- # set +x 00:07:58.777 ************************************ 00:07:58.777 START TEST nvme 00:07:58.777 ************************************ 00:07:58.777 04:55:36 nvme -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:07:58.777 * Looking for test storage... 00:07:58.777 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:07:58.777 04:55:36 nvme -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:07:58.777 04:55:36 nvme -- common/autotest_common.sh@1681 -- # lcov --version 00:07:58.777 04:55:36 nvme -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:07:58.777 04:55:36 nvme -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:07:58.777 04:55:36 nvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:58.777 04:55:36 nvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:58.777 04:55:36 nvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:58.777 04:55:36 nvme -- scripts/common.sh@336 -- # IFS=.-: 00:07:58.777 04:55:36 nvme -- scripts/common.sh@336 -- # read -ra ver1 00:07:58.777 04:55:36 nvme -- scripts/common.sh@337 -- # IFS=.-: 00:07:58.777 04:55:36 nvme -- scripts/common.sh@337 -- # read -ra ver2 00:07:58.777 04:55:36 nvme -- scripts/common.sh@338 -- # local 'op=<' 00:07:58.777 04:55:36 nvme -- scripts/common.sh@340 -- # ver1_l=2 00:07:58.777 04:55:36 nvme -- scripts/common.sh@341 -- # ver2_l=1 00:07:58.777 04:55:36 nvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:58.777 04:55:36 nvme -- scripts/common.sh@344 -- # case "$op" in 00:07:58.777 04:55:36 nvme -- scripts/common.sh@345 -- # : 1 00:07:58.777 04:55:36 nvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:58.777 04:55:36 nvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:58.777 04:55:36 nvme -- scripts/common.sh@365 -- # decimal 1 00:07:58.777 04:55:36 nvme -- scripts/common.sh@353 -- # local d=1 00:07:58.777 04:55:36 nvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:58.777 04:55:36 nvme -- scripts/common.sh@355 -- # echo 1 00:07:58.777 04:55:36 nvme -- scripts/common.sh@365 -- # ver1[v]=1 00:07:58.777 04:55:36 nvme -- scripts/common.sh@366 -- # decimal 2 00:07:58.777 04:55:36 nvme -- scripts/common.sh@353 -- # local d=2 00:07:58.777 04:55:36 nvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:58.777 04:55:36 nvme -- scripts/common.sh@355 -- # echo 2 00:07:58.777 04:55:36 nvme -- scripts/common.sh@366 -- # ver2[v]=2 00:07:58.777 04:55:36 nvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:58.777 04:55:36 nvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:58.777 04:55:36 nvme -- scripts/common.sh@368 -- # return 0 00:07:58.777 04:55:36 nvme -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:58.777 04:55:36 nvme -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:07:58.777 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:58.777 --rc genhtml_branch_coverage=1 00:07:58.777 --rc genhtml_function_coverage=1 00:07:58.777 --rc genhtml_legend=1 00:07:58.777 --rc geninfo_all_blocks=1 00:07:58.777 --rc geninfo_unexecuted_blocks=1 00:07:58.777 00:07:58.778 ' 00:07:58.778 04:55:36 nvme -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:07:58.778 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:58.778 --rc genhtml_branch_coverage=1 00:07:58.778 --rc genhtml_function_coverage=1 00:07:58.778 --rc genhtml_legend=1 00:07:58.778 --rc geninfo_all_blocks=1 00:07:58.778 --rc geninfo_unexecuted_blocks=1 00:07:58.778 00:07:58.778 ' 00:07:58.778 04:55:36 nvme -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:07:58.778 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:58.778 --rc genhtml_branch_coverage=1 00:07:58.778 --rc genhtml_function_coverage=1 00:07:58.778 --rc genhtml_legend=1 00:07:58.778 --rc geninfo_all_blocks=1 00:07:58.778 --rc geninfo_unexecuted_blocks=1 00:07:58.778 00:07:58.778 ' 00:07:58.778 04:55:36 nvme -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:07:58.778 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:58.778 --rc genhtml_branch_coverage=1 00:07:58.778 --rc genhtml_function_coverage=1 00:07:58.778 --rc genhtml_legend=1 00:07:58.778 --rc geninfo_all_blocks=1 00:07:58.778 --rc geninfo_unexecuted_blocks=1 00:07:58.778 00:07:58.778 ' 00:07:58.778 04:55:36 nvme -- nvme/nvme.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:07:59.350 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:59.920 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:07:59.920 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:07:59.920 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:07:59.920 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:07:59.920 04:55:37 nvme -- nvme/nvme.sh@79 -- # uname 00:07:59.920 04:55:37 nvme -- nvme/nvme.sh@79 -- # '[' Linux = Linux ']' 00:07:59.920 04:55:37 nvme -- nvme/nvme.sh@80 -- # trap 'kill_stub -9; exit 1' SIGINT SIGTERM EXIT 00:07:59.920 04:55:37 nvme -- nvme/nvme.sh@81 -- # start_stub '-s 4096 -i 0 -m 0xE' 00:07:59.920 04:55:37 nvme -- common/autotest_common.sh@1082 -- # _start_stub '-s 4096 -i 0 -m 0xE' 00:07:59.920 04:55:37 nvme -- 
common/autotest_common.sh@1068 -- # _randomize_va_space=2 00:07:59.920 04:55:37 nvme -- common/autotest_common.sh@1069 -- # echo 0 00:07:59.920 Waiting for stub to ready for secondary processes... 00:07:59.920 04:55:37 nvme -- common/autotest_common.sh@1071 -- # stubpid=74719 00:07:59.920 04:55:37 nvme -- common/autotest_common.sh@1072 -- # echo Waiting for stub to ready for secondary processes... 00:07:59.920 04:55:37 nvme -- common/autotest_common.sh@1073 -- # '[' -e /var/run/spdk_stub0 ']' 00:07:59.920 04:55:37 nvme -- common/autotest_common.sh@1075 -- # [[ -e /proc/74719 ]] 00:07:59.920 04:55:37 nvme -- common/autotest_common.sh@1076 -- # sleep 1s 00:07:59.920 04:55:37 nvme -- common/autotest_common.sh@1070 -- # /home/vagrant/spdk_repo/spdk/test/app/stub/stub -s 4096 -i 0 -m 0xE 00:07:59.920 [2024-12-06 04:55:38.028369] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:07:59.920 [2024-12-06 04:55:38.028494] [ DPDK EAL parameters: stub -c 0xE -m 4096 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto --proc-type=primary ] 00:08:00.863 [2024-12-06 04:55:38.760046] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:00.863 [2024-12-06 04:55:38.780014] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:08:00.863 [2024-12-06 04:55:38.780599] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:08:00.863 [2024-12-06 04:55:38.780748] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:08:00.863 [2024-12-06 04:55:38.791467] nvme_cuse.c:1408:start_cuse_thread: *NOTICE*: Successfully started cuse thread to poll for admin commands 00:08:00.863 [2024-12-06 04:55:38.791500] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:08:00.863 [2024-12-06 04:55:38.801482] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0 created 00:08:00.863 [2024-12-06 04:55:38.801695] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0n1 created 00:08:00.863 [2024-12-06 04:55:38.802470] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:08:00.863 [2024-12-06 04:55:38.802682] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1 created 00:08:00.863 [2024-12-06 04:55:38.802759] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1n1 created 00:08:00.863 [2024-12-06 04:55:38.803197] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:08:00.863 [2024-12-06 04:55:38.803344] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2 created 00:08:00.863 [2024-12-06 04:55:38.803391] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2n1 created 00:08:00.863 [2024-12-06 04:55:38.804845] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:08:00.863 [2024-12-06 04:55:38.804995] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3 created 00:08:00.863 [2024-12-06 04:55:38.805043] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n1 created 00:08:00.863 [2024-12-06 04:55:38.805091] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n2 created 00:08:00.863 [2024-12-06 04:55:38.805150] nvme_cuse.c: 
928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n3 created 00:08:00.863 done. 00:08:00.863 04:55:38 nvme -- common/autotest_common.sh@1073 -- # '[' -e /var/run/spdk_stub0 ']' 00:08:00.863 04:55:38 nvme -- common/autotest_common.sh@1078 -- # echo done. 00:08:00.863 04:55:38 nvme -- nvme/nvme.sh@84 -- # run_test nvme_reset /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:08:00.863 04:55:39 nvme -- common/autotest_common.sh@1101 -- # '[' 10 -le 1 ']' 00:08:00.863 04:55:39 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:00.863 04:55:39 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:00.863 ************************************ 00:08:00.863 START TEST nvme_reset 00:08:00.863 ************************************ 00:08:00.863 04:55:39 nvme.nvme_reset -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:08:01.124 Initializing NVMe Controllers 00:08:01.124 Skipping QEMU NVMe SSD at 0000:00:13.0 00:08:01.124 Skipping QEMU NVMe SSD at 0000:00:10.0 00:08:01.124 Skipping QEMU NVMe SSD at 0000:00:11.0 00:08:01.124 Skipping QEMU NVMe SSD at 0000:00:12.0 00:08:01.124 No NVMe controller found, /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset exiting 00:08:01.124 00:08:01.124 real 0m0.176s 00:08:01.124 user 0m0.046s 00:08:01.125 sys 0m0.079s 00:08:01.125 04:55:39 nvme.nvme_reset -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:01.125 04:55:39 nvme.nvme_reset -- common/autotest_common.sh@10 -- # set +x 00:08:01.125 ************************************ 00:08:01.125 END TEST nvme_reset 00:08:01.125 ************************************ 00:08:01.125 04:55:39 nvme -- nvme/nvme.sh@85 -- # run_test nvme_identify nvme_identify 00:08:01.125 04:55:39 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:01.125 04:55:39 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:01.125 04:55:39 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:01.125 ************************************ 00:08:01.125 START TEST nvme_identify 00:08:01.125 ************************************ 00:08:01.125 04:55:39 nvme.nvme_identify -- common/autotest_common.sh@1125 -- # nvme_identify 00:08:01.125 04:55:39 nvme.nvme_identify -- nvme/nvme.sh@12 -- # bdfs=() 00:08:01.125 04:55:39 nvme.nvme_identify -- nvme/nvme.sh@12 -- # local bdfs bdf 00:08:01.125 04:55:39 nvme.nvme_identify -- nvme/nvme.sh@13 -- # bdfs=($(get_nvme_bdfs)) 00:08:01.125 04:55:39 nvme.nvme_identify -- nvme/nvme.sh@13 -- # get_nvme_bdfs 00:08:01.125 04:55:39 nvme.nvme_identify -- common/autotest_common.sh@1496 -- # bdfs=() 00:08:01.125 04:55:39 nvme.nvme_identify -- common/autotest_common.sh@1496 -- # local bdfs 00:08:01.125 04:55:39 nvme.nvme_identify -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:08:01.125 04:55:39 nvme.nvme_identify -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:08:01.125 04:55:39 nvme.nvme_identify -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:08:01.125 04:55:39 nvme.nvme_identify -- common/autotest_common.sh@1498 -- # (( 4 == 0 )) 00:08:01.125 04:55:39 nvme.nvme_identify -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:08:01.125 04:55:39 nvme.nvme_identify -- nvme/nvme.sh@14 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -i 0 00:08:01.388 
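Note: the per-controller identify dump that follows walks the four controllers discovered above. The BDF discovery used by nvme_identify is itself a standalone one-liner, taken from the gen_nvme.sh pipeline visible in the log above:

  /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh | jq -r '.config[].params.traddr'
  # this run: 0000:00:10.0, 0000:00:11.0, 0000:00:12.0, 0000:00:13.0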
===================================================== 00:08:01.388 NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:01.388 ===================================================== 00:08:01.388 Controller Capabilities/Features 00:08:01.388 ================================ 00:08:01.388 Vendor ID: 1b36 00:08:01.388 Subsystem Vendor ID: 1af4 00:08:01.388 Serial Number: 12343 00:08:01.388 Model Number: QEMU NVMe Ctrl 00:08:01.388 Firmware Version: 8.0.0 00:08:01.388 Recommended Arb Burst: 6 00:08:01.388 IEEE OUI Identifier: 00 54 52 00:08:01.388 Multi-path I/O 00:08:01.388 May have multiple subsystem ports: No 00:08:01.388 May have multiple controllers: Yes 00:08:01.388 Associated with SR-IOV VF: No 00:08:01.388 Max Data Transfer Size: 524288 00:08:01.388 Max Number of Namespaces: 256 00:08:01.388 Max Number of I/O Queues: 64 00:08:01.388 NVMe Specification Version (VS): 1.4 00:08:01.388 NVMe Specification Version (Identify): 1.4 00:08:01.388 Maximum Queue Entries: 2048 00:08:01.388 Contiguous Queues Required: Yes 00:08:01.388 Arbitration Mechanisms Supported 00:08:01.388 Weighted Round Robin: Not Supported 00:08:01.388 Vendor Specific: Not Supported 00:08:01.388 Reset Timeout: 7500 ms 00:08:01.388 Doorbell Stride: 4 bytes 00:08:01.388 NVM Subsystem Reset: Not Supported 00:08:01.388 Command Sets Supported 00:08:01.388 NVM Command Set: Supported 00:08:01.388 Boot Partition: Not Supported 00:08:01.388 Memory Page Size Minimum: 4096 bytes 00:08:01.388 Memory Page Size Maximum: 65536 bytes 00:08:01.388 Persistent Memory Region: Not Supported 00:08:01.388 Optional Asynchronous Events Supported 00:08:01.388 Namespace Attribute Notices: Supported 00:08:01.388 Firmware Activation Notices: Not Supported 00:08:01.388 ANA Change Notices: Not Supported 00:08:01.388 PLE Aggregate Log Change Notices: Not Supported 00:08:01.388 LBA Status Info Alert Notices: Not Supported 00:08:01.388 EGE Aggregate Log Change Notices: Not Supported 00:08:01.388 Normal NVM Subsystem Shutdown event: Not Supported 00:08:01.388 Zone Descriptor Change Notices: Not Supported 00:08:01.388 Discovery Log Change Notices: Not Supported 00:08:01.388 Controller Attributes 00:08:01.388 128-bit Host Identifier: Not Supported 00:08:01.389 Non-Operational Permissive Mode: Not Supported 00:08:01.389 NVM Sets: Not Supported 00:08:01.389 Read Recovery Levels: Not Supported 00:08:01.389 Endurance Groups: Supported 00:08:01.389 Predictable Latency Mode: Not Supported 00:08:01.389 Traffic Based Keep ALive: Not Supported 00:08:01.389 Namespace Granularity: Not Supported 00:08:01.389 SQ Associations: Not Supported 00:08:01.389 UUID List: Not Supported 00:08:01.389 Multi-Domain Subsystem: Not Supported 00:08:01.389 Fixed Capacity Management: Not Supported 00:08:01.389 Variable Capacity Management: Not Supported 00:08:01.389 Delete Endurance Group: Not Supported 00:08:01.389 Delete NVM Set: Not Supported 00:08:01.389 Extended LBA Formats Supported: Supported 00:08:01.389 Flexible Data Placement Supported: Supported 00:08:01.389 00:08:01.389 Controller Memory Buffer Support 00:08:01.389 ================================ 00:08:01.389 Supported: No 00:08:01.389 00:08:01.389 Persistent Memory Region Support 00:08:01.389 ================================ 00:08:01.389 Supported: No 00:08:01.389 00:08:01.389 Admin Command Set Attributes 00:08:01.389 ============================ 00:08:01.389 Security Send/Receive: Not Supported 00:08:01.389 Format NVM: Supported 00:08:01.389 Firmware Activate/Download: Not Supported 00:08:01.389 Namespace Management: Supported 
00:08:01.389 Device Self-Test: Not Supported 00:08:01.389 Directives: Supported 00:08:01.389 NVMe-MI: Not Supported 00:08:01.389 Virtualization Management: Not Supported 00:08:01.389 Doorbell Buffer Config: Supported 00:08:01.389 Get LBA Status Capability: Not Supported 00:08:01.389 Command & Feature Lockdown Capability: Not Supported 00:08:01.389 Abort Command Limit: 4 00:08:01.389 Async Event Request Limit: 4 00:08:01.389 Number of Firmware Slots: N/A 00:08:01.389 Firmware Slot 1 Read-Only: N/A 00:08:01.389 Firmware Activation Without Reset: N/A 00:08:01.389 Multiple Update Detection Support: N/A 00:08:01.389 Firmware Update Granularity: No Information Provided 00:08:01.389 Per-Namespace SMART Log: Yes 00:08:01.389 Asymmetric Namespace Access Log Page: Not Supported 00:08:01.389 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:08:01.389 Command Effects Log Page: Supported 00:08:01.389 Get Log Page Extended Data: Supported 00:08:01.389 Telemetry Log Pages: Not Supported 00:08:01.389 Persistent Event Log Pages: Not Supported 00:08:01.389 Supported Log Pages Log Page: May Support 00:08:01.389 Commands Supported & Effects Log Page: Not Supported 00:08:01.389 Feature Identifiers & Effects Log Page:May Support 00:08:01.389 NVMe-MI Commands & Effects Log Page: May Support 00:08:01.389 Data Area 4 for Telemetry Log: Not Supported 00:08:01.389 Error Log Page Entries Supported: 1 00:08:01.389 Keep Alive: Not Supported 00:08:01.389 00:08:01.389 NVM Command Set Attributes 00:08:01.389 ========================== 00:08:01.389 Submission Queue Entry Size 00:08:01.389 Max: 64 00:08:01.389 Min: 64 00:08:01.389 Completion Queue Entry Size 00:08:01.389 Max: 16 00:08:01.389 Min: 16 00:08:01.389 Number of Namespaces: 256 00:08:01.389 Compare Command: Supported 00:08:01.389 Write Uncorrectable Command: Not Supported 00:08:01.389 Dataset Management Command: Supported 00:08:01.389 Write Zeroes Command: Supported 00:08:01.389 Set Features Save Field: Supported 00:08:01.389 Reservations: Not Supported 00:08:01.389 Timestamp: Supported 00:08:01.389 Copy: Supported 00:08:01.389 Volatile Write Cache: Present 00:08:01.389 Atomic Write Unit (Normal): 1 00:08:01.389 Atomic Write Unit (PFail): 1 00:08:01.389 Atomic Compare & Write Unit: 1 00:08:01.389 Fused Compare & Write: Not Supported 00:08:01.389 Scatter-Gather List 00:08:01.389 SGL Command Set: Supported 00:08:01.389 SGL Keyed: Not Supported 00:08:01.389 SGL Bit Bucket Descriptor: Not Supported 00:08:01.389 SGL Metadata Pointer: Not Supported 00:08:01.389 Oversized SGL: Not Supported 00:08:01.389 SGL Metadata Address: Not Supported 00:08:01.389 SGL Offset: Not Supported 00:08:01.389 Transport SGL Data Block: Not Supported 00:08:01.389 Replay Protected Memory Block: Not Supported 00:08:01.389 00:08:01.389 Firmware Slot Information 00:08:01.389 ========================= 00:08:01.389 Active slot: 1 00:08:01.389 Slot 1 Firmware Revision: 1.0 00:08:01.389 00:08:01.389 00:08:01.389 Commands Supported and Effects 00:08:01.389 ============================== 00:08:01.389 Admin Commands 00:08:01.389 -------------- 00:08:01.389 Delete I/O Submission Queue (00h): Supported 00:08:01.389 Create I/O Submission Queue (01h): Supported 00:08:01.389 Get Log Page (02h): Supported 00:08:01.389 Delete I/O Completion Queue (04h): Supported 00:08:01.389 Create I/O Completion Queue (05h): Supported 00:08:01.389 Identify (06h): Supported 00:08:01.389 Abort (08h): Supported 00:08:01.389 Set Features (09h): Supported 00:08:01.389 Get Features (0Ah): Supported 00:08:01.389 Asynchronous Event 
Request (0Ch): Supported 00:08:01.389 Namespace Attachment (15h): Supported NS-Inventory-Change 00:08:01.389 Directive Send (19h): Supported 00:08:01.389 Directive Receive (1Ah): Supported 00:08:01.389 Virtualization Management (1Ch): Supported 00:08:01.389 Doorbell Buffer Config (7Ch): Supported 00:08:01.389 Format NVM (80h): Supported LBA-Change 00:08:01.389 I/O Commands 00:08:01.389 ------------ 00:08:01.389 Flush (00h): Supported LBA-Change 00:08:01.389 Write (01h): Supported LBA-Change 00:08:01.389 Read (02h): Supported 00:08:01.389 Compare (05h): Supported 00:08:01.389 Write Zeroes (08h): Supported LBA-Change 00:08:01.389 Dataset Management (09h): Supported LBA-Change 00:08:01.389 Unknown (0Ch): Supported 00:08:01.389 Unknown (12h): Supported 00:08:01.389 Copy (19h): Supported LBA-Change 00:08:01.389 Unknown (1Dh): Supported LBA-Change 00:08:01.389 00:08:01.389 Error Log 00:08:01.389 ========= 00:08:01.389 00:08:01.389 Arbitration 00:08:01.389 =========== 00:08:01.389 Arbitration Burst: no limit 00:08:01.389 00:08:01.389 Power Management 00:08:01.389 ================ 00:08:01.389 Number of Power States: 1 00:08:01.389 Current Power State: Power State #0 00:08:01.389 Power State #0: 00:08:01.389 Max Power: 25.00 W 00:08:01.389 Non-Operational State: Operational 00:08:01.389 Entry Latency: 16 microseconds 00:08:01.389 Exit Latency: 4 microseconds 00:08:01.389 Relative Read Throughput: 0 00:08:01.389 Relative Read Latency: 0 00:08:01.389 Relative Write Throughput: 0 00:08:01.389 Relative Write Latency: 0 [2024-12-06 04:55:39.445533] nvme_ctrlr.c:3628:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:13.0] process 74740 terminated unexpected 00:08:01.389 Idle Power: Not Reported 00:08:01.389 Active Power: Not Reported 00:08:01.389 Non-Operational Permissive Mode: Not Supported 00:08:01.389 00:08:01.389 Health Information 00:08:01.389 ================== 00:08:01.389 Critical Warnings: 00:08:01.389 Available Spare Space: OK 00:08:01.389 Temperature: OK 00:08:01.389 Device Reliability: OK 00:08:01.389 Read Only: No 00:08:01.389 Volatile Memory Backup: OK 00:08:01.389 Current Temperature: 323 Kelvin (50 Celsius) 00:08:01.389 Temperature Threshold: 343 Kelvin (70 Celsius) 00:08:01.389 Available Spare: 0% 00:08:01.389 Available Spare Threshold: 0% 00:08:01.389 Life Percentage Used: 0% 00:08:01.389 Data Units Read: 801 00:08:01.389 Data Units Written: 730 00:08:01.389 Host Read Commands: 36456 00:08:01.389 Host Write Commands: 35880 00:08:01.389 Controller Busy Time: 0 minutes 00:08:01.389 Power Cycles: 0 00:08:01.389 Power On Hours: 0 hours 00:08:01.389 Unsafe Shutdowns: 0 00:08:01.389 Unrecoverable Media Errors: 0 00:08:01.389 Lifetime Error Log Entries: 0 00:08:01.389 Warning Temperature Time: 0 minutes 00:08:01.389 Critical Temperature Time: 0 minutes 00:08:01.389 00:08:01.389 Number of Queues 00:08:01.389 ================ 00:08:01.389 Number of I/O Submission Queues: 64 00:08:01.389 Number of I/O Completion Queues: 64 00:08:01.389 00:08:01.389 ZNS Specific Controller Data 00:08:01.389 ============================ 00:08:01.389 Zone Append Size Limit: 0 00:08:01.389 00:08:01.389 00:08:01.389 Active Namespaces 00:08:01.389 ================= 00:08:01.389 Namespace ID:1 00:08:01.389 Error Recovery Timeout: Unlimited 00:08:01.389 Command Set Identifier: NVM (00h) 00:08:01.389 Deallocate: Supported 00:08:01.389 Deallocated/Unwritten Error: Supported 00:08:01.389 Deallocated Read Value: All 0x00 00:08:01.389 Deallocate in Write Zeroes: Not Supported 00:08:01.389 Deallocated Guard
Field: 0xFFFF 00:08:01.389 Flush: Supported 00:08:01.389 Reservation: Not Supported 00:08:01.389 Namespace Sharing Capabilities: Multiple Controllers 00:08:01.389 Size (in LBAs): 262144 (1GiB) 00:08:01.390 Capacity (in LBAs): 262144 (1GiB) 00:08:01.390 Utilization (in LBAs): 262144 (1GiB) 00:08:01.390 Thin Provisioning: Not Supported 00:08:01.390 Per-NS Atomic Units: No 00:08:01.390 Maximum Single Source Range Length: 128 00:08:01.390 Maximum Copy Length: 128 00:08:01.390 Maximum Source Range Count: 128 00:08:01.390 NGUID/EUI64 Never Reused: No 00:08:01.390 Namespace Write Protected: No 00:08:01.390 Endurance group ID: 1 00:08:01.390 Number of LBA Formats: 8 00:08:01.390 Current LBA Format: LBA Format #04 00:08:01.390 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:01.390 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:01.390 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:01.390 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:01.390 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:01.390 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:01.390 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:01.390 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:01.390 00:08:01.390 Get Feature FDP: 00:08:01.390 ================ 00:08:01.390 Enabled: Yes 00:08:01.390 FDP configuration index: 0 00:08:01.390 00:08:01.390 FDP configurations log page 00:08:01.390 =========================== 00:08:01.390 Number of FDP configurations: 1 00:08:01.390 Version: 0 00:08:01.390 Size: 112 00:08:01.390 FDP Configuration Descriptor: 0 00:08:01.390 Descriptor Size: 96 00:08:01.390 Reclaim Group Identifier format: 2 00:08:01.390 FDP Volatile Write Cache: Not Present 00:08:01.390 FDP Configuration: Valid 00:08:01.390 Vendor Specific Size: 0 00:08:01.390 Number of Reclaim Groups: 2 00:08:01.390 Number of Reclaim Unit Handles: 8 00:08:01.390 Max Placement Identifiers: 128 00:08:01.390 Number of Namespaces Supported: 256 00:08:01.390 Reclaim unit Nominal Size: 6000000 bytes 00:08:01.390 Estimated Reclaim Unit Time Limit: Not Reported 00:08:01.390 RUH Desc #000: RUH Type: Initially Isolated 00:08:01.390 RUH Desc #001: RUH Type: Initially Isolated 00:08:01.390 RUH Desc #002: RUH Type: Initially Isolated 00:08:01.390 RUH Desc #003: RUH Type: Initially Isolated 00:08:01.390 RUH Desc #004: RUH Type: Initially Isolated 00:08:01.390 RUH Desc #005: RUH Type: Initially Isolated 00:08:01.390 RUH Desc #006: RUH Type: Initially Isolated 00:08:01.390 RUH Desc #007: RUH Type: Initially Isolated 00:08:01.390 00:08:01.390 FDP reclaim unit handle usage log page [2024-12-06 04:55:39.448292] nvme_ctrlr.c:3628:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:10.0] process 74740 terminated unexpected ====================================== 00:08:01.390 Number of Reclaim Unit Handles: 8 00:08:01.390 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:08:01.390 RUH Usage Desc #001: RUH Attributes: Unused 00:08:01.390 RUH Usage Desc #002: RUH Attributes: Unused 00:08:01.390 RUH Usage Desc #003: RUH Attributes: Unused 00:08:01.390 RUH Usage Desc #004: RUH Attributes: Unused 00:08:01.390 RUH Usage Desc #005: RUH Attributes: Unused 00:08:01.390 RUH Usage Desc #006: RUH Attributes: Unused 00:08:01.390 RUH Usage Desc #007: RUH Attributes: Unused 00:08:01.390 00:08:01.390 FDP statistics log page 00:08:01.390 ======================= 00:08:01.390 Host bytes with metadata written: 478978048 00:08:01.390 Media bytes with metadata written: 479031296 00:08:01.390 Media bytes
erased: 0 00:08:01.390 00:08:01.390 FDP events log page 00:08:01.390 =================== 00:08:01.390 Number of FDP events: 0 00:08:01.390 00:08:01.390 NVM Specific Namespace Data 00:08:01.390 =========================== 00:08:01.390 Logical Block Storage Tag Mask: 0 00:08:01.390 Protection Information Capabilities: 00:08:01.390 16b Guard Protection Information Storage Tag Support: No 00:08:01.390 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:01.390 Storage Tag Check Read Support: No 00:08:01.390 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:01.390 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:01.390 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:01.390 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:01.390 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:01.390 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:01.390 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:01.390 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:01.390 ===================================================== 00:08:01.390 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:01.390 ===================================================== 00:08:01.390 Controller Capabilities/Features 00:08:01.390 ================================ 00:08:01.390 Vendor ID: 1b36 00:08:01.390 Subsystem Vendor ID: 1af4 00:08:01.390 Serial Number: 12340 00:08:01.390 Model Number: QEMU NVMe Ctrl 00:08:01.390 Firmware Version: 8.0.0 00:08:01.390 Recommended Arb Burst: 6 00:08:01.390 IEEE OUI Identifier: 00 54 52 00:08:01.390 Multi-path I/O 00:08:01.390 May have multiple subsystem ports: No 00:08:01.390 May have multiple controllers: No 00:08:01.390 Associated with SR-IOV VF: No 00:08:01.390 Max Data Transfer Size: 524288 00:08:01.390 Max Number of Namespaces: 256 00:08:01.390 Max Number of I/O Queues: 64 00:08:01.390 NVMe Specification Version (VS): 1.4 00:08:01.390 NVMe Specification Version (Identify): 1.4 00:08:01.390 Maximum Queue Entries: 2048 00:08:01.390 Contiguous Queues Required: Yes 00:08:01.390 Arbitration Mechanisms Supported 00:08:01.390 Weighted Round Robin: Not Supported 00:08:01.390 Vendor Specific: Not Supported 00:08:01.390 Reset Timeout: 7500 ms 00:08:01.390 Doorbell Stride: 4 bytes 00:08:01.390 NVM Subsystem Reset: Not Supported 00:08:01.390 Command Sets Supported 00:08:01.390 NVM Command Set: Supported 00:08:01.390 Boot Partition: Not Supported 00:08:01.390 Memory Page Size Minimum: 4096 bytes 00:08:01.390 Memory Page Size Maximum: 65536 bytes 00:08:01.390 Persistent Memory Region: Not Supported 00:08:01.390 Optional Asynchronous Events Supported 00:08:01.390 Namespace Attribute Notices: Supported 00:08:01.390 Firmware Activation Notices: Not Supported 00:08:01.390 ANA Change Notices: Not Supported 00:08:01.390 PLE Aggregate Log Change Notices: Not Supported 00:08:01.390 LBA Status Info Alert Notices: Not Supported 00:08:01.390 EGE Aggregate Log Change Notices: Not Supported 00:08:01.390 Normal NVM Subsystem Shutdown event: Not Supported 00:08:01.390 Zone Descriptor Change Notices: Not Supported 00:08:01.390 Discovery Log Change Notices: Not Supported 00:08:01.390 Controller Attributes 00:08:01.390 128-bit Host 
Identifier: Not Supported 00:08:01.390 Non-Operational Permissive Mode: Not Supported 00:08:01.390 NVM Sets: Not Supported 00:08:01.390 Read Recovery Levels: Not Supported 00:08:01.390 Endurance Groups: Not Supported 00:08:01.390 Predictable Latency Mode: Not Supported 00:08:01.390 Traffic Based Keep ALive: Not Supported 00:08:01.390 Namespace Granularity: Not Supported 00:08:01.390 SQ Associations: Not Supported 00:08:01.390 UUID List: Not Supported 00:08:01.390 Multi-Domain Subsystem: Not Supported 00:08:01.390 Fixed Capacity Management: Not Supported 00:08:01.390 Variable Capacity Management: Not Supported 00:08:01.390 Delete Endurance Group: Not Supported 00:08:01.390 Delete NVM Set: Not Supported 00:08:01.390 Extended LBA Formats Supported: Supported 00:08:01.390 Flexible Data Placement Supported: Not Supported 00:08:01.390 00:08:01.390 Controller Memory Buffer Support 00:08:01.390 ================================ 00:08:01.390 Supported: No 00:08:01.390 00:08:01.390 Persistent Memory Region Support 00:08:01.390 ================================ 00:08:01.390 Supported: No 00:08:01.390 00:08:01.390 Admin Command Set Attributes 00:08:01.390 ============================ 00:08:01.390 Security Send/Receive: Not Supported 00:08:01.390 Format NVM: Supported 00:08:01.390 Firmware Activate/Download: Not Supported 00:08:01.390 Namespace Management: Supported 00:08:01.390 Device Self-Test: Not Supported 00:08:01.390 Directives: Supported 00:08:01.390 NVMe-MI: Not Supported 00:08:01.390 Virtualization Management: Not Supported 00:08:01.390 Doorbell Buffer Config: Supported 00:08:01.390 Get LBA Status Capability: Not Supported 00:08:01.390 Command & Feature Lockdown Capability: Not Supported 00:08:01.390 Abort Command Limit: 4 00:08:01.390 Async Event Request Limit: 4 00:08:01.390 Number of Firmware Slots: N/A 00:08:01.390 Firmware Slot 1 Read-Only: N/A 00:08:01.390 Firmware Activation Without Reset: N/A 00:08:01.390 Multiple Update Detection Support: N/A 00:08:01.390 Firmware Update Granularity: No Information Provided 00:08:01.390 Per-Namespace SMART Log: Yes 00:08:01.390 Asymmetric Namespace Access Log Page: Not Supported 00:08:01.390 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:08:01.390 Command Effects Log Page: Supported 00:08:01.390 Get Log Page Extended Data: Supported 00:08:01.391 Telemetry Log Pages: Not Supported 00:08:01.391 Persistent Event Log Pages: Not Supported 00:08:01.391 Supported Log Pages Log Page: May Support 00:08:01.391 Commands Supported & Effects Log Page: Not Supported 00:08:01.391 Feature Identifiers & Effects Log Page:May Support 00:08:01.391 NVMe-MI Commands & Effects Log Page: May Support 00:08:01.391 Data Area 4 for Telemetry Log: Not Supported 00:08:01.391 Error Log Page Entries Supported: 1 00:08:01.391 Keep Alive: Not Supported 00:08:01.391 00:08:01.391 NVM Command Set Attributes 00:08:01.391 ========================== 00:08:01.391 Submission Queue Entry Size 00:08:01.391 Max: 64 00:08:01.391 Min: 64 00:08:01.391 Completion Queue Entry Size 00:08:01.391 Max: 16 00:08:01.391 Min: 16 00:08:01.391 Number of Namespaces: 256 00:08:01.391 Compare Command: Supported 00:08:01.391 Write Uncorrectable Command: Not Supported 00:08:01.391 Dataset Management Command: Supported 00:08:01.391 Write Zeroes Command: Supported 00:08:01.391 Set Features Save Field: Supported 00:08:01.391 Reservations: Not Supported 00:08:01.391 Timestamp: Supported 00:08:01.391 Copy: Supported 00:08:01.391 Volatile Write Cache: Present 00:08:01.391 Atomic Write Unit (Normal): 1 00:08:01.391 Atomic 
Write Unit (PFail): 1 00:08:01.391 Atomic Compare & Write Unit: 1 00:08:01.391 Fused Compare & Write: Not Supported 00:08:01.391 Scatter-Gather List 00:08:01.391 SGL Command Set: Supported 00:08:01.391 SGL Keyed: Not Supported 00:08:01.391 SGL Bit Bucket Descriptor: Not Supported 00:08:01.391 SGL Metadata Pointer: Not Supported 00:08:01.391 Oversized SGL: Not Supported 00:08:01.391 SGL Metadata Address: Not Supported 00:08:01.391 SGL Offset: Not Supported 00:08:01.391 Transport SGL Data Block: Not Supported 00:08:01.391 Replay Protected Memory Block: Not Supported 00:08:01.391 00:08:01.391 Firmware Slot Information 00:08:01.391 ========================= 00:08:01.391 Active slot: 1 00:08:01.391 Slot 1 Firmware Revision: 1.0 00:08:01.391 00:08:01.391 00:08:01.391 Commands Supported and Effects 00:08:01.391 ============================== 00:08:01.391 Admin Commands 00:08:01.391 -------------- 00:08:01.391 Delete I/O Submission Queue (00h): Supported 00:08:01.391 Create I/O Submission Queue (01h): Supported 00:08:01.391 Get Log Page (02h): Supported 00:08:01.391 Delete I/O Completion Queue (04h): Supported 00:08:01.391 Create I/O Completion Queue (05h): Supported 00:08:01.391 Identify (06h): Supported 00:08:01.391 Abort (08h): Supported 00:08:01.391 Set Features (09h): Supported 00:08:01.391 Get Features (0Ah): Supported 00:08:01.391 Asynchronous Event Request (0Ch): Supported 00:08:01.391 Namespace Attachment (15h): Supported NS-Inventory-Change 00:08:01.391 Directive Send (19h): Supported 00:08:01.391 Directive Receive (1Ah): Supported 00:08:01.391 Virtualization Management (1Ch): Supported 00:08:01.391 Doorbell Buffer Config (7Ch): Supported 00:08:01.391 Format NVM (80h): Supported LBA-Change 00:08:01.391 I/O Commands 00:08:01.391 ------------ 00:08:01.391 Flush (00h): Supported LBA-Change 00:08:01.391 Write (01h): Supported LBA-Change 00:08:01.391 Read (02h): Supported 00:08:01.391 Compare (05h): Supported 00:08:01.391 Write Zeroes (08h): Supported LBA-Change 00:08:01.391 Dataset Management (09h): Supported LBA-Change 00:08:01.391 Unknown (0Ch): Supported 00:08:01.391 Unknown (12h): Supported 00:08:01.391 Copy (19h): Supported LBA-Change 00:08:01.391 Unknown (1Dh): Supported LBA-Change 00:08:01.391 00:08:01.391 Error Log 00:08:01.391 ========= 00:08:01.391 00:08:01.391 Arbitration 00:08:01.391 =========== 00:08:01.391 Arbitration Burst: no limit 00:08:01.391 00:08:01.391 Power Management 00:08:01.391 ================ 00:08:01.391 Number of Power States: 1 00:08:01.391 Current Power State: Power State #0 00:08:01.391 Power State #0: 00:08:01.391 Max Power: 25.00 W 00:08:01.391 Non-Operational State: Operational 00:08:01.391 Entry Latency: 16 microseconds 00:08:01.391 Exit Latency: 4 microseconds 00:08:01.391 Relative Read Throughput: 0 00:08:01.391 Relative Read Latency: 0 00:08:01.391 Relative Write Throughput: 0 00:08:01.391 Relative Write Latency: 0 00:08:01.391 Idle Power: Not Reported 00:08:01.391 Active Power: Not Reported 00:08:01.391 Non-Operational Permissive Mode: Not Supported 00:08:01.391 00:08:01.391 Health Information 00:08:01.391 ================== 00:08:01.391 Critical Warnings: 00:08:01.391 Available Spare Space: OK 00:08:01.391 Temperature: OK 00:08:01.391 Device Reliability: OK 00:08:01.391 Read Only: No 00:08:01.391 Volatile Memory Backup: OK 00:08:01.391 Current Temperature: 323 Kelvin (50 Celsius) 00:08:01.391 Temperature Threshold: 343 Kelvin (70 Celsius) 00:08:01.391 Available Spare: 0% 00:08:01.391 Available Spare Threshold: 0% 00:08:01.391 Life Percentage Used: 0% 
00:08:01.391 Data Units Read: 666 00:08:01.391 Data Units Written: 594 00:08:01.391 Host Read Commands: 35103 00:08:01.391 Host Write Commands: 34889 00:08:01.391 Controller Busy Time: 0 minutes 00:08:01.391 Power Cycles: 0 00:08:01.391 Power On Hours: 0 hours 00:08:01.391 Unsafe Shutdowns: 0 00:08:01.391 Unrecoverable Media Errors: 0 00:08:01.391 Lifetime Error Log Entries: 0 00:08:01.391 Warning Temperature Time: 0 minutes 00:08:01.391 Critical Temperature Time: 0 minutes 00:08:01.391 00:08:01.391 Number of Queues 00:08:01.391 ================ 00:08:01.391 Number of I/O Submission Queues: 64 00:08:01.391 Number of I/O Completion Queues: 64 00:08:01.391 00:08:01.391 ZNS Specific Controller Data 00:08:01.391 ============================ 00:08:01.391 Zone Append Size Limit: 0 00:08:01.391 00:08:01.391 00:08:01.391 Active Namespaces 00:08:01.391 ================= 00:08:01.391 Namespace ID:1 00:08:01.391 Error Recovery Timeout: Unlimited 00:08:01.391 Command Set Identifier: NVM (00h) 00:08:01.391 Deallocate: Supported 00:08:01.391 Deallocated/Unwritten Error: Supported 00:08:01.391 Deallocated Read Value: All 0x00 00:08:01.391 Deallocate in Write Zeroes: Not Supported 00:08:01.391 Deallocated Guard Field: 0xFFFF 00:08:01.391 Flush: Supported 00:08:01.391 Reservation: Not Supported 00:08:01.391 Metadata Transferred as: Separate Metadata Buffer 00:08:01.391 Namespace Sharing Capabilities: Private 00:08:01.391 Size (in LBAs): 1548666 (5GiB) 00:08:01.391 Capacity (in LBAs): 1548666 (5GiB) 00:08:01.391 Utilization (in LBAs): 1548666 (5GiB) 00:08:01.391 Thin Provisioning: Not Supported 00:08:01.391 Per-NS Atomic Units: No 00:08:01.391 Maximum Single Source Range Length: 128 00:08:01.391 Maximum Copy Length: 128 00:08:01.391 Maximum Source Range Count: 128 00:08:01.391 NGUID/EUI64 Never Reused: No 00:08:01.391 Namespace Write Protected: No 00:08:01.391 Number of LBA Formats: 8 [2024-12-06 04:55:39.449949] nvme_ctrlr.c:3628:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:11.0] process 74740 terminated unexpected 00:08:01.391 Current LBA Format: LBA Format #07 00:08:01.391 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:01.391 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:01.391 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:01.391 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:01.391 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:01.391 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:01.391 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:01.391 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:01.391 00:08:01.391 NVM Specific Namespace Data 00:08:01.391 =========================== 00:08:01.391 Logical Block Storage Tag Mask: 0 00:08:01.391 Protection Information Capabilities: 00:08:01.391 16b Guard Protection Information Storage Tag Support: No 00:08:01.391 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:01.391 Storage Tag Check Read Support: No 00:08:01.391 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:01.391 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:01.391 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:01.391 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:01.391 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:01.391 Extended LBA Format #05:
Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:01.391 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:01.391 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:01.391 ===================================================== 00:08:01.391 NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:01.391 ===================================================== 00:08:01.391 Controller Capabilities/Features 00:08:01.391 ================================ 00:08:01.391 Vendor ID: 1b36 00:08:01.391 Subsystem Vendor ID: 1af4 00:08:01.391 Serial Number: 12341 00:08:01.391 Model Number: QEMU NVMe Ctrl 00:08:01.391 Firmware Version: 8.0.0 00:08:01.392 Recommended Arb Burst: 6 00:08:01.392 IEEE OUI Identifier: 00 54 52 00:08:01.392 Multi-path I/O 00:08:01.392 May have multiple subsystem ports: No 00:08:01.392 May have multiple controllers: No 00:08:01.392 Associated with SR-IOV VF: No 00:08:01.392 Max Data Transfer Size: 524288 00:08:01.392 Max Number of Namespaces: 256 00:08:01.392 Max Number of I/O Queues: 64 00:08:01.392 NVMe Specification Version (VS): 1.4 00:08:01.392 NVMe Specification Version (Identify): 1.4 00:08:01.392 Maximum Queue Entries: 2048 00:08:01.392 Contiguous Queues Required: Yes 00:08:01.392 Arbitration Mechanisms Supported 00:08:01.392 Weighted Round Robin: Not Supported 00:08:01.392 Vendor Specific: Not Supported 00:08:01.392 Reset Timeout: 7500 ms 00:08:01.392 Doorbell Stride: 4 bytes 00:08:01.392 NVM Subsystem Reset: Not Supported 00:08:01.392 Command Sets Supported 00:08:01.392 NVM Command Set: Supported 00:08:01.392 Boot Partition: Not Supported 00:08:01.392 Memory Page Size Minimum: 4096 bytes 00:08:01.392 Memory Page Size Maximum: 65536 bytes 00:08:01.392 Persistent Memory Region: Not Supported 00:08:01.392 Optional Asynchronous Events Supported 00:08:01.392 Namespace Attribute Notices: Supported 00:08:01.392 Firmware Activation Notices: Not Supported 00:08:01.392 ANA Change Notices: Not Supported 00:08:01.392 PLE Aggregate Log Change Notices: Not Supported 00:08:01.392 LBA Status Info Alert Notices: Not Supported 00:08:01.392 EGE Aggregate Log Change Notices: Not Supported 00:08:01.392 Normal NVM Subsystem Shutdown event: Not Supported 00:08:01.392 Zone Descriptor Change Notices: Not Supported 00:08:01.392 Discovery Log Change Notices: Not Supported 00:08:01.392 Controller Attributes 00:08:01.392 128-bit Host Identifier: Not Supported 00:08:01.392 Non-Operational Permissive Mode: Not Supported 00:08:01.392 NVM Sets: Not Supported 00:08:01.392 Read Recovery Levels: Not Supported 00:08:01.392 Endurance Groups: Not Supported 00:08:01.392 Predictable Latency Mode: Not Supported 00:08:01.392 Traffic Based Keep ALive: Not Supported 00:08:01.392 Namespace Granularity: Not Supported 00:08:01.392 SQ Associations: Not Supported 00:08:01.392 UUID List: Not Supported 00:08:01.392 Multi-Domain Subsystem: Not Supported 00:08:01.392 Fixed Capacity Management: Not Supported 00:08:01.392 Variable Capacity Management: Not Supported 00:08:01.392 Delete Endurance Group: Not Supported 00:08:01.392 Delete NVM Set: Not Supported 00:08:01.392 Extended LBA Formats Supported: Supported 00:08:01.392 Flexible Data Placement Supported: Not Supported 00:08:01.392 00:08:01.392 Controller Memory Buffer Support 00:08:01.392 ================================ 00:08:01.392 Supported: No 00:08:01.392 00:08:01.392 Persistent Memory Region Support 00:08:01.392 ================================ 00:08:01.392 
Supported: No 00:08:01.392 00:08:01.392 Admin Command Set Attributes 00:08:01.392 ============================ 00:08:01.392 Security Send/Receive: Not Supported 00:08:01.392 Format NVM: Supported 00:08:01.392 Firmware Activate/Download: Not Supported 00:08:01.392 Namespace Management: Supported 00:08:01.392 Device Self-Test: Not Supported 00:08:01.392 Directives: Supported 00:08:01.392 NVMe-MI: Not Supported 00:08:01.392 Virtualization Management: Not Supported 00:08:01.392 Doorbell Buffer Config: Supported 00:08:01.392 Get LBA Status Capability: Not Supported 00:08:01.392 Command & Feature Lockdown Capability: Not Supported 00:08:01.392 Abort Command Limit: 4 00:08:01.392 Async Event Request Limit: 4 00:08:01.392 Number of Firmware Slots: N/A 00:08:01.392 Firmware Slot 1 Read-Only: N/A 00:08:01.392 Firmware Activation Without Reset: N/A 00:08:01.392 Multiple Update Detection Support: N/A 00:08:01.392 Firmware Update Granularity: No Information Provided 00:08:01.392 Per-Namespace SMART Log: Yes 00:08:01.392 Asymmetric Namespace Access Log Page: Not Supported 00:08:01.392 Subsystem NQN: nqn.2019-08.org.qemu:12341 00:08:01.392 Command Effects Log Page: Supported 00:08:01.392 Get Log Page Extended Data: Supported 00:08:01.392 Telemetry Log Pages: Not Supported 00:08:01.392 Persistent Event Log Pages: Not Supported 00:08:01.392 Supported Log Pages Log Page: May Support 00:08:01.392 Commands Supported & Effects Log Page: Not Supported 00:08:01.392 Feature Identifiers & Effects Log Page:May Support 00:08:01.392 NVMe-MI Commands & Effects Log Page: May Support 00:08:01.392 Data Area 4 for Telemetry Log: Not Supported 00:08:01.392 Error Log Page Entries Supported: 1 00:08:01.392 Keep Alive: Not Supported 00:08:01.392 00:08:01.392 NVM Command Set Attributes 00:08:01.392 ========================== 00:08:01.392 Submission Queue Entry Size 00:08:01.392 Max: 64 00:08:01.392 Min: 64 00:08:01.392 Completion Queue Entry Size 00:08:01.392 Max: 16 00:08:01.392 Min: 16 00:08:01.392 Number of Namespaces: 256 00:08:01.392 Compare Command: Supported 00:08:01.392 Write Uncorrectable Command: Not Supported 00:08:01.392 Dataset Management Command: Supported 00:08:01.392 Write Zeroes Command: Supported 00:08:01.392 Set Features Save Field: Supported 00:08:01.392 Reservations: Not Supported 00:08:01.392 Timestamp: Supported 00:08:01.392 Copy: Supported 00:08:01.392 Volatile Write Cache: Present 00:08:01.392 Atomic Write Unit (Normal): 1 00:08:01.392 Atomic Write Unit (PFail): 1 00:08:01.392 Atomic Compare & Write Unit: 1 00:08:01.392 Fused Compare & Write: Not Supported 00:08:01.392 Scatter-Gather List 00:08:01.392 SGL Command Set: Supported 00:08:01.392 SGL Keyed: Not Supported 00:08:01.392 SGL Bit Bucket Descriptor: Not Supported 00:08:01.392 SGL Metadata Pointer: Not Supported 00:08:01.392 Oversized SGL: Not Supported 00:08:01.392 SGL Metadata Address: Not Supported 00:08:01.392 SGL Offset: Not Supported 00:08:01.392 Transport SGL Data Block: Not Supported 00:08:01.392 Replay Protected Memory Block: Not Supported 00:08:01.392 00:08:01.392 Firmware Slot Information 00:08:01.392 ========================= 00:08:01.392 Active slot: 1 00:08:01.392 Slot 1 Firmware Revision: 1.0 00:08:01.392 00:08:01.392 00:08:01.392 Commands Supported and Effects 00:08:01.392 ============================== 00:08:01.392 Admin Commands 00:08:01.392 -------------- 00:08:01.392 Delete I/O Submission Queue (00h): Supported 00:08:01.392 Create I/O Submission Queue (01h): Supported 00:08:01.392 Get Log Page (02h): Supported 00:08:01.392 
Delete I/O Completion Queue (04h): Supported 00:08:01.392 Create I/O Completion Queue (05h): Supported 00:08:01.392 Identify (06h): Supported 00:08:01.392 Abort (08h): Supported 00:08:01.392 Set Features (09h): Supported 00:08:01.392 Get Features (0Ah): Supported 00:08:01.392 Asynchronous Event Request (0Ch): Supported 00:08:01.392 Namespace Attachment (15h): Supported NS-Inventory-Change 00:08:01.392 Directive Send (19h): Supported 00:08:01.392 Directive Receive (1Ah): Supported 00:08:01.392 Virtualization Management (1Ch): Supported 00:08:01.392 Doorbell Buffer Config (7Ch): Supported 00:08:01.392 Format NVM (80h): Supported LBA-Change 00:08:01.392 I/O Commands 00:08:01.392 ------------ 00:08:01.392 Flush (00h): Supported LBA-Change 00:08:01.392 Write (01h): Supported LBA-Change 00:08:01.392 Read (02h): Supported 00:08:01.392 Compare (05h): Supported 00:08:01.392 Write Zeroes (08h): Supported LBA-Change 00:08:01.392 Dataset Management (09h): Supported LBA-Change 00:08:01.392 Unknown (0Ch): Supported 00:08:01.392 Unknown (12h): Supported 00:08:01.392 Copy (19h): Supported LBA-Change 00:08:01.392 Unknown (1Dh): Supported LBA-Change 00:08:01.392 00:08:01.392 Error Log 00:08:01.392 ========= 00:08:01.392 00:08:01.392 Arbitration 00:08:01.392 =========== 00:08:01.392 Arbitration Burst: no limit 00:08:01.392 00:08:01.392 Power Management 00:08:01.392 ================ 00:08:01.392 Number of Power States: 1 00:08:01.392 Current Power State: Power State #0 00:08:01.392 Power State #0: 00:08:01.392 Max Power: 25.00 W 00:08:01.392 Non-Operational State: Operational 00:08:01.392 Entry Latency: 16 microseconds 00:08:01.392 Exit Latency: 4 microseconds 00:08:01.392 Relative Read Throughput: 0 00:08:01.392 Relative Read Latency: 0 00:08:01.392 Relative Write Throughput: 0 00:08:01.392 Relative Write Latency: 0 00:08:01.392 Idle Power: Not Reported 00:08:01.392 Active Power: Not Reported 00:08:01.392 Non-Operational Permissive Mode: Not Supported 00:08:01.392 00:08:01.392 Health Information 00:08:01.392 ================== 00:08:01.392 Critical Warnings: 00:08:01.392 Available Spare Space: OK 00:08:01.392 Temperature: OK 00:08:01.392 Device Reliability: OK 00:08:01.392 Read Only: No 00:08:01.392 Volatile Memory Backup: OK 00:08:01.392 Current Temperature: 323 Kelvin (50 Celsius) 00:08:01.392 Temperature Threshold: 343 Kelvin (70 Celsius) 00:08:01.392 Available Spare: 0% 00:08:01.393 Available Spare Threshold: 0% 00:08:01.393 Life Percentage Used: 0% 00:08:01.393 Data Units Read: 1011 00:08:01.393 Data Units Written: 884 00:08:01.393 Host Read Commands: 51710 00:08:01.393 Host Write Commands: 50596 00:08:01.393 Controller Busy Time: 0 minutes 00:08:01.393 Power Cycles: 0 00:08:01.393 Power On Hours: 0 hours 00:08:01.393 Unsafe Shutdowns: 0 00:08:01.393 Unrecoverable Media Errors: 0 00:08:01.393 Lifetime Error Log Entries: 0 00:08:01.393 Warning Temperature Time: 0 minutes 00:08:01.393 Critical Temperature Time: 0 minutes 00:08:01.393 00:08:01.393 Number of Queues 00:08:01.393 ================ 00:08:01.393 Number of I/O Submission Queues: 64 00:08:01.393 Number of I/O Completion Queues: 64 00:08:01.393 00:08:01.393 ZNS Specific Controller Data 00:08:01.393 ============================ 00:08:01.393 Zone Append Size Limit: 0 00:08:01.393 00:08:01.393 00:08:01.393 Active Namespaces 00:08:01.393 ================= 00:08:01.393 Namespace ID:1 00:08:01.393 Error Recovery Timeout: Unlimited 00:08:01.393 Command Set Identifier: NVM (00h) 00:08:01.393 Deallocate: Supported 00:08:01.393 Deallocated/Unwritten Error: 
Supported 00:08:01.393 Deallocated Read Value: All 0x00 00:08:01.393 Deallocate in Write Zeroes: Not Supported 00:08:01.393 Deallocated Guard Field: 0xFFFF 00:08:01.393 Flush: Supported 00:08:01.393 Reservation: Not Supported 00:08:01.393 Namespace Sharing Capabilities: Private 00:08:01.393 Size (in LBAs): 1310720 (5GiB) 00:08:01.393 Capacity (in LBAs): 1310720 (5GiB) 00:08:01.393 Utilization (in LBAs): 1310720 (5GiB) 00:08:01.393 Thin Provisioning: Not Supported 00:08:01.393 Per-NS Atomic Units: No 00:08:01.393 Maximum Single Source Range Length: 128 00:08:01.393 Maximum Copy Length: 128 00:08:01.393 Maximum Source Range Count: 128 00:08:01.393 NGUID/EUI64 Never Reused: No 00:08:01.393 Namespace Write Protected: No 00:08:01.393 Number of LBA Formats: 8 00:08:01.393 Current LBA Format: LBA Format #04 00:08:01.393 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:01.393 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:01.393 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:01.393 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:01.393 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:01.393 LBA Format #05: Data Size: 4096 Metadata Size: 8 [2024-12-06 04:55:39.451339] nvme_ctrlr.c:3628:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:12.0] process 74740 terminated unexpected 00:08:01.393 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:01.393 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:01.393 00:08:01.393 NVM Specific Namespace Data 00:08:01.393 =========================== 00:08:01.393 Logical Block Storage Tag Mask: 0 00:08:01.393 Protection Information Capabilities: 00:08:01.393 16b Guard Protection Information Storage Tag Support: No 00:08:01.393 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:01.393 Storage Tag Check Read Support: No 00:08:01.393 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:01.393 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:01.393 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:01.393 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:01.393 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:01.393 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:01.393 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:01.393 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:01.393 ===================================================== 00:08:01.393 NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:01.393 ===================================================== 00:08:01.393 Controller Capabilities/Features 00:08:01.393 ================================ 00:08:01.393 Vendor ID: 1b36 00:08:01.393 Subsystem Vendor ID: 1af4 00:08:01.393 Serial Number: 12342 00:08:01.393 Model Number: QEMU NVMe Ctrl 00:08:01.393 Firmware Version: 8.0.0 00:08:01.393 Recommended Arb Burst: 6 00:08:01.393 IEEE OUI Identifier: 00 54 52 00:08:01.393 Multi-path I/O 00:08:01.393 May have multiple subsystem ports: No 00:08:01.393 May have multiple controllers: No 00:08:01.393 Associated with SR-IOV VF: No 00:08:01.393 Max Data Transfer Size: 524288 00:08:01.393 Max Number of Namespaces: 256 00:08:01.393 Max Number of I/O Queues: 64 00:08:01.393 NVMe
Specification Version (VS): 1.4 00:08:01.393 NVMe Specification Version (Identify): 1.4 00:08:01.393 Maximum Queue Entries: 2048 00:08:01.393 Contiguous Queues Required: Yes 00:08:01.393 Arbitration Mechanisms Supported 00:08:01.393 Weighted Round Robin: Not Supported 00:08:01.393 Vendor Specific: Not Supported 00:08:01.393 Reset Timeout: 7500 ms 00:08:01.393 Doorbell Stride: 4 bytes 00:08:01.393 NVM Subsystem Reset: Not Supported 00:08:01.393 Command Sets Supported 00:08:01.393 NVM Command Set: Supported 00:08:01.393 Boot Partition: Not Supported 00:08:01.393 Memory Page Size Minimum: 4096 bytes 00:08:01.393 Memory Page Size Maximum: 65536 bytes 00:08:01.393 Persistent Memory Region: Not Supported 00:08:01.393 Optional Asynchronous Events Supported 00:08:01.393 Namespace Attribute Notices: Supported 00:08:01.393 Firmware Activation Notices: Not Supported 00:08:01.393 ANA Change Notices: Not Supported 00:08:01.393 PLE Aggregate Log Change Notices: Not Supported 00:08:01.393 LBA Status Info Alert Notices: Not Supported 00:08:01.393 EGE Aggregate Log Change Notices: Not Supported 00:08:01.393 Normal NVM Subsystem Shutdown event: Not Supported 00:08:01.393 Zone Descriptor Change Notices: Not Supported 00:08:01.393 Discovery Log Change Notices: Not Supported 00:08:01.393 Controller Attributes 00:08:01.393 128-bit Host Identifier: Not Supported 00:08:01.393 Non-Operational Permissive Mode: Not Supported 00:08:01.393 NVM Sets: Not Supported 00:08:01.393 Read Recovery Levels: Not Supported 00:08:01.393 Endurance Groups: Not Supported 00:08:01.393 Predictable Latency Mode: Not Supported 00:08:01.393 Traffic Based Keep ALive: Not Supported 00:08:01.393 Namespace Granularity: Not Supported 00:08:01.393 SQ Associations: Not Supported 00:08:01.393 UUID List: Not Supported 00:08:01.393 Multi-Domain Subsystem: Not Supported 00:08:01.393 Fixed Capacity Management: Not Supported 00:08:01.393 Variable Capacity Management: Not Supported 00:08:01.393 Delete Endurance Group: Not Supported 00:08:01.393 Delete NVM Set: Not Supported 00:08:01.393 Extended LBA Formats Supported: Supported 00:08:01.393 Flexible Data Placement Supported: Not Supported 00:08:01.393 00:08:01.393 Controller Memory Buffer Support 00:08:01.393 ================================ 00:08:01.393 Supported: No 00:08:01.393 00:08:01.393 Persistent Memory Region Support 00:08:01.393 ================================ 00:08:01.393 Supported: No 00:08:01.393 00:08:01.393 Admin Command Set Attributes 00:08:01.393 ============================ 00:08:01.393 Security Send/Receive: Not Supported 00:08:01.393 Format NVM: Supported 00:08:01.393 Firmware Activate/Download: Not Supported 00:08:01.393 Namespace Management: Supported 00:08:01.393 Device Self-Test: Not Supported 00:08:01.393 Directives: Supported 00:08:01.393 NVMe-MI: Not Supported 00:08:01.393 Virtualization Management: Not Supported 00:08:01.393 Doorbell Buffer Config: Supported 00:08:01.393 Get LBA Status Capability: Not Supported 00:08:01.394 Command & Feature Lockdown Capability: Not Supported 00:08:01.394 Abort Command Limit: 4 00:08:01.394 Async Event Request Limit: 4 00:08:01.394 Number of Firmware Slots: N/A 00:08:01.394 Firmware Slot 1 Read-Only: N/A 00:08:01.394 Firmware Activation Without Reset: N/A 00:08:01.394 Multiple Update Detection Support: N/A 00:08:01.394 Firmware Update Granularity: No Information Provided 00:08:01.394 Per-Namespace SMART Log: Yes 00:08:01.394 Asymmetric Namespace Access Log Page: Not Supported 00:08:01.394 Subsystem NQN: nqn.2019-08.org.qemu:12342 
00:08:01.394 Command Effects Log Page: Supported 00:08:01.394 Get Log Page Extended Data: Supported 00:08:01.394 Telemetry Log Pages: Not Supported 00:08:01.394 Persistent Event Log Pages: Not Supported 00:08:01.394 Supported Log Pages Log Page: May Support 00:08:01.394 Commands Supported & Effects Log Page: Not Supported 00:08:01.394 Feature Identifiers & Effects Log Page:May Support 00:08:01.394 NVMe-MI Commands & Effects Log Page: May Support 00:08:01.394 Data Area 4 for Telemetry Log: Not Supported 00:08:01.394 Error Log Page Entries Supported: 1 00:08:01.394 Keep Alive: Not Supported 00:08:01.394 00:08:01.394 NVM Command Set Attributes 00:08:01.394 ========================== 00:08:01.394 Submission Queue Entry Size 00:08:01.394 Max: 64 00:08:01.394 Min: 64 00:08:01.394 Completion Queue Entry Size 00:08:01.394 Max: 16 00:08:01.394 Min: 16 00:08:01.394 Number of Namespaces: 256 00:08:01.394 Compare Command: Supported 00:08:01.394 Write Uncorrectable Command: Not Supported 00:08:01.394 Dataset Management Command: Supported 00:08:01.394 Write Zeroes Command: Supported 00:08:01.394 Set Features Save Field: Supported 00:08:01.394 Reservations: Not Supported 00:08:01.394 Timestamp: Supported 00:08:01.394 Copy: Supported 00:08:01.394 Volatile Write Cache: Present 00:08:01.394 Atomic Write Unit (Normal): 1 00:08:01.394 Atomic Write Unit (PFail): 1 00:08:01.394 Atomic Compare & Write Unit: 1 00:08:01.394 Fused Compare & Write: Not Supported 00:08:01.394 Scatter-Gather List 00:08:01.394 SGL Command Set: Supported 00:08:01.394 SGL Keyed: Not Supported 00:08:01.394 SGL Bit Bucket Descriptor: Not Supported 00:08:01.394 SGL Metadata Pointer: Not Supported 00:08:01.394 Oversized SGL: Not Supported 00:08:01.394 SGL Metadata Address: Not Supported 00:08:01.394 SGL Offset: Not Supported 00:08:01.394 Transport SGL Data Block: Not Supported 00:08:01.394 Replay Protected Memory Block: Not Supported 00:08:01.394 00:08:01.394 Firmware Slot Information 00:08:01.394 ========================= 00:08:01.394 Active slot: 1 00:08:01.394 Slot 1 Firmware Revision: 1.0 00:08:01.394 00:08:01.394 00:08:01.394 Commands Supported and Effects 00:08:01.394 ============================== 00:08:01.394 Admin Commands 00:08:01.394 -------------- 00:08:01.394 Delete I/O Submission Queue (00h): Supported 00:08:01.394 Create I/O Submission Queue (01h): Supported 00:08:01.394 Get Log Page (02h): Supported 00:08:01.394 Delete I/O Completion Queue (04h): Supported 00:08:01.394 Create I/O Completion Queue (05h): Supported 00:08:01.394 Identify (06h): Supported 00:08:01.394 Abort (08h): Supported 00:08:01.394 Set Features (09h): Supported 00:08:01.394 Get Features (0Ah): Supported 00:08:01.394 Asynchronous Event Request (0Ch): Supported 00:08:01.394 Namespace Attachment (15h): Supported NS-Inventory-Change 00:08:01.394 Directive Send (19h): Supported 00:08:01.394 Directive Receive (1Ah): Supported 00:08:01.394 Virtualization Management (1Ch): Supported 00:08:01.394 Doorbell Buffer Config (7Ch): Supported 00:08:01.394 Format NVM (80h): Supported LBA-Change 00:08:01.394 I/O Commands 00:08:01.394 ------------ 00:08:01.394 Flush (00h): Supported LBA-Change 00:08:01.394 Write (01h): Supported LBA-Change 00:08:01.394 Read (02h): Supported 00:08:01.394 Compare (05h): Supported 00:08:01.394 Write Zeroes (08h): Supported LBA-Change 00:08:01.394 Dataset Management (09h): Supported LBA-Change 00:08:01.394 Unknown (0Ch): Supported 00:08:01.394 Unknown (12h): Supported 00:08:01.394 Copy (19h): Supported LBA-Change 00:08:01.394 Unknown (1Dh): 
Supported LBA-Change 00:08:01.394 00:08:01.394 Error Log 00:08:01.394 ========= 00:08:01.394 00:08:01.394 Arbitration 00:08:01.394 =========== 00:08:01.394 Arbitration Burst: no limit 00:08:01.394 00:08:01.394 Power Management 00:08:01.394 ================ 00:08:01.394 Number of Power States: 1 00:08:01.394 Current Power State: Power State #0 00:08:01.394 Power State #0: 00:08:01.394 Max Power: 25.00 W 00:08:01.394 Non-Operational State: Operational 00:08:01.394 Entry Latency: 16 microseconds 00:08:01.394 Exit Latency: 4 microseconds 00:08:01.394 Relative Read Throughput: 0 00:08:01.394 Relative Read Latency: 0 00:08:01.394 Relative Write Throughput: 0 00:08:01.394 Relative Write Latency: 0 00:08:01.394 Idle Power: Not Reported 00:08:01.394 Active Power: Not Reported 00:08:01.394 Non-Operational Permissive Mode: Not Supported 00:08:01.394 00:08:01.394 Health Information 00:08:01.394 ================== 00:08:01.394 Critical Warnings: 00:08:01.394 Available Spare Space: OK 00:08:01.394 Temperature: OK 00:08:01.394 Device Reliability: OK 00:08:01.394 Read Only: No 00:08:01.394 Volatile Memory Backup: OK 00:08:01.394 Current Temperature: 323 Kelvin (50 Celsius) 00:08:01.394 Temperature Threshold: 343 Kelvin (70 Celsius) 00:08:01.394 Available Spare: 0% 00:08:01.394 Available Spare Threshold: 0% 00:08:01.394 Life Percentage Used: 0% 00:08:01.394 Data Units Read: 2242 00:08:01.394 Data Units Written: 2029 00:08:01.394 Host Read Commands: 107938 00:08:01.394 Host Write Commands: 106207 00:08:01.394 Controller Busy Time: 0 minutes 00:08:01.394 Power Cycles: 0 00:08:01.394 Power On Hours: 0 hours 00:08:01.394 Unsafe Shutdowns: 0 00:08:01.394 Unrecoverable Media Errors: 0 00:08:01.394 Lifetime Error Log Entries: 0 00:08:01.394 Warning Temperature Time: 0 minutes 00:08:01.394 Critical Temperature Time: 0 minutes 00:08:01.394 00:08:01.394 Number of Queues 00:08:01.394 ================ 00:08:01.394 Number of I/O Submission Queues: 64 00:08:01.394 Number of I/O Completion Queues: 64 00:08:01.394 00:08:01.394 ZNS Specific Controller Data 00:08:01.394 ============================ 00:08:01.394 Zone Append Size Limit: 0 00:08:01.394 00:08:01.394 00:08:01.394 Active Namespaces 00:08:01.394 ================= 00:08:01.394 Namespace ID:1 00:08:01.394 Error Recovery Timeout: Unlimited 00:08:01.394 Command Set Identifier: NVM (00h) 00:08:01.394 Deallocate: Supported 00:08:01.394 Deallocated/Unwritten Error: Supported 00:08:01.394 Deallocated Read Value: All 0x00 00:08:01.394 Deallocate in Write Zeroes: Not Supported 00:08:01.394 Deallocated Guard Field: 0xFFFF 00:08:01.394 Flush: Supported 00:08:01.394 Reservation: Not Supported 00:08:01.394 Namespace Sharing Capabilities: Private 00:08:01.394 Size (in LBAs): 1048576 (4GiB) 00:08:01.394 Capacity (in LBAs): 1048576 (4GiB) 00:08:01.394 Utilization (in LBAs): 1048576 (4GiB) 00:08:01.394 Thin Provisioning: Not Supported 00:08:01.394 Per-NS Atomic Units: No 00:08:01.394 Maximum Single Source Range Length: 128 00:08:01.394 Maximum Copy Length: 128 00:08:01.394 Maximum Source Range Count: 128 00:08:01.394 NGUID/EUI64 Never Reused: No 00:08:01.394 Namespace Write Protected: No 00:08:01.394 Number of LBA Formats: 8 00:08:01.394 Current LBA Format: LBA Format #04 00:08:01.394 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:01.394 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:01.394 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:01.394 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:01.394 LBA Format #04: Data Size: 4096 Metadata Size: 0 
00:08:01.394 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:01.394 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:01.394 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:01.394 00:08:01.394 NVM Specific Namespace Data 00:08:01.394 =========================== 00:08:01.394 Logical Block Storage Tag Mask: 0 00:08:01.394 Protection Information Capabilities: 00:08:01.394 16b Guard Protection Information Storage Tag Support: No 00:08:01.394 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:01.394 Storage Tag Check Read Support: No 00:08:01.394 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:01.394 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:01.394 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:01.394 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:01.394 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:01.394 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:01.394 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:01.394 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:01.394 Namespace ID:2 00:08:01.395 Error Recovery Timeout: Unlimited 00:08:01.395 Command Set Identifier: NVM (00h) 00:08:01.395 Deallocate: Supported 00:08:01.395 Deallocated/Unwritten Error: Supported 00:08:01.395 Deallocated Read Value: All 0x00 00:08:01.395 Deallocate in Write Zeroes: Not Supported 00:08:01.395 Deallocated Guard Field: 0xFFFF 00:08:01.395 Flush: Supported 00:08:01.395 Reservation: Not Supported 00:08:01.395 Namespace Sharing Capabilities: Private 00:08:01.395 Size (in LBAs): 1048576 (4GiB) 00:08:01.395 Capacity (in LBAs): 1048576 (4GiB) 00:08:01.395 Utilization (in LBAs): 1048576 (4GiB) 00:08:01.395 Thin Provisioning: Not Supported 00:08:01.395 Per-NS Atomic Units: No 00:08:01.395 Maximum Single Source Range Length: 128 00:08:01.395 Maximum Copy Length: 128 00:08:01.395 Maximum Source Range Count: 128 00:08:01.395 NGUID/EUI64 Never Reused: No 00:08:01.395 Namespace Write Protected: No 00:08:01.395 Number of LBA Formats: 8 00:08:01.395 Current LBA Format: LBA Format #04 00:08:01.395 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:01.395 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:01.395 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:01.395 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:01.395 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:01.395 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:01.395 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:01.395 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:01.395 00:08:01.395 NVM Specific Namespace Data 00:08:01.395 =========================== 00:08:01.395 Logical Block Storage Tag Mask: 0 00:08:01.395 Protection Information Capabilities: 00:08:01.395 16b Guard Protection Information Storage Tag Support: No 00:08:01.395 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:01.395 Storage Tag Check Read Support: No 00:08:01.395 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:01.395 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:01.395 
Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:01.395 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:01.395 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:01.395 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:01.395 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:01.395 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:01.395 Namespace ID:3 00:08:01.395 Error Recovery Timeout: Unlimited 00:08:01.395 Command Set Identifier: NVM (00h) 00:08:01.395 Deallocate: Supported 00:08:01.395 Deallocated/Unwritten Error: Supported 00:08:01.395 Deallocated Read Value: All 0x00 00:08:01.395 Deallocate in Write Zeroes: Not Supported 00:08:01.395 Deallocated Guard Field: 0xFFFF 00:08:01.395 Flush: Supported 00:08:01.395 Reservation: Not Supported 00:08:01.395 Namespace Sharing Capabilities: Private 00:08:01.395 Size (in LBAs): 1048576 (4GiB) 00:08:01.395 Capacity (in LBAs): 1048576 (4GiB) 00:08:01.395 Utilization (in LBAs): 1048576 (4GiB) 00:08:01.395 Thin Provisioning: Not Supported 00:08:01.395 Per-NS Atomic Units: No 00:08:01.395 Maximum Single Source Range Length: 128 00:08:01.395 Maximum Copy Length: 128 00:08:01.395 Maximum Source Range Count: 128 00:08:01.395 NGUID/EUI64 Never Reused: No 00:08:01.395 Namespace Write Protected: No 00:08:01.395 Number of LBA Formats: 8 00:08:01.395 Current LBA Format: LBA Format #04 00:08:01.395 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:01.395 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:01.395 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:01.395 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:01.395 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:01.395 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:01.395 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:01.395 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:01.395 00:08:01.395 NVM Specific Namespace Data 00:08:01.395 =========================== 00:08:01.395 Logical Block Storage Tag Mask: 0 00:08:01.395 Protection Information Capabilities: 00:08:01.395 16b Guard Protection Information Storage Tag Support: No 00:08:01.395 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:01.395 Storage Tag Check Read Support: No 00:08:01.395 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:01.395 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:01.395 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:01.395 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:01.395 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:01.395 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:01.395 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:01.395 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:01.395 04:55:39 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:08:01.395 04:55:39 nvme.nvme_identify -- nvme/nvme.sh@16 -- # 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' -i 0 00:08:01.658 ===================================================== 00:08:01.658 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:01.658 ===================================================== 00:08:01.658 Controller Capabilities/Features 00:08:01.658 ================================ 00:08:01.658 Vendor ID: 1b36 00:08:01.658 Subsystem Vendor ID: 1af4 00:08:01.658 Serial Number: 12340 00:08:01.658 Model Number: QEMU NVMe Ctrl 00:08:01.658 Firmware Version: 8.0.0 00:08:01.658 Recommended Arb Burst: 6 00:08:01.658 IEEE OUI Identifier: 00 54 52 00:08:01.658 Multi-path I/O 00:08:01.658 May have multiple subsystem ports: No 00:08:01.658 May have multiple controllers: No 00:08:01.658 Associated with SR-IOV VF: No 00:08:01.658 Max Data Transfer Size: 524288 00:08:01.658 Max Number of Namespaces: 256 00:08:01.658 Max Number of I/O Queues: 64 00:08:01.658 NVMe Specification Version (VS): 1.4 00:08:01.658 NVMe Specification Version (Identify): 1.4 00:08:01.658 Maximum Queue Entries: 2048 00:08:01.658 Contiguous Queues Required: Yes 00:08:01.658 Arbitration Mechanisms Supported 00:08:01.658 Weighted Round Robin: Not Supported 00:08:01.658 Vendor Specific: Not Supported 00:08:01.658 Reset Timeout: 7500 ms 00:08:01.658 Doorbell Stride: 4 bytes 00:08:01.658 NVM Subsystem Reset: Not Supported 00:08:01.658 Command Sets Supported 00:08:01.658 NVM Command Set: Supported 00:08:01.658 Boot Partition: Not Supported 00:08:01.658 Memory Page Size Minimum: 4096 bytes 00:08:01.658 Memory Page Size Maximum: 65536 bytes 00:08:01.658 Persistent Memory Region: Not Supported 00:08:01.658 Optional Asynchronous Events Supported 00:08:01.658 Namespace Attribute Notices: Supported 00:08:01.658 Firmware Activation Notices: Not Supported 00:08:01.658 ANA Change Notices: Not Supported 00:08:01.658 PLE Aggregate Log Change Notices: Not Supported 00:08:01.658 LBA Status Info Alert Notices: Not Supported 00:08:01.658 EGE Aggregate Log Change Notices: Not Supported 00:08:01.658 Normal NVM Subsystem Shutdown event: Not Supported 00:08:01.658 Zone Descriptor Change Notices: Not Supported 00:08:01.658 Discovery Log Change Notices: Not Supported 00:08:01.658 Controller Attributes 00:08:01.658 128-bit Host Identifier: Not Supported 00:08:01.658 Non-Operational Permissive Mode: Not Supported 00:08:01.658 NVM Sets: Not Supported 00:08:01.658 Read Recovery Levels: Not Supported 00:08:01.658 Endurance Groups: Not Supported 00:08:01.658 Predictable Latency Mode: Not Supported 00:08:01.658 Traffic Based Keep ALive: Not Supported 00:08:01.658 Namespace Granularity: Not Supported 00:08:01.658 SQ Associations: Not Supported 00:08:01.658 UUID List: Not Supported 00:08:01.658 Multi-Domain Subsystem: Not Supported 00:08:01.658 Fixed Capacity Management: Not Supported 00:08:01.658 Variable Capacity Management: Not Supported 00:08:01.658 Delete Endurance Group: Not Supported 00:08:01.658 Delete NVM Set: Not Supported 00:08:01.658 Extended LBA Formats Supported: Supported 00:08:01.658 Flexible Data Placement Supported: Not Supported 00:08:01.658 00:08:01.658 Controller Memory Buffer Support 00:08:01.658 ================================ 00:08:01.658 Supported: No 00:08:01.658 00:08:01.658 Persistent Memory Region Support 00:08:01.658 ================================ 00:08:01.658 Supported: No 00:08:01.658 00:08:01.658 Admin Command Set Attributes 00:08:01.658 ============================ 00:08:01.658 Security Send/Receive: Not Supported 00:08:01.658 
Format NVM: Supported 00:08:01.658 Firmware Activate/Download: Not Supported 00:08:01.658 Namespace Management: Supported 00:08:01.658 Device Self-Test: Not Supported 00:08:01.658 Directives: Supported 00:08:01.658 NVMe-MI: Not Supported 00:08:01.658 Virtualization Management: Not Supported 00:08:01.658 Doorbell Buffer Config: Supported 00:08:01.658 Get LBA Status Capability: Not Supported 00:08:01.658 Command & Feature Lockdown Capability: Not Supported 00:08:01.658 Abort Command Limit: 4 00:08:01.658 Async Event Request Limit: 4 00:08:01.658 Number of Firmware Slots: N/A 00:08:01.658 Firmware Slot 1 Read-Only: N/A 00:08:01.658 Firmware Activation Without Reset: N/A 00:08:01.658 Multiple Update Detection Support: N/A 00:08:01.658 Firmware Update Granularity: No Information Provided 00:08:01.658 Per-Namespace SMART Log: Yes 00:08:01.658 Asymmetric Namespace Access Log Page: Not Supported 00:08:01.658 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:08:01.659 Command Effects Log Page: Supported 00:08:01.659 Get Log Page Extended Data: Supported 00:08:01.659 Telemetry Log Pages: Not Supported 00:08:01.659 Persistent Event Log Pages: Not Supported 00:08:01.659 Supported Log Pages Log Page: May Support 00:08:01.659 Commands Supported & Effects Log Page: Not Supported 00:08:01.659 Feature Identifiers & Effects Log Page:May Support 00:08:01.659 NVMe-MI Commands & Effects Log Page: May Support 00:08:01.659 Data Area 4 for Telemetry Log: Not Supported 00:08:01.659 Error Log Page Entries Supported: 1 00:08:01.659 Keep Alive: Not Supported 00:08:01.659 00:08:01.659 NVM Command Set Attributes 00:08:01.659 ========================== 00:08:01.659 Submission Queue Entry Size 00:08:01.659 Max: 64 00:08:01.659 Min: 64 00:08:01.659 Completion Queue Entry Size 00:08:01.659 Max: 16 00:08:01.659 Min: 16 00:08:01.659 Number of Namespaces: 256 00:08:01.659 Compare Command: Supported 00:08:01.659 Write Uncorrectable Command: Not Supported 00:08:01.659 Dataset Management Command: Supported 00:08:01.659 Write Zeroes Command: Supported 00:08:01.659 Set Features Save Field: Supported 00:08:01.659 Reservations: Not Supported 00:08:01.659 Timestamp: Supported 00:08:01.659 Copy: Supported 00:08:01.659 Volatile Write Cache: Present 00:08:01.659 Atomic Write Unit (Normal): 1 00:08:01.659 Atomic Write Unit (PFail): 1 00:08:01.659 Atomic Compare & Write Unit: 1 00:08:01.659 Fused Compare & Write: Not Supported 00:08:01.659 Scatter-Gather List 00:08:01.659 SGL Command Set: Supported 00:08:01.659 SGL Keyed: Not Supported 00:08:01.659 SGL Bit Bucket Descriptor: Not Supported 00:08:01.659 SGL Metadata Pointer: Not Supported 00:08:01.659 Oversized SGL: Not Supported 00:08:01.659 SGL Metadata Address: Not Supported 00:08:01.659 SGL Offset: Not Supported 00:08:01.659 Transport SGL Data Block: Not Supported 00:08:01.659 Replay Protected Memory Block: Not Supported 00:08:01.659 00:08:01.659 Firmware Slot Information 00:08:01.659 ========================= 00:08:01.659 Active slot: 1 00:08:01.659 Slot 1 Firmware Revision: 1.0 00:08:01.659 00:08:01.659 00:08:01.659 Commands Supported and Effects 00:08:01.659 ============================== 00:08:01.659 Admin Commands 00:08:01.659 -------------- 00:08:01.659 Delete I/O Submission Queue (00h): Supported 00:08:01.659 Create I/O Submission Queue (01h): Supported 00:08:01.659 Get Log Page (02h): Supported 00:08:01.659 Delete I/O Completion Queue (04h): Supported 00:08:01.659 Create I/O Completion Queue (05h): Supported 00:08:01.659 Identify (06h): Supported 00:08:01.659 Abort (08h): Supported 
00:08:01.659 Set Features (09h): Supported 00:08:01.659 Get Features (0Ah): Supported 00:08:01.659 Asynchronous Event Request (0Ch): Supported 00:08:01.659 Namespace Attachment (15h): Supported NS-Inventory-Change 00:08:01.659 Directive Send (19h): Supported 00:08:01.659 Directive Receive (1Ah): Supported 00:08:01.659 Virtualization Management (1Ch): Supported 00:08:01.659 Doorbell Buffer Config (7Ch): Supported 00:08:01.659 Format NVM (80h): Supported LBA-Change 00:08:01.659 I/O Commands 00:08:01.659 ------------ 00:08:01.659 Flush (00h): Supported LBA-Change 00:08:01.659 Write (01h): Supported LBA-Change 00:08:01.659 Read (02h): Supported 00:08:01.659 Compare (05h): Supported 00:08:01.659 Write Zeroes (08h): Supported LBA-Change 00:08:01.659 Dataset Management (09h): Supported LBA-Change 00:08:01.659 Unknown (0Ch): Supported 00:08:01.659 Unknown (12h): Supported 00:08:01.659 Copy (19h): Supported LBA-Change 00:08:01.659 Unknown (1Dh): Supported LBA-Change 00:08:01.659 00:08:01.659 Error Log 00:08:01.659 ========= 00:08:01.659 00:08:01.659 Arbitration 00:08:01.659 =========== 00:08:01.659 Arbitration Burst: no limit 00:08:01.659 00:08:01.659 Power Management 00:08:01.659 ================ 00:08:01.659 Number of Power States: 1 00:08:01.659 Current Power State: Power State #0 00:08:01.659 Power State #0: 00:08:01.659 Max Power: 25.00 W 00:08:01.659 Non-Operational State: Operational 00:08:01.659 Entry Latency: 16 microseconds 00:08:01.659 Exit Latency: 4 microseconds 00:08:01.659 Relative Read Throughput: 0 00:08:01.659 Relative Read Latency: 0 00:08:01.659 Relative Write Throughput: 0 00:08:01.659 Relative Write Latency: 0 00:08:01.659 Idle Power: Not Reported 00:08:01.659 Active Power: Not Reported 00:08:01.659 Non-Operational Permissive Mode: Not Supported 00:08:01.659 00:08:01.659 Health Information 00:08:01.659 ================== 00:08:01.659 Critical Warnings: 00:08:01.659 Available Spare Space: OK 00:08:01.659 Temperature: OK 00:08:01.659 Device Reliability: OK 00:08:01.659 Read Only: No 00:08:01.659 Volatile Memory Backup: OK 00:08:01.659 Current Temperature: 323 Kelvin (50 Celsius) 00:08:01.659 Temperature Threshold: 343 Kelvin (70 Celsius) 00:08:01.659 Available Spare: 0% 00:08:01.659 Available Spare Threshold: 0% 00:08:01.659 Life Percentage Used: 0% 00:08:01.659 Data Units Read: 666 00:08:01.659 Data Units Written: 594 00:08:01.659 Host Read Commands: 35103 00:08:01.659 Host Write Commands: 34889 00:08:01.659 Controller Busy Time: 0 minutes 00:08:01.659 Power Cycles: 0 00:08:01.659 Power On Hours: 0 hours 00:08:01.659 Unsafe Shutdowns: 0 00:08:01.659 Unrecoverable Media Errors: 0 00:08:01.659 Lifetime Error Log Entries: 0 00:08:01.659 Warning Temperature Time: 0 minutes 00:08:01.659 Critical Temperature Time: 0 minutes 00:08:01.659 00:08:01.659 Number of Queues 00:08:01.659 ================ 00:08:01.659 Number of I/O Submission Queues: 64 00:08:01.659 Number of I/O Completion Queues: 64 00:08:01.659 00:08:01.659 ZNS Specific Controller Data 00:08:01.659 ============================ 00:08:01.659 Zone Append Size Limit: 0 00:08:01.659 00:08:01.659 00:08:01.659 Active Namespaces 00:08:01.659 ================= 00:08:01.659 Namespace ID:1 00:08:01.659 Error Recovery Timeout: Unlimited 00:08:01.659 Command Set Identifier: NVM (00h) 00:08:01.659 Deallocate: Supported 00:08:01.659 Deallocated/Unwritten Error: Supported 00:08:01.659 Deallocated Read Value: All 0x00 00:08:01.659 Deallocate in Write Zeroes: Not Supported 00:08:01.659 Deallocated Guard Field: 0xFFFF 00:08:01.659 Flush: 
Supported 00:08:01.659 Reservation: Not Supported 00:08:01.659 Metadata Transferred as: Separate Metadata Buffer 00:08:01.659 Namespace Sharing Capabilities: Private 00:08:01.659 Size (in LBAs): 1548666 (5GiB) 00:08:01.659 Capacity (in LBAs): 1548666 (5GiB) 00:08:01.659 Utilization (in LBAs): 1548666 (5GiB) 00:08:01.659 Thin Provisioning: Not Supported 00:08:01.659 Per-NS Atomic Units: No 00:08:01.659 Maximum Single Source Range Length: 128 00:08:01.659 Maximum Copy Length: 128 00:08:01.659 Maximum Source Range Count: 128 00:08:01.659 NGUID/EUI64 Never Reused: No 00:08:01.659 Namespace Write Protected: No 00:08:01.659 Number of LBA Formats: 8 00:08:01.659 Current LBA Format: LBA Format #07 00:08:01.659 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:01.659 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:01.659 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:01.659 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:01.659 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:01.659 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:01.659 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:01.659 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:01.659 00:08:01.659 NVM Specific Namespace Data 00:08:01.659 =========================== 00:08:01.659 Logical Block Storage Tag Mask: 0 00:08:01.659 Protection Information Capabilities: 00:08:01.659 16b Guard Protection Information Storage Tag Support: No 00:08:01.659 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:01.659 Storage Tag Check Read Support: No 00:08:01.659 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:01.659 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:01.659 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:01.659 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:01.659 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:01.659 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:01.659 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:01.659 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:01.659 04:55:39 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:08:01.659 04:55:39 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' -i 0 00:08:01.659 ===================================================== 00:08:01.659 NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:01.659 ===================================================== 00:08:01.660 Controller Capabilities/Features 00:08:01.660 ================================ 00:08:01.660 Vendor ID: 1b36 00:08:01.660 Subsystem Vendor ID: 1af4 00:08:01.660 Serial Number: 12341 00:08:01.660 Model Number: QEMU NVMe Ctrl 00:08:01.660 Firmware Version: 8.0.0 00:08:01.660 Recommended Arb Burst: 6 00:08:01.660 IEEE OUI Identifier: 00 54 52 00:08:01.660 Multi-path I/O 00:08:01.660 May have multiple subsystem ports: No 00:08:01.660 May have multiple controllers: No 00:08:01.660 Associated with SR-IOV VF: No 00:08:01.660 Max Data Transfer Size: 524288 00:08:01.660 Max Number of Namespaces: 256 00:08:01.660 Max Number of I/O Queues: 64 00:08:01.660 NVMe 
Specification Version (VS): 1.4 00:08:01.660 NVMe Specification Version (Identify): 1.4 00:08:01.660 Maximum Queue Entries: 2048 00:08:01.660 Contiguous Queues Required: Yes 00:08:01.660 Arbitration Mechanisms Supported 00:08:01.660 Weighted Round Robin: Not Supported 00:08:01.660 Vendor Specific: Not Supported 00:08:01.660 Reset Timeout: 7500 ms 00:08:01.660 Doorbell Stride: 4 bytes 00:08:01.660 NVM Subsystem Reset: Not Supported 00:08:01.660 Command Sets Supported 00:08:01.660 NVM Command Set: Supported 00:08:01.660 Boot Partition: Not Supported 00:08:01.660 Memory Page Size Minimum: 4096 bytes 00:08:01.660 Memory Page Size Maximum: 65536 bytes 00:08:01.660 Persistent Memory Region: Not Supported 00:08:01.660 Optional Asynchronous Events Supported 00:08:01.660 Namespace Attribute Notices: Supported 00:08:01.660 Firmware Activation Notices: Not Supported 00:08:01.660 ANA Change Notices: Not Supported 00:08:01.660 PLE Aggregate Log Change Notices: Not Supported 00:08:01.660 LBA Status Info Alert Notices: Not Supported 00:08:01.660 EGE Aggregate Log Change Notices: Not Supported 00:08:01.660 Normal NVM Subsystem Shutdown event: Not Supported 00:08:01.660 Zone Descriptor Change Notices: Not Supported 00:08:01.660 Discovery Log Change Notices: Not Supported 00:08:01.660 Controller Attributes 00:08:01.660 128-bit Host Identifier: Not Supported 00:08:01.660 Non-Operational Permissive Mode: Not Supported 00:08:01.660 NVM Sets: Not Supported 00:08:01.660 Read Recovery Levels: Not Supported 00:08:01.660 Endurance Groups: Not Supported 00:08:01.660 Predictable Latency Mode: Not Supported 00:08:01.660 Traffic Based Keep ALive: Not Supported 00:08:01.660 Namespace Granularity: Not Supported 00:08:01.660 SQ Associations: Not Supported 00:08:01.660 UUID List: Not Supported 00:08:01.660 Multi-Domain Subsystem: Not Supported 00:08:01.660 Fixed Capacity Management: Not Supported 00:08:01.660 Variable Capacity Management: Not Supported 00:08:01.660 Delete Endurance Group: Not Supported 00:08:01.660 Delete NVM Set: Not Supported 00:08:01.660 Extended LBA Formats Supported: Supported 00:08:01.660 Flexible Data Placement Supported: Not Supported 00:08:01.660 00:08:01.660 Controller Memory Buffer Support 00:08:01.660 ================================ 00:08:01.660 Supported: No 00:08:01.660 00:08:01.660 Persistent Memory Region Support 00:08:01.660 ================================ 00:08:01.660 Supported: No 00:08:01.660 00:08:01.660 Admin Command Set Attributes 00:08:01.660 ============================ 00:08:01.660 Security Send/Receive: Not Supported 00:08:01.660 Format NVM: Supported 00:08:01.660 Firmware Activate/Download: Not Supported 00:08:01.660 Namespace Management: Supported 00:08:01.660 Device Self-Test: Not Supported 00:08:01.660 Directives: Supported 00:08:01.660 NVMe-MI: Not Supported 00:08:01.660 Virtualization Management: Not Supported 00:08:01.660 Doorbell Buffer Config: Supported 00:08:01.660 Get LBA Status Capability: Not Supported 00:08:01.660 Command & Feature Lockdown Capability: Not Supported 00:08:01.660 Abort Command Limit: 4 00:08:01.660 Async Event Request Limit: 4 00:08:01.660 Number of Firmware Slots: N/A 00:08:01.660 Firmware Slot 1 Read-Only: N/A 00:08:01.660 Firmware Activation Without Reset: N/A 00:08:01.660 Multiple Update Detection Support: N/A 00:08:01.660 Firmware Update Granularity: No Information Provided 00:08:01.660 Per-Namespace SMART Log: Yes 00:08:01.660 Asymmetric Namespace Access Log Page: Not Supported 00:08:01.660 Subsystem NQN: nqn.2019-08.org.qemu:12341 
00:08:01.660 Command Effects Log Page: Supported 00:08:01.660 Get Log Page Extended Data: Supported 00:08:01.660 Telemetry Log Pages: Not Supported 00:08:01.660 Persistent Event Log Pages: Not Supported 00:08:01.660 Supported Log Pages Log Page: May Support 00:08:01.660 Commands Supported & Effects Log Page: Not Supported 00:08:01.660 Feature Identifiers & Effects Log Page:May Support 00:08:01.660 NVMe-MI Commands & Effects Log Page: May Support 00:08:01.660 Data Area 4 for Telemetry Log: Not Supported 00:08:01.660 Error Log Page Entries Supported: 1 00:08:01.660 Keep Alive: Not Supported 00:08:01.660 00:08:01.660 NVM Command Set Attributes 00:08:01.660 ========================== 00:08:01.660 Submission Queue Entry Size 00:08:01.660 Max: 64 00:08:01.660 Min: 64 00:08:01.660 Completion Queue Entry Size 00:08:01.660 Max: 16 00:08:01.660 Min: 16 00:08:01.660 Number of Namespaces: 256 00:08:01.660 Compare Command: Supported 00:08:01.660 Write Uncorrectable Command: Not Supported 00:08:01.660 Dataset Management Command: Supported 00:08:01.660 Write Zeroes Command: Supported 00:08:01.660 Set Features Save Field: Supported 00:08:01.660 Reservations: Not Supported 00:08:01.660 Timestamp: Supported 00:08:01.660 Copy: Supported 00:08:01.660 Volatile Write Cache: Present 00:08:01.660 Atomic Write Unit (Normal): 1 00:08:01.660 Atomic Write Unit (PFail): 1 00:08:01.660 Atomic Compare & Write Unit: 1 00:08:01.660 Fused Compare & Write: Not Supported 00:08:01.660 Scatter-Gather List 00:08:01.660 SGL Command Set: Supported 00:08:01.660 SGL Keyed: Not Supported 00:08:01.660 SGL Bit Bucket Descriptor: Not Supported 00:08:01.660 SGL Metadata Pointer: Not Supported 00:08:01.660 Oversized SGL: Not Supported 00:08:01.660 SGL Metadata Address: Not Supported 00:08:01.660 SGL Offset: Not Supported 00:08:01.660 Transport SGL Data Block: Not Supported 00:08:01.660 Replay Protected Memory Block: Not Supported 00:08:01.660 00:08:01.660 Firmware Slot Information 00:08:01.660 ========================= 00:08:01.660 Active slot: 1 00:08:01.660 Slot 1 Firmware Revision: 1.0 00:08:01.660 00:08:01.660 00:08:01.660 Commands Supported and Effects 00:08:01.660 ============================== 00:08:01.660 Admin Commands 00:08:01.660 -------------- 00:08:01.660 Delete I/O Submission Queue (00h): Supported 00:08:01.660 Create I/O Submission Queue (01h): Supported 00:08:01.660 Get Log Page (02h): Supported 00:08:01.660 Delete I/O Completion Queue (04h): Supported 00:08:01.660 Create I/O Completion Queue (05h): Supported 00:08:01.660 Identify (06h): Supported 00:08:01.660 Abort (08h): Supported 00:08:01.660 Set Features (09h): Supported 00:08:01.660 Get Features (0Ah): Supported 00:08:01.660 Asynchronous Event Request (0Ch): Supported 00:08:01.660 Namespace Attachment (15h): Supported NS-Inventory-Change 00:08:01.660 Directive Send (19h): Supported 00:08:01.660 Directive Receive (1Ah): Supported 00:08:01.660 Virtualization Management (1Ch): Supported 00:08:01.660 Doorbell Buffer Config (7Ch): Supported 00:08:01.660 Format NVM (80h): Supported LBA-Change 00:08:01.660 I/O Commands 00:08:01.660 ------------ 00:08:01.660 Flush (00h): Supported LBA-Change 00:08:01.660 Write (01h): Supported LBA-Change 00:08:01.660 Read (02h): Supported 00:08:01.660 Compare (05h): Supported 00:08:01.660 Write Zeroes (08h): Supported LBA-Change 00:08:01.660 Dataset Management (09h): Supported LBA-Change 00:08:01.660 Unknown (0Ch): Supported 00:08:01.660 Unknown (12h): Supported 00:08:01.660 Copy (19h): Supported LBA-Change 00:08:01.660 Unknown (1Dh): 
Supported LBA-Change 00:08:01.660 00:08:01.660 Error Log 00:08:01.660 ========= 00:08:01.660 00:08:01.660 Arbitration 00:08:01.660 =========== 00:08:01.660 Arbitration Burst: no limit 00:08:01.660 00:08:01.660 Power Management 00:08:01.660 ================ 00:08:01.660 Number of Power States: 1 00:08:01.660 Current Power State: Power State #0 00:08:01.660 Power State #0: 00:08:01.660 Max Power: 25.00 W 00:08:01.660 Non-Operational State: Operational 00:08:01.660 Entry Latency: 16 microseconds 00:08:01.660 Exit Latency: 4 microseconds 00:08:01.660 Relative Read Throughput: 0 00:08:01.660 Relative Read Latency: 0 00:08:01.660 Relative Write Throughput: 0 00:08:01.660 Relative Write Latency: 0 00:08:01.660 Idle Power: Not Reported 00:08:01.660 Active Power: Not Reported 00:08:01.660 Non-Operational Permissive Mode: Not Supported 00:08:01.660 00:08:01.660 Health Information 00:08:01.660 ================== 00:08:01.660 Critical Warnings: 00:08:01.660 Available Spare Space: OK 00:08:01.661 Temperature: OK 00:08:01.661 Device Reliability: OK 00:08:01.661 Read Only: No 00:08:01.661 Volatile Memory Backup: OK 00:08:01.661 Current Temperature: 323 Kelvin (50 Celsius) 00:08:01.661 Temperature Threshold: 343 Kelvin (70 Celsius) 00:08:01.661 Available Spare: 0% 00:08:01.661 Available Spare Threshold: 0% 00:08:01.661 Life Percentage Used: 0% 00:08:01.661 Data Units Read: 1011 00:08:01.661 Data Units Written: 884 00:08:01.661 Host Read Commands: 51710 00:08:01.661 Host Write Commands: 50596 00:08:01.661 Controller Busy Time: 0 minutes 00:08:01.661 Power Cycles: 0 00:08:01.661 Power On Hours: 0 hours 00:08:01.661 Unsafe Shutdowns: 0 00:08:01.661 Unrecoverable Media Errors: 0 00:08:01.661 Lifetime Error Log Entries: 0 00:08:01.661 Warning Temperature Time: 0 minutes 00:08:01.661 Critical Temperature Time: 0 minutes 00:08:01.661 00:08:01.661 Number of Queues 00:08:01.661 ================ 00:08:01.661 Number of I/O Submission Queues: 64 00:08:01.661 Number of I/O Completion Queues: 64 00:08:01.661 00:08:01.661 ZNS Specific Controller Data 00:08:01.661 ============================ 00:08:01.661 Zone Append Size Limit: 0 00:08:01.661 00:08:01.661 00:08:01.661 Active Namespaces 00:08:01.661 ================= 00:08:01.661 Namespace ID:1 00:08:01.661 Error Recovery Timeout: Unlimited 00:08:01.661 Command Set Identifier: NVM (00h) 00:08:01.661 Deallocate: Supported 00:08:01.661 Deallocated/Unwritten Error: Supported 00:08:01.661 Deallocated Read Value: All 0x00 00:08:01.661 Deallocate in Write Zeroes: Not Supported 00:08:01.661 Deallocated Guard Field: 0xFFFF 00:08:01.661 Flush: Supported 00:08:01.661 Reservation: Not Supported 00:08:01.661 Namespace Sharing Capabilities: Private 00:08:01.661 Size (in LBAs): 1310720 (5GiB) 00:08:01.661 Capacity (in LBAs): 1310720 (5GiB) 00:08:01.661 Utilization (in LBAs): 1310720 (5GiB) 00:08:01.661 Thin Provisioning: Not Supported 00:08:01.661 Per-NS Atomic Units: No 00:08:01.661 Maximum Single Source Range Length: 128 00:08:01.661 Maximum Copy Length: 128 00:08:01.661 Maximum Source Range Count: 128 00:08:01.661 NGUID/EUI64 Never Reused: No 00:08:01.661 Namespace Write Protected: No 00:08:01.661 Number of LBA Formats: 8 00:08:01.661 Current LBA Format: LBA Format #04 00:08:01.661 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:01.661 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:01.661 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:01.661 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:01.661 LBA Format #04: Data Size: 4096 Metadata Size: 0 
00:08:01.661 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:01.661 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:01.661 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:01.661 00:08:01.661 NVM Specific Namespace Data 00:08:01.661 =========================== 00:08:01.661 Logical Block Storage Tag Mask: 0 00:08:01.661 Protection Information Capabilities: 00:08:01.661 16b Guard Protection Information Storage Tag Support: No 00:08:01.661 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:01.661 Storage Tag Check Read Support: No 00:08:01.661 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:01.661 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:01.661 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:01.661 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:01.661 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:01.661 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:01.661 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:01.661 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:01.661 04:55:39 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:08:01.661 04:55:39 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' -i 0 00:08:01.922 ===================================================== 00:08:01.922 NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:01.922 ===================================================== 00:08:01.922 Controller Capabilities/Features 00:08:01.922 ================================ 00:08:01.922 Vendor ID: 1b36 00:08:01.922 Subsystem Vendor ID: 1af4 00:08:01.922 Serial Number: 12342 00:08:01.922 Model Number: QEMU NVMe Ctrl 00:08:01.922 Firmware Version: 8.0.0 00:08:01.922 Recommended Arb Burst: 6 00:08:01.922 IEEE OUI Identifier: 00 54 52 00:08:01.922 Multi-path I/O 00:08:01.922 May have multiple subsystem ports: No 00:08:01.922 May have multiple controllers: No 00:08:01.922 Associated with SR-IOV VF: No 00:08:01.922 Max Data Transfer Size: 524288 00:08:01.922 Max Number of Namespaces: 256 00:08:01.922 Max Number of I/O Queues: 64 00:08:01.922 NVMe Specification Version (VS): 1.4 00:08:01.922 NVMe Specification Version (Identify): 1.4 00:08:01.922 Maximum Queue Entries: 2048 00:08:01.922 Contiguous Queues Required: Yes 00:08:01.922 Arbitration Mechanisms Supported 00:08:01.922 Weighted Round Robin: Not Supported 00:08:01.922 Vendor Specific: Not Supported 00:08:01.922 Reset Timeout: 7500 ms 00:08:01.922 Doorbell Stride: 4 bytes 00:08:01.922 NVM Subsystem Reset: Not Supported 00:08:01.922 Command Sets Supported 00:08:01.922 NVM Command Set: Supported 00:08:01.922 Boot Partition: Not Supported 00:08:01.922 Memory Page Size Minimum: 4096 bytes 00:08:01.922 Memory Page Size Maximum: 65536 bytes 00:08:01.922 Persistent Memory Region: Not Supported 00:08:01.922 Optional Asynchronous Events Supported 00:08:01.922 Namespace Attribute Notices: Supported 00:08:01.922 Firmware Activation Notices: Not Supported 00:08:01.922 ANA Change Notices: Not Supported 00:08:01.922 PLE Aggregate Log Change Notices: Not Supported 00:08:01.922 LBA Status Info Alert Notices: 
Not Supported 00:08:01.922 EGE Aggregate Log Change Notices: Not Supported 00:08:01.922 Normal NVM Subsystem Shutdown event: Not Supported 00:08:01.922 Zone Descriptor Change Notices: Not Supported 00:08:01.922 Discovery Log Change Notices: Not Supported 00:08:01.922 Controller Attributes 00:08:01.922 128-bit Host Identifier: Not Supported 00:08:01.922 Non-Operational Permissive Mode: Not Supported 00:08:01.922 NVM Sets: Not Supported 00:08:01.922 Read Recovery Levels: Not Supported 00:08:01.922 Endurance Groups: Not Supported 00:08:01.922 Predictable Latency Mode: Not Supported 00:08:01.922 Traffic Based Keep ALive: Not Supported 00:08:01.922 Namespace Granularity: Not Supported 00:08:01.922 SQ Associations: Not Supported 00:08:01.922 UUID List: Not Supported 00:08:01.922 Multi-Domain Subsystem: Not Supported 00:08:01.922 Fixed Capacity Management: Not Supported 00:08:01.922 Variable Capacity Management: Not Supported 00:08:01.922 Delete Endurance Group: Not Supported 00:08:01.922 Delete NVM Set: Not Supported 00:08:01.922 Extended LBA Formats Supported: Supported 00:08:01.922 Flexible Data Placement Supported: Not Supported 00:08:01.922 00:08:01.922 Controller Memory Buffer Support 00:08:01.922 ================================ 00:08:01.922 Supported: No 00:08:01.922 00:08:01.922 Persistent Memory Region Support 00:08:01.922 ================================ 00:08:01.922 Supported: No 00:08:01.922 00:08:01.922 Admin Command Set Attributes 00:08:01.922 ============================ 00:08:01.922 Security Send/Receive: Not Supported 00:08:01.922 Format NVM: Supported 00:08:01.922 Firmware Activate/Download: Not Supported 00:08:01.922 Namespace Management: Supported 00:08:01.922 Device Self-Test: Not Supported 00:08:01.922 Directives: Supported 00:08:01.922 NVMe-MI: Not Supported 00:08:01.922 Virtualization Management: Not Supported 00:08:01.922 Doorbell Buffer Config: Supported 00:08:01.922 Get LBA Status Capability: Not Supported 00:08:01.922 Command & Feature Lockdown Capability: Not Supported 00:08:01.922 Abort Command Limit: 4 00:08:01.922 Async Event Request Limit: 4 00:08:01.922 Number of Firmware Slots: N/A 00:08:01.922 Firmware Slot 1 Read-Only: N/A 00:08:01.922 Firmware Activation Without Reset: N/A 00:08:01.922 Multiple Update Detection Support: N/A 00:08:01.922 Firmware Update Granularity: No Information Provided 00:08:01.922 Per-Namespace SMART Log: Yes 00:08:01.922 Asymmetric Namespace Access Log Page: Not Supported 00:08:01.922 Subsystem NQN: nqn.2019-08.org.qemu:12342 00:08:01.922 Command Effects Log Page: Supported 00:08:01.922 Get Log Page Extended Data: Supported 00:08:01.922 Telemetry Log Pages: Not Supported 00:08:01.922 Persistent Event Log Pages: Not Supported 00:08:01.922 Supported Log Pages Log Page: May Support 00:08:01.922 Commands Supported & Effects Log Page: Not Supported 00:08:01.922 Feature Identifiers & Effects Log Page:May Support 00:08:01.922 NVMe-MI Commands & Effects Log Page: May Support 00:08:01.922 Data Area 4 for Telemetry Log: Not Supported 00:08:01.922 Error Log Page Entries Supported: 1 00:08:01.922 Keep Alive: Not Supported 00:08:01.922 00:08:01.922 NVM Command Set Attributes 00:08:01.922 ========================== 00:08:01.922 Submission Queue Entry Size 00:08:01.922 Max: 64 00:08:01.922 Min: 64 00:08:01.922 Completion Queue Entry Size 00:08:01.922 Max: 16 00:08:01.922 Min: 16 00:08:01.922 Number of Namespaces: 256 00:08:01.922 Compare Command: Supported 00:08:01.922 Write Uncorrectable Command: Not Supported 00:08:01.922 Dataset Management Command: 
Supported 00:08:01.922 Write Zeroes Command: Supported 00:08:01.922 Set Features Save Field: Supported 00:08:01.922 Reservations: Not Supported 00:08:01.922 Timestamp: Supported 00:08:01.923 Copy: Supported 00:08:01.923 Volatile Write Cache: Present 00:08:01.923 Atomic Write Unit (Normal): 1 00:08:01.923 Atomic Write Unit (PFail): 1 00:08:01.923 Atomic Compare & Write Unit: 1 00:08:01.923 Fused Compare & Write: Not Supported 00:08:01.923 Scatter-Gather List 00:08:01.923 SGL Command Set: Supported 00:08:01.923 SGL Keyed: Not Supported 00:08:01.923 SGL Bit Bucket Descriptor: Not Supported 00:08:01.923 SGL Metadata Pointer: Not Supported 00:08:01.923 Oversized SGL: Not Supported 00:08:01.923 SGL Metadata Address: Not Supported 00:08:01.923 SGL Offset: Not Supported 00:08:01.923 Transport SGL Data Block: Not Supported 00:08:01.923 Replay Protected Memory Block: Not Supported 00:08:01.923 00:08:01.923 Firmware Slot Information 00:08:01.923 ========================= 00:08:01.923 Active slot: 1 00:08:01.923 Slot 1 Firmware Revision: 1.0 00:08:01.923 00:08:01.923 00:08:01.923 Commands Supported and Effects 00:08:01.923 ============================== 00:08:01.923 Admin Commands 00:08:01.923 -------------- 00:08:01.923 Delete I/O Submission Queue (00h): Supported 00:08:01.923 Create I/O Submission Queue (01h): Supported 00:08:01.923 Get Log Page (02h): Supported 00:08:01.923 Delete I/O Completion Queue (04h): Supported 00:08:01.923 Create I/O Completion Queue (05h): Supported 00:08:01.923 Identify (06h): Supported 00:08:01.923 Abort (08h): Supported 00:08:01.923 Set Features (09h): Supported 00:08:01.923 Get Features (0Ah): Supported 00:08:01.923 Asynchronous Event Request (0Ch): Supported 00:08:01.923 Namespace Attachment (15h): Supported NS-Inventory-Change 00:08:01.923 Directive Send (19h): Supported 00:08:01.923 Directive Receive (1Ah): Supported 00:08:01.923 Virtualization Management (1Ch): Supported 00:08:01.923 Doorbell Buffer Config (7Ch): Supported 00:08:01.923 Format NVM (80h): Supported LBA-Change 00:08:01.923 I/O Commands 00:08:01.923 ------------ 00:08:01.923 Flush (00h): Supported LBA-Change 00:08:01.923 Write (01h): Supported LBA-Change 00:08:01.923 Read (02h): Supported 00:08:01.923 Compare (05h): Supported 00:08:01.923 Write Zeroes (08h): Supported LBA-Change 00:08:01.923 Dataset Management (09h): Supported LBA-Change 00:08:01.923 Unknown (0Ch): Supported 00:08:01.923 Unknown (12h): Supported 00:08:01.923 Copy (19h): Supported LBA-Change 00:08:01.923 Unknown (1Dh): Supported LBA-Change 00:08:01.923 00:08:01.923 Error Log 00:08:01.923 ========= 00:08:01.923 00:08:01.923 Arbitration 00:08:01.923 =========== 00:08:01.923 Arbitration Burst: no limit 00:08:01.923 00:08:01.923 Power Management 00:08:01.923 ================ 00:08:01.923 Number of Power States: 1 00:08:01.923 Current Power State: Power State #0 00:08:01.923 Power State #0: 00:08:01.923 Max Power: 25.00 W 00:08:01.923 Non-Operational State: Operational 00:08:01.923 Entry Latency: 16 microseconds 00:08:01.923 Exit Latency: 4 microseconds 00:08:01.923 Relative Read Throughput: 0 00:08:01.923 Relative Read Latency: 0 00:08:01.923 Relative Write Throughput: 0 00:08:01.923 Relative Write Latency: 0 00:08:01.923 Idle Power: Not Reported 00:08:01.923 Active Power: Not Reported 00:08:01.923 Non-Operational Permissive Mode: Not Supported 00:08:01.923 00:08:01.923 Health Information 00:08:01.923 ================== 00:08:01.923 Critical Warnings: 00:08:01.923 Available Spare Space: OK 00:08:01.923 Temperature: OK 00:08:01.923 Device 
Reliability: OK 00:08:01.923 Read Only: No 00:08:01.923 Volatile Memory Backup: OK 00:08:01.923 Current Temperature: 323 Kelvin (50 Celsius) 00:08:01.923 Temperature Threshold: 343 Kelvin (70 Celsius) 00:08:01.923 Available Spare: 0% 00:08:01.923 Available Spare Threshold: 0% 00:08:01.923 Life Percentage Used: 0% 00:08:01.923 Data Units Read: 2242 00:08:01.923 Data Units Written: 2029 00:08:01.923 Host Read Commands: 107938 00:08:01.923 Host Write Commands: 106207 00:08:01.923 Controller Busy Time: 0 minutes 00:08:01.923 Power Cycles: 0 00:08:01.923 Power On Hours: 0 hours 00:08:01.923 Unsafe Shutdowns: 0 00:08:01.923 Unrecoverable Media Errors: 0 00:08:01.923 Lifetime Error Log Entries: 0 00:08:01.923 Warning Temperature Time: 0 minutes 00:08:01.923 Critical Temperature Time: 0 minutes 00:08:01.923 00:08:01.923 Number of Queues 00:08:01.923 ================ 00:08:01.923 Number of I/O Submission Queues: 64 00:08:01.923 Number of I/O Completion Queues: 64 00:08:01.923 00:08:01.923 ZNS Specific Controller Data 00:08:01.923 ============================ 00:08:01.923 Zone Append Size Limit: 0 00:08:01.923 00:08:01.923 00:08:01.923 Active Namespaces 00:08:01.923 ================= 00:08:01.923 Namespace ID:1 00:08:01.923 Error Recovery Timeout: Unlimited 00:08:01.923 Command Set Identifier: NVM (00h) 00:08:01.923 Deallocate: Supported 00:08:01.923 Deallocated/Unwritten Error: Supported 00:08:01.923 Deallocated Read Value: All 0x00 00:08:01.923 Deallocate in Write Zeroes: Not Supported 00:08:01.923 Deallocated Guard Field: 0xFFFF 00:08:01.923 Flush: Supported 00:08:01.923 Reservation: Not Supported 00:08:01.923 Namespace Sharing Capabilities: Private 00:08:01.923 Size (in LBAs): 1048576 (4GiB) 00:08:01.923 Capacity (in LBAs): 1048576 (4GiB) 00:08:01.923 Utilization (in LBAs): 1048576 (4GiB) 00:08:01.923 Thin Provisioning: Not Supported 00:08:01.923 Per-NS Atomic Units: No 00:08:01.923 Maximum Single Source Range Length: 128 00:08:01.923 Maximum Copy Length: 128 00:08:01.923 Maximum Source Range Count: 128 00:08:01.923 NGUID/EUI64 Never Reused: No 00:08:01.923 Namespace Write Protected: No 00:08:01.923 Number of LBA Formats: 8 00:08:01.923 Current LBA Format: LBA Format #04 00:08:01.923 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:01.923 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:01.923 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:01.923 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:01.923 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:01.923 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:01.923 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:01.923 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:01.923 00:08:01.923 NVM Specific Namespace Data 00:08:01.923 =========================== 00:08:01.923 Logical Block Storage Tag Mask: 0 00:08:01.923 Protection Information Capabilities: 00:08:01.923 16b Guard Protection Information Storage Tag Support: No 00:08:01.923 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:01.923 Storage Tag Check Read Support: No 00:08:01.923 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:01.923 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:01.923 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:01.923 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:01.923 Extended LBA Format #04: 
Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:01.923 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:01.923 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:01.923 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:01.923 Namespace ID:2 00:08:01.923 Error Recovery Timeout: Unlimited 00:08:01.923 Command Set Identifier: NVM (00h) 00:08:01.923 Deallocate: Supported 00:08:01.923 Deallocated/Unwritten Error: Supported 00:08:01.923 Deallocated Read Value: All 0x00 00:08:01.923 Deallocate in Write Zeroes: Not Supported 00:08:01.923 Deallocated Guard Field: 0xFFFF 00:08:01.923 Flush: Supported 00:08:01.923 Reservation: Not Supported 00:08:01.923 Namespace Sharing Capabilities: Private 00:08:01.923 Size (in LBAs): 1048576 (4GiB) 00:08:01.923 Capacity (in LBAs): 1048576 (4GiB) 00:08:01.923 Utilization (in LBAs): 1048576 (4GiB) 00:08:01.923 Thin Provisioning: Not Supported 00:08:01.923 Per-NS Atomic Units: No 00:08:01.923 Maximum Single Source Range Length: 128 00:08:01.923 Maximum Copy Length: 128 00:08:01.923 Maximum Source Range Count: 128 00:08:01.923 NGUID/EUI64 Never Reused: No 00:08:01.923 Namespace Write Protected: No 00:08:01.923 Number of LBA Formats: 8 00:08:01.923 Current LBA Format: LBA Format #04 00:08:01.923 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:01.923 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:01.923 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:01.923 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:01.923 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:01.923 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:01.923 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:01.923 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:01.923 00:08:01.923 NVM Specific Namespace Data 00:08:01.923 =========================== 00:08:01.923 Logical Block Storage Tag Mask: 0 00:08:01.923 Protection Information Capabilities: 00:08:01.924 16b Guard Protection Information Storage Tag Support: No 00:08:01.924 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:01.924 Storage Tag Check Read Support: No 00:08:01.924 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:01.924 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:01.924 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:01.924 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:01.924 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:01.924 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:01.924 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:01.924 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:01.924 Namespace ID:3 00:08:01.924 Error Recovery Timeout: Unlimited 00:08:01.924 Command Set Identifier: NVM (00h) 00:08:01.924 Deallocate: Supported 00:08:01.924 Deallocated/Unwritten Error: Supported 00:08:01.924 Deallocated Read Value: All 0x00 00:08:01.924 Deallocate in Write Zeroes: Not Supported 00:08:01.924 Deallocated Guard Field: 0xFFFF 00:08:01.924 Flush: Supported 00:08:01.924 Reservation: Not Supported 00:08:01.924 
Namespace Sharing Capabilities: Private 00:08:01.924 Size (in LBAs): 1048576 (4GiB) 00:08:01.924 Capacity (in LBAs): 1048576 (4GiB) 00:08:01.924 Utilization (in LBAs): 1048576 (4GiB) 00:08:01.924 Thin Provisioning: Not Supported 00:08:01.924 Per-NS Atomic Units: No 00:08:01.924 Maximum Single Source Range Length: 128 00:08:01.924 Maximum Copy Length: 128 00:08:01.924 Maximum Source Range Count: 128 00:08:01.924 NGUID/EUI64 Never Reused: No 00:08:01.924 Namespace Write Protected: No 00:08:01.924 Number of LBA Formats: 8 00:08:01.924 Current LBA Format: LBA Format #04 00:08:01.924 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:01.924 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:01.924 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:01.924 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:01.924 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:01.924 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:01.924 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:01.924 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:01.924 00:08:01.924 NVM Specific Namespace Data 00:08:01.924 =========================== 00:08:01.924 Logical Block Storage Tag Mask: 0 00:08:01.924 Protection Information Capabilities: 00:08:01.924 16b Guard Protection Information Storage Tag Support: No 00:08:01.924 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:01.924 Storage Tag Check Read Support: No 00:08:01.924 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:01.924 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:01.924 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:01.924 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:01.924 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:01.924 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:01.924 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:01.924 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:01.924 04:55:39 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:08:01.924 04:55:39 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' -i 0 00:08:01.924 ===================================================== 00:08:01.924 NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:01.924 ===================================================== 00:08:01.924 Controller Capabilities/Features 00:08:01.924 ================================ 00:08:01.924 Vendor ID: 1b36 00:08:01.924 Subsystem Vendor ID: 1af4 00:08:01.924 Serial Number: 12343 00:08:01.924 Model Number: QEMU NVMe Ctrl 00:08:01.924 Firmware Version: 8.0.0 00:08:01.924 Recommended Arb Burst: 6 00:08:01.924 IEEE OUI Identifier: 00 54 52 00:08:01.924 Multi-path I/O 00:08:01.924 May have multiple subsystem ports: No 00:08:01.924 May have multiple controllers: Yes 00:08:01.924 Associated with SR-IOV VF: No 00:08:01.924 Max Data Transfer Size: 524288 00:08:01.924 Max Number of Namespaces: 256 00:08:01.924 Max Number of I/O Queues: 64 00:08:01.924 NVMe Specification Version (VS): 1.4 00:08:01.924 NVMe Specification Version (Identify): 1.4 00:08:01.924 Maximum Queue Entries: 2048 
00:08:01.924 Contiguous Queues Required: Yes 00:08:01.924 Arbitration Mechanisms Supported 00:08:01.924 Weighted Round Robin: Not Supported 00:08:01.924 Vendor Specific: Not Supported 00:08:01.924 Reset Timeout: 7500 ms 00:08:01.924 Doorbell Stride: 4 bytes 00:08:01.924 NVM Subsystem Reset: Not Supported 00:08:01.924 Command Sets Supported 00:08:01.924 NVM Command Set: Supported 00:08:01.924 Boot Partition: Not Supported 00:08:01.924 Memory Page Size Minimum: 4096 bytes 00:08:01.924 Memory Page Size Maximum: 65536 bytes 00:08:01.924 Persistent Memory Region: Not Supported 00:08:01.924 Optional Asynchronous Events Supported 00:08:01.924 Namespace Attribute Notices: Supported 00:08:01.924 Firmware Activation Notices: Not Supported 00:08:01.924 ANA Change Notices: Not Supported 00:08:01.924 PLE Aggregate Log Change Notices: Not Supported 00:08:01.924 LBA Status Info Alert Notices: Not Supported 00:08:01.924 EGE Aggregate Log Change Notices: Not Supported 00:08:01.924 Normal NVM Subsystem Shutdown event: Not Supported 00:08:01.924 Zone Descriptor Change Notices: Not Supported 00:08:01.924 Discovery Log Change Notices: Not Supported 00:08:01.924 Controller Attributes 00:08:01.924 128-bit Host Identifier: Not Supported 00:08:01.924 Non-Operational Permissive Mode: Not Supported 00:08:01.924 NVM Sets: Not Supported 00:08:01.924 Read Recovery Levels: Not Supported 00:08:01.924 Endurance Groups: Supported 00:08:01.924 Predictable Latency Mode: Not Supported 00:08:01.924 Traffic Based Keep Alive: Not Supported 00:08:01.924 Namespace Granularity: Not Supported 00:08:01.924 SQ Associations: Not Supported 00:08:01.924 UUID List: Not Supported 00:08:01.924 Multi-Domain Subsystem: Not Supported 00:08:01.924 Fixed Capacity Management: Not Supported 00:08:01.924 Variable Capacity Management: Not Supported 00:08:01.924 Delete Endurance Group: Not Supported 00:08:01.924 Delete NVM Set: Not Supported 00:08:01.924 Extended LBA Formats Supported: Supported 00:08:01.924 Flexible Data Placement Supported: Supported 00:08:01.924 00:08:01.924 Controller Memory Buffer Support 00:08:01.924 ================================ 00:08:01.924 Supported: No 00:08:01.924 00:08:01.924 Persistent Memory Region Support 00:08:01.924 ================================ 00:08:01.924 Supported: No 00:08:01.924 00:08:01.924 Admin Command Set Attributes 00:08:01.924 ============================ 00:08:01.924 Security Send/Receive: Not Supported 00:08:01.924 Format NVM: Supported 00:08:01.924 Firmware Activate/Download: Not Supported 00:08:01.924 Namespace Management: Supported 00:08:01.924 Device Self-Test: Not Supported 00:08:01.924 Directives: Supported 00:08:01.924 NVMe-MI: Not Supported 00:08:01.924 Virtualization Management: Not Supported 00:08:01.924 Doorbell Buffer Config: Supported 00:08:01.924 Get LBA Status Capability: Not Supported 00:08:01.924 Command & Feature Lockdown Capability: Not Supported 00:08:01.924 Abort Command Limit: 4 00:08:01.924 Async Event Request Limit: 4 00:08:01.924 Number of Firmware Slots: N/A 00:08:01.924 Firmware Slot 1 Read-Only: N/A 00:08:01.924 Firmware Activation Without Reset: N/A 00:08:01.924 Multiple Update Detection Support: N/A 00:08:01.924 Firmware Update Granularity: No Information Provided 00:08:01.924 Per-Namespace SMART Log: Yes 00:08:01.924 Asymmetric Namespace Access Log Page: Not Supported 00:08:01.924 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:08:01.924 Command Effects Log Page: Supported 00:08:01.924 Get Log Page Extended Data: Supported 00:08:01.924 Telemetry Log Pages: Not
Supported 00:08:01.924 Persistent Event Log Pages: Not Supported 00:08:01.924 Supported Log Pages Log Page: May Support 00:08:01.924 Commands Supported & Effects Log Page: Not Supported 00:08:01.924 Feature Identifiers & Effects Log Page: May Support 00:08:01.924 NVMe-MI Commands & Effects Log Page: May Support 00:08:01.924 Data Area 4 for Telemetry Log: Not Supported 00:08:01.924 Error Log Page Entries Supported: 1 00:08:01.924 Keep Alive: Not Supported 00:08:01.924 00:08:01.924 NVM Command Set Attributes 00:08:01.924 ========================== 00:08:01.924 Submission Queue Entry Size 00:08:01.924 Max: 64 00:08:01.924 Min: 64 00:08:01.924 Completion Queue Entry Size 00:08:01.924 Max: 16 00:08:01.924 Min: 16 00:08:01.924 Number of Namespaces: 256 00:08:01.924 Compare Command: Supported 00:08:01.924 Write Uncorrectable Command: Not Supported 00:08:01.924 Dataset Management Command: Supported 00:08:01.924 Write Zeroes Command: Supported 00:08:01.924 Set Features Save Field: Supported 00:08:01.925 Reservations: Not Supported 00:08:01.925 Timestamp: Supported 00:08:01.925 Copy: Supported 00:08:01.925 Volatile Write Cache: Present 00:08:01.925 Atomic Write Unit (Normal): 1 00:08:01.925 Atomic Write Unit (PFail): 1 00:08:01.925 Atomic Compare & Write Unit: 1 00:08:01.925 Fused Compare & Write: Not Supported 00:08:01.925 Scatter-Gather List 00:08:01.925 SGL Command Set: Supported 00:08:01.925 SGL Keyed: Not Supported 00:08:01.925 SGL Bit Bucket Descriptor: Not Supported 00:08:01.925 SGL Metadata Pointer: Not Supported 00:08:01.925 Oversized SGL: Not Supported 00:08:01.925 SGL Metadata Address: Not Supported 00:08:01.925 SGL Offset: Not Supported 00:08:01.925 Transport SGL Data Block: Not Supported 00:08:01.925 Replay Protected Memory Block: Not Supported 00:08:01.925 00:08:01.925 Firmware Slot Information 00:08:01.925 ========================= 00:08:01.925 Active slot: 1 00:08:01.925 Slot 1 Firmware Revision: 1.0 00:08:01.925 00:08:01.925 00:08:01.925 Commands Supported and Effects 00:08:01.925 ============================== 00:08:01.925 Admin Commands 00:08:01.925 -------------- 00:08:01.925 Delete I/O Submission Queue (00h): Supported 00:08:01.925 Create I/O Submission Queue (01h): Supported 00:08:01.925 Get Log Page (02h): Supported 00:08:01.925 Delete I/O Completion Queue (04h): Supported 00:08:01.925 Create I/O Completion Queue (05h): Supported 00:08:01.925 Identify (06h): Supported 00:08:01.925 Abort (08h): Supported 00:08:01.925 Set Features (09h): Supported 00:08:01.925 Get Features (0Ah): Supported 00:08:01.925 Asynchronous Event Request (0Ch): Supported 00:08:01.925 Namespace Attachment (15h): Supported NS-Inventory-Change 00:08:01.925 Directive Send (19h): Supported 00:08:01.925 Directive Receive (1Ah): Supported 00:08:01.925 Virtualization Management (1Ch): Supported 00:08:01.925 Doorbell Buffer Config (7Ch): Supported 00:08:01.925 Format NVM (80h): Supported LBA-Change 00:08:01.925 I/O Commands 00:08:01.925 ------------ 00:08:01.925 Flush (00h): Supported LBA-Change 00:08:01.925 Write (01h): Supported LBA-Change 00:08:01.925 Read (02h): Supported 00:08:01.925 Compare (05h): Supported 00:08:01.925 Write Zeroes (08h): Supported LBA-Change 00:08:01.925 Dataset Management (09h): Supported LBA-Change 00:08:01.925 Unknown (0Ch): Supported 00:08:01.925 Unknown (12h): Supported 00:08:01.925 Copy (19h): Supported LBA-Change 00:08:01.925 Unknown (1Dh): Supported LBA-Change 00:08:01.925 00:08:01.925 Error Log 00:08:01.925 ========= 00:08:01.925 00:08:01.925 Arbitration 00:08:01.925 ===========
00:08:01.925 Arbitration Burst: no limit 00:08:01.925 00:08:01.925 Power Management 00:08:01.925 ================ 00:08:01.925 Number of Power States: 1 00:08:01.925 Current Power State: Power State #0 00:08:01.925 Power State #0: 00:08:01.925 Max Power: 25.00 W 00:08:01.925 Non-Operational State: Operational 00:08:01.925 Entry Latency: 16 microseconds 00:08:01.925 Exit Latency: 4 microseconds 00:08:01.925 Relative Read Throughput: 0 00:08:01.925 Relative Read Latency: 0 00:08:01.925 Relative Write Throughput: 0 00:08:01.925 Relative Write Latency: 0 00:08:01.925 Idle Power: Not Reported 00:08:01.925 Active Power: Not Reported 00:08:01.925 Non-Operational Permissive Mode: Not Supported 00:08:01.925 00:08:01.925 Health Information 00:08:01.925 ================== 00:08:01.925 Critical Warnings: 00:08:01.925 Available Spare Space: OK 00:08:01.925 Temperature: OK 00:08:01.925 Device Reliability: OK 00:08:01.925 Read Only: No 00:08:01.925 Volatile Memory Backup: OK 00:08:01.925 Current Temperature: 323 Kelvin (50 Celsius) 00:08:01.925 Temperature Threshold: 343 Kelvin (70 Celsius) 00:08:01.925 Available Spare: 0% 00:08:01.925 Available Spare Threshold: 0% 00:08:01.925 Life Percentage Used: 0% 00:08:01.925 Data Units Read: 801 00:08:01.925 Data Units Written: 730 00:08:01.925 Host Read Commands: 36456 00:08:01.925 Host Write Commands: 35880 00:08:01.925 Controller Busy Time: 0 minutes 00:08:01.925 Power Cycles: 0 00:08:01.925 Power On Hours: 0 hours 00:08:01.925 Unsafe Shutdowns: 0 00:08:01.925 Unrecoverable Media Errors: 0 00:08:01.925 Lifetime Error Log Entries: 0 00:08:01.925 Warning Temperature Time: 0 minutes 00:08:01.925 Critical Temperature Time: 0 minutes 00:08:01.925 00:08:01.925 Number of Queues 00:08:01.925 ================ 00:08:01.925 Number of I/O Submission Queues: 64 00:08:01.925 Number of I/O Completion Queues: 64 00:08:01.925 00:08:01.925 ZNS Specific Controller Data 00:08:01.925 ============================ 00:08:01.925 Zone Append Size Limit: 0 00:08:01.925 00:08:01.925 00:08:01.925 Active Namespaces 00:08:01.925 ================= 00:08:01.925 Namespace ID:1 00:08:01.925 Error Recovery Timeout: Unlimited 00:08:01.925 Command Set Identifier: NVM (00h) 00:08:01.925 Deallocate: Supported 00:08:01.925 Deallocated/Unwritten Error: Supported 00:08:01.925 Deallocated Read Value: All 0x00 00:08:01.925 Deallocate in Write Zeroes: Not Supported 00:08:01.925 Deallocated Guard Field: 0xFFFF 00:08:01.925 Flush: Supported 00:08:01.925 Reservation: Not Supported 00:08:01.925 Namespace Sharing Capabilities: Multiple Controllers 00:08:01.925 Size (in LBAs): 262144 (1GiB) 00:08:01.925 Capacity (in LBAs): 262144 (1GiB) 00:08:01.925 Utilization (in LBAs): 262144 (1GiB) 00:08:01.925 Thin Provisioning: Not Supported 00:08:01.925 Per-NS Atomic Units: No 00:08:01.925 Maximum Single Source Range Length: 128 00:08:01.925 Maximum Copy Length: 128 00:08:01.925 Maximum Source Range Count: 128 00:08:01.925 NGUID/EUI64 Never Reused: No 00:08:01.925 Namespace Write Protected: No 00:08:01.925 Endurance group ID: 1 00:08:01.925 Number of LBA Formats: 8 00:08:01.925 Current LBA Format: LBA Format #04 00:08:01.925 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:01.925 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:01.925 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:01.925 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:01.925 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:01.925 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:01.925 LBA Format #06: Data Size: 4096 
Metadata Size: 16 00:08:01.925 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:01.925 00:08:01.925 Get Feature FDP: 00:08:01.925 ================ 00:08:01.925 Enabled: Yes 00:08:01.925 FDP configuration index: 0 00:08:01.925 00:08:01.925 FDP configurations log page 00:08:01.925 =========================== 00:08:01.925 Number of FDP configurations: 1 00:08:01.925 Version: 0 00:08:01.925 Size: 112 00:08:01.925 FDP Configuration Descriptor: 0 00:08:01.925 Descriptor Size: 96 00:08:01.925 Reclaim Group Identifier format: 2 00:08:01.925 FDP Volatile Write Cache: Not Present 00:08:01.925 FDP Configuration: Valid 00:08:01.925 Vendor Specific Size: 0 00:08:01.925 Number of Reclaim Groups: 2 00:08:01.925 Number of Reclaim Unit Handles: 8 00:08:01.925 Max Placement Identifiers: 128 00:08:01.925 Number of Namespaces Supported: 256 00:08:01.925 Reclaim Unit Nominal Size: 6000000 bytes 00:08:01.925 Estimated Reclaim Unit Time Limit: Not Reported 00:08:01.925 RUH Desc #000: RUH Type: Initially Isolated 00:08:01.925 RUH Desc #001: RUH Type: Initially Isolated 00:08:01.925 RUH Desc #002: RUH Type: Initially Isolated 00:08:01.925 RUH Desc #003: RUH Type: Initially Isolated 00:08:01.925 RUH Desc #004: RUH Type: Initially Isolated 00:08:01.925 RUH Desc #005: RUH Type: Initially Isolated 00:08:01.925 RUH Desc #006: RUH Type: Initially Isolated 00:08:01.925 RUH Desc #007: RUH Type: Initially Isolated 00:08:01.925 00:08:01.925 FDP reclaim unit handle usage log page 00:08:02.186 ====================================== 00:08:02.186 Number of Reclaim Unit Handles: 8 00:08:02.186 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:08:02.186 RUH Usage Desc #001: RUH Attributes: Unused 00:08:02.186 RUH Usage Desc #002: RUH Attributes: Unused 00:08:02.186 RUH Usage Desc #003: RUH Attributes: Unused 00:08:02.186 RUH Usage Desc #004: RUH Attributes: Unused 00:08:02.186 RUH Usage Desc #005: RUH Attributes: Unused 00:08:02.186 RUH Usage Desc #006: RUH Attributes: Unused 00:08:02.186 RUH Usage Desc #007: RUH Attributes: Unused 00:08:02.186 00:08:02.186 FDP statistics log page 00:08:02.186 ======================= 00:08:02.186 Host bytes with metadata written: 478978048 00:08:02.186 Media bytes with metadata written: 479031296 00:08:02.186 Media bytes erased: 0 00:08:02.186 00:08:02.186 FDP events log page 00:08:02.186 =================== 00:08:02.186 Number of FDP events: 0 00:08:02.186 00:08:02.186 NVM Specific Namespace Data 00:08:02.186 =========================== 00:08:02.186 Logical Block Storage Tag Mask: 0 00:08:02.186 Protection Information Capabilities: 00:08:02.186 16b Guard Protection Information Storage Tag Support: No 00:08:02.186 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:02.186 Storage Tag Check Read Support: No 00:08:02.186 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:02.186 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:02.186 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:02.186 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:02.186 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:02.186 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:02.186 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:02.186 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:02.186
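A note on the capacity figures in the identify dumps above: spdk_nvme_identify reports Size, Capacity, and Utilization in LBAs, and the GiB value in parentheses follows from multiplying the LBA count by the data size of the current LBA format (Format #04, 4096 bytes here; metadata bytes are not counted). A minimal sketch of that arithmetic, in Python for illustration; the helper name is ours, not part of SPDK:

```python
# Illustrative only: reproduces the "(4GiB)" / "(1GiB)" figures printed by
# spdk_nvme_identify above. capacity_gib is a hypothetical helper, not SPDK API.

def capacity_gib(num_lbas: int, lba_data_size: int) -> float:
    """Raw capacity in GiB: LBA count x data size per LBA (metadata excluded)."""
    return num_lbas * lba_data_size / 2**30

# Namespaces 1-3 of nqn.2019-08.org.qemu:12342: 1048576 LBAs at 4096 bytes
# (current LBA Format #04) -> 4.0 GiB, matching "Size (in LBAs): 1048576 (4GiB)".
assert capacity_gib(1_048_576, 4096) == 4.0

# Namespace 1 of the FDP subsystem (fdp-subsys3): 262144 LBAs -> 1.0 GiB.
assert capacity_gib(262_144, 4096) == 1.0
```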
00:08:02.186 real 0m0.917s 00:08:02.186 user 0m0.311s 00:08:02.186 sys 0m0.407s 00:08:02.186 04:55:40 nvme.nvme_identify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:02.186 ************************************ 00:08:02.186 END TEST nvme_identify 00:08:02.186 ************************************ 00:08:02.186 04:55:40 nvme.nvme_identify -- common/autotest_common.sh@10 -- # set +x 00:08:02.186 04:55:40 nvme -- nvme/nvme.sh@86 -- # run_test nvme_perf nvme_perf 00:08:02.186 04:55:40 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:02.186 04:55:40 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:02.186 04:55:40 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:02.186 ************************************ 00:08:02.186 START TEST nvme_perf 00:08:02.186 ************************************ 00:08:02.186 04:55:40 nvme.nvme_perf -- common/autotest_common.sh@1125 -- # nvme_perf 00:08:02.186 04:55:40 nvme.nvme_perf -- nvme/nvme.sh@22 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w read -o 12288 -t 1 -LL -i 0 -N 00:08:03.573 Initializing NVMe Controllers 00:08:03.573 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:03.573 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:03.573 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:03.573 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:03.573 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:08:03.573 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:08:03.573 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:08:03.573 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:08:03.573 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:08:03.573 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:08:03.573 Initialization complete. Launching workers.
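The spdk_nvme_perf run above was launched with -q 128 -w read -o 12288 -t 1, i.e. one second of queue-depth-128 reads of 12288 bytes (12 KiB) each, so in the summary table that follows the MiB/s column is simply IOPS scaled by the I/O size. A minimal sketch of that conversion, in Python for illustration; the names are ours, not part of SPDK:

```python
# Illustrative only: relates the IOPS and MiB/s columns of the
# spdk_nvme_perf summary below. mib_per_s is a hypothetical helper.

IO_SIZE_BYTES = 12288  # from "spdk_nvme_perf ... -o 12288"

def mib_per_s(iops: float, io_size: int = IO_SIZE_BYTES) -> float:
    """Throughput in MiB/s for fixed-size I/O: IOPS x bytes per I/O / 2^20."""
    return iops * io_size / 2**20

# One namespace, e.g. PCIE (0000:00:13.0) NSID 1:
print(f"{mib_per_s(9724.41):.2f}")   # 113.96 -> matches the MiB/s column

# All six namespaces combined:
print(f"{mib_per_s(58346.43):.2f}")  # 683.75 -> matches the "Total" row
```

The "Summary latency data" blocks below are cumulative percentiles (e.g. the 50.00000% row is the median completion latency for that namespace), and the per-device "Latency histogram" tables report, for each microsecond range, the cumulative percentage and I/O count up to that range.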
00:08:03.573 ======================================================== 00:08:03.573 Latency(us) 00:08:03.573 Device Information : IOPS MiB/s Average min max 00:08:03.573 PCIE (0000:00:13.0) NSID 1 from core 0: 9724.41 113.96 13178.24 7559.94 26094.78 00:08:03.573 PCIE (0000:00:10.0) NSID 1 from core 0: 9724.41 113.96 13174.88 7925.15 26027.97 00:08:03.573 PCIE (0000:00:11.0) NSID 1 from core 0: 9724.41 113.96 13168.60 7905.62 25809.62 00:08:03.573 PCIE (0000:00:12.0) NSID 1 from core 0: 9724.41 113.96 13160.54 7618.89 26944.51 00:08:03.573 PCIE (0000:00:12.0) NSID 2 from core 0: 9724.41 113.96 13152.56 6141.39 27282.87 00:08:03.573 PCIE (0000:00:12.0) NSID 3 from core 0: 9724.41 113.96 13144.20 5226.20 27601.66 00:08:03.573 ======================================================== 00:08:03.573 Total : 58346.43 683.75 13163.17 5226.20 27601.66 00:08:03.573 00:08:03.573 Summary latency data for PCIE (0000:00:13.0) NSID 1 from core 0: 00:08:03.573 ================================================================================= 00:08:03.573 1.00000% : 8570.092us 00:08:03.573 10.00000% : 10384.935us 00:08:03.573 25.00000% : 11090.708us 00:08:03.573 50.00000% : 13107.200us 00:08:03.573 75.00000% : 14922.043us 00:08:03.573 90.00000% : 16333.588us 00:08:03.573 95.00000% : 17039.360us 00:08:03.573 98.00000% : 18047.606us 00:08:03.573 99.00000% : 18652.554us 00:08:03.573 99.50000% : 25306.978us 00:08:03.573 99.90000% : 26012.751us 00:08:03.573 99.99000% : 26214.400us 00:08:03.573 99.99900% : 26214.400us 00:08:03.573 99.99990% : 26214.400us 00:08:03.573 99.99999% : 26214.400us 00:08:03.573 00:08:03.573 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0: 00:08:03.573 ================================================================================= 00:08:03.573 1.00000% : 8620.505us 00:08:03.573 10.00000% : 10334.523us 00:08:03.573 25.00000% : 11141.120us 00:08:03.573 50.00000% : 13107.200us 00:08:03.573 75.00000% : 14922.043us 00:08:03.573 90.00000% : 16333.588us 00:08:03.573 95.00000% : 17039.360us 00:08:03.573 98.00000% : 18350.080us 00:08:03.573 99.00000% : 19559.975us 00:08:03.573 99.50000% : 25105.329us 00:08:03.573 99.90000% : 26012.751us 00:08:03.573 99.99000% : 26214.400us 00:08:03.573 99.99900% : 26214.400us 00:08:03.573 99.99990% : 26214.400us 00:08:03.573 99.99999% : 26214.400us 00:08:03.573 00:08:03.573 Summary latency data for PCIE (0000:00:11.0) NSID 1 from core 0: 00:08:03.573 ================================================================================= 00:08:03.573 1.00000% : 8620.505us 00:08:03.573 10.00000% : 10384.935us 00:08:03.573 25.00000% : 11090.708us 00:08:03.573 50.00000% : 13107.200us 00:08:03.573 75.00000% : 14821.218us 00:08:03.573 90.00000% : 16434.412us 00:08:03.573 95.00000% : 17140.185us 00:08:03.573 98.00000% : 18652.554us 00:08:03.573 99.00000% : 19459.151us 00:08:03.573 99.50000% : 25004.505us 00:08:03.574 99.90000% : 25710.277us 00:08:03.574 99.99000% : 25811.102us 00:08:03.574 99.99900% : 25811.102us 00:08:03.574 99.99990% : 25811.102us 00:08:03.574 99.99999% : 25811.102us 00:08:03.574 00:08:03.574 Summary latency data for PCIE (0000:00:12.0) NSID 1 from core 0: 00:08:03.574 ================================================================================= 00:08:03.574 1.00000% : 8267.618us 00:08:03.574 10.00000% : 10384.935us 00:08:03.574 25.00000% : 11141.120us 00:08:03.574 50.00000% : 13208.025us 00:08:03.574 75.00000% : 14821.218us 00:08:03.574 90.00000% : 16232.763us 00:08:03.574 95.00000% : 17140.185us 00:08:03.574 98.00000% : 17946.782us 
00:08:03.574 99.00000% : 20164.923us 00:08:03.574 99.50000% : 25710.277us 00:08:03.574 99.90000% : 26819.348us 00:08:03.574 99.99000% : 27020.997us 00:08:03.574 99.99900% : 27020.997us 00:08:03.574 99.99990% : 27020.997us 00:08:03.574 99.99999% : 27020.997us 00:08:03.574 00:08:03.574 Summary latency data for PCIE (0000:00:12.0) NSID 2 from core 0: 00:08:03.574 ================================================================================= 00:08:03.574 1.00000% : 8065.969us 00:08:03.574 10.00000% : 10384.935us 00:08:03.574 25.00000% : 11141.120us 00:08:03.574 50.00000% : 13107.200us 00:08:03.574 75.00000% : 14821.218us 00:08:03.574 90.00000% : 16232.763us 00:08:03.574 95.00000% : 17341.834us 00:08:03.574 98.00000% : 18148.431us 00:08:03.574 99.00000% : 19963.274us 00:08:03.574 99.50000% : 26214.400us 00:08:03.574 99.90000% : 27222.646us 00:08:03.574 99.99000% : 27424.295us 00:08:03.574 99.99900% : 27424.295us 00:08:03.574 99.99990% : 27424.295us 00:08:03.574 99.99999% : 27424.295us 00:08:03.574 00:08:03.574 Summary latency data for PCIE (0000:00:12.0) NSID 3 from core 0: 00:08:03.574 ================================================================================= 00:08:03.574 1.00000% : 8217.206us 00:08:03.574 10.00000% : 10334.523us 00:08:03.574 25.00000% : 11090.708us 00:08:03.574 50.00000% : 13107.200us 00:08:03.574 75.00000% : 14922.043us 00:08:03.574 90.00000% : 16333.588us 00:08:03.574 95.00000% : 17140.185us 00:08:03.574 98.00000% : 18249.255us 00:08:03.574 99.00000% : 19761.625us 00:08:03.574 99.50000% : 26416.049us 00:08:03.574 99.90000% : 27424.295us 00:08:03.574 99.99000% : 27625.945us 00:08:03.574 99.99900% : 27625.945us 00:08:03.574 99.99990% : 27625.945us 00:08:03.574 99.99999% : 27625.945us 00:08:03.574 00:08:03.574 Latency histogram for PCIE (0000:00:13.0) NSID 1 from core 0: 00:08:03.574 ============================================================================== 00:08:03.574 Range in us Cumulative IO count 00:08:03.574 7511.434 - 7561.846: 0.0204% ( 2) 00:08:03.574 7561.846 - 7612.258: 0.0511% ( 3) 00:08:03.574 7612.258 - 7662.671: 0.0817% ( 3) 00:08:03.574 7662.671 - 7713.083: 0.1225% ( 4) 00:08:03.574 7713.083 - 7763.495: 0.1634% ( 4) 00:08:03.574 7763.495 - 7813.908: 0.2042% ( 4) 00:08:03.574 7813.908 - 7864.320: 0.2247% ( 2) 00:08:03.574 7864.320 - 7914.732: 0.2655% ( 4) 00:08:03.574 7914.732 - 7965.145: 0.2962% ( 3) 00:08:03.574 7965.145 - 8015.557: 0.3370% ( 4) 00:08:03.574 8015.557 - 8065.969: 0.3881% ( 5) 00:08:03.574 8065.969 - 8116.382: 0.4902% ( 10) 00:08:03.574 8116.382 - 8166.794: 0.6025% ( 11) 00:08:03.574 8166.794 - 8217.206: 0.6638% ( 6) 00:08:03.574 8217.206 - 8267.618: 0.7353% ( 7) 00:08:03.574 8267.618 - 8318.031: 0.7761% ( 4) 00:08:03.574 8318.031 - 8368.443: 0.8272% ( 5) 00:08:03.574 8368.443 - 8418.855: 0.8885% ( 6) 00:08:03.574 8418.855 - 8469.268: 0.9600% ( 7) 00:08:03.574 8469.268 - 8519.680: 0.9906% ( 3) 00:08:03.574 8519.680 - 8570.092: 1.0417% ( 5) 00:08:03.574 8570.092 - 8620.505: 1.0723% ( 3) 00:08:03.574 8620.505 - 8670.917: 1.1029% ( 3) 00:08:03.574 8670.917 - 8721.329: 1.1336% ( 3) 00:08:03.574 8721.329 - 8771.742: 1.1540% ( 2) 00:08:03.574 8771.742 - 8822.154: 1.1846% ( 3) 00:08:03.574 8822.154 - 8872.566: 1.2459% ( 6) 00:08:03.574 8872.566 - 8922.978: 1.3276% ( 8) 00:08:03.574 8922.978 - 8973.391: 1.4910% ( 16) 00:08:03.574 8973.391 - 9023.803: 1.5931% ( 10) 00:08:03.574 9023.803 - 9074.215: 1.7055% ( 11) 00:08:03.574 9074.215 - 9124.628: 1.7872% ( 8) 00:08:03.574 9124.628 - 9175.040: 1.8893% ( 10) 00:08:03.574 9175.040 - 9225.452: 
1.9812% ( 9) 00:08:03.574 9225.452 - 9275.865: 2.0731% ( 9) 00:08:03.574 9275.865 - 9326.277: 2.1855% ( 11) 00:08:03.574 9326.277 - 9376.689: 2.2774% ( 9) 00:08:03.574 9376.689 - 9427.102: 2.3897% ( 11) 00:08:03.574 9427.102 - 9477.514: 2.4816% ( 9) 00:08:03.574 9477.514 - 9527.926: 2.6042% ( 12) 00:08:03.574 9527.926 - 9578.338: 2.7471% ( 14) 00:08:03.574 9578.338 - 9628.751: 2.9105% ( 16) 00:08:03.574 9628.751 - 9679.163: 3.0637% ( 15) 00:08:03.574 9679.163 - 9729.575: 3.2169% ( 15) 00:08:03.574 9729.575 - 9779.988: 3.4007% ( 18) 00:08:03.574 9779.988 - 9830.400: 3.5948% ( 19) 00:08:03.574 9830.400 - 9880.812: 3.9318% ( 33) 00:08:03.574 9880.812 - 9931.225: 4.2892% ( 35) 00:08:03.574 9931.225 - 9981.637: 4.7590% ( 46) 00:08:03.574 9981.637 - 10032.049: 5.3411% ( 57) 00:08:03.574 10032.049 - 10082.462: 5.8313% ( 48) 00:08:03.574 10082.462 - 10132.874: 6.3930% ( 55) 00:08:03.574 10132.874 - 10183.286: 6.9547% ( 55) 00:08:03.574 10183.286 - 10233.698: 7.6185% ( 65) 00:08:03.574 10233.698 - 10284.111: 8.3742% ( 74) 00:08:03.574 10284.111 - 10334.523: 9.2218% ( 83) 00:08:03.574 10334.523 - 10384.935: 10.3043% ( 106) 00:08:03.574 10384.935 - 10435.348: 11.4890% ( 116) 00:08:03.574 10435.348 - 10485.760: 12.6123% ( 110) 00:08:03.574 10485.760 - 10536.172: 13.8276% ( 119) 00:08:03.574 10536.172 - 10586.585: 14.9714% ( 112) 00:08:03.574 10586.585 - 10636.997: 16.1050% ( 111) 00:08:03.574 10636.997 - 10687.409: 17.1671% ( 104) 00:08:03.574 10687.409 - 10737.822: 18.2700% ( 108) 00:08:03.574 10737.822 - 10788.234: 19.2913% ( 100) 00:08:03.574 10788.234 - 10838.646: 20.3738% ( 106) 00:08:03.574 10838.646 - 10889.058: 21.2827% ( 89) 00:08:03.574 10889.058 - 10939.471: 22.2937% ( 99) 00:08:03.574 10939.471 - 10989.883: 23.2128% ( 90) 00:08:03.574 10989.883 - 11040.295: 24.1217% ( 89) 00:08:03.574 11040.295 - 11090.708: 25.0511% ( 91) 00:08:03.574 11090.708 - 11141.120: 25.8885% ( 82) 00:08:03.574 11141.120 - 11191.532: 26.9608% ( 105) 00:08:03.574 11191.532 - 11241.945: 27.7880% ( 81) 00:08:03.574 11241.945 - 11292.357: 28.4620% ( 66) 00:08:03.574 11292.357 - 11342.769: 29.1973% ( 72) 00:08:03.574 11342.769 - 11393.182: 29.9224% ( 71) 00:08:03.574 11393.182 - 11443.594: 30.5249% ( 59) 00:08:03.574 11443.594 - 11494.006: 31.1887% ( 65) 00:08:03.574 11494.006 - 11544.418: 31.8423% ( 64) 00:08:03.574 11544.418 - 11594.831: 32.5163% ( 66) 00:08:03.574 11594.831 - 11645.243: 33.1393% ( 61) 00:08:03.574 11645.243 - 11695.655: 33.6908% ( 54) 00:08:03.574 11695.655 - 11746.068: 34.2218% ( 52) 00:08:03.574 11746.068 - 11796.480: 34.7222% ( 49) 00:08:03.574 11796.480 - 11846.892: 35.2737% ( 54) 00:08:03.574 11846.892 - 11897.305: 35.8864% ( 60) 00:08:03.574 11897.305 - 11947.717: 36.5094% ( 61) 00:08:03.574 11947.717 - 11998.129: 37.0915% ( 57) 00:08:03.574 11998.129 - 12048.542: 37.7349% ( 63) 00:08:03.574 12048.542 - 12098.954: 38.3374% ( 59) 00:08:03.574 12098.954 - 12149.366: 38.8685% ( 52) 00:08:03.574 12149.366 - 12199.778: 39.4301% ( 55) 00:08:03.574 12199.778 - 12250.191: 39.9816% ( 54) 00:08:03.574 12250.191 - 12300.603: 40.5944% ( 60) 00:08:03.574 12300.603 - 12351.015: 41.2275% ( 62) 00:08:03.574 12351.015 - 12401.428: 41.8607% ( 62) 00:08:03.574 12401.428 - 12451.840: 42.4734% ( 60) 00:08:03.574 12451.840 - 12502.252: 43.0453% ( 56) 00:08:03.574 12502.252 - 12552.665: 43.7092% ( 65) 00:08:03.574 12552.665 - 12603.077: 44.3525% ( 63) 00:08:03.575 12603.077 - 12653.489: 44.9653% ( 60) 00:08:03.575 12653.489 - 12703.902: 45.5984% ( 62) 00:08:03.575 12703.902 - 12754.314: 46.2929% ( 68) 00:08:03.575 
12754.314 - 12804.726: 46.9975% ( 69) 00:08:03.575 12804.726 - 12855.138: 47.5899% ( 58) 00:08:03.575 12855.138 - 12905.551: 48.2945% ( 69) 00:08:03.575 12905.551 - 13006.375: 49.6017% ( 128) 00:08:03.575 13006.375 - 13107.200: 50.8068% ( 118) 00:08:03.575 13107.200 - 13208.025: 52.1548% ( 132) 00:08:03.575 13208.025 - 13308.849: 53.4212% ( 124) 00:08:03.575 13308.849 - 13409.674: 54.6467% ( 120) 00:08:03.575 13409.674 - 13510.498: 56.0253% ( 135) 00:08:03.575 13510.498 - 13611.323: 57.3121% ( 126) 00:08:03.575 13611.323 - 13712.148: 58.6397% ( 130) 00:08:03.575 13712.148 - 13812.972: 60.1103% ( 144) 00:08:03.575 13812.972 - 13913.797: 61.4175% ( 128) 00:08:03.575 13913.797 - 14014.622: 62.8574% ( 141) 00:08:03.575 14014.622 - 14115.446: 64.2667% ( 138) 00:08:03.575 14115.446 - 14216.271: 65.6352% ( 134) 00:08:03.575 14216.271 - 14317.095: 67.0037% ( 134) 00:08:03.575 14317.095 - 14417.920: 68.4641% ( 143) 00:08:03.575 14417.920 - 14518.745: 69.9653% ( 147) 00:08:03.575 14518.745 - 14619.569: 71.4665% ( 147) 00:08:03.575 14619.569 - 14720.394: 72.7941% ( 130) 00:08:03.575 14720.394 - 14821.218: 74.0400% ( 122) 00:08:03.575 14821.218 - 14922.043: 75.3370% ( 127) 00:08:03.575 14922.043 - 15022.868: 76.6850% ( 132) 00:08:03.575 15022.868 - 15123.692: 78.2578% ( 154) 00:08:03.575 15123.692 - 15224.517: 79.6160% ( 133) 00:08:03.575 15224.517 - 15325.342: 80.8824% ( 124) 00:08:03.575 15325.342 - 15426.166: 82.1181% ( 121) 00:08:03.575 15426.166 - 15526.991: 83.3129% ( 117) 00:08:03.575 15526.991 - 15627.815: 84.2014% ( 87) 00:08:03.575 15627.815 - 15728.640: 85.1614% ( 94) 00:08:03.575 15728.640 - 15829.465: 86.0805% ( 90) 00:08:03.575 15829.465 - 15930.289: 86.9485% ( 85) 00:08:03.575 15930.289 - 16031.114: 87.7962% ( 83) 00:08:03.575 16031.114 - 16131.938: 88.6846% ( 87) 00:08:03.575 16131.938 - 16232.763: 89.5016% ( 80) 00:08:03.575 16232.763 - 16333.588: 90.4105% ( 89) 00:08:03.575 16333.588 - 16434.412: 91.3194% ( 89) 00:08:03.575 16434.412 - 16535.237: 92.2079% ( 87) 00:08:03.575 16535.237 - 16636.062: 92.7594% ( 54) 00:08:03.575 16636.062 - 16736.886: 93.3619% ( 59) 00:08:03.575 16736.886 - 16837.711: 93.9542% ( 58) 00:08:03.575 16837.711 - 16938.535: 94.5159% ( 55) 00:08:03.575 16938.535 - 17039.360: 95.0163% ( 49) 00:08:03.575 17039.360 - 17140.185: 95.5474% ( 52) 00:08:03.575 17140.185 - 17241.009: 96.0274% ( 47) 00:08:03.575 17241.009 - 17341.834: 96.4461% ( 41) 00:08:03.575 17341.834 - 17442.658: 96.7831% ( 33) 00:08:03.575 17442.658 - 17543.483: 97.0588% ( 27) 00:08:03.575 17543.483 - 17644.308: 97.2835% ( 22) 00:08:03.575 17644.308 - 17745.132: 97.4980% ( 21) 00:08:03.575 17745.132 - 17845.957: 97.7328% ( 23) 00:08:03.575 17845.957 - 17946.782: 97.9167% ( 18) 00:08:03.575 17946.782 - 18047.606: 98.1107% ( 19) 00:08:03.575 18047.606 - 18148.431: 98.3354% ( 22) 00:08:03.575 18148.431 - 18249.255: 98.5090% ( 17) 00:08:03.575 18249.255 - 18350.080: 98.6622% ( 15) 00:08:03.575 18350.080 - 18450.905: 98.7847% ( 12) 00:08:03.575 18450.905 - 18551.729: 98.9175% ( 13) 00:08:03.575 18551.729 - 18652.554: 99.0400% ( 12) 00:08:03.575 18652.554 - 18753.378: 99.1626% ( 12) 00:08:03.575 18753.378 - 18854.203: 99.2341% ( 7) 00:08:03.575 18854.203 - 18955.028: 99.2953% ( 6) 00:08:03.575 18955.028 - 19055.852: 99.3464% ( 5) 00:08:03.575 24903.680 - 25004.505: 99.3770% ( 3) 00:08:03.575 25004.505 - 25105.329: 99.4281% ( 5) 00:08:03.575 25105.329 - 25206.154: 99.4894% ( 6) 00:08:03.575 25206.154 - 25306.978: 99.5507% ( 6) 00:08:03.575 25306.978 - 25407.803: 99.6017% ( 5) 00:08:03.575 25407.803 - 
25508.628: 99.6630% ( 6) 00:08:03.575 25508.628 - 25609.452: 99.7243% ( 6) 00:08:03.575 25609.452 - 25710.277: 99.7753% ( 5) 00:08:03.575 25710.277 - 25811.102: 99.8366% ( 6) 00:08:03.575 25811.102 - 26012.751: 99.9592% ( 12) 00:08:03.575 26012.751 - 26214.400: 100.0000% ( 4) 00:08:03.575 00:08:03.575 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0: 00:08:03.575 ============================================================================== 00:08:03.575 Range in us Cumulative IO count 00:08:03.575 7914.732 - 7965.145: 0.0613% ( 6) 00:08:03.575 7965.145 - 8015.557: 0.1225% ( 6) 00:08:03.575 8015.557 - 8065.969: 0.1532% ( 3) 00:08:03.575 8065.969 - 8116.382: 0.2247% ( 7) 00:08:03.575 8116.382 - 8166.794: 0.2757% ( 5) 00:08:03.575 8166.794 - 8217.206: 0.3268% ( 5) 00:08:03.575 8217.206 - 8267.618: 0.3881% ( 6) 00:08:03.575 8267.618 - 8318.031: 0.4698% ( 8) 00:08:03.575 8318.031 - 8368.443: 0.5719% ( 10) 00:08:03.575 8368.443 - 8418.855: 0.6332% ( 6) 00:08:03.575 8418.855 - 8469.268: 0.7251% ( 9) 00:08:03.575 8469.268 - 8519.680: 0.8068% ( 8) 00:08:03.575 8519.680 - 8570.092: 0.8885% ( 8) 00:08:03.575 8570.092 - 8620.505: 1.0212% ( 13) 00:08:03.575 8620.505 - 8670.917: 1.1438% ( 12) 00:08:03.575 8670.917 - 8721.329: 1.2255% ( 8) 00:08:03.575 8721.329 - 8771.742: 1.3583% ( 13) 00:08:03.575 8771.742 - 8822.154: 1.4400% ( 8) 00:08:03.575 8822.154 - 8872.566: 1.5727% ( 13) 00:08:03.575 8872.566 - 8922.978: 1.6544% ( 8) 00:08:03.575 8922.978 - 8973.391: 1.7974% ( 14) 00:08:03.575 8973.391 - 9023.803: 1.9301% ( 13) 00:08:03.575 9023.803 - 9074.215: 2.0221% ( 9) 00:08:03.575 9074.215 - 9124.628: 2.1344% ( 11) 00:08:03.575 9124.628 - 9175.040: 2.2263% ( 9) 00:08:03.575 9175.040 - 9225.452: 2.2978% ( 7) 00:08:03.575 9225.452 - 9275.865: 2.4101% ( 11) 00:08:03.575 9275.865 - 9326.277: 2.5327% ( 12) 00:08:03.575 9326.277 - 9376.689: 2.7165% ( 18) 00:08:03.575 9376.689 - 9427.102: 2.8391% ( 12) 00:08:03.575 9427.102 - 9477.514: 2.9820% ( 14) 00:08:03.575 9477.514 - 9527.926: 3.0842% ( 10) 00:08:03.575 9527.926 - 9578.338: 3.2169% ( 13) 00:08:03.575 9578.338 - 9628.751: 3.3905% ( 17) 00:08:03.575 9628.751 - 9679.163: 3.7275% ( 33) 00:08:03.575 9679.163 - 9729.575: 3.9318% ( 20) 00:08:03.575 9729.575 - 9779.988: 4.1769% ( 24) 00:08:03.575 9779.988 - 9830.400: 4.4526% ( 27) 00:08:03.575 9830.400 - 9880.812: 4.8611% ( 40) 00:08:03.575 9880.812 - 9931.225: 5.4330% ( 56) 00:08:03.575 9931.225 - 9981.637: 5.8619% ( 42) 00:08:03.575 9981.637 - 10032.049: 6.5564% ( 68) 00:08:03.575 10032.049 - 10082.462: 7.2304% ( 66) 00:08:03.575 10082.462 - 10132.874: 7.7512% ( 51) 00:08:03.575 10132.874 - 10183.286: 8.4763% ( 71) 00:08:03.575 10183.286 - 10233.698: 9.0891% ( 60) 00:08:03.575 10233.698 - 10284.111: 9.7018% ( 60) 00:08:03.575 10284.111 - 10334.523: 10.6107% ( 89) 00:08:03.575 10334.523 - 10384.935: 11.4481% ( 82) 00:08:03.575 10384.935 - 10435.348: 12.2651% ( 80) 00:08:03.575 10435.348 - 10485.760: 13.3170% ( 103) 00:08:03.575 10485.760 - 10536.172: 14.2463% ( 91) 00:08:03.575 10536.172 - 10586.585: 15.1757% ( 91) 00:08:03.575 10586.585 - 10636.997: 16.0846% ( 89) 00:08:03.575 10636.997 - 10687.409: 17.0854% ( 98) 00:08:03.575 10687.409 - 10737.822: 18.0862% ( 98) 00:08:03.575 10737.822 - 10788.234: 18.9338% ( 83) 00:08:03.575 10788.234 - 10838.646: 20.0572% ( 110) 00:08:03.575 10838.646 - 10889.058: 21.0682% ( 99) 00:08:03.575 10889.058 - 10939.471: 22.0384% ( 95) 00:08:03.575 10939.471 - 10989.883: 22.9473% ( 89) 00:08:03.575 10989.883 - 11040.295: 23.9890% ( 102) 00:08:03.575 11040.295 - 11090.708: 
24.8468% ( 84) 00:08:03.575 11090.708 - 11141.120: 25.6434% ( 78) 00:08:03.575 11141.120 - 11191.532: 26.5217% ( 86) 00:08:03.575 11191.532 - 11241.945: 27.3182% ( 78) 00:08:03.575 11241.945 - 11292.357: 28.1658% ( 83) 00:08:03.575 11292.357 - 11342.769: 28.8909% ( 71) 00:08:03.575 11342.769 - 11393.182: 29.7181% ( 81) 00:08:03.575 11393.182 - 11443.594: 30.4432% ( 71) 00:08:03.575 11443.594 - 11494.006: 31.0764% ( 62) 00:08:03.575 11494.006 - 11544.418: 31.8423% ( 75) 00:08:03.575 11544.418 - 11594.831: 32.4959% ( 64) 00:08:03.575 11594.831 - 11645.243: 33.0678% ( 56) 00:08:03.575 11645.243 - 11695.655: 33.8235% ( 74) 00:08:03.575 11695.655 - 11746.068: 34.4567% ( 62) 00:08:03.575 11746.068 - 11796.480: 35.0490% ( 58) 00:08:03.575 11796.480 - 11846.892: 35.8150% ( 75) 00:08:03.575 11846.892 - 11897.305: 36.4175% ( 59) 00:08:03.575 11897.305 - 11947.717: 37.0302% ( 60) 00:08:03.575 11947.717 - 11998.129: 37.5306% ( 49) 00:08:03.575 11998.129 - 12048.542: 38.0719% ( 53) 00:08:03.575 12048.542 - 12098.954: 38.4906% ( 41) 00:08:03.575 12098.954 - 12149.366: 39.2770% ( 77) 00:08:03.575 12149.366 - 12199.778: 39.7467% ( 46) 00:08:03.575 12199.778 - 12250.191: 40.1961% ( 44) 00:08:03.575 12250.191 - 12300.603: 40.7475% ( 54) 00:08:03.575 12300.603 - 12351.015: 41.2480% ( 49) 00:08:03.575 12351.015 - 12401.428: 41.7994% ( 54) 00:08:03.575 12401.428 - 12451.840: 42.4632% ( 65) 00:08:03.575 12451.840 - 12502.252: 43.0658% ( 59) 00:08:03.575 12502.252 - 12552.665: 43.6785% ( 60) 00:08:03.575 12552.665 - 12603.077: 44.4240% ( 73) 00:08:03.575 12603.077 - 12653.489: 45.0470% ( 61) 00:08:03.575 12653.489 - 12703.902: 45.7618% ( 70) 00:08:03.575 12703.902 - 12754.314: 46.2623% ( 49) 00:08:03.575 12754.314 - 12804.726: 46.9771% ( 70) 00:08:03.575 12804.726 - 12855.138: 47.5184% ( 53) 00:08:03.575 12855.138 - 12905.551: 48.1822% ( 65) 00:08:03.575 12905.551 - 13006.375: 49.3873% ( 118) 00:08:03.575 13006.375 - 13107.200: 50.7353% ( 132) 00:08:03.575 13107.200 - 13208.025: 52.3795% ( 161) 00:08:03.575 13208.025 - 13308.849: 53.7275% ( 132) 00:08:03.576 13308.849 - 13409.674: 55.0960% ( 134) 00:08:03.576 13409.674 - 13510.498: 56.2806% ( 116) 00:08:03.576 13510.498 - 13611.323: 57.7717% ( 146) 00:08:03.576 13611.323 - 13712.148: 59.0993% ( 130) 00:08:03.576 13712.148 - 13812.972: 60.6107% ( 148) 00:08:03.576 13812.972 - 13913.797: 61.9587% ( 132) 00:08:03.576 13913.797 - 14014.622: 63.6336% ( 164) 00:08:03.576 14014.622 - 14115.446: 65.1859% ( 152) 00:08:03.576 14115.446 - 14216.271: 66.4624% ( 125) 00:08:03.576 14216.271 - 14317.095: 67.9024% ( 141) 00:08:03.576 14317.095 - 14417.920: 69.4036% ( 147) 00:08:03.576 14417.920 - 14518.745: 70.7312% ( 130) 00:08:03.576 14518.745 - 14619.569: 72.2120% ( 145) 00:08:03.576 14619.569 - 14720.394: 73.5600% ( 132) 00:08:03.576 14720.394 - 14821.218: 74.9694% ( 138) 00:08:03.576 14821.218 - 14922.043: 76.1846% ( 119) 00:08:03.576 14922.043 - 15022.868: 77.3489% ( 114) 00:08:03.576 15022.868 - 15123.692: 78.3803% ( 101) 00:08:03.576 15123.692 - 15224.517: 79.3811% ( 98) 00:08:03.576 15224.517 - 15325.342: 80.3207% ( 92) 00:08:03.576 15325.342 - 15426.166: 81.2908% ( 95) 00:08:03.576 15426.166 - 15526.991: 82.1998% ( 89) 00:08:03.576 15526.991 - 15627.815: 83.2108% ( 99) 00:08:03.576 15627.815 - 15728.640: 84.0380% ( 81) 00:08:03.576 15728.640 - 15829.465: 85.2737% ( 121) 00:08:03.576 15829.465 - 15930.289: 86.3154% ( 102) 00:08:03.576 15930.289 - 16031.114: 87.2038% ( 87) 00:08:03.576 16031.114 - 16131.938: 88.2353% ( 101) 00:08:03.576 16131.938 - 16232.763: 89.2055% ( 
00:08:03.576 [latency histogram continued from above: buckets 16232.763us through 26214.400us, cumulative IO reaching 100.0000%]
00:08:03.576 Latency histogram for PCIE (0000:00:11.0) NSID 1 from core 0:
00:08:03.576 ==============================================================================
00:08:03.576        Range in us     Cumulative    IO count
00:08:03.577 [bucket rows from 7864.320us (0.0204%) to 25811.102us (100.0000%)]
00:08:03.577 Latency histogram for PCIE (0000:00:12.0) NSID 1 from core 0:
00:08:03.577 ==============================================================================
00:08:03.577        Range in us     Cumulative    IO count
00:08:03.578 [bucket rows from 7612.258us (0.0613%) to 27020.997us (100.0000%)]
00:08:03.578 Latency histogram for PCIE (0000:00:12.0) NSID 2 from core 0:
00:08:03.578 ==============================================================================
00:08:03.578        Range in us     Cumulative    IO count
00:08:03.579 [bucket rows from 6125.095us (0.0102%) to 27424.295us (100.0000%)]
00:08:03.580 Latency histogram for PCIE (0000:00:12.0) NSID 3 from core 0:
00:08:03.580 ==============================================================================
00:08:03.580        Range in us     Cumulative    IO count
00:08:03.581 [bucket rows from 5217.674us (0.0204%) to 27625.945us (100.0000%)]
00:08:03.581
00:08:03.581 04:55:41 nvme.nvme_perf -- nvme/nvme.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w write -o 12288 -t 1 -LL -i 0
00:08:04.527 Initializing NVMe Controllers
00:08:04.527 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010]
00:08:04.527 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010]
00:08:04.527 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010]
00:08:04.527 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010]
00:08:04.527 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0
00:08:04.527 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0
00:08:04.527 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0
00:08:04.527 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0
00:08:04.527 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0
00:08:04.527 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0
00:08:04.527 Initialization complete. Launching workers.
00:08:04.527 ========================================================
00:08:04.527                                                                            Latency(us)
00:08:04.527 Device Information                     :       IOPS      MiB/s    Average        min        max
00:08:04.527 PCIE (0000:00:13.0) NSID 1 from core 0:   10670.93     125.05   12000.61    6106.64   31261.03
00:08:04.527 PCIE (0000:00:10.0) NSID 1 from core 0:   10670.93     125.05   11990.72    6138.65   32809.91
00:08:04.527 PCIE (0000:00:11.0) NSID 1 from core 0:   10670.93     125.05   11980.84    6226.55   32044.66
00:08:04.527 PCIE (0000:00:12.0) NSID 1 from core 0:   10670.93     125.05   11970.98    4862.28   33286.72
00:08:04.527 PCIE (0000:00:12.0) NSID 2 from core 0:   10670.93     125.05   11961.12    4735.99   33358.91
00:08:04.527 PCIE (0000:00:12.0) NSID 3 from core 0:   10734.82     125.80   11880.29    4338.34   24274.90
00:08:04.527 ========================================================
00:08:04.527 Total                                  :   64089.46     751.05   11964.01    4338.34   33358.91
00:08:04.527
00:08:04.527 Summary latency data from core 0, consolidated across devices (values in us):
00:08:04.527 =================================================================================
00:08:04.527 Columns: PCIE 0000:00:13.0 NSID 1, 0000:00:10.0 NSID 1, 0000:00:11.0 NSID 1, 0000:00:12.0 NSID 1/2/3
00:08:04.527 Percentile      13.0 n1      10.0 n1      11.0 n1      12.0 n1      12.0 n2      12.0 n3
00:08:04.527   1.00000%     6503.188     6503.188     6503.188     6452.775     6377.157     6402.363
00:08:04.527  10.00000%     8015.557     7914.732     7914.732     7813.908     7864.320     7864.320
00:08:04.527  25.00000%    10082.462     9931.225     9880.812     9830.400     9880.812     9981.637
00:08:04.527  50.00000%    12199.778    12149.366    12149.366    12199.778    12098.954    12149.366
00:08:04.527  75.00000%    13611.323    13712.148    13812.972    13611.323    13611.323    13611.323
00:08:04.527  90.00000%    15123.692    15022.868    15123.692    15526.991    15526.991    15325.342
00:08:04.527  95.00000%    16535.237    16535.237    16434.412    16535.237    16333.588    16333.588
00:08:04.527  98.00000%    18551.729    18450.905    18249.255    18047.606    18148.431    17845.957
00:08:04.527  99.00000%    21475.643    21273.994    20870.695    22383.065    22786.363    18450.905
00:08:04.527  99.50000%    30449.034    30449.034    31255.631    32465.526    32465.526    23492.135
00:08:04.527  99.90000%    31255.631    32667.175    32062.228    33272.123    33272.123    24197.908
00:08:04.527  99.99000%    31255.631    32868.825    32062.228    33272.123    33473.772    24298.732
00:08:04.527  99.99900%    31457.280    32868.825    32062.228    33473.772    33473.772    24298.732
00:08:04.527  99.99990%    31457.280    32868.825    32062.228    33473.772    33473.772    24298.732
00:08:04.527  99.99999%    31457.280    32868.825    32062.228    33473.772    33473.772    24298.732
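The MiB/s column above follows directly from the fixed transfer size given on the command line (-o 12288, i.e. 12 KiB writes at queue depth 128 for one second). A minimal consistency check in Python, using only the figures printed in the device table (the helper name is illustrative and not part of the log or of SPDK):

    # Throughput check for the device table above: MiB/s = IOPS * io_size / 2**20.
    io_size_bytes = 12288                    # from the '-o 12288' flag

    def mib_per_s(iops: float) -> float:
        return iops * io_size_bytes / 2**20

    assert round(mib_per_s(10670.93), 2) == 125.05   # per-namespace rows
    assert round(mib_per_s(10734.82), 2) == 125.80   # PCIE (0000:00:12.0) NSID 3
    assert round(mib_per_s(64089.46), 2) == 751.05   # 'Total' row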
00:08:04.527 Latency histogram for PCIE (0000:00:13.0) NSID 1 from core 0:
00:08:04.527 ==============================================================================
00:08:04.527        Range in us     Cumulative    IO count
00:08:04.528 [bucket rows from 6099.889us (0.0094%) to 31457.280us (100.0000%)]
00:08:04.529 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0:
00:08:04.529 ==============================================================================
00:08:04.529        Range in us     Cumulative    IO count
00:08:04.530 [bucket rows from 6125.095us (0.0094%) to 32868.825us (100.0000%)]
00:08:04.530 Latency histogram for PCIE (0000:00:11.0) NSID 1 from core 0:
00:08:04.530 ==============================================================================
00:08:04.530        Range in us     Cumulative    IO count
00:08:04.530 [bucket rows from 6225.920us (0.0187%); this excerpt breaks off at 8469.268us (13.3421%)]
8) 00:08:04.530 8469.268 - 8519.680: 13.4356% ( 10) 00:08:04.530 8519.680 - 8570.092: 13.5573% ( 13) 00:08:04.530 8570.092 - 8620.505: 13.6134% ( 6) 00:08:04.530 8620.505 - 8670.917: 13.6976% ( 9) 00:08:04.530 8670.917 - 8721.329: 13.7350% ( 4) 00:08:04.530 8721.329 - 8771.742: 13.9409% ( 22) 00:08:04.530 8771.742 - 8822.154: 14.2590% ( 34) 00:08:04.530 8822.154 - 8872.566: 14.5210% ( 28) 00:08:04.530 8872.566 - 8922.978: 14.8952% ( 40) 00:08:04.530 8922.978 - 8973.391: 15.3537% ( 49) 00:08:04.530 8973.391 - 9023.803: 15.9338% ( 62) 00:08:04.530 9023.803 - 9074.215: 16.5232% ( 63) 00:08:04.530 9074.215 - 9124.628: 17.0284% ( 54) 00:08:04.530 9124.628 - 9175.040: 17.6553% ( 67) 00:08:04.530 9175.040 - 9225.452: 18.3664% ( 76) 00:08:04.530 9225.452 - 9275.865: 19.1617% ( 85) 00:08:04.530 9275.865 - 9326.277: 19.7979% ( 68) 00:08:04.530 9326.277 - 9376.689: 20.2751% ( 51) 00:08:04.530 9376.689 - 9427.102: 20.8552% ( 62) 00:08:04.530 9427.102 - 9477.514: 21.4353% ( 62) 00:08:04.530 9477.514 - 9527.926: 21.9686% ( 57) 00:08:04.530 9527.926 - 9578.338: 22.5299% ( 60) 00:08:04.530 9578.338 - 9628.751: 23.0165% ( 52) 00:08:04.530 9628.751 - 9679.163: 23.3907% ( 40) 00:08:04.530 9679.163 - 9729.575: 23.8772% ( 52) 00:08:04.530 9729.575 - 9779.988: 24.4386% ( 60) 00:08:04.530 9779.988 - 9830.400: 24.9158% ( 51) 00:08:04.530 9830.400 - 9880.812: 25.3743% ( 49) 00:08:04.530 9880.812 - 9931.225: 26.0011% ( 67) 00:08:04.530 9931.225 - 9981.637: 26.5344% ( 57) 00:08:04.530 9981.637 - 10032.049: 27.0210% ( 52) 00:08:04.530 10032.049 - 10082.462: 27.6946% ( 72) 00:08:04.530 10082.462 - 10132.874: 28.1811% ( 52) 00:08:04.530 10132.874 - 10183.286: 28.6677% ( 52) 00:08:04.530 10183.286 - 10233.698: 29.1542% ( 52) 00:08:04.530 10233.698 - 10284.111: 29.7530% ( 64) 00:08:04.530 10284.111 - 10334.523: 30.3237% ( 61) 00:08:04.530 10334.523 - 10384.935: 30.8290% ( 54) 00:08:04.530 10384.935 - 10435.348: 31.4652% ( 68) 00:08:04.530 10435.348 - 10485.760: 32.0640% ( 64) 00:08:04.530 10485.760 - 10536.172: 32.6254% ( 60) 00:08:04.530 10536.172 - 10586.585: 33.2335% ( 65) 00:08:04.530 10586.585 - 10636.997: 33.7107% ( 51) 00:08:04.530 10636.997 - 10687.409: 34.0288% ( 34) 00:08:04.530 10687.409 - 10737.822: 34.4499% ( 45) 00:08:04.530 10737.822 - 10788.234: 34.7586% ( 33) 00:08:04.530 10788.234 - 10838.646: 35.0487% ( 31) 00:08:04.530 10838.646 - 10889.058: 35.3387% ( 31) 00:08:04.530 10889.058 - 10939.471: 35.6100% ( 29) 00:08:04.530 10939.471 - 10989.883: 35.9469% ( 36) 00:08:04.530 10989.883 - 11040.295: 36.1808% ( 25) 00:08:04.530 11040.295 - 11090.708: 36.4334% ( 27) 00:08:04.530 11090.708 - 11141.120: 36.6673% ( 25) 00:08:04.530 11141.120 - 11191.532: 36.9199% ( 27) 00:08:04.530 11191.532 - 11241.945: 37.1538% ( 25) 00:08:04.530 11241.945 - 11292.357: 37.4345% ( 30) 00:08:04.530 11292.357 - 11342.769: 37.7526% ( 34) 00:08:04.530 11342.769 - 11393.182: 38.2579% ( 54) 00:08:04.530 11393.182 - 11443.594: 38.9596% ( 75) 00:08:04.530 11443.594 - 11494.006: 39.4835% ( 56) 00:08:04.530 11494.006 - 11544.418: 40.1104% ( 67) 00:08:04.530 11544.418 - 11594.831: 40.6999% ( 63) 00:08:04.530 11594.831 - 11645.243: 41.4764% ( 83) 00:08:04.530 11645.243 - 11695.655: 42.2343% ( 81) 00:08:04.530 11695.655 - 11746.068: 43.1418% ( 97) 00:08:04.530 11746.068 - 11796.480: 44.1055% ( 103) 00:08:04.530 11796.480 - 11846.892: 45.1722% ( 114) 00:08:04.530 11846.892 - 11897.305: 46.0516% ( 94) 00:08:04.530 11897.305 - 11947.717: 46.8844% ( 89) 00:08:04.530 11947.717 - 11998.129: 47.6984% ( 87) 00:08:04.530 11998.129 - 12048.542: 
48.6433% ( 101) 00:08:04.530 12048.542 - 12098.954: 49.3731% ( 78) 00:08:04.530 12098.954 - 12149.366: 50.2433% ( 93) 00:08:04.530 12149.366 - 12199.778: 51.1134% ( 93) 00:08:04.530 12199.778 - 12250.191: 52.0303% ( 98) 00:08:04.530 12250.191 - 12300.603: 52.8443% ( 87) 00:08:04.530 12300.603 - 12351.015: 53.5273% ( 73) 00:08:04.530 12351.015 - 12401.428: 54.2290% ( 75) 00:08:04.530 12401.428 - 12451.840: 54.9588% ( 78) 00:08:04.530 12451.840 - 12502.252: 55.7167% ( 81) 00:08:04.530 12502.252 - 12552.665: 56.3903% ( 72) 00:08:04.530 12552.665 - 12603.077: 57.2043% ( 87) 00:08:04.531 12603.077 - 12653.489: 58.1400% ( 100) 00:08:04.531 12653.489 - 12703.902: 59.1224% ( 105) 00:08:04.531 12703.902 - 12754.314: 60.1703% ( 112) 00:08:04.531 12754.314 - 12804.726: 60.9936% ( 88) 00:08:04.531 12804.726 - 12855.138: 61.8825% ( 95) 00:08:04.531 12855.138 - 12905.551: 62.8462% ( 103) 00:08:04.531 12905.551 - 13006.375: 65.0730% ( 238) 00:08:04.531 13006.375 - 13107.200: 66.9162% ( 197) 00:08:04.531 13107.200 - 13208.025: 68.6658% ( 187) 00:08:04.531 13208.025 - 13308.849: 70.2283% ( 167) 00:08:04.531 13308.849 - 13409.674: 71.5382% ( 140) 00:08:04.531 13409.674 - 13510.498: 72.6890% ( 123) 00:08:04.531 13510.498 - 13611.323: 73.6995% ( 108) 00:08:04.531 13611.323 - 13712.148: 74.5977% ( 96) 00:08:04.531 13712.148 - 13812.972: 75.5707% ( 104) 00:08:04.531 13812.972 - 13913.797: 76.6654% ( 117) 00:08:04.531 13913.797 - 14014.622: 78.2279% ( 167) 00:08:04.531 14014.622 - 14115.446: 79.9121% ( 180) 00:08:04.531 14115.446 - 14216.271: 81.5681% ( 177) 00:08:04.531 14216.271 - 14317.095: 82.9809% ( 151) 00:08:04.531 14317.095 - 14417.920: 84.2814% ( 139) 00:08:04.531 14417.920 - 14518.745: 85.4416% ( 124) 00:08:04.531 14518.745 - 14619.569: 86.6392% ( 128) 00:08:04.531 14619.569 - 14720.394: 87.6310% ( 106) 00:08:04.531 14720.394 - 14821.218: 88.5011% ( 93) 00:08:04.531 14821.218 - 14922.043: 89.2309% ( 78) 00:08:04.531 14922.043 - 15022.868: 89.7081% ( 51) 00:08:04.531 15022.868 - 15123.692: 90.1853% ( 51) 00:08:04.531 15123.692 - 15224.517: 90.7373% ( 59) 00:08:04.531 15224.517 - 15325.342: 91.2144% ( 51) 00:08:04.531 15325.342 - 15426.166: 91.6542% ( 47) 00:08:04.531 15426.166 - 15526.991: 92.0472% ( 42) 00:08:04.531 15526.991 - 15627.815: 92.4682% ( 45) 00:08:04.531 15627.815 - 15728.640: 92.9266% ( 49) 00:08:04.531 15728.640 - 15829.465: 93.3383% ( 44) 00:08:04.531 15829.465 - 15930.289: 93.7781% ( 47) 00:08:04.531 15930.289 - 16031.114: 94.1149% ( 36) 00:08:04.531 16031.114 - 16131.938: 94.3114% ( 21) 00:08:04.531 16131.938 - 16232.763: 94.4891% ( 19) 00:08:04.531 16232.763 - 16333.588: 94.7324% ( 26) 00:08:04.531 16333.588 - 16434.412: 95.0131% ( 30) 00:08:04.531 16434.412 - 16535.237: 95.2657% ( 27) 00:08:04.531 16535.237 - 16636.062: 95.3967% ( 14) 00:08:04.531 16636.062 - 16736.886: 95.4996% ( 11) 00:08:04.531 16736.886 - 16837.711: 95.6119% ( 12) 00:08:04.531 16837.711 - 16938.535: 95.7710% ( 17) 00:08:04.531 16938.535 - 17039.360: 96.0329% ( 28) 00:08:04.531 17039.360 - 17140.185: 96.2668% ( 25) 00:08:04.531 17140.185 - 17241.009: 96.4446% ( 19) 00:08:04.531 17241.009 - 17341.834: 96.6037% ( 17) 00:08:04.531 17341.834 - 17442.658: 96.7440% ( 15) 00:08:04.531 17442.658 - 17543.483: 96.8656% ( 13) 00:08:04.531 17543.483 - 17644.308: 96.9686% ( 11) 00:08:04.531 17644.308 - 17745.132: 97.0902% ( 13) 00:08:04.531 17745.132 - 17845.957: 97.2680% ( 19) 00:08:04.531 17845.957 - 17946.782: 97.4738% ( 22) 00:08:04.531 17946.782 - 18047.606: 97.7171% ( 26) 00:08:04.531 18047.606 - 18148.431: 97.9323% ( 
23) 00:08:04.531 18148.431 - 18249.255: 98.1942% ( 28) 00:08:04.531 18249.255 - 18350.080: 98.3346% ( 15) 00:08:04.531 18350.080 - 18450.905: 98.4562% ( 13) 00:08:04.531 18450.905 - 18551.729: 98.5685% ( 12) 00:08:04.531 18551.729 - 18652.554: 98.6340% ( 7) 00:08:04.531 18652.554 - 18753.378: 98.6995% ( 7) 00:08:04.531 18753.378 - 18854.203: 98.7650% ( 7) 00:08:04.531 18854.203 - 18955.028: 98.8024% ( 4) 00:08:04.531 20467.397 - 20568.222: 98.8398% ( 4) 00:08:04.531 20568.222 - 20669.046: 98.9053% ( 7) 00:08:04.531 20669.046 - 20769.871: 98.9615% ( 6) 00:08:04.531 20769.871 - 20870.695: 99.0269% ( 7) 00:08:04.531 20870.695 - 20971.520: 99.0737% ( 5) 00:08:04.531 20971.520 - 21072.345: 99.1392% ( 7) 00:08:04.531 21072.345 - 21173.169: 99.1954% ( 6) 00:08:04.531 21173.169 - 21273.994: 99.2609% ( 7) 00:08:04.531 21273.994 - 21374.818: 99.3263% ( 7) 00:08:04.531 21374.818 - 21475.643: 99.3825% ( 6) 00:08:04.531 21475.643 - 21576.468: 99.4012% ( 2) 00:08:04.531 31053.982 - 31255.631: 99.5041% ( 11) 00:08:04.531 31255.631 - 31457.280: 99.6257% ( 13) 00:08:04.531 31457.280 - 31658.929: 99.7567% ( 14) 00:08:04.531 31658.929 - 31860.578: 99.8784% ( 13) 00:08:04.531 31860.578 - 32062.228: 100.0000% ( 13) 00:08:04.531 00:08:04.531 Latency histogram for PCIE (0000:00:12.0) NSID 1 from core 0: 00:08:04.531 ============================================================================== 00:08:04.531 Range in us Cumulative IO count 00:08:04.531 4839.582 - 4864.788: 0.0094% ( 1) 00:08:04.531 4965.612 - 4990.818: 0.0281% ( 2) 00:08:04.531 4990.818 - 5016.025: 0.0468% ( 2) 00:08:04.531 5016.025 - 5041.231: 0.0655% ( 2) 00:08:04.531 5041.231 - 5066.437: 0.0842% ( 2) 00:08:04.531 5066.437 - 5091.643: 0.1029% ( 2) 00:08:04.531 5091.643 - 5116.849: 0.1123% ( 1) 00:08:04.531 5116.849 - 5142.055: 0.1403% ( 3) 00:08:04.531 5142.055 - 5167.262: 0.1871% ( 5) 00:08:04.531 5167.262 - 5192.468: 0.2246% ( 4) 00:08:04.531 5192.468 - 5217.674: 0.3088% ( 9) 00:08:04.531 5217.674 - 5242.880: 0.3836% ( 8) 00:08:04.531 5242.880 - 5268.086: 0.4304% ( 5) 00:08:04.531 5268.086 - 5293.292: 0.4585% ( 3) 00:08:04.531 5293.292 - 5318.498: 0.4772% ( 2) 00:08:04.531 5318.498 - 5343.705: 0.4865% ( 1) 00:08:04.531 5343.705 - 5368.911: 0.5052% ( 2) 00:08:04.531 5368.911 - 5394.117: 0.5240% ( 2) 00:08:04.531 5394.117 - 5419.323: 0.5427% ( 2) 00:08:04.531 5419.323 - 5444.529: 0.5520% ( 1) 00:08:04.531 5444.529 - 5469.735: 0.5707% ( 2) 00:08:04.531 5469.735 - 5494.942: 0.5894% ( 2) 00:08:04.531 5494.942 - 5520.148: 0.5988% ( 1) 00:08:04.531 6200.714 - 6225.920: 0.6082% ( 1) 00:08:04.531 6225.920 - 6251.126: 0.6362% ( 3) 00:08:04.531 6251.126 - 6276.332: 0.6456% ( 1) 00:08:04.531 6276.332 - 6301.538: 0.6830% ( 4) 00:08:04.531 6301.538 - 6326.745: 0.7111% ( 3) 00:08:04.531 6326.745 - 6351.951: 0.7579% ( 5) 00:08:04.531 6351.951 - 6377.157: 0.7859% ( 3) 00:08:04.531 6377.157 - 6402.363: 0.8421% ( 6) 00:08:04.531 6402.363 - 6427.569: 0.9918% ( 16) 00:08:04.531 6427.569 - 6452.775: 1.0292% ( 4) 00:08:04.531 6452.775 - 6503.188: 1.1040% ( 8) 00:08:04.531 6503.188 - 6553.600: 1.1602% ( 6) 00:08:04.531 6553.600 - 6604.012: 1.2257% ( 7) 00:08:04.531 6604.012 - 6654.425: 1.3286% ( 11) 00:08:04.531 6654.425 - 6704.837: 1.4128% ( 9) 00:08:04.531 6704.837 - 6755.249: 1.6093% ( 21) 00:08:04.531 6755.249 - 6805.662: 2.0303% ( 45) 00:08:04.531 6805.662 - 6856.074: 2.4326% ( 43) 00:08:04.531 6856.074 - 6906.486: 2.8256% ( 42) 00:08:04.531 6906.486 - 6956.898: 3.1624% ( 36) 00:08:04.531 6956.898 - 7007.311: 3.4993% ( 36) 00:08:04.531 7007.311 - 7057.723: 
4.3694% ( 93) 00:08:04.531 7057.723 - 7108.135: 4.9588% ( 63) 00:08:04.531 7108.135 - 7158.548: 5.4454% ( 52) 00:08:04.531 7158.548 - 7208.960: 5.6793% ( 25) 00:08:04.531 7208.960 - 7259.372: 5.8851% ( 22) 00:08:04.531 7259.372 - 7309.785: 6.0161% ( 14) 00:08:04.531 7309.785 - 7360.197: 6.1471% ( 14) 00:08:04.531 7360.197 - 7410.609: 6.3436% ( 21) 00:08:04.531 7410.609 - 7461.022: 6.5775% ( 25) 00:08:04.531 7461.022 - 7511.434: 6.8862% ( 33) 00:08:04.531 7511.434 - 7561.846: 7.4102% ( 56) 00:08:04.531 7561.846 - 7612.258: 8.1774% ( 82) 00:08:04.531 7612.258 - 7662.671: 8.7107% ( 57) 00:08:04.531 7662.671 - 7713.083: 9.4405% ( 78) 00:08:04.531 7713.083 - 7763.495: 9.8896% ( 48) 00:08:04.531 7763.495 - 7813.908: 10.2358% ( 37) 00:08:04.531 7813.908 - 7864.320: 10.3855% ( 16) 00:08:04.531 7864.320 - 7914.732: 10.4978% ( 12) 00:08:04.531 7914.732 - 7965.145: 10.6287% ( 14) 00:08:04.532 7965.145 - 8015.557: 10.7410% ( 12) 00:08:04.532 8015.557 - 8065.969: 11.1340% ( 42) 00:08:04.532 8065.969 - 8116.382: 11.3866% ( 27) 00:08:04.532 8116.382 - 8166.794: 11.6954% ( 33) 00:08:04.532 8166.794 - 8217.206: 11.9480% ( 27) 00:08:04.532 8217.206 - 8267.618: 12.1725% ( 24) 00:08:04.532 8267.618 - 8318.031: 12.4064% ( 25) 00:08:04.532 8318.031 - 8368.443: 12.7246% ( 34) 00:08:04.532 8368.443 - 8418.855: 13.0427% ( 34) 00:08:04.532 8418.855 - 8469.268: 13.1456% ( 11) 00:08:04.532 8469.268 - 8519.680: 13.2391% ( 10) 00:08:04.532 8519.680 - 8570.092: 13.3421% ( 11) 00:08:04.532 8570.092 - 8620.505: 13.5105% ( 18) 00:08:04.532 8620.505 - 8670.917: 13.7444% ( 25) 00:08:04.532 8670.917 - 8721.329: 14.0625% ( 34) 00:08:04.532 8721.329 - 8771.742: 14.5771% ( 55) 00:08:04.532 8771.742 - 8822.154: 14.9233% ( 37) 00:08:04.532 8822.154 - 8872.566: 15.4753% ( 59) 00:08:04.532 8872.566 - 8922.978: 15.9618% ( 52) 00:08:04.532 8922.978 - 8973.391: 16.7290% ( 82) 00:08:04.532 8973.391 - 9023.803: 17.3933% ( 71) 00:08:04.532 9023.803 - 9074.215: 17.8518% ( 49) 00:08:04.532 9074.215 - 9124.628: 18.3383% ( 52) 00:08:04.532 9124.628 - 9175.040: 18.9278% ( 63) 00:08:04.532 9175.040 - 9225.452: 19.5546% ( 67) 00:08:04.532 9225.452 - 9275.865: 20.1160% ( 60) 00:08:04.532 9275.865 - 9326.277: 20.6774% ( 60) 00:08:04.532 9326.277 - 9376.689: 21.2013% ( 56) 00:08:04.532 9376.689 - 9427.102: 21.7814% ( 62) 00:08:04.532 9427.102 - 9477.514: 22.3615% ( 62) 00:08:04.532 9477.514 - 9527.926: 22.6890% ( 35) 00:08:04.532 9527.926 - 9578.338: 23.0352% ( 37) 00:08:04.532 9578.338 - 9628.751: 23.3346% ( 32) 00:08:04.532 9628.751 - 9679.163: 23.8305% ( 53) 00:08:04.532 9679.163 - 9729.575: 24.1392% ( 33) 00:08:04.532 9729.575 - 9779.988: 24.6819% ( 58) 00:08:04.532 9779.988 - 9830.400: 25.1123% ( 46) 00:08:04.532 9830.400 - 9880.812: 25.6082% ( 53) 00:08:04.532 9880.812 - 9931.225: 26.0853% ( 51) 00:08:04.532 9931.225 - 9981.637: 26.5438% ( 49) 00:08:04.532 9981.637 - 10032.049: 27.0490% ( 54) 00:08:04.532 10032.049 - 10082.462: 27.5823% ( 57) 00:08:04.532 10082.462 - 10132.874: 28.4712% ( 95) 00:08:04.532 10132.874 - 10183.286: 29.0606% ( 63) 00:08:04.532 10183.286 - 10233.698: 29.5752% ( 55) 00:08:04.532 10233.698 - 10284.111: 30.1927% ( 66) 00:08:04.532 10284.111 - 10334.523: 30.7448% ( 59) 00:08:04.532 10334.523 - 10384.935: 31.3436% ( 64) 00:08:04.532 10384.935 - 10435.348: 32.3073% ( 103) 00:08:04.532 10435.348 - 10485.760: 32.8874% ( 62) 00:08:04.532 10485.760 - 10536.172: 33.1680% ( 30) 00:08:04.532 10536.172 - 10586.585: 33.4674% ( 32) 00:08:04.532 10586.585 - 10636.997: 33.7388% ( 29) 00:08:04.532 10636.997 - 10687.409: 34.1972% ( 
49) 00:08:04.532 10687.409 - 10737.822: 34.5621% ( 39) 00:08:04.532 10737.822 - 10788.234: 35.1329% ( 61) 00:08:04.532 10788.234 - 10838.646: 35.5352% ( 43) 00:08:04.532 10838.646 - 10889.058: 35.8626% ( 35) 00:08:04.532 10889.058 - 10939.471: 36.1621% ( 32) 00:08:04.532 10939.471 - 10989.883: 36.5363% ( 40) 00:08:04.532 10989.883 - 11040.295: 37.1351% ( 64) 00:08:04.532 11040.295 - 11090.708: 37.6123% ( 51) 00:08:04.532 11090.708 - 11141.120: 38.0614% ( 48) 00:08:04.532 11141.120 - 11191.532: 38.4824% ( 45) 00:08:04.532 11191.532 - 11241.945: 38.8660% ( 41) 00:08:04.532 11241.945 - 11292.357: 39.2309% ( 39) 00:08:04.532 11292.357 - 11342.769: 39.5022% ( 29) 00:08:04.532 11342.769 - 11393.182: 39.8765% ( 40) 00:08:04.532 11393.182 - 11443.594: 40.3162% ( 47) 00:08:04.532 11443.594 - 11494.006: 40.7747% ( 49) 00:08:04.532 11494.006 - 11544.418: 41.3080% ( 57) 00:08:04.532 11544.418 - 11594.831: 41.8413% ( 57) 00:08:04.532 11594.831 - 11645.243: 42.4588% ( 66) 00:08:04.532 11645.243 - 11695.655: 42.9173% ( 49) 00:08:04.532 11695.655 - 11746.068: 43.5161% ( 64) 00:08:04.532 11746.068 - 11796.480: 44.4517% ( 100) 00:08:04.532 11796.480 - 11846.892: 45.0692% ( 66) 00:08:04.532 11846.892 - 11897.305: 45.5745% ( 54) 00:08:04.532 11897.305 - 11947.717: 46.2762% ( 75) 00:08:04.532 11947.717 - 11998.129: 47.0621% ( 84) 00:08:04.532 11998.129 - 12048.542: 48.1100% ( 112) 00:08:04.532 12048.542 - 12098.954: 49.0363% ( 99) 00:08:04.532 12098.954 - 12149.366: 49.8222% ( 84) 00:08:04.532 12149.366 - 12199.778: 50.7766% ( 102) 00:08:04.532 12199.778 - 12250.191: 51.7964% ( 109) 00:08:04.532 12250.191 - 12300.603: 52.7414% ( 101) 00:08:04.532 12300.603 - 12351.015: 53.4712% ( 78) 00:08:04.532 12351.015 - 12401.428: 54.1542% ( 73) 00:08:04.532 12401.428 - 12451.840: 54.9588% ( 86) 00:08:04.532 12451.840 - 12502.252: 55.8103% ( 91) 00:08:04.532 12502.252 - 12552.665: 56.7552% ( 101) 00:08:04.532 12552.665 - 12603.077: 58.1119% ( 145) 00:08:04.532 12603.077 - 12653.489: 59.6463% ( 164) 00:08:04.532 12653.489 - 12703.902: 60.7129% ( 114) 00:08:04.532 12703.902 - 12754.314: 61.5176% ( 86) 00:08:04.532 12754.314 - 12804.726: 62.6310% ( 119) 00:08:04.532 12804.726 - 12855.138: 63.6789% ( 112) 00:08:04.532 12855.138 - 12905.551: 64.8016% ( 120) 00:08:04.532 12905.551 - 13006.375: 66.7197% ( 205) 00:08:04.532 13006.375 - 13107.200: 68.6190% ( 203) 00:08:04.532 13107.200 - 13208.025: 70.5838% ( 210) 00:08:04.532 13208.025 - 13308.849: 72.4738% ( 202) 00:08:04.532 13308.849 - 13409.674: 73.8118% ( 143) 00:08:04.532 13409.674 - 13510.498: 74.9158% ( 118) 00:08:04.532 13510.498 - 13611.323: 75.9076% ( 106) 00:08:04.532 13611.323 - 13712.148: 76.7683% ( 92) 00:08:04.532 13712.148 - 13812.972: 77.6665% ( 96) 00:08:04.532 13812.972 - 13913.797: 78.7799% ( 119) 00:08:04.532 13913.797 - 14014.622: 79.8372% ( 113) 00:08:04.532 14014.622 - 14115.446: 81.2126% ( 147) 00:08:04.532 14115.446 - 14216.271: 82.5505% ( 143) 00:08:04.532 14216.271 - 14317.095: 83.4487% ( 96) 00:08:04.532 14317.095 - 14417.920: 84.0943% ( 69) 00:08:04.532 14417.920 - 14518.745: 84.6650% ( 61) 00:08:04.532 14518.745 - 14619.569: 85.1422% ( 51) 00:08:04.532 14619.569 - 14720.394: 85.5726% ( 46) 00:08:04.532 14720.394 - 14821.218: 85.9749% ( 43) 00:08:04.532 14821.218 - 14922.043: 86.3960% ( 45) 00:08:04.532 14922.043 - 15022.868: 86.9386% ( 58) 00:08:04.532 15022.868 - 15123.692: 87.4719% ( 57) 00:08:04.532 15123.692 - 15224.517: 88.0240% ( 59) 00:08:04.532 15224.517 - 15325.342: 88.6602% ( 68) 00:08:04.532 15325.342 - 15426.166: 89.3993% ( 79) 
00:08:04.532 15426.166 - 15526.991: 90.1665% ( 82) 00:08:04.532 15526.991 - 15627.815: 90.8028% ( 68) 00:08:04.532 15627.815 - 15728.640: 91.4484% ( 69) 00:08:04.532 15728.640 - 15829.465: 92.1688% ( 77) 00:08:04.532 15829.465 - 15930.289: 92.8705% ( 75) 00:08:04.532 15930.289 - 16031.114: 93.6097% ( 79) 00:08:04.532 16031.114 - 16131.938: 94.1243% ( 55) 00:08:04.532 16131.938 - 16232.763: 94.4611% ( 36) 00:08:04.532 16232.763 - 16333.588: 94.7511% ( 31) 00:08:04.532 16333.588 - 16434.412: 94.9102% ( 17) 00:08:04.532 16434.412 - 16535.237: 95.0412% ( 14) 00:08:04.532 16535.237 - 16636.062: 95.1628% ( 13) 00:08:04.532 16636.062 - 16736.886: 95.2844% ( 13) 00:08:04.532 16736.886 - 16837.711: 95.4716% ( 20) 00:08:04.532 16837.711 - 16938.535: 95.6587% ( 20) 00:08:04.532 16938.535 - 17039.360: 95.8177% ( 17) 00:08:04.532 17039.360 - 17140.185: 95.9862% ( 18) 00:08:04.532 17140.185 - 17241.009: 96.1359% ( 16) 00:08:04.532 17241.009 - 17341.834: 96.4727% ( 36) 00:08:04.532 17341.834 - 17442.658: 96.7721% ( 32) 00:08:04.532 17442.658 - 17543.483: 97.2212% ( 48) 00:08:04.532 17543.483 - 17644.308: 97.5206% ( 32) 00:08:04.532 17644.308 - 17745.132: 97.6703% ( 16) 00:08:04.532 17745.132 - 17845.957: 97.8106% ( 15) 00:08:04.532 17845.957 - 17946.782: 97.9229% ( 12) 00:08:04.532 17946.782 - 18047.606: 98.0165% ( 10) 00:08:04.532 18047.606 - 18148.431: 98.1007% ( 9) 00:08:04.532 18148.431 - 18249.255: 98.1849% ( 9) 00:08:04.532 18249.255 - 18350.080: 98.2597% ( 8) 00:08:04.532 18350.080 - 18450.905: 98.3252% ( 7) 00:08:04.533 18450.905 - 18551.729: 98.3720% ( 5) 00:08:04.533 18551.729 - 18652.554: 98.4188% ( 5) 00:08:04.533 18652.554 - 18753.378: 98.6433% ( 24) 00:08:04.533 18753.378 - 18854.203: 98.6808% ( 4) 00:08:04.533 18854.203 - 18955.028: 98.7182% ( 4) 00:08:04.533 18955.028 - 19055.852: 98.7650% ( 5) 00:08:04.533 19055.852 - 19156.677: 98.8024% ( 4) 00:08:04.533 21979.766 - 22080.591: 98.8118% ( 1) 00:08:04.533 22080.591 - 22181.415: 98.8772% ( 7) 00:08:04.533 22181.415 - 22282.240: 98.9427% ( 7) 00:08:04.533 22282.240 - 22383.065: 99.0082% ( 7) 00:08:04.533 22383.065 - 22483.889: 99.0644% ( 6) 00:08:04.533 22483.889 - 22584.714: 99.1299% ( 7) 00:08:04.533 22584.714 - 22685.538: 99.1860% ( 6) 00:08:04.533 22685.538 - 22786.363: 99.2515% ( 7) 00:08:04.533 22786.363 - 22887.188: 99.2983% ( 5) 00:08:04.533 22887.188 - 22988.012: 99.3638% ( 7) 00:08:04.533 22988.012 - 23088.837: 99.4012% ( 4) 00:08:04.533 32062.228 - 32263.877: 99.4293% ( 3) 00:08:04.533 32263.877 - 32465.526: 99.5415% ( 12) 00:08:04.533 32465.526 - 32667.175: 99.6538% ( 12) 00:08:04.533 32667.175 - 32868.825: 99.7661% ( 12) 00:08:04.533 32868.825 - 33070.474: 99.8784% ( 12) 00:08:04.533 33070.474 - 33272.123: 99.9906% ( 12) 00:08:04.533 33272.123 - 33473.772: 100.0000% ( 1) 00:08:04.533 00:08:04.533 Latency histogram for PCIE (0000:00:12.0) NSID 2 from core 0: 00:08:04.533 ============================================================================== 00:08:04.533 Range in us Cumulative IO count 00:08:04.533 4713.551 - 4738.757: 0.0187% ( 2) 00:08:04.533 4738.757 - 4763.963: 0.0374% ( 2) 00:08:04.533 4763.963 - 4789.169: 0.0842% ( 5) 00:08:04.533 4789.169 - 4814.375: 0.1216% ( 4) 00:08:04.533 4814.375 - 4839.582: 0.1965% ( 8) 00:08:04.533 4839.582 - 4864.788: 0.2526% ( 6) 00:08:04.533 4864.788 - 4889.994: 0.3462% ( 10) 00:08:04.533 4889.994 - 4915.200: 0.3555% ( 1) 00:08:04.533 4915.200 - 4940.406: 0.3743% ( 2) 00:08:04.533 4940.406 - 4965.612: 0.4023% ( 3) 00:08:04.533 4965.612 - 4990.818: 0.4210% ( 2) 00:08:04.533 4990.818 - 
5016.025: 0.4397% ( 2) 00:08:04.533 5016.025 - 5041.231: 0.4585% ( 2) 00:08:04.533 5041.231 - 5066.437: 0.4772% ( 2) 00:08:04.533 5066.437 - 5091.643: 0.4959% ( 2) 00:08:04.533 5091.643 - 5116.849: 0.5240% ( 3) 00:08:04.533 5116.849 - 5142.055: 0.5427% ( 2) 00:08:04.533 5142.055 - 5167.262: 0.5614% ( 2) 00:08:04.533 5167.262 - 5192.468: 0.5707% ( 1) 00:08:04.533 5192.468 - 5217.674: 0.5894% ( 2) 00:08:04.533 5217.674 - 5242.880: 0.5988% ( 1) 00:08:04.533 6074.683 - 6099.889: 0.6082% ( 1) 00:08:04.533 6150.302 - 6175.508: 0.6175% ( 1) 00:08:04.533 6175.508 - 6200.714: 0.6456% ( 3) 00:08:04.533 6200.714 - 6225.920: 0.6643% ( 2) 00:08:04.533 6225.920 - 6251.126: 0.7017% ( 4) 00:08:04.533 6251.126 - 6276.332: 0.7391% ( 4) 00:08:04.533 6276.332 - 6301.538: 0.7859% ( 5) 00:08:04.533 6301.538 - 6326.745: 0.8795% ( 10) 00:08:04.533 6326.745 - 6351.951: 0.9918% ( 12) 00:08:04.533 6351.951 - 6377.157: 1.0292% ( 4) 00:08:04.533 6377.157 - 6402.363: 1.0573% ( 3) 00:08:04.533 6402.363 - 6427.569: 1.0853% ( 3) 00:08:04.533 6427.569 - 6452.775: 1.1228% ( 4) 00:08:04.533 6452.775 - 6503.188: 1.1789% ( 6) 00:08:04.533 6503.188 - 6553.600: 1.2163% ( 4) 00:08:04.533 6553.600 - 6604.012: 1.2818% ( 7) 00:08:04.533 6604.012 - 6654.425: 1.3660% ( 9) 00:08:04.533 6654.425 - 6704.837: 1.4596% ( 10) 00:08:04.533 6704.837 - 6755.249: 1.6654% ( 22) 00:08:04.533 6755.249 - 6805.662: 1.7777% ( 12) 00:08:04.533 6805.662 - 6856.074: 1.9742% ( 21) 00:08:04.533 6856.074 - 6906.486: 2.2829% ( 33) 00:08:04.533 6906.486 - 6956.898: 2.9098% ( 67) 00:08:04.533 6956.898 - 7007.311: 3.4244% ( 55) 00:08:04.533 7007.311 - 7057.723: 3.9203% ( 53) 00:08:04.533 7057.723 - 7108.135: 4.5752% ( 70) 00:08:04.533 7108.135 - 7158.548: 4.8840% ( 33) 00:08:04.533 7158.548 - 7208.960: 5.2676% ( 41) 00:08:04.533 7208.960 - 7259.372: 5.6980% ( 46) 00:08:04.533 7259.372 - 7309.785: 6.0254% ( 35) 00:08:04.533 7309.785 - 7360.197: 6.4091% ( 41) 00:08:04.533 7360.197 - 7410.609: 7.1482% ( 79) 00:08:04.533 7410.609 - 7461.022: 7.6254% ( 51) 00:08:04.533 7461.022 - 7511.434: 7.8125% ( 20) 00:08:04.533 7511.434 - 7561.846: 8.0090% ( 21) 00:08:04.533 7561.846 - 7612.258: 8.2710% ( 28) 00:08:04.533 7612.258 - 7662.671: 8.5423% ( 29) 00:08:04.533 7662.671 - 7713.083: 9.0007% ( 49) 00:08:04.533 7713.083 - 7763.495: 9.3469% ( 37) 00:08:04.533 7763.495 - 7813.908: 9.8709% ( 56) 00:08:04.533 7813.908 - 7864.320: 10.4323% ( 60) 00:08:04.533 7864.320 - 7914.732: 11.0030% ( 61) 00:08:04.533 7914.732 - 7965.145: 11.4989% ( 53) 00:08:04.533 7965.145 - 8015.557: 11.8451% ( 37) 00:08:04.533 8015.557 - 8065.969: 12.3035% ( 49) 00:08:04.533 8065.969 - 8116.382: 12.5187% ( 23) 00:08:04.533 8116.382 - 8166.794: 12.7713% ( 27) 00:08:04.533 8166.794 - 8217.206: 13.0240% ( 27) 00:08:04.533 8217.206 - 8267.618: 13.3234% ( 32) 00:08:04.533 8267.618 - 8318.031: 13.5385% ( 23) 00:08:04.533 8318.031 - 8368.443: 13.6695% ( 14) 00:08:04.533 8368.443 - 8418.855: 13.7631% ( 10) 00:08:04.533 8418.855 - 8469.268: 13.8567% ( 10) 00:08:04.533 8469.268 - 8519.680: 13.9596% ( 11) 00:08:04.533 8519.680 - 8570.092: 14.0906% ( 14) 00:08:04.533 8570.092 - 8620.505: 14.2777% ( 20) 00:08:04.533 8620.505 - 8670.917: 14.7081% ( 46) 00:08:04.533 8670.917 - 8721.329: 14.8578% ( 16) 00:08:04.533 8721.329 - 8771.742: 15.0168% ( 17) 00:08:04.533 8771.742 - 8822.154: 15.2227% ( 22) 00:08:04.533 8822.154 - 8872.566: 15.5501% ( 35) 00:08:04.533 8872.566 - 8922.978: 15.9431% ( 42) 00:08:04.533 8922.978 - 8973.391: 16.5138% ( 61) 00:08:04.533 8973.391 - 9023.803: 17.3372% ( 88) 00:08:04.533 9023.803 - 
9074.215: 17.8799% ( 58) 00:08:04.533 9074.215 - 9124.628: 18.3477% ( 50) 00:08:04.533 9124.628 - 9175.040: 18.7781% ( 46) 00:08:04.533 9175.040 - 9225.452: 19.3769% ( 64) 00:08:04.533 9225.452 - 9275.865: 20.0318% ( 70) 00:08:04.533 9275.865 - 9326.277: 20.4061% ( 40) 00:08:04.533 9326.277 - 9376.689: 20.7803% ( 40) 00:08:04.533 9376.689 - 9427.102: 21.1639% ( 41) 00:08:04.533 9427.102 - 9477.514: 21.4820% ( 34) 00:08:04.533 9477.514 - 9527.926: 21.8376% ( 38) 00:08:04.533 9527.926 - 9578.338: 22.3615% ( 56) 00:08:04.533 9578.338 - 9628.751: 22.8948% ( 57) 00:08:04.533 9628.751 - 9679.163: 23.4375% ( 58) 00:08:04.533 9679.163 - 9729.575: 23.9895% ( 59) 00:08:04.533 9729.575 - 9779.988: 24.5322% ( 58) 00:08:04.533 9779.988 - 9830.400: 24.9158% ( 41) 00:08:04.533 9830.400 - 9880.812: 25.3555% ( 47) 00:08:04.533 9880.812 - 9931.225: 25.7017% ( 37) 00:08:04.533 9931.225 - 9981.637: 26.0105% ( 33) 00:08:04.533 9981.637 - 10032.049: 26.3473% ( 36) 00:08:04.533 10032.049 - 10082.462: 26.6841% ( 36) 00:08:04.533 10082.462 - 10132.874: 27.1145% ( 46) 00:08:04.533 10132.874 - 10183.286: 27.6010% ( 52) 00:08:04.533 10183.286 - 10233.698: 28.0034% ( 43) 00:08:04.533 10233.698 - 10284.111: 28.6022% ( 64) 00:08:04.533 10284.111 - 10334.523: 29.3600% ( 81) 00:08:04.533 10334.523 - 10384.935: 30.0992% ( 79) 00:08:04.533 10384.935 - 10435.348: 30.6980% ( 64) 00:08:04.533 10435.348 - 10485.760: 31.5213% ( 88) 00:08:04.533 10485.760 - 10536.172: 32.2231% ( 75) 00:08:04.533 10536.172 - 10586.585: 32.7564% ( 57) 00:08:04.533 10586.585 - 10636.997: 33.3832% ( 67) 00:08:04.533 10636.997 - 10687.409: 33.9165% ( 57) 00:08:04.533 10687.409 - 10737.822: 34.3656% ( 48) 00:08:04.533 10737.822 - 10788.234: 34.8241% ( 49) 00:08:04.533 10788.234 - 10838.646: 35.3574% ( 57) 00:08:04.533 10838.646 - 10889.058: 35.6755% ( 34) 00:08:04.533 10889.058 - 10939.471: 35.9188% ( 26) 00:08:04.533 10939.471 - 10989.883: 36.2930% ( 40) 00:08:04.533 10989.883 - 11040.295: 36.6112% ( 34) 00:08:04.533 11040.295 - 11090.708: 37.0415% ( 46) 00:08:04.533 11090.708 - 11141.120: 37.4813% ( 47) 00:08:04.533 11141.120 - 11191.532: 38.0427% ( 60) 00:08:04.533 11191.532 - 11241.945: 38.6321% ( 63) 00:08:04.533 11241.945 - 11292.357: 39.2028% ( 61) 00:08:04.533 11292.357 - 11342.769: 39.8391% ( 68) 00:08:04.533 11342.769 - 11393.182: 40.3911% ( 59) 00:08:04.533 11393.182 - 11443.594: 40.8402% ( 48) 00:08:04.533 11443.594 - 11494.006: 41.3080% ( 50) 00:08:04.533 11494.006 - 11544.418: 41.7945% ( 52) 00:08:04.533 11544.418 - 11594.831: 42.2811% ( 52) 00:08:04.533 11594.831 - 11645.243: 43.0483% ( 82) 00:08:04.533 11645.243 - 11695.655: 43.9278% ( 94) 00:08:04.533 11695.655 - 11746.068: 44.8447% ( 98) 00:08:04.533 11746.068 - 11796.480: 45.4435% ( 64) 00:08:04.533 11796.480 - 11846.892: 46.0236% ( 62) 00:08:04.533 11846.892 - 11897.305: 46.7066% ( 73) 00:08:04.533 11897.305 - 11947.717: 47.3147% ( 65) 00:08:04.533 11947.717 - 11998.129: 48.1942% ( 94) 00:08:04.533 11998.129 - 12048.542: 49.1579% ( 103) 00:08:04.533 12048.542 - 12098.954: 50.1310% ( 104) 00:08:04.533 12098.954 - 12149.366: 51.0198% ( 95) 00:08:04.533 12149.366 - 12199.778: 51.7683% ( 80) 00:08:04.533 12199.778 - 12250.191: 52.3204% ( 59) 00:08:04.533 12250.191 - 12300.603: 52.9472% ( 67) 00:08:04.533 12300.603 - 12351.015: 53.5647% ( 66) 00:08:04.533 12351.015 - 12401.428: 54.2103% ( 69) 00:08:04.533 12401.428 - 12451.840: 54.8653% ( 70) 00:08:04.533 12451.840 - 12502.252: 55.9506% ( 116) 00:08:04.533 12502.252 - 12552.665: 56.9517% ( 107) 00:08:04.533 12552.665 - 12603.077: 
57.8780% ( 99) 00:08:04.533 12603.077 - 12653.489: 58.9446% ( 114) 00:08:04.533 12653.489 - 12703.902: 59.9551% ( 108) 00:08:04.533 12703.902 - 12754.314: 60.9001% ( 101) 00:08:04.534 12754.314 - 12804.726: 61.9760% ( 115) 00:08:04.534 12804.726 - 12855.138: 63.3514% ( 147) 00:08:04.534 12855.138 - 12905.551: 64.6052% ( 134) 00:08:04.534 12905.551 - 13006.375: 66.8881% ( 244) 00:08:04.534 13006.375 - 13107.200: 68.9184% ( 217) 00:08:04.534 13107.200 - 13208.025: 70.6400% ( 184) 00:08:04.534 13208.025 - 13308.849: 72.1276% ( 159) 00:08:04.534 13308.849 - 13409.674: 73.3065% ( 126) 00:08:04.534 13409.674 - 13510.498: 74.6445% ( 143) 00:08:04.534 13510.498 - 13611.323: 75.7766% ( 121) 00:08:04.534 13611.323 - 13712.148: 76.8058% ( 110) 00:08:04.534 13712.148 - 13812.972: 77.9847% ( 126) 00:08:04.534 13812.972 - 13913.797: 79.0419% ( 113) 00:08:04.534 13913.797 - 14014.622: 80.3518% ( 140) 00:08:04.534 14014.622 - 14115.446: 81.2781% ( 99) 00:08:04.534 14115.446 - 14216.271: 82.2043% ( 99) 00:08:04.534 14216.271 - 14317.095: 83.0277% ( 88) 00:08:04.534 14317.095 - 14417.920: 83.5610% ( 57) 00:08:04.534 14417.920 - 14518.745: 83.9914% ( 46) 00:08:04.534 14518.745 - 14619.569: 84.5153% ( 56) 00:08:04.534 14619.569 - 14720.394: 85.1703% ( 70) 00:08:04.534 14720.394 - 14821.218: 86.0311% ( 92) 00:08:04.534 14821.218 - 14922.043: 86.8638% ( 89) 00:08:04.534 14922.043 - 15022.868: 87.4251% ( 60) 00:08:04.534 15022.868 - 15123.692: 87.9491% ( 56) 00:08:04.534 15123.692 - 15224.517: 88.4637% ( 55) 00:08:04.534 15224.517 - 15325.342: 88.9876% ( 56) 00:08:04.534 15325.342 - 15426.166: 89.5584% ( 61) 00:08:04.534 15426.166 - 15526.991: 90.2133% ( 70) 00:08:04.534 15526.991 - 15627.815: 90.8308% ( 66) 00:08:04.534 15627.815 - 15728.640: 91.4484% ( 66) 00:08:04.534 15728.640 - 15829.465: 92.1594% ( 76) 00:08:04.534 15829.465 - 15930.289: 92.8612% ( 75) 00:08:04.534 15930.289 - 16031.114: 93.6284% ( 82) 00:08:04.534 16031.114 - 16131.938: 94.1523% ( 56) 00:08:04.534 16131.938 - 16232.763: 94.6576% ( 54) 00:08:04.534 16232.763 - 16333.588: 95.0692% ( 44) 00:08:04.534 16333.588 - 16434.412: 95.3219% ( 27) 00:08:04.534 16434.412 - 16535.237: 95.6400% ( 34) 00:08:04.534 16535.237 - 16636.062: 95.9581% ( 34) 00:08:04.534 16636.062 - 16736.886: 96.1639% ( 22) 00:08:04.534 16736.886 - 16837.711: 96.5101% ( 37) 00:08:04.534 16837.711 - 16938.535: 96.7627% ( 27) 00:08:04.534 16938.535 - 17039.360: 96.8844% ( 13) 00:08:04.534 17039.360 - 17140.185: 96.9966% ( 12) 00:08:04.534 17140.185 - 17241.009: 97.0060% ( 1) 00:08:04.534 17341.834 - 17442.658: 97.0434% ( 4) 00:08:04.534 17442.658 - 17543.483: 97.1276% ( 9) 00:08:04.534 17543.483 - 17644.308: 97.2493% ( 13) 00:08:04.534 17644.308 - 17745.132: 97.2867% ( 4) 00:08:04.534 17745.132 - 17845.957: 97.4177% ( 14) 00:08:04.534 17845.957 - 17946.782: 97.5861% ( 18) 00:08:04.534 17946.782 - 18047.606: 97.8387% ( 27) 00:08:04.534 18047.606 - 18148.431: 98.0539% ( 23) 00:08:04.534 18148.431 - 18249.255: 98.2972% ( 26) 00:08:04.534 18249.255 - 18350.080: 98.5311% ( 25) 00:08:04.534 18350.080 - 18450.905: 98.6995% ( 18) 00:08:04.534 18450.905 - 18551.729: 98.7930% ( 10) 00:08:04.534 18551.729 - 18652.554: 98.8024% ( 1) 00:08:04.534 22383.065 - 22483.889: 98.8305% ( 3) 00:08:04.534 22483.889 - 22584.714: 98.8866% ( 6) 00:08:04.534 22584.714 - 22685.538: 98.9521% ( 7) 00:08:04.534 22685.538 - 22786.363: 99.0176% ( 7) 00:08:04.534 22786.363 - 22887.188: 99.0831% ( 7) 00:08:04.534 22887.188 - 22988.012: 99.1392% ( 6) 00:08:04.534 22988.012 - 23088.837: 99.2047% ( 7) 00:08:04.534 
23088.837 - 23189.662: 99.2702% ( 7) 00:08:04.534 23189.662 - 23290.486: 99.3263% ( 6) 00:08:04.534 23290.486 - 23391.311: 99.3918% ( 7) 00:08:04.534 23391.311 - 23492.135: 99.4012% ( 1) 00:08:04.534 32263.877 - 32465.526: 99.5041% ( 11) 00:08:04.534 32465.526 - 32667.175: 99.6164% ( 12) 00:08:04.534 32667.175 - 32868.825: 99.7287% ( 12) 00:08:04.534 32868.825 - 33070.474: 99.8409% ( 12) 00:08:04.534 33070.474 - 33272.123: 99.9532% ( 12) 00:08:04.534 33272.123 - 33473.772: 100.0000% ( 5) 00:08:04.534 00:08:04.534 Latency histogram for PCIE (0000:00:12.0) NSID 3 from core 0: 00:08:04.534 ============================================================================== 00:08:04.534 Range in us Cumulative IO count 00:08:04.534 4335.458 - 4360.665: 0.0186% ( 2) 00:08:04.534 4360.665 - 4385.871: 0.1302% ( 12) 00:08:04.534 4385.871 - 4411.077: 0.1953% ( 7) 00:08:04.534 4411.077 - 4436.283: 0.2976% ( 11) 00:08:04.534 4436.283 - 4461.489: 0.3906% ( 10) 00:08:04.534 4461.489 - 4486.695: 0.4278% ( 4) 00:08:04.534 4486.695 - 4511.902: 0.4464% ( 2) 00:08:04.534 4511.902 - 4537.108: 0.4743% ( 3) 00:08:04.534 4537.108 - 4562.314: 0.4929% ( 2) 00:08:04.534 4562.314 - 4587.520: 0.5115% ( 2) 00:08:04.534 4587.520 - 4612.726: 0.5394% ( 3) 00:08:04.534 4612.726 - 4637.932: 0.5487% ( 1) 00:08:04.534 4637.932 - 4663.138: 0.5766% ( 3) 00:08:04.534 4663.138 - 4688.345: 0.5859% ( 1) 00:08:04.534 4688.345 - 4713.551: 0.5952% ( 1) 00:08:04.534 5923.446 - 5948.652: 0.6045% ( 1) 00:08:04.534 5999.065 - 6024.271: 0.6138% ( 1) 00:08:04.534 6074.683 - 6099.889: 0.6231% ( 1) 00:08:04.534 6125.095 - 6150.302: 0.6417% ( 2) 00:08:04.534 6150.302 - 6175.508: 0.6510% ( 1) 00:08:04.534 6175.508 - 6200.714: 0.6696% ( 2) 00:08:04.534 6200.714 - 6225.920: 0.6789% ( 1) 00:08:04.534 6225.920 - 6251.126: 0.6975% ( 2) 00:08:04.534 6251.126 - 6276.332: 0.7347% ( 4) 00:08:04.534 6276.332 - 6301.538: 0.7626% ( 3) 00:08:04.534 6301.538 - 6326.745: 0.8371% ( 8) 00:08:04.534 6326.745 - 6351.951: 0.8836% ( 5) 00:08:04.534 6351.951 - 6377.157: 0.9394% ( 6) 00:08:04.534 6377.157 - 6402.363: 1.1440% ( 22) 00:08:04.534 6402.363 - 6427.569: 1.2184% ( 8) 00:08:04.534 6427.569 - 6452.775: 1.2556% ( 4) 00:08:04.534 6452.775 - 6503.188: 1.3207% ( 7) 00:08:04.534 6503.188 - 6553.600: 1.4137% ( 10) 00:08:04.534 6553.600 - 6604.012: 1.4509% ( 4) 00:08:04.534 6604.012 - 6654.425: 1.4974% ( 5) 00:08:04.534 6654.425 - 6704.837: 1.5811% ( 9) 00:08:04.534 6704.837 - 6755.249: 1.6927% ( 12) 00:08:04.534 6755.249 - 6805.662: 2.0368% ( 37) 00:08:04.534 6805.662 - 6856.074: 2.2693% ( 25) 00:08:04.534 6856.074 - 6906.486: 2.5205% ( 27) 00:08:04.534 6906.486 - 6956.898: 2.9111% ( 42) 00:08:04.534 6956.898 - 7007.311: 3.4412% ( 57) 00:08:04.534 7007.311 - 7057.723: 3.8411% ( 43) 00:08:04.534 7057.723 - 7108.135: 4.1388% ( 32) 00:08:04.534 7108.135 - 7158.548: 4.6131% ( 51) 00:08:04.534 7158.548 - 7208.960: 5.4501% ( 90) 00:08:04.534 7208.960 - 7259.372: 6.1849% ( 79) 00:08:04.534 7259.372 - 7309.785: 6.7615% ( 62) 00:08:04.534 7309.785 - 7360.197: 7.0406% ( 30) 00:08:04.534 7360.197 - 7410.609: 7.3196% ( 30) 00:08:04.534 7410.609 - 7461.022: 7.7660% ( 48) 00:08:04.534 7461.022 - 7511.434: 8.0915% ( 35) 00:08:04.534 7511.434 - 7561.846: 8.5193% ( 46) 00:08:04.534 7561.846 - 7612.258: 8.7798% ( 28) 00:08:04.534 7612.258 - 7662.671: 9.0495% ( 29) 00:08:04.534 7662.671 - 7713.083: 9.2448% ( 21) 00:08:04.534 7713.083 - 7763.495: 9.5238% ( 30) 00:08:04.534 7763.495 - 7813.908: 9.8772% ( 38) 00:08:04.534 7813.908 - 7864.320: 10.5469% ( 72) 00:08:04.534 7864.320 - 7914.732: 
11.2351% ( 74) 00:08:04.534 7914.732 - 7965.145: 11.6443% ( 44) 00:08:04.534 7965.145 - 8015.557: 12.1931% ( 59) 00:08:04.534 8015.557 - 8065.969: 12.5744% ( 41) 00:08:04.534 8065.969 - 8116.382: 12.9092% ( 36) 00:08:04.534 8116.382 - 8166.794: 13.1231% ( 23) 00:08:04.534 8166.794 - 8217.206: 13.3092% ( 20) 00:08:04.534 8217.206 - 8267.618: 13.6626% ( 38) 00:08:04.534 8267.618 - 8318.031: 13.7184% ( 6) 00:08:04.534 8318.031 - 8368.443: 13.8114% ( 10) 00:08:04.534 8368.443 - 8418.855: 13.9509% ( 15) 00:08:04.534 8418.855 - 8469.268: 14.0904% ( 15) 00:08:04.534 8469.268 - 8519.680: 14.3229% ( 25) 00:08:04.534 8519.680 - 8570.092: 14.5182% ( 21) 00:08:04.534 8570.092 - 8620.505: 14.6949% ( 19) 00:08:04.534 8620.505 - 8670.917: 14.8344% ( 15) 00:08:04.534 8670.917 - 8721.329: 15.0763% ( 26) 00:08:04.534 8721.329 - 8771.742: 15.4204% ( 37) 00:08:04.534 8771.742 - 8822.154: 15.7087% ( 31) 00:08:04.534 8822.154 - 8872.566: 16.2016% ( 53) 00:08:04.534 8872.566 - 8922.978: 16.6016% ( 43) 00:08:04.534 8922.978 - 8973.391: 16.9550% ( 38) 00:08:04.534 8973.391 - 9023.803: 17.4200% ( 50) 00:08:04.534 9023.803 - 9074.215: 17.8943% ( 51) 00:08:04.534 9074.215 - 9124.628: 18.4059% ( 55) 00:08:04.534 9124.628 - 9175.040: 18.8058% ( 43) 00:08:04.534 9175.040 - 9225.452: 19.1964% ( 42) 00:08:04.534 9225.452 - 9275.865: 19.5685% ( 40) 00:08:04.534 9275.865 - 9326.277: 20.0149% ( 48) 00:08:04.534 9326.277 - 9376.689: 20.3962% ( 41) 00:08:04.534 9376.689 - 9427.102: 20.6845% ( 31) 00:08:04.534 9427.102 - 9477.514: 20.9635% ( 30) 00:08:04.534 9477.514 - 9527.926: 21.3635% ( 43) 00:08:04.534 9527.926 - 9578.338: 21.7448% ( 41) 00:08:04.534 9578.338 - 9628.751: 22.3493% ( 65) 00:08:04.534 9628.751 - 9679.163: 22.6469% ( 32) 00:08:04.534 9679.163 - 9729.575: 23.0841% ( 47) 00:08:04.534 9729.575 - 9779.988: 23.5398% ( 49) 00:08:04.534 9779.988 - 9830.400: 23.9490% ( 44) 00:08:04.534 9830.400 - 9880.812: 24.3583% ( 44) 00:08:04.534 9880.812 - 9931.225: 24.8884% ( 57) 00:08:04.534 9931.225 - 9981.637: 25.2976% ( 44) 00:08:04.534 9981.637 - 10032.049: 25.7812% ( 52) 00:08:04.534 10032.049 - 10082.462: 26.2370% ( 49) 00:08:04.534 10082.462 - 10132.874: 26.6369% ( 43) 00:08:04.534 10132.874 - 10183.286: 27.1763% ( 58) 00:08:04.534 10183.286 - 10233.698: 27.6321% ( 49) 00:08:04.534 10233.698 - 10284.111: 27.9483% ( 34) 00:08:04.534 10284.111 - 10334.523: 28.4505% ( 54) 00:08:04.535 10334.523 - 10384.935: 29.0086% ( 60) 00:08:04.535 10384.935 - 10435.348: 29.5945% ( 63) 00:08:04.535 10435.348 - 10485.760: 30.1897% ( 64) 00:08:04.535 10485.760 - 10536.172: 30.6734% ( 52) 00:08:04.535 10536.172 - 10586.585: 31.2221% ( 59) 00:08:04.535 10586.585 - 10636.997: 32.2080% ( 106) 00:08:04.535 10636.997 - 10687.409: 33.0543% ( 91) 00:08:04.535 10687.409 - 10737.822: 33.6031% ( 59) 00:08:04.535 10737.822 - 10788.234: 34.1983% ( 64) 00:08:04.535 10788.234 - 10838.646: 34.8121% ( 66) 00:08:04.535 10838.646 - 10889.058: 35.4725% ( 71) 00:08:04.535 10889.058 - 10939.471: 35.9933% ( 56) 00:08:04.535 10939.471 - 10989.883: 36.5048% ( 55) 00:08:04.535 10989.883 - 11040.295: 36.9234% ( 45) 00:08:04.535 11040.295 - 11090.708: 37.5372% ( 66) 00:08:04.535 11090.708 - 11141.120: 37.9743% ( 47) 00:08:04.535 11141.120 - 11191.532: 38.4115% ( 47) 00:08:04.535 11191.532 - 11241.945: 38.7370% ( 35) 00:08:04.535 11241.945 - 11292.357: 39.1276% ( 42) 00:08:04.535 11292.357 - 11342.769: 39.4903% ( 39) 00:08:04.535 11342.769 - 11393.182: 39.7507% ( 28) 00:08:04.535 11393.182 - 11443.594: 40.0949% ( 37) 00:08:04.535 11443.594 - 11494.006: 40.4855% ( 42) 
00:08:04.535 11494.006 - 11544.418: 40.8482% ( 39) 00:08:04.535 11544.418 - 11594.831: 41.3504% ( 54) 00:08:04.535 11594.831 - 11645.243: 42.1782% ( 89) 00:08:04.535 11645.243 - 11695.655: 42.9408% ( 82) 00:08:04.535 11695.655 - 11746.068: 43.9081% ( 104) 00:08:04.535 11746.068 - 11796.480: 44.7731% ( 93) 00:08:04.535 11796.480 - 11846.892: 45.4706% ( 75) 00:08:04.535 11846.892 - 11897.305: 46.4286% ( 103) 00:08:04.535 11897.305 - 11947.717: 47.3679% ( 101) 00:08:04.535 11947.717 - 11998.129: 48.2608% ( 96) 00:08:04.535 11998.129 - 12048.542: 49.0234% ( 82) 00:08:04.535 12048.542 - 12098.954: 49.6559% ( 68) 00:08:04.535 12098.954 - 12149.366: 50.3534% ( 75) 00:08:04.535 12149.366 - 12199.778: 51.1254% ( 83) 00:08:04.535 12199.778 - 12250.191: 51.8880% ( 82) 00:08:04.535 12250.191 - 12300.603: 52.7995% ( 98) 00:08:04.535 12300.603 - 12351.015: 53.6086% ( 87) 00:08:04.535 12351.015 - 12401.428: 54.6317% ( 110) 00:08:04.535 12401.428 - 12451.840: 55.4036% ( 83) 00:08:04.535 12451.840 - 12502.252: 56.0082% ( 65) 00:08:04.535 12502.252 - 12552.665: 56.6406% ( 68) 00:08:04.535 12552.665 - 12603.077: 57.2824% ( 69) 00:08:04.535 12603.077 - 12653.489: 58.1008% ( 88) 00:08:04.535 12653.489 - 12703.902: 58.8170% ( 77) 00:08:04.535 12703.902 - 12754.314: 59.5052% ( 74) 00:08:04.535 12754.314 - 12804.726: 60.2772% ( 83) 00:08:04.535 12804.726 - 12855.138: 60.9840% ( 76) 00:08:04.535 12855.138 - 12905.551: 61.9885% ( 108) 00:08:04.535 12905.551 - 13006.375: 64.5926% ( 280) 00:08:04.535 13006.375 - 13107.200: 67.1875% ( 279) 00:08:04.535 13107.200 - 13208.025: 69.4010% ( 238) 00:08:04.535 13208.025 - 13308.849: 71.7076% ( 248) 00:08:04.535 13308.849 - 13409.674: 73.2980% ( 171) 00:08:04.535 13409.674 - 13510.498: 74.7675% ( 158) 00:08:04.535 13510.498 - 13611.323: 76.1254% ( 146) 00:08:04.535 13611.323 - 13712.148: 77.3531% ( 132) 00:08:04.535 13712.148 - 13812.972: 78.4691% ( 120) 00:08:04.535 13812.972 - 13913.797: 79.2597% ( 85) 00:08:04.535 13913.797 - 14014.622: 79.7712% ( 55) 00:08:04.535 14014.622 - 14115.446: 80.5339% ( 82) 00:08:04.535 14115.446 - 14216.271: 81.4546% ( 99) 00:08:04.535 14216.271 - 14317.095: 82.5521% ( 118) 00:08:04.535 14317.095 - 14417.920: 83.2775% ( 78) 00:08:04.535 14417.920 - 14518.745: 83.8635% ( 63) 00:08:04.535 14518.745 - 14619.569: 84.5889% ( 78) 00:08:04.535 14619.569 - 14720.394: 85.2865% ( 75) 00:08:04.535 14720.394 - 14821.218: 86.0212% ( 79) 00:08:04.535 14821.218 - 14922.043: 87.0629% ( 112) 00:08:04.535 14922.043 - 15022.868: 87.9092% ( 91) 00:08:04.535 15022.868 - 15123.692: 88.7277% ( 88) 00:08:04.535 15123.692 - 15224.517: 89.5182% ( 85) 00:08:04.535 15224.517 - 15325.342: 90.2902% ( 83) 00:08:04.535 15325.342 - 15426.166: 90.9412% ( 70) 00:08:04.535 15426.166 - 15526.991: 91.5644% ( 67) 00:08:04.535 15526.991 - 15627.815: 92.0759% ( 55) 00:08:04.535 15627.815 - 15728.640: 92.5595% ( 52) 00:08:04.535 15728.640 - 15829.465: 93.0432% ( 52) 00:08:04.535 15829.465 - 15930.289: 93.4245% ( 41) 00:08:04.535 15930.289 - 16031.114: 93.8337% ( 44) 00:08:04.535 16031.114 - 16131.938: 94.3173% ( 52) 00:08:04.535 16131.938 - 16232.763: 94.6801% ( 39) 00:08:04.535 16232.763 - 16333.588: 95.0893% ( 44) 00:08:04.535 16333.588 - 16434.412: 95.4706% ( 41) 00:08:04.535 16434.412 - 16535.237: 95.7961% ( 35) 00:08:04.535 16535.237 - 16636.062: 96.2054% ( 44) 00:08:04.535 16636.062 - 16736.886: 96.6425% ( 47) 00:08:04.535 16736.886 - 16837.711: 96.8936% ( 27) 00:08:04.535 16837.711 - 16938.535: 97.0331% ( 15) 00:08:04.535 16938.535 - 17039.360: 97.1819% ( 16) 00:08:04.535 
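(Reader's note on the elided tables above: each one is a cumulative latency distribution. Every row is a bucket, "Range in us" gives its bounds, and the percentage is the fraction of all IOs that completed at or below the bucket's upper edge, so approximate percentiles can be read straight off the rows. A minimal awk sketch for pulling p50/p99 upper bounds out of a saved copy of this output; "autotest.log" is a hypothetical file name, and the timestamp-prefixed, one-bucket-per-line layout shown above is assumed:

  awk '$3 == "-" && $5 ~ /%$/ {
      hi = $4;  sub(/:$/, "", hi)      # upper edge of this bucket, in microseconds
      pct = $5; sub(/%$/, "", pct)     # cumulative percentage of completed IOs
      if (!p50 && pct + 0 >= 50) p50 = hi
      if (!p99 && pct + 0 >= 99) p99 = hi
  }
  END { printf "p50 <= %s us, p99 <= %s us\n", p50, p99 }' autotest.log

The sketch latches onto the first table it meets; splitting the output per controller is left out for brevity.)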
00:08:04.535 
00:08:04.535 04:55:42 nvme.nvme_perf -- nvme/nvme.sh@24 -- # '[' -b /dev/ram0 ']'
00:08:04.535 
00:08:04.535 real 0m2.400s
00:08:04.535 user 0m2.115s
00:08:04.535 sys 0m0.172s
00:08:04.535 04:55:42 nvme.nvme_perf -- common/autotest_common.sh@1126 -- # xtrace_disable
00:08:04.535 04:55:42 nvme.nvme_perf -- common/autotest_common.sh@10 -- # set +x
00:08:04.535 ************************************
00:08:04.535 END TEST nvme_perf
00:08:04.535 ************************************
00:08:04.535 04:55:42 nvme -- nvme/nvme.sh@87 -- # run_test nvme_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0
00:08:04.535 04:55:42 nvme -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']'
00:08:04.535 04:55:42 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable
00:08:04.535 04:55:42 nvme -- common/autotest_common.sh@10 -- # set +x
00:08:04.535 ************************************
00:08:04.535 START TEST nvme_hello_world
00:08:04.535 ************************************
00:08:04.535 04:55:42 nvme.nvme_hello_world -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0
00:08:04.796 Initializing NVMe Controllers
00:08:04.796 Attached to 0000:00:13.0
00:08:04.796 Namespace ID: 1 size: 1GB
00:08:04.796 Attached to 0000:00:10.0
00:08:04.796 Namespace ID: 1 size: 6GB
00:08:04.796 Attached to 0000:00:11.0
00:08:04.796 Namespace ID: 1 size: 5GB
00:08:04.796 Attached to 0000:00:12.0
00:08:04.796 Namespace ID: 1 size: 4GB
00:08:04.796 Namespace ID: 2 size: 4GB
00:08:04.796 Namespace ID: 3 size: 4GB
00:08:04.796 Initialization complete.
00:08:04.796 INFO: using host memory buffer for IO
00:08:04.796 Hello world!
00:08:04.796 INFO: using host memory buffer for IO
00:08:04.796 Hello world!
00:08:04.796 INFO: using host memory buffer for IO
00:08:04.796 Hello world!
00:08:04.796 INFO: using host memory buffer for IO
00:08:04.796 Hello world!
00:08:04.796 INFO: using host memory buffer for IO
00:08:04.796 Hello world!
00:08:04.796 INFO: using host memory buffer for IO
00:08:04.796 Hello world!
00:08:04.796 
00:08:04.796 real 0m0.181s
00:08:04.796 user 0m0.064s
00:08:04.796 sys 0m0.074s
00:08:04.796 04:55:42 nvme.nvme_hello_world -- common/autotest_common.sh@1126 -- # xtrace_disable
00:08:04.796 ************************************
00:08:04.796 END TEST nvme_hello_world
00:08:04.796 ************************************
00:08:04.796 04:55:42 nvme.nvme_hello_world -- common/autotest_common.sh@10 -- # set +x
00:08:04.796 04:55:42 nvme -- nvme/nvme.sh@88 -- # run_test nvme_sgl /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl
00:08:04.796 04:55:42 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:08:04.796 04:55:42 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable
00:08:04.796 04:55:42 nvme -- common/autotest_common.sh@10 -- # set +x
00:08:04.796 ************************************
00:08:04.796 START TEST nvme_sgl
00:08:04.796 ************************************
00:08:04.796 04:55:42 nvme.nvme_sgl -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl
00:08:05.058 0000:00:13.0: build_io_request_0 Invalid IO length parameter
00:08:05.058 0000:00:13.0: build_io_request_1 Invalid IO length parameter
00:08:05.058 0000:00:13.0: build_io_request_2 Invalid IO length parameter
00:08:05.058 0000:00:13.0: build_io_request_3 Invalid IO length parameter
00:08:05.058 0000:00:13.0: build_io_request_4 Invalid IO length parameter
00:08:05.058 0000:00:13.0: build_io_request_5 Invalid IO length parameter
00:08:05.058 0000:00:13.0: build_io_request_6 Invalid IO length parameter
00:08:05.058 0000:00:13.0: build_io_request_7 Invalid IO length parameter
00:08:05.058 0000:00:13.0: build_io_request_8 Invalid IO length parameter
00:08:05.058 0000:00:13.0: build_io_request_9 Invalid IO length parameter
00:08:05.058 0000:00:13.0: build_io_request_10 Invalid IO length parameter
00:08:05.058 0000:00:13.0: build_io_request_11 Invalid IO length parameter
00:08:05.058 0000:00:10.0: build_io_request_0 Invalid IO length parameter
00:08:05.058 0000:00:10.0: build_io_request_1 Invalid IO length parameter
00:08:05.058 0000:00:10.0: build_io_request_3 Invalid IO length parameter
00:08:05.058 0000:00:10.0: build_io_request_8 Invalid IO length parameter
00:08:05.058 0000:00:10.0: build_io_request_9 Invalid IO length parameter
00:08:05.058 0000:00:10.0: build_io_request_11 Invalid IO length parameter
00:08:05.058 0000:00:11.0: build_io_request_0 Invalid IO length parameter
00:08:05.058 0000:00:11.0: build_io_request_1 Invalid IO length parameter
00:08:05.058 0000:00:11.0: build_io_request_3 Invalid IO length parameter
00:08:05.058 0000:00:11.0: build_io_request_8 Invalid IO length parameter
00:08:05.058 0000:00:11.0: build_io_request_9 Invalid IO length parameter
00:08:05.058 0000:00:11.0: build_io_request_11 Invalid IO length parameter
00:08:05.058 0000:00:12.0: build_io_request_0 Invalid IO length parameter
00:08:05.058 0000:00:12.0: build_io_request_1 Invalid IO length parameter
00:08:05.058 0000:00:12.0: build_io_request_2 Invalid IO length parameter
00:08:05.058 0000:00:12.0: build_io_request_3 Invalid IO length parameter
00:08:05.058 0000:00:12.0: build_io_request_4 Invalid IO length parameter
00:08:05.058 0000:00:12.0: build_io_request_5 Invalid IO length parameter
00:08:05.058 0000:00:12.0: build_io_request_6 Invalid IO length parameter
00:08:05.058 0000:00:12.0: build_io_request_7 Invalid IO length parameter
00:08:05.058 0000:00:12.0: build_io_request_8 Invalid IO length parameter
00:08:05.058 0000:00:12.0: build_io_request_9 Invalid IO length parameter
00:08:05.058 0000:00:12.0: build_io_request_10 Invalid IO length parameter
00:08:05.058 0000:00:12.0: build_io_request_11 Invalid IO length parameter
00:08:05.058 NVMe Readv/Writev Request test
00:08:05.058 Attached to 0000:00:13.0
00:08:05.058 Attached to 0000:00:10.0
00:08:05.058 Attached to 0000:00:11.0
00:08:05.058 Attached to 0000:00:12.0
00:08:05.058 0000:00:10.0: build_io_request_2 test passed
00:08:05.058 0000:00:10.0: build_io_request_4 test passed
00:08:05.058 0000:00:10.0: build_io_request_5 test passed
00:08:05.058 0000:00:10.0: build_io_request_6 test passed
00:08:05.058 0000:00:10.0: build_io_request_7 test passed
00:08:05.058 0000:00:10.0: build_io_request_10 test passed
00:08:05.058 0000:00:11.0: build_io_request_2 test passed
00:08:05.058 0000:00:11.0: build_io_request_4 test passed
00:08:05.058 0000:00:11.0: build_io_request_5 test passed
00:08:05.058 0000:00:11.0: build_io_request_6 test passed
00:08:05.058 0000:00:11.0: build_io_request_7 test passed
00:08:05.058 0000:00:11.0: build_io_request_10 test passed
00:08:05.058 Cleaning up...
00:08:05.058 
00:08:05.058 real 0m0.232s
00:08:05.058 user 0m0.132s
00:08:05.058 sys 0m0.065s
00:08:05.058 04:55:43 nvme.nvme_sgl -- common/autotest_common.sh@1126 -- # xtrace_disable
00:08:05.058 ************************************
00:08:05.058 END TEST nvme_sgl
00:08:05.058 ************************************
00:08:05.058 04:55:43 nvme.nvme_sgl -- common/autotest_common.sh@10 -- # set +x
00:08:05.058 04:55:43 nvme -- nvme/nvme.sh@89 -- # run_test nvme_e2edp /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp
00:08:05.058 04:55:43 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:08:05.058 04:55:43 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable
00:08:05.058 04:55:43 nvme -- common/autotest_common.sh@10 -- # set +x
00:08:05.058 ************************************
00:08:05.058 START TEST nvme_e2edp
00:08:05.058 ************************************
00:08:05.058 04:55:43 nvme.nvme_e2edp -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp
00:08:05.319 NVMe Write/Read with End-to-End data protection test
00:08:05.319 Attached to 0000:00:13.0
00:08:05.319 Attached to 0000:00:10.0
00:08:05.319 Attached to 0000:00:11.0
00:08:05.319 Attached to 0000:00:12.0
00:08:05.319 Cleaning up...
00:08:05.319 00:08:05.319 real 0m0.158s 00:08:05.319 user 0m0.050s 00:08:05.319 sys 0m0.067s 00:08:05.319 ************************************ 00:08:05.319 END TEST nvme_e2edp 00:08:05.319 ************************************ 00:08:05.319 04:55:43 nvme.nvme_e2edp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:05.319 04:55:43 nvme.nvme_e2edp -- common/autotest_common.sh@10 -- # set +x 00:08:05.319 04:55:43 nvme -- nvme/nvme.sh@90 -- # run_test nvme_reserve /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve 00:08:05.319 04:55:43 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:05.319 04:55:43 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:05.319 04:55:43 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:05.319 ************************************ 00:08:05.319 START TEST nvme_reserve 00:08:05.319 ************************************ 00:08:05.319 04:55:43 nvme.nvme_reserve -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve 00:08:05.581 ===================================================== 00:08:05.581 NVMe Controller at PCI bus 0, device 19, function 0 00:08:05.581 ===================================================== 00:08:05.581 Reservations: Not Supported 00:08:05.581 ===================================================== 00:08:05.581 NVMe Controller at PCI bus 0, device 16, function 0 00:08:05.581 ===================================================== 00:08:05.581 Reservations: Not Supported 00:08:05.581 ===================================================== 00:08:05.581 NVMe Controller at PCI bus 0, device 17, function 0 00:08:05.581 ===================================================== 00:08:05.581 Reservations: Not Supported 00:08:05.581 ===================================================== 00:08:05.581 NVMe Controller at PCI bus 0, device 18, function 0 00:08:05.581 ===================================================== 00:08:05.581 Reservations: Not Supported 00:08:05.581 Reservation test passed 00:08:05.581 00:08:05.581 real 0m0.170s 00:08:05.581 user 0m0.050s 00:08:05.581 sys 0m0.075s 00:08:05.581 04:55:43 nvme.nvme_reserve -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:05.581 ************************************ 00:08:05.581 END TEST nvme_reserve 00:08:05.581 ************************************ 00:08:05.581 04:55:43 nvme.nvme_reserve -- common/autotest_common.sh@10 -- # set +x 00:08:05.581 04:55:43 nvme -- nvme/nvme.sh@91 -- # run_test nvme_err_injection /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection 00:08:05.581 04:55:43 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:05.581 04:55:43 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:05.581 04:55:43 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:05.581 ************************************ 00:08:05.581 START TEST nvme_err_injection 00:08:05.581 ************************************ 00:08:05.581 04:55:43 nvme.nvme_err_injection -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection 00:08:05.842 NVMe Error Injection test 00:08:05.842 Attached to 0000:00:13.0 00:08:05.842 Attached to 0000:00:10.0 00:08:05.842 Attached to 0000:00:11.0 00:08:05.842 Attached to 0000:00:12.0 00:08:05.842 0000:00:13.0: get features failed as expected 00:08:05.842 0000:00:10.0: get features failed as expected 00:08:05.842 0000:00:11.0: get features failed as expected 00:08:05.842 0000:00:12.0: get features failed as expected 00:08:05.842 
0000:00:12.0: get features successfully as expected 00:08:05.842 0000:00:13.0: get features successfully as expected 00:08:05.842 0000:00:10.0: get features successfully as expected 00:08:05.842 0000:00:11.0: get features successfully as expected 00:08:05.842 0000:00:12.0: read failed as expected 00:08:05.842 0000:00:13.0: read failed as expected 00:08:05.842 0000:00:10.0: read failed as expected 00:08:05.842 0000:00:11.0: read failed as expected 00:08:05.842 0000:00:12.0: read successfully as expected 00:08:05.842 0000:00:13.0: read successfully as expected 00:08:05.842 0000:00:10.0: read successfully as expected 00:08:05.842 0000:00:11.0: read successfully as expected 00:08:05.842 Cleaning up... 00:08:05.842 00:08:05.842 real 0m0.177s 00:08:05.842 user 0m0.052s 00:08:05.842 sys 0m0.081s 00:08:05.842 ************************************ 00:08:05.842 END TEST nvme_err_injection 00:08:05.842 ************************************ 00:08:05.842 04:55:43 nvme.nvme_err_injection -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:05.842 04:55:43 nvme.nvme_err_injection -- common/autotest_common.sh@10 -- # set +x 00:08:05.842 04:55:43 nvme -- nvme/nvme.sh@92 -- # run_test nvme_overhead /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:08:05.842 04:55:43 nvme -- common/autotest_common.sh@1101 -- # '[' 9 -le 1 ']' 00:08:05.842 04:55:43 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:05.842 04:55:43 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:05.842 ************************************ 00:08:05.842 START TEST nvme_overhead 00:08:05.842 ************************************ 00:08:05.842 04:55:43 nvme.nvme_overhead -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:08:07.234 Initializing NVMe Controllers 00:08:07.234 Attached to 0000:00:13.0 00:08:07.234 Attached to 0000:00:10.0 00:08:07.234 Attached to 0000:00:11.0 00:08:07.234 Attached to 0000:00:12.0 00:08:07.234 Initialization complete. Launching workers. 
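The nvme_overhead invocation traced above passes -o 4096 -t 1 -H -i 0; read against the output that follows, these select a 4096-byte I/O size, a one-second run, histogram collection (the submit/complete tables below), and shared-memory id 0 — a reading inferred from the trace rather than taken from the tool's help text. A sketch of the standalone invocation:

# 4 KiB I/Os (-o) for 1 second (-t); -H enables the latency histograms printed below.
rootdir=/home/vagrant/spdk_repo/spdk
sudo "$rootdir/test/nvme/overhead/overhead" -o 4096 -t 1 -H -i 0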
00:08:07.234 submit (in ns) avg, min, max = 12309.6, 10069.2, 80984.6 00:08:07.234 complete (in ns) avg, min, max = 8026.2, 7355.4, 132099.2 00:08:07.234 00:08:07.234 Submit histogram 00:08:07.234 ================ 00:08:07.234 Range in us Cumulative Count 00:08:07.234 10.043 - 10.092: 0.0290% ( 1) 00:08:07.234 10.338 - 10.388: 0.0580% ( 1) 00:08:07.234 10.535 - 10.585: 0.0871% ( 1) 00:08:07.234 10.585 - 10.634: 0.1451% ( 2) 00:08:07.234 10.634 - 10.683: 0.2322% ( 3) 00:08:07.234 10.683 - 10.732: 0.2612% ( 1) 00:08:07.234 10.732 - 10.782: 0.2902% ( 1) 00:08:07.234 10.880 - 10.929: 0.3772% ( 3) 00:08:07.234 10.929 - 10.978: 0.4933% ( 4) 00:08:07.234 10.978 - 11.028: 0.7545% ( 9) 00:08:07.234 11.028 - 11.077: 1.6541% ( 31) 00:08:07.234 11.077 - 11.126: 4.0046% ( 81) 00:08:07.234 11.126 - 11.175: 9.1701% ( 178) 00:08:07.234 11.175 - 11.225: 16.2797% ( 245) 00:08:07.234 11.225 - 11.274: 25.2467% ( 309) 00:08:07.234 11.274 - 11.323: 34.3006% ( 312) 00:08:07.234 11.323 - 11.372: 42.9774% ( 299) 00:08:07.234 11.372 - 11.422: 50.2902% ( 252) 00:08:07.234 11.422 - 11.471: 56.4132% ( 211) 00:08:07.234 11.471 - 11.520: 60.7081% ( 148) 00:08:07.234 11.520 - 11.569: 64.1613% ( 119) 00:08:07.234 11.569 - 11.618: 66.7150% ( 88) 00:08:07.234 11.618 - 11.668: 68.8915% ( 75) 00:08:07.234 11.668 - 11.717: 70.4005% ( 52) 00:08:07.234 11.717 - 11.766: 71.5322% ( 39) 00:08:07.234 11.766 - 11.815: 72.9541% ( 49) 00:08:07.234 11.815 - 11.865: 73.8828% ( 32) 00:08:07.234 11.865 - 11.914: 74.9275% ( 36) 00:08:07.234 11.914 - 11.963: 76.2623% ( 46) 00:08:07.234 11.963 - 12.012: 77.4811% ( 42) 00:08:07.234 12.012 - 12.062: 78.3227% ( 29) 00:08:07.234 12.062 - 12.111: 79.4254% ( 38) 00:08:07.234 12.111 - 12.160: 80.3540% ( 32) 00:08:07.234 12.160 - 12.209: 81.4568% ( 38) 00:08:07.234 12.209 - 12.258: 82.3564% ( 31) 00:08:07.234 12.258 - 12.308: 82.7336% ( 13) 00:08:07.234 12.308 - 12.357: 83.1109% ( 13) 00:08:07.234 12.357 - 12.406: 83.2850% ( 6) 00:08:07.234 12.406 - 12.455: 83.5171% ( 8) 00:08:07.234 12.455 - 12.505: 83.6622% ( 5) 00:08:07.234 12.505 - 12.554: 83.8363% ( 6) 00:08:07.234 12.554 - 12.603: 83.9234% ( 3) 00:08:07.234 12.603 - 12.702: 84.0975% ( 6) 00:08:07.234 12.702 - 12.800: 84.1846% ( 3) 00:08:07.234 12.800 - 12.898: 84.4457% ( 9) 00:08:07.234 12.898 - 12.997: 84.6198% ( 6) 00:08:07.234 12.997 - 13.095: 84.8520% ( 8) 00:08:07.234 13.095 - 13.194: 85.0261% ( 6) 00:08:07.234 13.194 - 13.292: 85.1132% ( 3) 00:08:07.234 13.292 - 13.391: 85.2873% ( 6) 00:08:07.234 13.391 - 13.489: 85.4324% ( 5) 00:08:07.234 13.489 - 13.588: 85.5194% ( 3) 00:08:07.234 13.588 - 13.686: 85.6355% ( 4) 00:08:07.234 13.686 - 13.785: 85.6936% ( 2) 00:08:07.234 13.785 - 13.883: 85.8677% ( 6) 00:08:07.234 13.883 - 13.982: 85.9257% ( 2) 00:08:07.234 13.982 - 14.080: 86.0128% ( 3) 00:08:07.234 14.080 - 14.178: 86.1579% ( 5) 00:08:07.234 14.178 - 14.277: 86.1869% ( 1) 00:08:07.234 14.277 - 14.375: 86.3610% ( 6) 00:08:07.234 14.375 - 14.474: 86.6222% ( 9) 00:08:07.234 14.474 - 14.572: 86.9704% ( 12) 00:08:07.234 14.572 - 14.671: 87.2606% ( 10) 00:08:07.234 14.671 - 14.769: 87.6378% ( 13) 00:08:07.234 14.769 - 14.868: 88.0151% ( 13) 00:08:07.234 14.868 - 14.966: 88.3633% ( 12) 00:08:07.234 14.966 - 15.065: 88.6245% ( 9) 00:08:07.234 15.065 - 15.163: 89.0888% ( 16) 00:08:07.234 15.163 - 15.262: 89.5241% ( 15) 00:08:07.234 15.262 - 15.360: 89.9884% ( 16) 00:08:07.234 15.360 - 15.458: 90.5107% ( 18) 00:08:07.234 15.458 - 15.557: 91.0331% ( 18) 00:08:07.234 15.557 - 15.655: 91.3233% ( 10) 00:08:07.234 15.655 - 15.754: 91.6425% ( 11) 
00:08:07.234 15.754 - 15.852: 92.0197% ( 13) 00:08:07.234 15.852 - 15.951: 92.1938% ( 6) 00:08:07.234 15.951 - 16.049: 92.6001% ( 14) 00:08:07.234 16.049 - 16.148: 92.8613% ( 9) 00:08:07.234 16.148 - 16.246: 93.2676% ( 14) 00:08:07.234 16.246 - 16.345: 93.5868% ( 11) 00:08:07.234 16.345 - 16.443: 93.8479% ( 9) 00:08:07.234 16.443 - 16.542: 94.1091% ( 9) 00:08:07.234 16.542 - 16.640: 94.2252% ( 4) 00:08:07.234 16.640 - 16.738: 94.3993% ( 6) 00:08:07.234 16.738 - 16.837: 94.5154% ( 4) 00:08:07.234 16.837 - 16.935: 94.6315% ( 4) 00:08:07.234 16.935 - 17.034: 94.6895% ( 2) 00:08:07.234 17.034 - 17.132: 94.8346% ( 5) 00:08:07.234 17.132 - 17.231: 95.0087% ( 6) 00:08:07.234 17.231 - 17.329: 95.2118% ( 7) 00:08:07.234 17.329 - 17.428: 95.3279% ( 4) 00:08:07.234 17.428 - 17.526: 95.5311% ( 7) 00:08:07.234 17.526 - 17.625: 95.8212% ( 10) 00:08:07.234 17.625 - 17.723: 96.0534% ( 8) 00:08:07.234 17.723 - 17.822: 96.1985% ( 5) 00:08:07.234 17.822 - 17.920: 96.3726% ( 6) 00:08:07.234 17.920 - 18.018: 96.4887% ( 4) 00:08:07.234 18.018 - 18.117: 96.5177% ( 1) 00:08:07.234 18.117 - 18.215: 96.7789% ( 9) 00:08:07.234 18.215 - 18.314: 97.0981% ( 11) 00:08:07.234 18.314 - 18.412: 97.5044% ( 14) 00:08:07.234 18.412 - 18.511: 97.6494% ( 5) 00:08:07.234 18.511 - 18.609: 97.7945% ( 5) 00:08:07.234 18.609 - 18.708: 97.8816% ( 3) 00:08:07.234 18.708 - 18.806: 98.2589% ( 13) 00:08:07.234 18.806 - 18.905: 98.3749% ( 4) 00:08:07.234 18.905 - 19.003: 98.4039% ( 1) 00:08:07.234 19.200 - 19.298: 98.4330% ( 1) 00:08:07.234 19.397 - 19.495: 98.4620% ( 1) 00:08:07.234 19.495 - 19.594: 98.6361% ( 6) 00:08:07.234 19.594 - 19.692: 98.8683% ( 8) 00:08:07.234 19.692 - 19.791: 98.9843% ( 4) 00:08:07.234 19.791 - 19.889: 99.0424% ( 2) 00:08:07.234 19.889 - 19.988: 99.1004% ( 2) 00:08:07.234 19.988 - 20.086: 99.1875% ( 3) 00:08:07.234 20.086 - 20.185: 99.2455% ( 2) 00:08:07.234 20.185 - 20.283: 99.2745% ( 1) 00:08:07.234 20.283 - 20.382: 99.3035% ( 1) 00:08:07.234 20.382 - 20.480: 99.3326% ( 1) 00:08:07.234 20.578 - 20.677: 99.3906% ( 2) 00:08:07.234 20.775 - 20.874: 99.4196% ( 1) 00:08:07.234 20.972 - 21.071: 99.4486% ( 1) 00:08:07.234 21.760 - 21.858: 99.4777% ( 1) 00:08:07.234 21.858 - 21.957: 99.5067% ( 1) 00:08:07.234 22.351 - 22.449: 99.5357% ( 1) 00:08:07.234 22.548 - 22.646: 99.5647% ( 1) 00:08:07.234 22.745 - 22.843: 99.5937% ( 1) 00:08:07.234 24.025 - 24.123: 99.6228% ( 1) 00:08:07.234 24.517 - 24.615: 99.6808% ( 2) 00:08:07.234 25.600 - 25.797: 99.7098% ( 1) 00:08:07.234 25.797 - 25.994: 99.7388% ( 1) 00:08:07.234 31.114 - 31.311: 99.7678% ( 1) 00:08:07.234 38.006 - 38.203: 99.7969% ( 1) 00:08:07.234 39.188 - 39.385: 99.8259% ( 1) 00:08:07.234 41.354 - 41.551: 99.8549% ( 1) 00:08:07.234 46.671 - 46.868: 99.8839% ( 1) 00:08:07.234 53.563 - 53.957: 99.9129% ( 1) 00:08:07.234 63.803 - 64.197: 99.9420% ( 1) 00:08:07.234 74.437 - 74.831: 99.9710% ( 1) 00:08:07.234 80.738 - 81.132: 100.0000% ( 1) 00:08:07.234 00:08:07.234 Complete histogram 00:08:07.234 ================== 00:08:07.234 Range in us Cumulative Count 00:08:07.234 7.335 - 7.385: 0.3772% ( 13) 00:08:07.234 7.385 - 7.434: 3.1921% ( 97) 00:08:07.235 7.434 - 7.483: 12.3912% ( 317) 00:08:07.235 7.483 - 7.532: 27.3070% ( 514) 00:08:07.235 7.532 - 7.582: 43.3546% ( 553) 00:08:07.235 7.582 - 7.631: 56.3262% ( 447) 00:08:07.235 7.631 - 7.680: 66.5119% ( 351) 00:08:07.235 7.680 - 7.729: 72.9541% ( 222) 00:08:07.235 7.729 - 7.778: 76.7557% ( 131) 00:08:07.235 7.778 - 7.828: 78.6999% ( 67) 00:08:07.235 7.828 - 7.877: 79.6576% ( 33) 00:08:07.235 7.877 - 7.926: 80.2960% ( 22) 
00:08:07.235 7.926 - 7.975: 81.1376% ( 29) 00:08:07.235 7.975 - 8.025: 82.7046% ( 54) 00:08:07.235 8.025 - 8.074: 83.9524% ( 43) 00:08:07.235 8.074 - 8.123: 85.4324% ( 51) 00:08:07.235 8.123 - 8.172: 86.6222% ( 41) 00:08:07.235 8.172 - 8.222: 87.8700% ( 43) 00:08:07.235 8.222 - 8.271: 88.8276% ( 33) 00:08:07.235 8.271 - 8.320: 89.5531% ( 25) 00:08:07.235 8.320 - 8.369: 90.3076% ( 26) 00:08:07.235 8.369 - 8.418: 90.8299% ( 18) 00:08:07.235 8.418 - 8.468: 91.3523% ( 18) 00:08:07.235 8.468 - 8.517: 91.9037% ( 19) 00:08:07.235 8.517 - 8.566: 92.3099% ( 14) 00:08:07.235 8.566 - 8.615: 92.6582% ( 12) 00:08:07.235 8.615 - 8.665: 92.9483% ( 10) 00:08:07.235 8.665 - 8.714: 93.4997% ( 19) 00:08:07.235 8.714 - 8.763: 93.9350% ( 15) 00:08:07.235 8.763 - 8.812: 94.2832% ( 12) 00:08:07.235 8.812 - 8.862: 94.8346% ( 19) 00:08:07.235 8.862 - 8.911: 95.1538% ( 11) 00:08:07.235 8.911 - 8.960: 95.2699% ( 4) 00:08:07.235 8.960 - 9.009: 95.4440% ( 6) 00:08:07.235 9.009 - 9.058: 95.6181% ( 6) 00:08:07.235 9.058 - 9.108: 95.7632% ( 5) 00:08:07.235 9.108 - 9.157: 95.9663% ( 7) 00:08:07.235 9.157 - 9.206: 95.9954% ( 1) 00:08:07.235 9.255 - 9.305: 96.1114% ( 4) 00:08:07.235 9.305 - 9.354: 96.1405% ( 1) 00:08:07.235 9.403 - 9.452: 96.2565% ( 4) 00:08:07.235 9.698 - 9.748: 96.2855% ( 1) 00:08:07.235 10.043 - 10.092: 96.4016% ( 4) 00:08:07.235 10.092 - 10.142: 96.4887% ( 3) 00:08:07.235 10.142 - 10.191: 96.5177% ( 1) 00:08:07.235 10.191 - 10.240: 96.5757% ( 2) 00:08:07.235 10.240 - 10.289: 96.6048% ( 1) 00:08:07.235 10.289 - 10.338: 96.6338% ( 1) 00:08:07.235 10.338 - 10.388: 96.7499% ( 4) 00:08:07.235 10.388 - 10.437: 96.8079% ( 2) 00:08:07.235 10.437 - 10.486: 96.8369% ( 1) 00:08:07.235 10.486 - 10.535: 96.8950% ( 2) 00:08:07.235 10.585 - 10.634: 96.9530% ( 2) 00:08:07.235 10.634 - 10.683: 97.0110% ( 2) 00:08:07.235 10.683 - 10.732: 97.0400% ( 1) 00:08:07.235 10.732 - 10.782: 97.1271% ( 3) 00:08:07.235 10.880 - 10.929: 97.1561% ( 1) 00:08:07.235 10.929 - 10.978: 97.1851% ( 1) 00:08:07.235 10.978 - 11.028: 97.2432% ( 2) 00:08:07.235 11.028 - 11.077: 97.3012% ( 2) 00:08:07.235 11.077 - 11.126: 97.3883% ( 3) 00:08:07.235 11.126 - 11.175: 97.4463% ( 2) 00:08:07.235 11.175 - 11.225: 97.4753% ( 1) 00:08:07.235 11.225 - 11.274: 97.5044% ( 1) 00:08:07.235 11.274 - 11.323: 97.5334% ( 1) 00:08:07.235 11.323 - 11.372: 97.6494% ( 4) 00:08:07.235 11.372 - 11.422: 97.7075% ( 2) 00:08:07.235 11.422 - 11.471: 97.7365% ( 1) 00:08:07.235 11.471 - 11.520: 97.7945% ( 2) 00:08:07.235 11.815 - 11.865: 97.8236% ( 1) 00:08:07.235 12.258 - 12.308: 97.8526% ( 1) 00:08:07.235 13.194 - 13.292: 97.8816% ( 1) 00:08:07.235 13.292 - 13.391: 97.9396% ( 2) 00:08:07.235 13.391 - 13.489: 98.0557% ( 4) 00:08:07.235 13.489 - 13.588: 98.1428% ( 3) 00:08:07.235 13.588 - 13.686: 98.2008% ( 2) 00:08:07.235 13.686 - 13.785: 98.2879% ( 3) 00:08:07.235 13.785 - 13.883: 98.4620% ( 6) 00:08:07.235 13.883 - 13.982: 98.5200% ( 2) 00:08:07.235 13.982 - 14.080: 98.5781% ( 2) 00:08:07.235 14.080 - 14.178: 98.6361% ( 2) 00:08:07.235 14.178 - 14.277: 98.7232% ( 3) 00:08:07.235 14.277 - 14.375: 98.7522% ( 1) 00:08:07.235 14.375 - 14.474: 98.8392% ( 3) 00:08:07.235 14.474 - 14.572: 98.8973% ( 2) 00:08:07.235 14.572 - 14.671: 98.9843% ( 3) 00:08:07.235 14.671 - 14.769: 99.0424% ( 2) 00:08:07.235 14.769 - 14.868: 99.1004% ( 2) 00:08:07.235 14.868 - 14.966: 99.1584% ( 2) 00:08:07.235 15.065 - 15.163: 99.1875% ( 1) 00:08:07.235 15.360 - 15.458: 99.2165% ( 1) 00:08:07.235 15.655 - 15.754: 99.2455% ( 1) 00:08:07.235 15.951 - 16.049: 99.2745% ( 1) 00:08:07.235 16.738 - 16.837: 
99.3035% ( 1) 00:08:07.235 19.102 - 19.200: 99.3326% ( 1) 00:08:07.235 19.200 - 19.298: 99.3616% ( 1) 00:08:07.235 19.791 - 19.889: 99.3906% ( 1) 00:08:07.235 19.889 - 19.988: 99.4196% ( 1) 00:08:07.235 20.480 - 20.578: 99.4486% ( 1) 00:08:07.235 20.677 - 20.775: 99.4777% ( 1) 00:08:07.235 20.874 - 20.972: 99.5067% ( 1) 00:08:07.235 21.662 - 21.760: 99.5647% ( 2) 00:08:07.235 21.858 - 21.957: 99.5937% ( 1) 00:08:07.235 21.957 - 22.055: 99.6518% ( 2) 00:08:07.235 23.335 - 23.434: 99.6808% ( 1) 00:08:07.235 23.729 - 23.828: 99.7098% ( 1) 00:08:07.235 25.600 - 25.797: 99.7388% ( 1) 00:08:07.235 28.948 - 29.145: 99.7678% ( 1) 00:08:07.235 29.145 - 29.342: 99.7969% ( 1) 00:08:07.235 36.628 - 36.825: 99.8259% ( 1) 00:08:07.235 38.597 - 38.794: 99.8549% ( 1) 00:08:07.235 40.566 - 40.763: 99.8839% ( 1) 00:08:07.235 47.655 - 47.852: 99.9129% ( 1) 00:08:07.235 50.412 - 50.806: 99.9420% ( 1) 00:08:07.235 51.988 - 52.382: 99.9710% ( 1) 00:08:07.235 131.545 - 132.332: 100.0000% ( 1) 00:08:07.235 00:08:07.235 00:08:07.235 real 0m1.172s 00:08:07.235 user 0m1.046s 00:08:07.235 sys 0m0.080s 00:08:07.235 04:55:45 nvme.nvme_overhead -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:07.235 ************************************ 00:08:07.235 END TEST nvme_overhead 00:08:07.235 ************************************ 00:08:07.235 04:55:45 nvme.nvme_overhead -- common/autotest_common.sh@10 -- # set +x 00:08:07.235 04:55:45 nvme -- nvme/nvme.sh@93 -- # run_test nvme_arbitration /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:08:07.235 04:55:45 nvme -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']' 00:08:07.235 04:55:45 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:07.235 04:55:45 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:07.235 ************************************ 00:08:07.235 START TEST nvme_arbitration 00:08:07.235 ************************************ 00:08:07.235 04:55:45 nvme.nvme_arbitration -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:08:10.546 Initializing NVMe Controllers 00:08:10.547 Attached to 0000:00:13.0 00:08:10.547 Attached to 0000:00:10.0 00:08:10.547 Attached to 0000:00:11.0 00:08:10.547 Attached to 0000:00:12.0 00:08:10.547 Associating QEMU NVMe Ctrl (12343 ) with lcore 0 00:08:10.547 Associating QEMU NVMe Ctrl (12340 ) with lcore 1 00:08:10.547 Associating QEMU NVMe Ctrl (12341 ) with lcore 2 00:08:10.547 Associating QEMU NVMe Ctrl (12342 ) with lcore 3 00:08:10.547 Associating QEMU NVMe Ctrl (12342 ) with lcore 0 00:08:10.547 Associating QEMU NVMe Ctrl (12342 ) with lcore 1 00:08:10.547 /home/vagrant/spdk_repo/spdk/build/examples/arbitration run with configuration: 00:08:10.547 /home/vagrant/spdk_repo/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i 0 00:08:10.547 Initialization complete. Launching workers. 
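The nvme_arbitration block above launches the arbitration example with -t 3 -i 0, and the binary echoes the fully expanded configuration before starting its per-core workers. A sketch reproducing that exact run; the glosses on -q/-w/-M/-t/-c are inferred, and the remaining flags are carried over verbatim from the echoed line:

# -q 64: queue depth; -w randrw -M 50: 50/50 read/write mix; -t 3: seconds;
# -c 0xf: workers on cores 0-3; -i 0: shared-memory id (matches the harness).
rootdir=/home/vagrant/spdk_repo/spdk
sudo "$rootdir/build/examples/arbitration" \
    -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i 0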
00:08:10.547 Starting thread on core 1 with urgent priority queue 00:08:10.547 Starting thread on core 2 with urgent priority queue 00:08:10.547 Starting thread on core 3 with urgent priority queue 00:08:10.547 Starting thread on core 0 with urgent priority queue 00:08:10.547 QEMU NVMe Ctrl (12343 ) core 0: 5674.67 IO/s 17.62 secs/100000 ios 00:08:10.547 QEMU NVMe Ctrl (12342 ) core 0: 5674.67 IO/s 17.62 secs/100000 ios 00:08:10.547 QEMU NVMe Ctrl (12340 ) core 1: 5781.33 IO/s 17.30 secs/100000 ios 00:08:10.547 QEMU NVMe Ctrl (12342 ) core 1: 5781.33 IO/s 17.30 secs/100000 ios 00:08:10.547 QEMU NVMe Ctrl (12341 ) core 2: 5387.00 IO/s 18.56 secs/100000 ios 00:08:10.547 QEMU NVMe Ctrl (12342 ) core 3: 5930.67 IO/s 16.86 secs/100000 ios 00:08:10.547 ======================================================== 00:08:10.547 00:08:10.547 00:08:10.547 real 0m3.193s 00:08:10.547 user 0m9.004s 00:08:10.547 sys 0m0.099s 00:08:10.547 ************************************ 00:08:10.547 04:55:48 nvme.nvme_arbitration -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:10.547 04:55:48 nvme.nvme_arbitration -- common/autotest_common.sh@10 -- # set +x 00:08:10.547 END TEST nvme_arbitration 00:08:10.547 ************************************ 00:08:10.547 04:55:48 nvme -- nvme/nvme.sh@94 -- # run_test nvme_single_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 00:08:10.547 04:55:48 nvme -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:08:10.547 04:55:48 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:10.547 04:55:48 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:10.547 ************************************ 00:08:10.547 START TEST nvme_single_aen 00:08:10.547 ************************************ 00:08:10.547 04:55:48 nvme.nvme_single_aen -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 00:08:10.547 Asynchronous Event Request test 00:08:10.547 Attached to 0000:00:13.0 00:08:10.547 Attached to 0000:00:10.0 00:08:10.547 Attached to 0000:00:11.0 00:08:10.547 Attached to 0000:00:12.0 00:08:10.547 Reset controller to setup AER completions for this process 00:08:10.547 Registering asynchronous event callbacks... 
00:08:10.547 Getting orig temperature thresholds of all controllers 00:08:10.547 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:10.547 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:10.547 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:10.547 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:10.547 Setting all controllers temperature threshold low to trigger AER 00:08:10.547 Waiting for all controllers temperature threshold to be set lower 00:08:10.547 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:10.547 aer_cb - Resetting Temp Threshold for device: 0000:00:13.0 00:08:10.547 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:10.547 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:08:10.547 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:10.547 aer_cb - Resetting Temp Threshold for device: 0000:00:11.0 00:08:10.547 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:10.547 aer_cb - Resetting Temp Threshold for device: 0000:00:12.0 00:08:10.547 Waiting for all controllers to trigger AER and reset threshold 00:08:10.547 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:10.547 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:10.547 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:10.547 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:10.547 Cleaning up... 00:08:10.547 00:08:10.547 real 0m0.169s 00:08:10.547 user 0m0.050s 00:08:10.547 sys 0m0.081s 00:08:10.547 04:55:48 nvme.nvme_single_aen -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:10.547 04:55:48 nvme.nvme_single_aen -- common/autotest_common.sh@10 -- # set +x 00:08:10.547 ************************************ 00:08:10.547 END TEST nvme_single_aen 00:08:10.547 ************************************ 00:08:10.547 04:55:48 nvme -- nvme/nvme.sh@95 -- # run_test nvme_doorbell_aers nvme_doorbell_aers 00:08:10.547 04:55:48 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:10.547 04:55:48 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:10.547 04:55:48 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:10.547 ************************************ 00:08:10.547 START TEST nvme_doorbell_aers 00:08:10.547 ************************************ 00:08:10.547 04:55:48 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1125 -- # nvme_doorbell_aers 00:08:10.547 04:55:48 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # bdfs=() 00:08:10.547 04:55:48 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # local bdfs bdf 00:08:10.547 04:55:48 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # bdfs=($(get_nvme_bdfs)) 00:08:10.547 04:55:48 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # get_nvme_bdfs 00:08:10.547 04:55:48 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1496 -- # bdfs=() 00:08:10.547 04:55:48 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1496 -- # local bdfs 00:08:10.547 04:55:48 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:08:10.547 04:55:48 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:08:10.547 04:55:48 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 
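nvme_doorbell_aers assembles its device list with the gen_nvme.sh | jq pipeline traced just above, then loops over the PCIe addresses, giving each doorbell_aers run a 10-second budget via timeout --preserve-status (both visible verbatim in the runs that follow). The same loop, condensed:

rootdir=/home/vagrant/spdk_repo/spdk
# gen_nvme.sh emits a JSON bdev config; jq extracts each controller's PCI address.
mapfile -t bdfs < <("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')
for bdf in "${bdfs[@]}"; do
    # --preserve-status keeps the test binary's own exit code even on timeout.
    timeout --preserve-status 10 \
        "$rootdir/test/nvme/doorbell_aers/doorbell_aers" -r "trtype:PCIe traddr:$bdf"
done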
00:08:10.547 04:55:48 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1498 -- # (( 4 == 0 )) 00:08:10.547 04:55:48 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:08:10.547 04:55:48 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:08:10.547 04:55:48 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:10.0' 00:08:10.808 [2024-12-06 04:55:48.842759] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75104) is not found. Dropping the request. 00:08:20.801 Executing: test_write_invalid_db 00:08:20.801 Waiting for AER completion... 00:08:20.801 Failure: test_write_invalid_db 00:08:20.801 00:08:20.801 Executing: test_invalid_db_write_overflow_sq 00:08:20.801 Waiting for AER completion... 00:08:20.801 Failure: test_invalid_db_write_overflow_sq 00:08:20.801 00:08:20.801 Executing: test_invalid_db_write_overflow_cq 00:08:20.801 Waiting for AER completion... 00:08:20.801 Failure: test_invalid_db_write_overflow_cq 00:08:20.801 00:08:20.801 04:55:58 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:08:20.801 04:55:58 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:11.0' 00:08:20.801 [2024-12-06 04:55:58.870697] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75104) is not found. Dropping the request. 00:08:30.795 Executing: test_write_invalid_db 00:08:30.795 Waiting for AER completion... 00:08:30.795 Failure: test_write_invalid_db 00:08:30.795 00:08:30.795 Executing: test_invalid_db_write_overflow_sq 00:08:30.795 Waiting for AER completion... 00:08:30.795 Failure: test_invalid_db_write_overflow_sq 00:08:30.795 00:08:30.795 Executing: test_invalid_db_write_overflow_cq 00:08:30.795 Waiting for AER completion... 00:08:30.795 Failure: test_invalid_db_write_overflow_cq 00:08:30.795 00:08:30.795 04:56:08 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:08:30.795 04:56:08 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:12.0' 00:08:30.795 [2024-12-06 04:56:08.914870] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75104) is not found. Dropping the request. 00:08:40.765 Executing: test_write_invalid_db 00:08:40.766 Waiting for AER completion... 00:08:40.766 Failure: test_write_invalid_db 00:08:40.766 00:08:40.766 Executing: test_invalid_db_write_overflow_sq 00:08:40.766 Waiting for AER completion... 00:08:40.766 Failure: test_invalid_db_write_overflow_sq 00:08:40.766 00:08:40.766 Executing: test_invalid_db_write_overflow_cq 00:08:40.766 Waiting for AER completion... 
00:08:40.766 Failure: test_invalid_db_write_overflow_cq 00:08:40.766 00:08:40.766 04:56:18 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:08:40.766 04:56:18 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:13.0' 00:08:40.766 [2024-12-06 04:56:18.918917] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75104) is not found. Dropping the request. 00:08:50.738 Executing: test_write_invalid_db 00:08:50.738 Waiting for AER completion... 00:08:50.738 Failure: test_write_invalid_db 00:08:50.738 00:08:50.738 Executing: test_invalid_db_write_overflow_sq 00:08:50.738 Waiting for AER completion... 00:08:50.738 Failure: test_invalid_db_write_overflow_sq 00:08:50.738 00:08:50.738 Executing: test_invalid_db_write_overflow_cq 00:08:50.738 Waiting for AER completion... 00:08:50.738 Failure: test_invalid_db_write_overflow_cq 00:08:50.738 00:08:50.738 00:08:50.738 real 0m40.176s 00:08:50.738 user 0m34.249s 00:08:50.738 sys 0m5.526s 00:08:50.738 04:56:28 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:50.738 04:56:28 nvme.nvme_doorbell_aers -- common/autotest_common.sh@10 -- # set +x 00:08:50.738 ************************************ 00:08:50.738 END TEST nvme_doorbell_aers 00:08:50.738 ************************************ 00:08:50.738 04:56:28 nvme -- nvme/nvme.sh@97 -- # uname 00:08:50.738 04:56:28 nvme -- nvme/nvme.sh@97 -- # '[' Linux '!=' FreeBSD ']' 00:08:50.738 04:56:28 nvme -- nvme/nvme.sh@98 -- # run_test nvme_multi_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 00:08:50.738 04:56:28 nvme -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']' 00:08:50.738 04:56:28 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:50.738 04:56:28 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:50.738 ************************************ 00:08:50.738 START TEST nvme_multi_aen 00:08:50.738 ************************************ 00:08:50.739 04:56:28 nvme.nvme_multi_aen -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 00:08:50.997 [2024-12-06 04:56:28.969426] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75104) is not found. Dropping the request. 00:08:50.997 [2024-12-06 04:56:28.969504] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75104) is not found. Dropping the request. 00:08:50.997 [2024-12-06 04:56:28.969519] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75104) is not found. Dropping the request. 00:08:50.997 [2024-12-06 04:56:28.970816] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75104) is not found. Dropping the request. 00:08:50.997 [2024-12-06 04:56:28.970845] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75104) is not found. Dropping the request. 00:08:50.997 [2024-12-06 04:56:28.970854] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75104) is not found. Dropping the request. 00:08:50.997 [2024-12-06 04:56:28.972055] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75104) is not found. 
Dropping the request. 00:08:50.998 [2024-12-06 04:56:28.972084] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75104) is not found. Dropping the request. 00:08:50.998 [2024-12-06 04:56:28.972093] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75104) is not found. Dropping the request. 00:08:50.998 [2024-12-06 04:56:28.973220] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75104) is not found. Dropping the request. 00:08:50.998 [2024-12-06 04:56:28.973247] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75104) is not found. Dropping the request. 00:08:50.998 [2024-12-06 04:56:28.973256] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75104) is not found. Dropping the request. 00:08:50.998 Child process pid: 75630 00:08:50.998 [Child] Asynchronous Event Request test 00:08:50.998 [Child] Attached to 0000:00:13.0 00:08:50.998 [Child] Attached to 0000:00:10.0 00:08:50.998 [Child] Attached to 0000:00:11.0 00:08:50.998 [Child] Attached to 0000:00:12.0 00:08:50.998 [Child] Registering asynchronous event callbacks... 00:08:50.998 [Child] Getting orig temperature thresholds of all controllers 00:08:50.998 [Child] 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:50.998 [Child] 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:50.998 [Child] 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:50.998 [Child] 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:50.998 [Child] Waiting for all controllers to trigger AER and reset threshold 00:08:50.998 [Child] 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:50.998 [Child] 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:50.998 [Child] 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:50.998 [Child] 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:50.998 [Child] 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:50.998 [Child] 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:50.998 [Child] 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:50.998 [Child] 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:50.998 [Child] Cleaning up... 00:08:50.998 Asynchronous Event Request test 00:08:50.998 Attached to 0000:00:13.0 00:08:50.998 Attached to 0000:00:10.0 00:08:50.998 Attached to 0000:00:11.0 00:08:50.998 Attached to 0000:00:12.0 00:08:50.998 Reset controller to setup AER completions for this process 00:08:50.998 Registering asynchronous event callbacks... 
00:08:50.998 Getting orig temperature thresholds of all controllers 00:08:50.998 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:50.998 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:50.998 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:50.998 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:50.998 Setting all controllers temperature threshold low to trigger AER 00:08:50.998 Waiting for all controllers temperature threshold to be set lower 00:08:50.998 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:50.998 aer_cb - Resetting Temp Threshold for device: 0000:00:13.0 00:08:50.998 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:50.998 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:08:50.998 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:50.998 aer_cb - Resetting Temp Threshold for device: 0000:00:11.0 00:08:50.998 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:50.998 aer_cb - Resetting Temp Threshold for device: 0000:00:12.0 00:08:50.998 Waiting for all controllers to trigger AER and reset threshold 00:08:50.998 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:50.998 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:50.998 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:50.998 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:50.998 Cleaning up... 00:08:50.998 00:08:50.998 real 0m0.350s 00:08:50.998 user 0m0.126s 00:08:50.998 sys 0m0.134s 00:08:50.998 04:56:29 nvme.nvme_multi_aen -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:50.998 04:56:29 nvme.nvme_multi_aen -- common/autotest_common.sh@10 -- # set +x 00:08:50.998 ************************************ 00:08:50.998 END TEST nvme_multi_aen 00:08:50.998 ************************************ 00:08:50.998 04:56:29 nvme -- nvme/nvme.sh@99 -- # run_test nvme_startup /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:08:50.998 04:56:29 nvme -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:08:50.998 04:56:29 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:50.998 04:56:29 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:50.998 ************************************ 00:08:50.998 START TEST nvme_startup 00:08:50.998 ************************************ 00:08:50.998 04:56:29 nvme.nvme_startup -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:08:51.380 Initializing NVMe Controllers 00:08:51.380 Attached to 0000:00:13.0 00:08:51.380 Attached to 0000:00:10.0 00:08:51.380 Attached to 0000:00:11.0 00:08:51.381 Attached to 0000:00:12.0 00:08:51.381 Initialization complete. 00:08:51.381 Time used:116043.719 (us). 
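nvme_startup simply attaches all four controllers and reports the wall time for initialization (Time used:116043.719 (us) above, i.e. roughly 116 ms). Rerunning it standalone is a one-liner; -t 1000000 is copied from the harness invocation and presumed to be a time budget in microseconds, which this run comfortably meets:

rootdir=/home/vagrant/spdk_repo/spdk
# -t taken verbatim from the run above; reading it as a microsecond budget is an assumption.
sudo "$rootdir/test/nvme/startup/startup" -t 1000000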
00:08:51.381 00:08:51.381 real 0m0.166s 00:08:51.381 user 0m0.050s 00:08:51.381 sys 0m0.075s 00:08:51.381 04:56:29 nvme.nvme_startup -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:51.381 04:56:29 nvme.nvme_startup -- common/autotest_common.sh@10 -- # set +x 00:08:51.381 ************************************ 00:08:51.381 END TEST nvme_startup 00:08:51.381 ************************************ 00:08:51.381 04:56:29 nvme -- nvme/nvme.sh@100 -- # run_test nvme_multi_secondary nvme_multi_secondary 00:08:51.381 04:56:29 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:51.381 04:56:29 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:51.381 04:56:29 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:51.381 ************************************ 00:08:51.381 START TEST nvme_multi_secondary 00:08:51.381 ************************************ 00:08:51.381 04:56:29 nvme.nvme_multi_secondary -- common/autotest_common.sh@1125 -- # nvme_multi_secondary 00:08:51.381 04:56:29 nvme.nvme_multi_secondary -- nvme/nvme.sh@52 -- # pid0=75681 00:08:51.381 04:56:29 nvme.nvme_multi_secondary -- nvme/nvme.sh@54 -- # pid1=75682 00:08:51.381 04:56:29 nvme.nvme_multi_secondary -- nvme/nvme.sh@55 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4 00:08:51.381 04:56:29 nvme.nvme_multi_secondary -- nvme/nvme.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1 00:08:51.381 04:56:29 nvme.nvme_multi_secondary -- nvme/nvme.sh@53 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:08:54.691 Initializing NVMe Controllers 00:08:54.691 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:54.691 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:54.691 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:54.691 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:54.691 Associating PCIE (0000:00:13.0) NSID 1 with lcore 1 00:08:54.691 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1 00:08:54.691 Associating PCIE (0000:00:11.0) NSID 1 with lcore 1 00:08:54.691 Associating PCIE (0000:00:12.0) NSID 1 with lcore 1 00:08:54.691 Associating PCIE (0000:00:12.0) NSID 2 with lcore 1 00:08:54.691 Associating PCIE (0000:00:12.0) NSID 3 with lcore 1 00:08:54.691 Initialization complete. Launching workers. 
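nvme_multi_secondary (pids 75681/75682 above) exercises SPDK's multi-process mode: one primary spdk_nvme_perf instance runs for 5 seconds on core 0 (-c 0x1) while two shorter secondaries on cores 1 and 2 (-c 0x2, -c 0x4) attach to the same controllers via the shared -i 0 shm id. A sketch of that pattern; the 1-second settle before launching the secondaries is an assumption standing in for the harness's own sequencing:

rootdir=/home/vagrant/spdk_repo/spdk
perf="$rootdir/build/bin/spdk_nvme_perf"
# Primary: owns the controllers; runs longest so secondaries can come and go.
sudo "$perf" -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1 &
sleep 1   # crude settle time so the primary finishes init before secondaries attach
# Secondaries: same shm id (-i 0), disjoint core masks.
sudo "$perf" -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 &
sudo "$perf" -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4 &
wait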
00:08:54.691 ======================================================== 00:08:54.691 Latency(us) 00:08:54.691 Device Information : IOPS MiB/s Average min max 00:08:54.691 PCIE (0000:00:13.0) NSID 1 from core 1: 8006.44 31.28 1997.96 684.81 6781.91 00:08:54.691 PCIE (0000:00:10.0) NSID 1 from core 1: 8002.11 31.26 1998.02 685.58 6310.83 00:08:54.691 PCIE (0000:00:11.0) NSID 1 from core 1: 8004.11 31.27 1998.46 716.72 5785.27 00:08:54.691 PCIE (0000:00:12.0) NSID 1 from core 1: 8004.44 31.27 1998.39 661.94 5397.44 00:08:54.691 PCIE (0000:00:12.0) NSID 2 from core 1: 8004.77 31.27 1998.32 686.40 5737.24 00:08:54.691 PCIE (0000:00:12.0) NSID 3 from core 1: 8004.44 31.27 1998.42 671.54 6131.07 00:08:54.691 ======================================================== 00:08:54.691 Total : 48026.31 187.60 1998.26 661.94 6781.91 00:08:54.691 00:08:54.691 Initializing NVMe Controllers 00:08:54.691 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:54.691 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:54.691 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:54.691 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:54.691 Associating PCIE (0000:00:13.0) NSID 1 with lcore 2 00:08:54.691 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 00:08:54.691 Associating PCIE (0000:00:11.0) NSID 1 with lcore 2 00:08:54.691 Associating PCIE (0000:00:12.0) NSID 1 with lcore 2 00:08:54.691 Associating PCIE (0000:00:12.0) NSID 2 with lcore 2 00:08:54.691 Associating PCIE (0000:00:12.0) NSID 3 with lcore 2 00:08:54.691 Initialization complete. Launching workers. 00:08:54.691 ======================================================== 00:08:54.691 Latency(us) 00:08:54.691 Device Information : IOPS MiB/s Average min max 00:08:54.691 PCIE (0000:00:13.0) NSID 1 from core 2: 3280.18 12.81 4877.33 1154.32 13226.58 00:08:54.691 PCIE (0000:00:10.0) NSID 1 from core 2: 3280.18 12.81 4875.43 1131.33 13819.46 00:08:54.691 PCIE (0000:00:11.0) NSID 1 from core 2: 3280.18 12.81 4876.94 1084.47 13668.78 00:08:54.691 PCIE (0000:00:12.0) NSID 1 from core 2: 3280.18 12.81 4874.89 1075.33 12667.62 00:08:54.691 PCIE (0000:00:12.0) NSID 2 from core 2: 3280.18 12.81 4870.73 1100.27 12516.30 00:08:54.691 PCIE (0000:00:12.0) NSID 3 from core 2: 3280.18 12.81 4870.29 938.25 13964.41 00:08:54.691 ======================================================== 00:08:54.691 Total : 19681.08 76.88 4874.27 938.25 13964.41 00:08:54.691 00:08:54.691 04:56:32 nvme.nvme_multi_secondary -- nvme/nvme.sh@56 -- # wait 75681 00:08:56.591 Initializing NVMe Controllers 00:08:56.591 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:56.591 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:56.591 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:56.591 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:56.591 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:08:56.591 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:08:56.591 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:08:56.591 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:08:56.591 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:08:56.591 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:08:56.591 Initialization complete. Launching workers. 
00:08:56.591 ======================================================== 00:08:56.591 Latency(us) 00:08:56.591 Device Information : IOPS MiB/s Average min max 00:08:56.591 PCIE (0000:00:13.0) NSID 1 from core 0: 10993.32 42.94 1455.02 699.80 6250.72 00:08:56.591 PCIE (0000:00:10.0) NSID 1 from core 0: 10990.12 42.93 1454.60 691.17 6321.48 00:08:56.591 PCIE (0000:00:11.0) NSID 1 from core 0: 10989.12 42.93 1455.53 680.56 7032.25 00:08:56.591 PCIE (0000:00:12.0) NSID 1 from core 0: 10993.32 42.94 1454.93 555.27 7365.77 00:08:56.591 PCIE (0000:00:12.0) NSID 2 from core 0: 10993.32 42.94 1454.90 508.11 6968.54 00:08:56.591 PCIE (0000:00:12.0) NSID 3 from core 0: 10993.32 42.94 1454.87 416.26 6412.48 00:08:56.591 ======================================================== 00:08:56.591 Total : 65952.50 257.63 1454.97 416.26 7365.77 00:08:56.591 00:08:56.591 04:56:34 nvme.nvme_multi_secondary -- nvme/nvme.sh@57 -- # wait 75682 00:08:56.591 04:56:34 nvme.nvme_multi_secondary -- nvme/nvme.sh@61 -- # pid0=75751 00:08:56.591 04:56:34 nvme.nvme_multi_secondary -- nvme/nvme.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x1 00:08:56.591 04:56:34 nvme.nvme_multi_secondary -- nvme/nvme.sh@63 -- # pid1=75752 00:08:56.591 04:56:34 nvme.nvme_multi_secondary -- nvme/nvme.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x4 00:08:56.591 04:56:34 nvme.nvme_multi_secondary -- nvme/nvme.sh@62 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:08:59.868 Initializing NVMe Controllers 00:08:59.868 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:59.868 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:59.868 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:59.868 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:59.868 Associating PCIE (0000:00:13.0) NSID 1 with lcore 1 00:08:59.868 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1 00:08:59.868 Associating PCIE (0000:00:11.0) NSID 1 with lcore 1 00:08:59.868 Associating PCIE (0000:00:12.0) NSID 1 with lcore 1 00:08:59.868 Associating PCIE (0000:00:12.0) NSID 2 with lcore 1 00:08:59.868 Associating PCIE (0000:00:12.0) NSID 3 with lcore 1 00:08:59.868 Initialization complete. Launching workers. 
00:08:59.868 ======================================================== 00:08:59.868 Latency(us) 00:08:59.868 Device Information : IOPS MiB/s Average min max 00:08:59.868 PCIE (0000:00:13.0) NSID 1 from core 1: 7927.03 30.96 2017.96 743.35 5238.93 00:08:59.868 PCIE (0000:00:10.0) NSID 1 from core 1: 7927.03 30.96 2017.08 720.27 5407.63 00:08:59.868 PCIE (0000:00:11.0) NSID 1 from core 1: 7927.03 30.96 2018.01 749.31 5729.14 00:08:59.868 PCIE (0000:00:12.0) NSID 1 from core 1: 7927.03 30.96 2018.05 739.62 5762.23 00:08:59.868 PCIE (0000:00:12.0) NSID 2 from core 1: 7927.03 30.96 2017.95 752.24 5700.04 00:08:59.868 PCIE (0000:00:12.0) NSID 3 from core 1: 7927.03 30.96 2017.94 744.10 5225.46 00:08:59.868 ======================================================== 00:08:59.868 Total : 47562.20 185.79 2017.83 720.27 5762.23 00:08:59.868 00:08:59.868 Initializing NVMe Controllers 00:08:59.868 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:59.868 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:59.868 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:59.868 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:59.868 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:08:59.868 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:08:59.868 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:08:59.868 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:08:59.868 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:08:59.868 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:08:59.868 Initialization complete. Launching workers. 00:08:59.868 ======================================================== 00:08:59.868 Latency(us) 00:08:59.868 Device Information : IOPS MiB/s Average min max 00:08:59.868 PCIE (0000:00:13.0) NSID 1 from core 0: 7962.34 31.10 2008.98 719.58 6357.81 00:08:59.868 PCIE (0000:00:10.0) NSID 1 from core 0: 7962.34 31.10 2008.01 713.69 6448.70 00:08:59.868 PCIE (0000:00:11.0) NSID 1 from core 0: 7962.34 31.10 2008.86 662.23 6506.70 00:08:59.868 PCIE (0000:00:12.0) NSID 1 from core 0: 7962.34 31.10 2008.77 527.14 6768.26 00:08:59.868 PCIE (0000:00:12.0) NSID 2 from core 0: 7962.34 31.10 2008.69 462.00 6221.48 00:08:59.868 PCIE (0000:00:12.0) NSID 3 from core 0: 7962.34 31.10 2008.61 390.63 6354.54 00:08:59.868 ======================================================== 00:08:59.868 Total : 47774.04 186.62 2008.65 390.63 6768.26 00:08:59.868 00:09:01.764 Initializing NVMe Controllers 00:09:01.764 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:09:01.764 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:09:01.764 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:09:01.764 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:09:01.764 Associating PCIE (0000:00:13.0) NSID 1 with lcore 2 00:09:01.764 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 00:09:01.764 Associating PCIE (0000:00:11.0) NSID 1 with lcore 2 00:09:01.764 Associating PCIE (0000:00:12.0) NSID 1 with lcore 2 00:09:01.764 Associating PCIE (0000:00:12.0) NSID 2 with lcore 2 00:09:01.764 Associating PCIE (0000:00:12.0) NSID 3 with lcore 2 00:09:01.764 Initialization complete. Launching workers. 
00:09:01.764 ======================================================== 00:09:01.764 Latency(us) 00:09:01.764 Device Information : IOPS MiB/s Average min max 00:09:01.764 PCIE (0000:00:13.0) NSID 1 from core 2: 4710.06 18.40 3396.27 761.95 13427.52 00:09:01.764 PCIE (0000:00:10.0) NSID 1 from core 2: 4710.06 18.40 3394.90 742.58 12860.94 00:09:01.764 PCIE (0000:00:11.0) NSID 1 from core 2: 4710.06 18.40 3396.23 745.80 12840.04 00:09:01.764 PCIE (0000:00:12.0) NSID 1 from core 2: 4710.06 18.40 3396.35 675.00 12884.64 00:09:01.764 PCIE (0000:00:12.0) NSID 2 from core 2: 4710.06 18.40 3396.31 577.60 12302.30 00:09:01.764 PCIE (0000:00:12.0) NSID 3 from core 2: 4710.06 18.40 3396.43 489.77 12265.24 00:09:01.764 ======================================================== 00:09:01.764 Total : 28260.35 110.39 3396.08 489.77 13427.52 00:09:01.764 00:09:01.764 04:56:39 nvme.nvme_multi_secondary -- nvme/nvme.sh@65 -- # wait 75751 00:09:01.764 04:56:39 nvme.nvme_multi_secondary -- nvme/nvme.sh@66 -- # wait 75752 00:09:01.764 00:09:01.764 real 0m10.542s 00:09:01.764 user 0m18.257s 00:09:01.764 sys 0m0.485s 00:09:01.764 04:56:39 nvme.nvme_multi_secondary -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:01.764 04:56:39 nvme.nvme_multi_secondary -- common/autotest_common.sh@10 -- # set +x 00:09:01.764 ************************************ 00:09:01.764 END TEST nvme_multi_secondary 00:09:01.764 ************************************ 00:09:02.024 04:56:39 nvme -- nvme/nvme.sh@101 -- # trap - SIGINT SIGTERM EXIT 00:09:02.024 04:56:39 nvme -- nvme/nvme.sh@102 -- # kill_stub 00:09:02.024 04:56:39 nvme -- common/autotest_common.sh@1089 -- # [[ -e /proc/74719 ]] 00:09:02.024 04:56:39 nvme -- common/autotest_common.sh@1090 -- # kill 74719 00:09:02.024 04:56:39 nvme -- common/autotest_common.sh@1091 -- # wait 74719 00:09:02.024 [2024-12-06 04:56:39.999345] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75629) is not found. Dropping the request. 00:09:02.024 [2024-12-06 04:56:39.999440] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75629) is not found. Dropping the request. 00:09:02.024 [2024-12-06 04:56:39.999465] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75629) is not found. Dropping the request. 00:09:02.024 [2024-12-06 04:56:39.999491] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75629) is not found. Dropping the request. 00:09:02.024 [2024-12-06 04:56:40.000167] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75629) is not found. Dropping the request. 00:09:02.024 [2024-12-06 04:56:40.000215] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75629) is not found. Dropping the request. 00:09:02.024 [2024-12-06 04:56:40.000236] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75629) is not found. Dropping the request. 00:09:02.024 [2024-12-06 04:56:40.000259] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75629) is not found. Dropping the request. 00:09:02.024 [2024-12-06 04:56:40.000987] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75629) is not found. Dropping the request. 
00:09:02.024 [2024-12-06 04:56:40.001049] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75629) is not found. Dropping the request. 00:09:02.024 [2024-12-06 04:56:40.001073] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75629) is not found. Dropping the request. 00:09:02.024 [2024-12-06 04:56:40.001101] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75629) is not found. Dropping the request. 00:09:02.024 [2024-12-06 04:56:40.001771] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75629) is not found. Dropping the request. 00:09:02.024 [2024-12-06 04:56:40.001831] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75629) is not found. Dropping the request. 00:09:02.024 [2024-12-06 04:56:40.001852] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75629) is not found. Dropping the request. 00:09:02.024 [2024-12-06 04:56:40.001875] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75629) is not found. Dropping the request. 00:09:02.024 04:56:40 nvme -- common/autotest_common.sh@1093 -- # rm -f /var/run/spdk_stub0 00:09:02.024 04:56:40 nvme -- common/autotest_common.sh@1097 -- # echo 2 00:09:02.024 04:56:40 nvme -- nvme/nvme.sh@105 -- # run_test bdev_nvme_reset_stuck_adm_cmd /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:09:02.024 04:56:40 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:02.024 04:56:40 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:02.024 04:56:40 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:02.024 ************************************ 00:09:02.024 START TEST bdev_nvme_reset_stuck_adm_cmd 00:09:02.024 ************************************ 00:09:02.024 04:56:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:09:02.024 * Looking for test storage... 
00:09:02.024 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:09:02.024 04:56:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:09:02.024 04:56:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:09:02.024 04:56:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1681 -- # lcov --version 00:09:02.024 04:56:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:09:02.024 04:56:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:02.024 04:56:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:02.024 04:56:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:02.024 04:56:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@336 -- # IFS=.-: 00:09:02.024 04:56:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@336 -- # read -ra ver1 00:09:02.024 04:56:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@337 -- # IFS=.-: 00:09:02.024 04:56:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@337 -- # read -ra ver2 00:09:02.024 04:56:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@338 -- # local 'op=<' 00:09:02.024 04:56:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@340 -- # ver1_l=2 00:09:02.024 04:56:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@341 -- # ver2_l=1 00:09:02.024 04:56:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:02.024 04:56:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@344 -- # case "$op" in 00:09:02.024 04:56:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@345 -- # : 1 00:09:02.024 04:56:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:02.025 04:56:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:02.025 04:56:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@365 -- # decimal 1 00:09:02.025 04:56:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@353 -- # local d=1 00:09:02.025 04:56:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:02.025 04:56:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@355 -- # echo 1 00:09:02.025 04:56:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@365 -- # ver1[v]=1 00:09:02.025 04:56:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@366 -- # decimal 2 00:09:02.025 04:56:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@353 -- # local d=2 00:09:02.025 04:56:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:02.025 04:56:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@355 -- # echo 2 00:09:02.025 04:56:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@366 -- # ver2[v]=2 00:09:02.025 04:56:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:02.025 04:56:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:02.025 04:56:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@368 -- # return 0 00:09:02.025 04:56:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:02.025 04:56:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:09:02.025 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:02.025 --rc genhtml_branch_coverage=1 00:09:02.025 --rc genhtml_function_coverage=1 00:09:02.025 --rc genhtml_legend=1 00:09:02.025 --rc geninfo_all_blocks=1 00:09:02.025 --rc geninfo_unexecuted_blocks=1 00:09:02.025 00:09:02.025 ' 00:09:02.025 04:56:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:09:02.025 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:02.025 --rc genhtml_branch_coverage=1 00:09:02.025 --rc genhtml_function_coverage=1 00:09:02.025 --rc genhtml_legend=1 00:09:02.025 --rc geninfo_all_blocks=1 00:09:02.025 --rc geninfo_unexecuted_blocks=1 00:09:02.025 00:09:02.025 ' 00:09:02.025 04:56:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:09:02.025 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:02.025 --rc genhtml_branch_coverage=1 00:09:02.025 --rc genhtml_function_coverage=1 00:09:02.025 --rc genhtml_legend=1 00:09:02.025 --rc geninfo_all_blocks=1 00:09:02.025 --rc geninfo_unexecuted_blocks=1 00:09:02.025 00:09:02.025 ' 00:09:02.025 04:56:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:09:02.025 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:02.025 --rc genhtml_branch_coverage=1 00:09:02.025 --rc genhtml_function_coverage=1 00:09:02.025 --rc genhtml_legend=1 00:09:02.025 --rc geninfo_all_blocks=1 00:09:02.025 --rc geninfo_unexecuted_blocks=1 00:09:02.025 00:09:02.025 ' 00:09:02.025 04:56:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@18 -- # ctrlr_name=nvme0 00:09:02.025 04:56:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@20 -- # err_injection_timeout=15000000 00:09:02.025 04:56:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@22 -- # test_timeout=5 00:09:02.025 
04:56:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@25 -- # err_injection_sct=0 00:09:02.025 04:56:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@27 -- # err_injection_sc=1 00:09:02.025 04:56:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # get_first_nvme_bdf 00:09:02.025 04:56:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1507 -- # bdfs=() 00:09:02.025 04:56:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1507 -- # local bdfs 00:09:02.025 04:56:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1508 -- # bdfs=($(get_nvme_bdfs)) 00:09:02.025 04:56:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1508 -- # get_nvme_bdfs 00:09:02.025 04:56:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1496 -- # bdfs=() 00:09:02.025 04:56:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1496 -- # local bdfs 00:09:02.025 04:56:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:09:02.025 04:56:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:09:02.025 04:56:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:09:02.283 04:56:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1498 -- # (( 4 == 0 )) 00:09:02.283 04:56:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:09:02.283 04:56:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1510 -- # echo 0000:00:10.0 00:09:02.283 04:56:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # bdf=0000:00:10.0 00:09:02.283 04:56:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@30 -- # '[' -z 0000:00:10.0 ']' 00:09:02.283 04:56:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@36 -- # spdk_target_pid=75912 00:09:02.283 04:56:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@37 -- # trap 'killprocess "$spdk_target_pid"; exit 1' SIGINT SIGTERM EXIT 00:09:02.283 04:56:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@38 -- # waitforlisten 75912 00:09:02.283 04:56:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@831 -- # '[' -z 75912 ']' 00:09:02.283 04:56:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:02.283 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:02.283 04:56:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:02.283 04:56:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0xF 00:09:02.283 04:56:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
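[Note: the get_first_nvme_bdf / get_nvme_bdfs pair traced above reduces to a small helper: ask gen_nvme.sh for a JSON bdev config, pull every PCIe traddr out with jq, and take the first entry. A minimal standalone sketch of that pattern — the jq filter matches the trace; the error message is illustrative:

rootdir=/home/vagrant/spdk_repo/spdk

get_nvme_bdfs() {
    local bdfs
    # gen_nvme.sh emits {"config":[{"params":{"traddr":"0000:00:10.0",...}},...]}
    bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
    (( ${#bdfs[@]} == 0 )) && { echo "no NVMe controllers found" >&2; return 1; }
    printf '%s\n' "${bdfs[@]}"
}

get_first_nvme_bdf() { get_nvme_bdfs | head -n1; }

bdf=$(get_first_nvme_bdf)   # -> 0000:00:10.0 on this VM, matching the trace above]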
00:09:02.283 04:56:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:02.283 04:56:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:09:02.283 [2024-12-06 04:56:40.344429] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:09:02.284 [2024-12-06 04:56:40.344534] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75912 ] 00:09:02.284 [2024-12-06 04:56:40.490286] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:02.542 [2024-12-06 04:56:40.524103] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:09:02.542 [2024-12-06 04:56:40.524275] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:09:02.542 [2024-12-06 04:56:40.524401] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:02.542 [2024-12-06 04:56:40.524481] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:09:03.109 04:56:41 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:03.109 04:56:41 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@864 -- # return 0 00:09:03.109 04:56:41 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@40 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:10.0 00:09:03.109 04:56:41 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:03.109 04:56:41 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:09:03.109 nvme0n1 00:09:03.109 04:56:41 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:03.109 04:56:41 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # mktemp /tmp/err_inj_XXXXX.txt 00:09:03.109 04:56:41 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # tmp_file=/tmp/err_inj_WMT4A.txt 00:09:03.109 04:56:41 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@44 -- # rpc_cmd bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit 00:09:03.109 04:56:41 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:03.109 04:56:41 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:09:03.109 true 00:09:03.109 04:56:41 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:03.109 04:56:41 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # date +%s 00:09:03.109 04:56:41 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # start_time=1733461001 00:09:03.109 04:56:41 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@51 -- # get_feat_pid=75930 00:09:03.109 04:56:41 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@52 -- # trap 'killprocess "$get_feat_pid"; exit 1' SIGINT SIGTERM EXIT 00:09:03.109 04:56:41 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@55 -- # sleep 2 00:09:03.109 04:56:41 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_send_cmd -n nvme0 -t admin -r c2h 
-c CgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA== 00:09:05.056 04:56:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@57 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:09:05.056 04:56:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:05.056 04:56:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:09:05.056 [2024-12-06 04:56:43.226535] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0] resetting controller 00:09:05.056 [2024-12-06 04:56:43.227128] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:09:05.056 [2024-12-06 04:56:43.227169] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:0 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:09:05.056 [2024-12-06 04:56:43.227185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:05.056 [2024-12-06 04:56:43.228867] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:09:05.056 Waiting for RPC error injection (bdev_nvme_send_cmd) process PID: 75930 00:09:05.056 04:56:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:05.056 04:56:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@59 -- # echo 'Waiting for RPC error injection (bdev_nvme_send_cmd) process PID:' 75930 00:09:05.056 04:56:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@60 -- # wait 75930 00:09:05.056 04:56:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # date +%s 00:09:05.056 04:56:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # diff_time=2 00:09:05.056 04:56:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@62 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:09:05.056 04:56:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:05.056 04:56:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:09:05.366 04:56:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:05.366 04:56:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@64 -- # trap - SIGINT SIGTERM EXIT 00:09:05.366 04:56:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # jq -r .cpl /tmp/err_inj_WMT4A.txt 00:09:05.366 04:56:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # spdk_nvme_status=AAAAAAAAAAAAAAAAAAACAA== 00:09:05.366 04:56:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 1 255 00:09:05.366 04:56:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:09:05.366 04:56:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:09:05.366 04:56:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:09:05.366 04:56:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:09:05.366 04:56:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- 
nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:09:05.366 04:56:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:09:05.366 04:56:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 1 00:09:05.366 04:56:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # nvme_status_sc=0x1 00:09:05.366 04:56:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 9 3 00:09:05.366 04:56:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:09:05.366 04:56:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:09:05.366 04:56:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:09:05.366 04:56:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:09:05.366 04:56:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:09:05.366 04:56:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:09:05.366 04:56:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 0 00:09:05.366 04:56:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # nvme_status_sct=0x0 00:09:05.366 04:56:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@71 -- # rm -f /tmp/err_inj_WMT4A.txt 00:09:05.366 04:56:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@73 -- # killprocess 75912 00:09:05.366 04:56:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@950 -- # '[' -z 75912 ']' 00:09:05.366 04:56:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@954 -- # kill -0 75912 00:09:05.366 04:56:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@955 -- # uname 00:09:05.366 04:56:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:05.366 04:56:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 75912 00:09:05.366 killing process with pid 75912 00:09:05.366 04:56:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:05.366 04:56:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:05.366 04:56:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@968 -- # echo 'killing process with pid 75912' 00:09:05.366 04:56:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@969 -- # kill 75912 00:09:05.366 04:56:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@974 -- # wait 75912 00:09:05.625 04:56:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@75 -- # (( err_injection_sc != nvme_status_sc || err_injection_sct != nvme_status_sct )) 00:09:05.625 04:56:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@79 -- # (( diff_time > test_timeout )) 00:09:05.625 00:09:05.625 real 0m3.520s 00:09:05.625 user 0m12.535s 00:09:05.625 sys 0m0.457s 00:09:05.625 04:56:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1126 -- # xtrace_disable 
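[Note: the base64_decode_bits calls above recover numeric status fields from the RPC's base64-encoded completion queue entry: decode to raw bytes, assemble the 16-bit status word, then shift-and-mask. A plausible reconstruction, assuming (as the traced values imply) that the status word sits in bytes 14-15 of the 16-byte CQE and the trailing arguments are a bit shift and a mask:

base64_decode_bits() {
    local b64=$1 off=$2 mask=$3 bin_array status
    # One "0xNN" token per decoded byte, exactly as in the traced hexdump call.
    bin_array=($(base64 -d <(printf '%s' "$b64") | hexdump -ve '/1 "0x%02x\n"'))
    # NVMe CQE layout: status word is little-endian bytes 14-15; bit 0 is the
    # phase tag, SC occupies bits 1-8, SCT bits 9-11.
    status=$(( (bin_array[15] << 8) | bin_array[14] ))
    printf '0x%x\n' $(( (status >> off) & mask ))
}

base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 1 255   # -> 0x1  (SC, as traced)
base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 9 3     # -> 0x0  (SCT, as traced)

Both results match the injected error (sct=0, sc=1), which is what the later (( err_injection_sc != nvme_status_sc ... )) check verifies.]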
00:09:05.625 04:56:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:09:05.625 ************************************ 00:09:05.625 END TEST bdev_nvme_reset_stuck_adm_cmd 00:09:05.625 ************************************ 00:09:05.625 04:56:43 nvme -- nvme/nvme.sh@107 -- # [[ y == y ]] 00:09:05.625 04:56:43 nvme -- nvme/nvme.sh@108 -- # run_test nvme_fio nvme_fio_test 00:09:05.625 04:56:43 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:05.625 04:56:43 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:05.625 04:56:43 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:05.625 ************************************ 00:09:05.625 START TEST nvme_fio 00:09:05.625 ************************************ 00:09:05.625 04:56:43 nvme.nvme_fio -- common/autotest_common.sh@1125 -- # nvme_fio_test 00:09:05.625 04:56:43 nvme.nvme_fio -- nvme/nvme.sh@31 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:09:05.625 04:56:43 nvme.nvme_fio -- nvme/nvme.sh@32 -- # ran_fio=false 00:09:05.625 04:56:43 nvme.nvme_fio -- nvme/nvme.sh@33 -- # get_nvme_bdfs 00:09:05.625 04:56:43 nvme.nvme_fio -- common/autotest_common.sh@1496 -- # bdfs=() 00:09:05.625 04:56:43 nvme.nvme_fio -- common/autotest_common.sh@1496 -- # local bdfs 00:09:05.625 04:56:43 nvme.nvme_fio -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:09:05.625 04:56:43 nvme.nvme_fio -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:09:05.625 04:56:43 nvme.nvme_fio -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:09:05.625 04:56:43 nvme.nvme_fio -- common/autotest_common.sh@1498 -- # (( 4 == 0 )) 00:09:05.625 04:56:43 nvme.nvme_fio -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:09:05.625 04:56:43 nvme.nvme_fio -- nvme/nvme.sh@33 -- # bdfs=('0000:00:10.0' '0000:00:11.0' '0000:00:12.0' '0000:00:13.0') 00:09:05.625 04:56:43 nvme.nvme_fio -- nvme/nvme.sh@33 -- # local bdfs bdf 00:09:05.625 04:56:43 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:09:05.625 04:56:43 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:09:05.625 04:56:43 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:09:05.884 04:56:43 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:09:05.884 04:56:43 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:09:05.884 04:56:44 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:09:05.884 04:56:44 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:09:05.884 04:56:44 nvme.nvme_fio -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:09:05.884 04:56:44 nvme.nvme_fio -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:09:05.884 04:56:44 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:09:05.884 04:56:44 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # local sanitizers 00:09:05.884 04:56:44 nvme.nvme_fio -- 
common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:05.884 04:56:44 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # shift 00:09:05.884 04:56:44 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local asan_lib= 00:09:05.884 04:56:44 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:09:05.884 04:56:44 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:09:05.884 04:56:44 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:05.884 04:56:44 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # grep libasan 00:09:05.884 04:56:44 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:09:05.884 04:56:44 nvme.nvme_fio -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:09:05.884 04:56:44 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # break 00:09:05.884 04:56:44 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:09:05.884 04:56:44 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:09:06.142 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:09:06.142 fio-3.35 00:09:06.142 Starting 1 thread 00:09:12.698 00:09:12.699 test: (groupid=0, jobs=1): err= 0: pid=76057: Fri Dec 6 04:56:50 2024 00:09:12.699 read: IOPS=23.2k, BW=90.7MiB/s (95.1MB/s)(181MiB/2001msec) 00:09:12.699 slat (nsec): min=3443, max=82456, avg=4868.40, stdev=1933.97 00:09:12.699 clat (usec): min=239, max=9313, avg=2752.47, stdev=710.05 00:09:12.699 lat (usec): min=244, max=9358, avg=2757.34, stdev=711.17 00:09:12.699 clat percentiles (usec): 00:09:12.699 | 1.00th=[ 2073], 5.00th=[ 2245], 10.00th=[ 2311], 20.00th=[ 2409], 00:09:12.699 | 30.00th=[ 2474], 40.00th=[ 2507], 50.00th=[ 2573], 60.00th=[ 2638], 00:09:12.699 | 70.00th=[ 2704], 80.00th=[ 2835], 90.00th=[ 3195], 95.00th=[ 4146], 00:09:12.699 | 99.00th=[ 6063], 99.50th=[ 6390], 99.90th=[ 7504], 99.95th=[ 7635], 00:09:12.699 | 99.99th=[ 8848] 00:09:12.699 bw ( KiB/s): min=85376, max=99192, per=99.21%, avg=92120.00, stdev=6913.84, samples=3 00:09:12.699 iops : min=21344, max=24798, avg=23030.00, stdev=1728.46, samples=3 00:09:12.699 write: IOPS=23.1k, BW=90.1MiB/s (94.5MB/s)(180MiB/2001msec); 0 zone resets 00:09:12.699 slat (nsec): min=3568, max=46906, avg=5119.50, stdev=1852.59 00:09:12.699 clat (usec): min=223, max=8887, avg=2759.52, stdev=699.23 00:09:12.699 lat (usec): min=227, max=8901, avg=2764.64, stdev=700.29 00:09:12.699 clat percentiles (usec): 00:09:12.699 | 1.00th=[ 2073], 5.00th=[ 2278], 10.00th=[ 2343], 20.00th=[ 2409], 00:09:12.699 | 30.00th=[ 2474], 40.00th=[ 2540], 50.00th=[ 2573], 60.00th=[ 2638], 00:09:12.699 | 70.00th=[ 2737], 80.00th=[ 2835], 90.00th=[ 3195], 95.00th=[ 4113], 00:09:12.699 | 99.00th=[ 6063], 99.50th=[ 6390], 99.90th=[ 7373], 99.95th=[ 7832], 00:09:12.699 | 99.99th=[ 8717] 00:09:12.699 bw ( KiB/s): min=85088, max=99896, per=99.90%, avg=92194.67, stdev=7421.89, samples=3 00:09:12.699 iops : min=21272, max=24974, avg=23048.67, stdev=1855.47, samples=3 00:09:12.699 lat (usec) : 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.02% 00:09:12.699 lat (msec) : 2=0.65%, 4=93.97%, 10=5.33% 00:09:12.699 cpu : usr=99.30%, sys=0.00%, ctx=28, majf=0, 
minf=625 00:09:12.699 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:09:12.699 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:12.699 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:12.699 issued rwts: total=46450,46166,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:12.699 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:12.699 00:09:12.699 Run status group 0 (all jobs): 00:09:12.699 READ: bw=90.7MiB/s (95.1MB/s), 90.7MiB/s-90.7MiB/s (95.1MB/s-95.1MB/s), io=181MiB (190MB), run=2001-2001msec 00:09:12.699 WRITE: bw=90.1MiB/s (94.5MB/s), 90.1MiB/s-90.1MiB/s (94.5MB/s-94.5MB/s), io=180MiB (189MB), run=2001-2001msec 00:09:12.699 ----------------------------------------------------- 00:09:12.699 Suppressions used: 00:09:12.699 count bytes template 00:09:12.699 1 32 /usr/src/fio/parse.c 00:09:12.699 1 8 libtcmalloc_minimal.so 00:09:12.699 ----------------------------------------------------- 00:09:12.699 00:09:12.699 04:56:50 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:09:12.699 04:56:50 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:09:12.699 04:56:50 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' 00:09:12.699 04:56:50 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:09:12.699 04:56:50 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' 00:09:12.699 04:56:50 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:09:12.958 04:56:51 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:09:12.958 04:56:51 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:09:12.958 04:56:51 nvme.nvme_fio -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:09:12.958 04:56:51 nvme.nvme_fio -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:09:12.958 04:56:51 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:09:12.958 04:56:51 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # local sanitizers 00:09:12.958 04:56:51 nvme.nvme_fio -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:12.958 04:56:51 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # shift 00:09:12.958 04:56:51 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local asan_lib= 00:09:12.958 04:56:51 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:09:12.958 04:56:51 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:12.958 04:56:51 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # grep libasan 00:09:12.958 04:56:51 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:09:12.958 04:56:51 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:09:12.958 04:56:51 nvme.nvme_fio -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:09:12.958 04:56:51 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # break 00:09:12.958 04:56:51 nvme.nvme_fio -- 
common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:09:12.958 04:56:51 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:09:12.958 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:09:12.958 fio-3.35 00:09:12.958 Starting 1 thread 00:09:19.537 00:09:19.537 test: (groupid=0, jobs=1): err= 0: pid=76109: Fri Dec 6 04:56:57 2024 00:09:19.537 read: IOPS=22.6k, BW=88.3MiB/s (92.6MB/s)(177MiB/2001msec) 00:09:19.537 slat (nsec): min=3410, max=59828, avg=5129.18, stdev=2241.66 00:09:19.537 clat (usec): min=243, max=10778, avg=2821.26, stdev=852.92 00:09:19.537 lat (usec): min=248, max=10829, avg=2826.39, stdev=854.30 00:09:19.537 clat percentiles (usec): 00:09:19.537 | 1.00th=[ 1942], 5.00th=[ 2212], 10.00th=[ 2278], 20.00th=[ 2376], 00:09:19.537 | 30.00th=[ 2442], 40.00th=[ 2507], 50.00th=[ 2573], 60.00th=[ 2638], 00:09:19.537 | 70.00th=[ 2737], 80.00th=[ 2900], 90.00th=[ 3720], 95.00th=[ 5014], 00:09:19.537 | 99.00th=[ 6194], 99.50th=[ 6521], 99.90th=[ 6980], 99.95th=[ 8455], 00:09:19.537 | 99.99th=[10421] 00:09:19.537 bw ( KiB/s): min=87672, max=93504, per=100.00%, avg=90594.67, stdev=2916.02, samples=3 00:09:19.537 iops : min=21918, max=23376, avg=22648.67, stdev=729.01, samples=3 00:09:19.537 write: IOPS=22.5k, BW=87.8MiB/s (92.1MB/s)(176MiB/2001msec); 0 zone resets 00:09:19.537 slat (usec): min=3, max=112, avg= 5.42, stdev= 2.39 00:09:19.537 clat (usec): min=217, max=10680, avg=2838.61, stdev=860.62 00:09:19.537 lat (usec): min=222, max=10692, avg=2844.03, stdev=862.04 00:09:19.537 clat percentiles (usec): 00:09:19.537 | 1.00th=[ 1958], 5.00th=[ 2245], 10.00th=[ 2311], 20.00th=[ 2376], 00:09:19.537 | 30.00th=[ 2442], 40.00th=[ 2507], 50.00th=[ 2573], 60.00th=[ 2638], 00:09:19.537 | 70.00th=[ 2737], 80.00th=[ 2900], 90.00th=[ 3752], 95.00th=[ 5014], 00:09:19.537 | 99.00th=[ 6259], 99.50th=[ 6587], 99.90th=[ 7111], 99.95th=[ 8717], 00:09:19.537 | 99.99th=[10290] 00:09:19.537 bw ( KiB/s): min=87256, max=92912, per=100.00%, avg=90834.67, stdev=3112.57, samples=3 00:09:19.537 iops : min=21814, max=23228, avg=22708.67, stdev=778.14, samples=3 00:09:19.537 lat (usec) : 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.01% 00:09:19.537 lat (msec) : 2=1.24%, 4=89.85%, 10=8.86%, 20=0.02% 00:09:19.537 cpu : usr=98.60%, sys=0.35%, ctx=3, majf=0, minf=626 00:09:19.537 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:09:19.537 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:19.537 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:19.537 issued rwts: total=45224,44977,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:19.537 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:19.537 00:09:19.537 Run status group 0 (all jobs): 00:09:19.537 READ: bw=88.3MiB/s (92.6MB/s), 88.3MiB/s-88.3MiB/s (92.6MB/s-92.6MB/s), io=177MiB (185MB), run=2001-2001msec 00:09:19.537 WRITE: bw=87.8MiB/s (92.1MB/s), 87.8MiB/s-87.8MiB/s (92.1MB/s-92.1MB/s), io=176MiB (184MB), run=2001-2001msec 00:09:19.537 ----------------------------------------------------- 00:09:19.537 Suppressions used: 00:09:19.537 count bytes template 00:09:19.537 1 32 /usr/src/fio/parse.c 00:09:19.537 1 8 libtcmalloc_minimal.so 00:09:19.537 ----------------------------------------------------- 00:09:19.537 
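[Note: the fio_plugin wrapper traced before each run solves an ASAN constraint: when the SPDK fio engine is built with AddressSanitizer, libasan must be the first DSO loaded, so the wrapper discovers it via ldd and preloads it ahead of the engine itself. A condensed sketch of that logic — the real function also loops over libclang_rt.asan for clang builds, folded here into one lookup:

fio_plugin() {
    local fio_dir=/usr/src/fio
    local plugin=$1; shift
    local asan_lib
    # Which libasan does the engine link? Empty if built without sanitizers.
    asan_lib=$(ldd "$plugin" | grep libasan | awk '{print $3}')
    LD_PRELOAD="$asan_lib $plugin" "$fio_dir/fio" "$@"
}

fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme \
    /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio \
    '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096]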
00:09:19.537 04:56:57 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:09:19.537 04:56:57 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:09:19.537 04:56:57 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' 00:09:19.537 04:56:57 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:09:19.798 04:56:57 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:09:19.798 04:56:57 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' 00:09:20.059 04:56:58 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:09:20.059 04:56:58 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:09:20.060 04:56:58 nvme.nvme_fio -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:09:20.060 04:56:58 nvme.nvme_fio -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:09:20.060 04:56:58 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:09:20.060 04:56:58 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # local sanitizers 00:09:20.060 04:56:58 nvme.nvme_fio -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:20.060 04:56:58 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # shift 00:09:20.060 04:56:58 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local asan_lib= 00:09:20.060 04:56:58 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:09:20.060 04:56:58 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:20.060 04:56:58 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # grep libasan 00:09:20.060 04:56:58 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:09:20.060 04:56:58 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:09:20.060 04:56:58 nvme.nvme_fio -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:09:20.060 04:56:58 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # break 00:09:20.060 04:56:58 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:09:20.060 04:56:58 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:09:20.322 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:09:20.322 fio-3.35 00:09:20.322 Starting 1 thread 00:09:26.915 00:09:26.915 test: (groupid=0, jobs=1): err= 0: pid=76164: Fri Dec 6 04:57:03 2024 00:09:26.915 read: IOPS=18.0k, BW=70.3MiB/s (73.7MB/s)(141MiB/2001msec) 00:09:26.915 slat (usec): min=3, max=824, avg= 6.01, stdev= 5.12 00:09:26.915 clat (usec): min=191, max=11466, avg=3523.38, stdev=1075.07 00:09:26.915 lat (usec): min=196, max=11515, avg=3529.39, stdev=1076.34 00:09:26.915 clat percentiles (usec): 00:09:26.915 | 1.00th=[ 2024], 5.00th=[ 2442], 10.00th=[ 2606], 20.00th=[ 2802], 00:09:26.915 | 30.00th=[ 
2933], 40.00th=[ 3064], 50.00th=[ 3195], 60.00th=[ 3326], 00:09:26.915 | 70.00th=[ 3556], 80.00th=[ 4113], 90.00th=[ 5145], 95.00th=[ 5866], 00:09:26.915 | 99.00th=[ 7046], 99.50th=[ 7701], 99.90th=[ 9241], 99.95th=[10421], 00:09:26.915 | 99.99th=[11338] 00:09:26.915 bw ( KiB/s): min=68424, max=70872, per=97.29%, avg=70034.67, stdev=1395.25, samples=3 00:09:26.915 iops : min=17106, max=17718, avg=17508.67, stdev=348.81, samples=3 00:09:26.915 write: IOPS=18.0k, BW=70.4MiB/s (73.8MB/s)(141MiB/2001msec); 0 zone resets 00:09:26.915 slat (nsec): min=3600, max=81820, avg=6285.38, stdev=3073.08 00:09:26.915 clat (usec): min=199, max=11382, avg=3561.51, stdev=1083.43 00:09:26.915 lat (usec): min=204, max=11398, avg=3567.79, stdev=1084.73 00:09:26.915 clat percentiles (usec): 00:09:26.915 | 1.00th=[ 2057], 5.00th=[ 2474], 10.00th=[ 2638], 20.00th=[ 2835], 00:09:26.915 | 30.00th=[ 2999], 40.00th=[ 3097], 50.00th=[ 3228], 60.00th=[ 3359], 00:09:26.915 | 70.00th=[ 3589], 80.00th=[ 4146], 90.00th=[ 5145], 95.00th=[ 5866], 00:09:26.915 | 99.00th=[ 7177], 99.50th=[ 7832], 99.90th=[ 9765], 99.95th=[10552], 00:09:26.915 | 99.99th=[11207] 00:09:26.915 bw ( KiB/s): min=68216, max=70976, per=97.22%, avg=70048.00, stdev=1586.60, samples=3 00:09:26.915 iops : min=17054, max=17744, avg=17512.00, stdev=396.65, samples=3 00:09:26.915 lat (usec) : 250=0.01%, 500=0.02%, 750=0.02%, 1000=0.02% 00:09:26.915 lat (msec) : 2=0.76%, 4=77.77%, 10=21.33%, 20=0.08% 00:09:26.915 cpu : usr=98.60%, sys=0.25%, ctx=8, majf=0, minf=625 00:09:26.915 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:09:26.915 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:26.915 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:26.915 issued rwts: total=36012,36042,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:26.915 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:26.915 00:09:26.915 Run status group 0 (all jobs): 00:09:26.915 READ: bw=70.3MiB/s (73.7MB/s), 70.3MiB/s-70.3MiB/s (73.7MB/s-73.7MB/s), io=141MiB (148MB), run=2001-2001msec 00:09:26.915 WRITE: bw=70.4MiB/s (73.8MB/s), 70.4MiB/s-70.4MiB/s (73.8MB/s-73.8MB/s), io=141MiB (148MB), run=2001-2001msec 00:09:26.915 ----------------------------------------------------- 00:09:26.915 Suppressions used: 00:09:26.915 count bytes template 00:09:26.915 1 32 /usr/src/fio/parse.c 00:09:26.915 1 8 libtcmalloc_minimal.so 00:09:26.915 ----------------------------------------------------- 00:09:26.915 00:09:26.915 04:57:04 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:09:26.915 04:57:04 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:09:26.915 04:57:04 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' 00:09:26.915 04:57:04 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:09:26.915 04:57:04 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' 00:09:26.915 04:57:04 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:09:26.915 04:57:04 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:09:26.915 04:57:04 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:09:26.915 04:57:04 nvme.nvme_fio -- common/autotest_common.sh@1360 -- # fio_plugin 
/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:09:26.915 04:57:04 nvme.nvme_fio -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:09:26.915 04:57:04 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:09:26.915 04:57:04 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # local sanitizers 00:09:26.915 04:57:04 nvme.nvme_fio -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:26.915 04:57:04 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # shift 00:09:26.915 04:57:04 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local asan_lib= 00:09:26.915 04:57:04 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:09:26.915 04:57:04 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:26.915 04:57:04 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # grep libasan 00:09:26.915 04:57:04 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:09:26.915 04:57:04 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:09:26.915 04:57:04 nvme.nvme_fio -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:09:26.915 04:57:04 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # break 00:09:26.915 04:57:04 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:09:26.915 04:57:04 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:09:26.915 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:09:26.915 fio-3.35 00:09:26.915 Starting 1 thread 00:09:32.208 00:09:32.208 test: (groupid=0, jobs=1): err= 0: pid=76219: Fri Dec 6 04:57:09 2024 00:09:32.208 read: IOPS=14.7k, BW=57.4MiB/s (60.2MB/s)(115MiB/2001msec) 00:09:32.208 slat (usec): min=6, max=111, avg= 7.88, stdev= 3.57 00:09:32.208 clat (usec): min=214, max=12912, avg=4327.78, stdev=1215.76 00:09:32.208 lat (usec): min=221, max=12959, avg=4335.66, stdev=1216.85 00:09:32.208 clat percentiles (usec): 00:09:32.208 | 1.00th=[ 2966], 5.00th=[ 3195], 10.00th=[ 3294], 20.00th=[ 3425], 00:09:32.208 | 30.00th=[ 3523], 40.00th=[ 3654], 50.00th=[ 3785], 60.00th=[ 4015], 00:09:32.208 | 70.00th=[ 4621], 80.00th=[ 5407], 90.00th=[ 6259], 95.00th=[ 6783], 00:09:32.208 | 99.00th=[ 7832], 99.50th=[ 8225], 99.90th=[ 9765], 99.95th=[10683], 00:09:32.208 | 99.99th=[12911] 00:09:32.208 bw ( KiB/s): min=56264, max=58176, per=97.43%, avg=57312.67, stdev=969.38, samples=3 00:09:32.208 iops : min=14066, max=14544, avg=14328.67, stdev=242.49, samples=3 00:09:32.209 write: IOPS=14.7k, BW=57.5MiB/s (60.3MB/s)(115MiB/2001msec); 0 zone resets 00:09:32.209 slat (nsec): min=6301, max=94663, avg=8380.79, stdev=3485.16 00:09:32.209 clat (usec): min=235, max=12850, avg=4342.19, stdev=1217.20 00:09:32.209 lat (usec): min=243, max=12859, avg=4350.57, stdev=1218.25 00:09:32.209 clat percentiles (usec): 00:09:32.209 | 1.00th=[ 2999], 5.00th=[ 3228], 10.00th=[ 3326], 20.00th=[ 3458], 00:09:32.209 | 30.00th=[ 3556], 40.00th=[ 3654], 50.00th=[ 3785], 60.00th=[ 4015], 00:09:32.209 | 70.00th=[ 4621], 80.00th=[ 5473], 
90.00th=[ 6259], 95.00th=[ 6783], 00:09:32.209 | 99.00th=[ 7832], 99.50th=[ 8291], 99.90th=[10028], 99.95th=[10814], 00:09:32.209 | 99.99th=[12780] 00:09:32.209 bw ( KiB/s): min=56384, max=58496, per=97.13%, avg=57205.67, stdev=1131.31, samples=3 00:09:32.209 iops : min=14096, max=14624, avg=14301.33, stdev=282.88, samples=3 00:09:32.209 lat (usec) : 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.02% 00:09:32.209 lat (msec) : 2=0.08%, 4=59.11%, 10=40.66%, 20=0.10% 00:09:32.209 cpu : usr=98.75%, sys=0.05%, ctx=2, majf=0, minf=624 00:09:32.209 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:09:32.209 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:32.209 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:32.209 issued rwts: total=29426,29461,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:32.209 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:32.209 00:09:32.209 Run status group 0 (all jobs): 00:09:32.209 READ: bw=57.4MiB/s (60.2MB/s), 57.4MiB/s-57.4MiB/s (60.2MB/s-60.2MB/s), io=115MiB (121MB), run=2001-2001msec 00:09:32.209 WRITE: bw=57.5MiB/s (60.3MB/s), 57.5MiB/s-57.5MiB/s (60.3MB/s-60.3MB/s), io=115MiB (121MB), run=2001-2001msec 00:09:32.209 ----------------------------------------------------- 00:09:32.209 Suppressions used: 00:09:32.209 count bytes template 00:09:32.209 1 32 /usr/src/fio/parse.c 00:09:32.209 1 8 libtcmalloc_minimal.so 00:09:32.209 ----------------------------------------------------- 00:09:32.209 00:09:32.209 04:57:09 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:09:32.209 04:57:09 nvme.nvme_fio -- nvme/nvme.sh@46 -- # true 00:09:32.209 00:09:32.209 real 0m26.222s 00:09:32.209 user 0m18.196s 00:09:32.209 sys 0m12.944s 00:09:32.209 04:57:09 nvme.nvme_fio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:32.209 ************************************ 00:09:32.209 END TEST nvme_fio 00:09:32.209 ************************************ 00:09:32.209 04:57:09 nvme.nvme_fio -- common/autotest_common.sh@10 -- # set +x 00:09:32.209 00:09:32.209 real 1m33.131s 00:09:32.209 user 3m32.036s 00:09:32.209 sys 0m22.600s 00:09:32.209 ************************************ 00:09:32.209 END TEST nvme 00:09:32.209 ************************************ 00:09:32.209 04:57:09 nvme -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:32.209 04:57:09 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:32.209 04:57:09 -- spdk/autotest.sh@213 -- # [[ 0 -eq 1 ]] 00:09:32.209 04:57:09 -- spdk/autotest.sh@217 -- # run_test nvme_scc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:09:32.209 04:57:09 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:32.209 04:57:09 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:32.209 04:57:09 -- common/autotest_common.sh@10 -- # set +x 00:09:32.209 ************************************ 00:09:32.209 START TEST nvme_scc 00:09:32.209 ************************************ 00:09:32.209 04:57:09 nvme_scc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:09:32.209 * Looking for test storage... 
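[Note: taken together, the nvme_fio loop that just finished repeats the same three steps per controller: skip it if identify reports no active namespaces, pick a block size (plain 4096 everywhere in this run, since no namespace advertised 'Extended Data LBA'), then hand fio a filename in SPDK's trtype/traddr form with colons rewritten to dots. A condensed sketch; the 4160 branch is a hypothetical value for an extended-LBA namespace, not something exercised here:

for bdf in "${bdfs[@]}"; do
    identify() { "$rootdir/build/bin/spdk_nvme_identify" -r "trtype:PCIe traddr:$bdf"; }
    # Controllers without active namespaces have nothing to run fio against.
    identify | grep -qE '^Namespace ID:[0-9]+' || continue
    if identify | grep -q 'Extended Data LBA'; then
        bs=4160   # hypothetical: 4096B data + 64B interleaved metadata
    else
        bs=4096   # what every namespace in this run selected
    fi
    # fio_nvme wraps fio_plugin with the spdk_nvme engine binary (see sketch above).
    fio_nvme "$rootdir/app/fio/nvme/example_config.fio" \
        "--filename=trtype=PCIe traddr=${bdf//:/.}" --bs=$bs
done]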
00:09:32.209 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:09:32.209 04:57:10 nvme_scc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:09:32.209 04:57:10 nvme_scc -- common/autotest_common.sh@1681 -- # lcov --version 00:09:32.209 04:57:10 nvme_scc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:09:32.209 04:57:10 nvme_scc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:09:32.209 04:57:10 nvme_scc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:32.209 04:57:10 nvme_scc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:32.209 04:57:10 nvme_scc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:32.209 04:57:10 nvme_scc -- scripts/common.sh@336 -- # IFS=.-: 00:09:32.209 04:57:10 nvme_scc -- scripts/common.sh@336 -- # read -ra ver1 00:09:32.209 04:57:10 nvme_scc -- scripts/common.sh@337 -- # IFS=.-: 00:09:32.209 04:57:10 nvme_scc -- scripts/common.sh@337 -- # read -ra ver2 00:09:32.209 04:57:10 nvme_scc -- scripts/common.sh@338 -- # local 'op=<' 00:09:32.209 04:57:10 nvme_scc -- scripts/common.sh@340 -- # ver1_l=2 00:09:32.209 04:57:10 nvme_scc -- scripts/common.sh@341 -- # ver2_l=1 00:09:32.209 04:57:10 nvme_scc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:32.209 04:57:10 nvme_scc -- scripts/common.sh@344 -- # case "$op" in 00:09:32.209 04:57:10 nvme_scc -- scripts/common.sh@345 -- # : 1 00:09:32.209 04:57:10 nvme_scc -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:32.209 04:57:10 nvme_scc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:32.209 04:57:10 nvme_scc -- scripts/common.sh@365 -- # decimal 1 00:09:32.209 04:57:10 nvme_scc -- scripts/common.sh@353 -- # local d=1 00:09:32.209 04:57:10 nvme_scc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:32.209 04:57:10 nvme_scc -- scripts/common.sh@355 -- # echo 1 00:09:32.209 04:57:10 nvme_scc -- scripts/common.sh@365 -- # ver1[v]=1 00:09:32.209 04:57:10 nvme_scc -- scripts/common.sh@366 -- # decimal 2 00:09:32.209 04:57:10 nvme_scc -- scripts/common.sh@353 -- # local d=2 00:09:32.209 04:57:10 nvme_scc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:32.209 04:57:10 nvme_scc -- scripts/common.sh@355 -- # echo 2 00:09:32.209 04:57:10 nvme_scc -- scripts/common.sh@366 -- # ver2[v]=2 00:09:32.209 04:57:10 nvme_scc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:32.209 04:57:10 nvme_scc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:32.209 04:57:10 nvme_scc -- scripts/common.sh@368 -- # return 0 00:09:32.209 04:57:10 nvme_scc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:32.209 04:57:10 nvme_scc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:09:32.209 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:32.209 --rc genhtml_branch_coverage=1 00:09:32.209 --rc genhtml_function_coverage=1 00:09:32.209 --rc genhtml_legend=1 00:09:32.209 --rc geninfo_all_blocks=1 00:09:32.209 --rc geninfo_unexecuted_blocks=1 00:09:32.209 00:09:32.209 ' 00:09:32.209 04:57:10 nvme_scc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:09:32.209 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:32.209 --rc genhtml_branch_coverage=1 00:09:32.209 --rc genhtml_function_coverage=1 00:09:32.209 --rc genhtml_legend=1 00:09:32.209 --rc geninfo_all_blocks=1 00:09:32.209 --rc geninfo_unexecuted_blocks=1 00:09:32.209 00:09:32.209 ' 00:09:32.209 04:57:10 nvme_scc -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 
00:09:32.209 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:32.209 --rc genhtml_branch_coverage=1 00:09:32.209 --rc genhtml_function_coverage=1 00:09:32.209 --rc genhtml_legend=1 00:09:32.209 --rc geninfo_all_blocks=1 00:09:32.209 --rc geninfo_unexecuted_blocks=1 00:09:32.209 00:09:32.209 ' 00:09:32.209 04:57:10 nvme_scc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:09:32.209 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:32.209 --rc genhtml_branch_coverage=1 00:09:32.209 --rc genhtml_function_coverage=1 00:09:32.210 --rc genhtml_legend=1 00:09:32.210 --rc geninfo_all_blocks=1 00:09:32.210 --rc geninfo_unexecuted_blocks=1 00:09:32.210 00:09:32.210 ' 00:09:32.210 04:57:10 nvme_scc -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:09:32.210 04:57:10 nvme_scc -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:09:32.210 04:57:10 nvme_scc -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:09:32.210 04:57:10 nvme_scc -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:09:32.210 04:57:10 nvme_scc -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:32.210 04:57:10 nvme_scc -- scripts/common.sh@15 -- # shopt -s extglob 00:09:32.210 04:57:10 nvme_scc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:32.210 04:57:10 nvme_scc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:32.210 04:57:10 nvme_scc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:32.210 04:57:10 nvme_scc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:32.210 04:57:10 nvme_scc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:32.210 04:57:10 nvme_scc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:32.210 04:57:10 nvme_scc -- paths/export.sh@5 -- # export PATH 00:09:32.210 04:57:10 nvme_scc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
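[Note: the lt/cmp_versions walk traced above — run identically at the start of both bdev_nvme_reset_stuck_adm_cmd and this nvme_scc test — decides whether the installed lcov predates 2.x, which changes the option names exported in LCOV_OPTS. A sketch condensed to the '<' path the trace exercises; the real cmp_versions dispatches on an op argument:

lt() { cmp_versions "$1" '<' "$2"; }

cmp_versions() {
    local IFS=.-: v
    local -a ver1 ver2
    read -ra ver1 <<< "$1"
    read -ra ver2 <<< "$3"
    # Walk components left to right, padding the shorter version with zeros.
    for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
        (( 10#${ver1[v]:-0} < 10#${ver2[v]:-0} )) && return 0
        (( 10#${ver1[v]:-0} > 10#${ver2[v]:-0} )) && return 1
    done
    return 1   # equal versions are not strictly '<'
}

lt 1.15 2 && echo "lcov < 2"   # true here: 1 < 2 decides it at the first component]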
00:09:32.210 04:57:10 nvme_scc -- nvme/functions.sh@10 -- # ctrls=() 00:09:32.210 04:57:10 nvme_scc -- nvme/functions.sh@10 -- # declare -A ctrls 00:09:32.210 04:57:10 nvme_scc -- nvme/functions.sh@11 -- # nvmes=() 00:09:32.210 04:57:10 nvme_scc -- nvme/functions.sh@11 -- # declare -A nvmes 00:09:32.210 04:57:10 nvme_scc -- nvme/functions.sh@12 -- # bdfs=() 00:09:32.210 04:57:10 nvme_scc -- nvme/functions.sh@12 -- # declare -A bdfs 00:09:32.210 04:57:10 nvme_scc -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:09:32.210 04:57:10 nvme_scc -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:09:32.210 04:57:10 nvme_scc -- nvme/functions.sh@14 -- # nvme_name= 00:09:32.210 04:57:10 nvme_scc -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:32.210 04:57:10 nvme_scc -- nvme/nvme_scc.sh@12 -- # uname 00:09:32.210 04:57:10 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ Linux == Linux ]] 00:09:32.210 04:57:10 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ QEMU == QEMU ]] 00:09:32.210 04:57:10 nvme_scc -- nvme/nvme_scc.sh@14 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:09:32.472 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:32.472 Waiting for block devices as requested 00:09:32.765 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:09:32.765 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:09:32.765 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:09:33.026 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:09:38.321 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:09:38.321 04:57:16 nvme_scc -- nvme/nvme_scc.sh@16 -- # scan_nvme_ctrls 00:09:38.321 04:57:16 nvme_scc -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:09:38.321 04:57:16 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:38.321 04:57:16 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:09:38.321 04:57:16 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:11.0 00:09:38.321 04:57:16 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:11.0 00:09:38.321 04:57:16 nvme_scc -- scripts/common.sh@18 -- # local i 00:09:38.321 04:57:16 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:09:38.321 04:57:16 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:38.321 04:57:16 nvme_scc -- scripts/common.sh@27 -- # return 0 00:09:38.321 04:57:16 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:09:38.321 04:57:16 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:09:38.321 04:57:16 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:09:38.321 04:57:16 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:38.321 04:57:16 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:09:38.321 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.321 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.321 04:57:16 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:09:38.321 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:38.321 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.321 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.321 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:38.321 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:09:38.321 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 
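[Note: scan_nvme_ctrls above walks /sys/class/nvme/nvme*, resolves each controller's PCI address, then snapshots `nvme id-ctrl` into a global associative array, one eval per "field : value" line — which is exactly what the long register dump below is printing. A simplified sketch of that parse loop; quoting is looser than the real functions.sh, so values containing single quotes would need more care:

nvme_get() {
    local ref=$1 cmd=$2 dev=$3 reg val
    declare -gA "$ref=()"     # e.g. declare -gA nvme0=()
    while IFS=: read -r reg val; do
        [[ -n $reg && -n $val ]] || continue
        # Store each field, e.g. nvme0[vid]=0x1b36, nvme0[mdts]=7
        eval "${ref}[${reg//[[:space:]]/}]='${val# }'"
    done < <(/usr/local/src/nvme-cli/nvme "$cmd" "$dev")
}

nvme_get nvme0 id-ctrl /dev/nvme0
echo "${nvme0[vid]} ${nvme0[sn]}"   # -> 0x1b36 12341, matching the trace]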
00:09:38.321 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.321 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.321 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:38.321 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:09:38.321 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:09:38.321 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.321 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.321 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 00:09:38.321 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12341 "' 00:09:38.321 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sn]='12341 ' 00:09:38.321 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.321 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.321 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:38.321 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:09:38.321 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:09:38.321 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.321 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.321 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:38.321 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:09:38.321 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:09:38.321 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.321 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.321 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:38.321 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:09:38.321 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:09:38.321 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.321 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.321 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:38.321 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:09:38.321 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:09:38.321 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.321 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.321 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.321 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 00:09:38.321 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cmic]=0 00:09:38.321 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.321 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.321 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:38.321 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:09:38.321 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:09:38.321 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.321 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.321 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.321 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:09:38.321 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:09:38.321 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.321 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # 
read -r reg val 00:09:38.321 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:38.321 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:09:38.321 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:09:38.321 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.321 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.321 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.321 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:09:38.321 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:09:38.321 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.321 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.321 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.321 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:09:38.321 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:09:38.321 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.321 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.321 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:38.321 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:09:38.321 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:09:38.321 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.321 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.321 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:09:38.321 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"' 00:09:38.321 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000 00:09:38.321 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.321 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.321 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.321 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:09:38.321 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:09:38.321 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.321 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.321 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:38.321 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:09:38.321 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:09:38.321 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.321 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.321 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:38.321 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:38.321 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:09:38.321 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.321 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.321 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.321 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:09:38.321 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:09:38.321 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.321 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.321 04:57:16 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.321 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:09:38.321 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:09:38.321 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.321 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.322 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.322 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:09:38.322 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:09:38.322 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.322 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.322 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.322 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:09:38.322 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:09:38.322 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.322 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.322 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.322 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:09:38.322 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:09:38.322 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.322 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.322 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.322 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:09:38.322 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:09:38.322 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.322 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.322 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:38.322 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:09:38.322 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:09:38.322 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.322 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.322 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:38.322 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:09:38.322 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:09:38.322 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.322 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.322 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:38.322 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:09:38.322 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:09:38.322 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.322 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.322 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:38.322 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:09:38.322 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:09:38.322 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.322 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.322 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:38.322 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:09:38.322 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 
00:09:38.322 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.322 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.322 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.322 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:09:38.322 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:09:38.322 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.322 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.322 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.322 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:09:38.322 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:09:38.322 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.322 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.322 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.322 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:09:38.322 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:09:38.322 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.322 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.322 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.322 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:09:38.322 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:09:38.322 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.322 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.322 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:38.322 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:09:38.322 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:09:38.322 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.322 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.322 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:38.322 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:09:38.322 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:09:38.322 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.322 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.322 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.322 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:09:38.322 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:09:38.322 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.322 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.322 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.322 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:09:38.322 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:09:38.322 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.322 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.322 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.322 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:09:38.322 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:09:38.322 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.322 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.322 04:57:16 nvme_scc -- nvme/functions.sh@22 
-- # [[ -n 0 ]] 00:09:38.322 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:09:38.322 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:09:38.322 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.322 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.322 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.322 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:09:38.322 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:09:38.322 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.322 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.322 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.322 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:09:38.322 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:09:38.322 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.322 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.322 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.322 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:09:38.322 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:09:38.322 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.322 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.322 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.322 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:09:38.322 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:09:38.322 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.322 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.322 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.322 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:09:38.322 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:09:38.322 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.322 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.322 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.322 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:09:38.322 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:09:38.322 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.322 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.322 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.322 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:09:38.322 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:09:38.322 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.322 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.322 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.322 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:09:38.322 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:09:38.322 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.322 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.322 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.322 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:09:38.322 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:09:38.322 04:57:16 nvme_scc 
-- nvme/functions.sh@21 -- # IFS=: 00:09:38.322 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.322 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.322 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:09:38.322 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:09:38.322 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.322 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.322 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.322 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:09:38.322 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:09:38.322 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.322 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.322 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.322 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:09:38.322 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:09:38.322 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.322 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.322 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.322 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:09:38.323 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:09:38.323 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.323 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.323 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.323 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"' 00:09:38.323 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0 00:09:38.323 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.323 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.323 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.323 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:09:38.323 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:09:38.323 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.323 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.323 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.323 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:09:38.323 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:09:38.323 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.323 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.323 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.323 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:09:38.323 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:09:38.323 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.323 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.323 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.323 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:09:38.323 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:09:38.323 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.323 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.323 04:57:16 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.323 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:09:38.323 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:09:38.323 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.323 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.323 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.323 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:09:38.323 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:09:38.323 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.323 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.323 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.323 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:09:38.323 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:09:38.323 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.323 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.323 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:38.323 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:09:38.323 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:09:38.323 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.323 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.323 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:38.323 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:09:38.323 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:09:38.323 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.323 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.323 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.323 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:09:38.323 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:09:38.323 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.323 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.323 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:38.323 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:09:38.323 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:09:38.323 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.323 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.323 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:38.323 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:09:38.323 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:09:38.323 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.323 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.323 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.323 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:09:38.323 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:09:38.323 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.323 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.323 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.323 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:09:38.323 04:57:16 nvme_scc -- nvme/functions.sh@23 
-- # nvme0[fna]=0 00:09:38.323 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.323 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.323 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:38.323 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:09:38.323 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:09:38.323 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.323 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.323 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.323 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 00:09:38.323 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:09:38.323 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.323 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.323 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.323 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:09:38.323 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:09:38.323 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.323 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.323 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.323 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:09:38.323 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:09:38.323 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.323 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.323 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.323 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:09:38.323 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:09:38.323 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.323 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.323 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.323 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:09:38.323 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:09:38.323 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.323 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.323 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:38.323 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:09:38.323 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:09:38.323 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.323 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.323 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:38.323 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:09:38.323 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:09:38.323 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.323 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.323 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.323 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:09:38.323 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:09:38.323 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.323 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.323 04:57:16 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.323 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:09:38.323 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:09:38.323 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.323 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.323 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.323 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:09:38.323 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:09:38.323 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.323 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.323 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12341 ]] 00:09:38.323 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:12341"' 00:09:38.323 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12341 00:09:38.323 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.323 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.323 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.323 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:09:38.323 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:09:38.323 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.323 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.323 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.323 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:09:38.323 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:09:38.323 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.323 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.323 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.323 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:09:38.323 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:09:38.323 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.324 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.324 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.324 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:09:38.324 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:09:38.324 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.324 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.324 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.324 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:09:38.324 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:09:38.324 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.324 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.324 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.324 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:09:38.324 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:09:38.324 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.324 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.324 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:38.324 04:57:16 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:38.324 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:38.324 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.324 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.324 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:38.324 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:38.324 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:38.324 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.324 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.324 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:38.324 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:09:38.324 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:09:38.324 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.324 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.324 04:57:16 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:09:38.324 04:57:16 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:09:38.324 04:57:16 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]] 00:09:38.324 04:57:16 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme0n1 00:09:38.324 04:57:16 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1 00:09:38.324 04:57:16 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val 00:09:38.324 04:57:16 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:38.324 04:57:16 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()' 00:09:38.324 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.324 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.324 04:57:16 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1 00:09:38.324 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:38.324 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.324 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.324 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:38.324 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsze]="0x140000"' 00:09:38.324 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x140000 00:09:38.324 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.324 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.324 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:38.324 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[ncap]="0x140000"' 00:09:38.324 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x140000 00:09:38.324 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.324 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.324 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:38.324 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nuse]="0x140000"' 00:09:38.324 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x140000 00:09:38.324 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.324 04:57:16 nvme_scc -- nvme/functions.sh@21 -- 
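[Annotation] Everything above is one parse loop repeated per field: nvme_get runs nvme-cli's id-ctrl, splits each "field : value" line on ':' via IFS, skips lines with an empty value, and evals the pair into a global associative array (nvme0 here), before doing the same per namespace with id-ns. A minimal sketch of that pattern, assuming the nvme-cli path shown in the trace; the whitespace-trimming details are guessed for illustration, not copied from SPDK's functions.sh:

    nvme_get() {
        local ref=$1 reg val
        shift
        local -gA "$ref=()"                 # e.g. declare -gA nvme0=()
        while IFS=: read -r reg val; do
            reg=${reg//[[:space:]]/}        # "sn        " -> "sn" (illustrative)
            val=${val# }                    # drop the single space after ':'
            [[ -n $val ]] || continue       # header/blank lines have no value
            eval "${ref}[$reg]=\"\$val\""   # nvme0[sn]="12341 "
        done < <(/usr/local/src/nvme-cli/nvme "$@")
    }

    # Usage mirroring the trace (needs real hardware, so left commented):
    # nvme_get nvme0 id-ctrl /dev/nvme0
    # echo "${nvme0[mdts]}"    # -> 7 in this run
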
# read -r reg val 00:09:38.324 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:38.324 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsfeat]="0x14"' 00:09:38.324 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0x14 00:09:38.324 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.324 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.324 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:38.324 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nlbaf]="7"' 00:09:38.324 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nlbaf]=7 00:09:38.324 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.324 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.324 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:38.324 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[flbas]="0x4"' 00:09:38.324 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[flbas]=0x4 00:09:38.324 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.324 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.324 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:38.324 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mc]="0x3"' 00:09:38.324 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mc]=0x3 00:09:38.324 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.324 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.324 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:38.324 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dpc]="0x1f"' 00:09:38.324 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dpc]=0x1f 00:09:38.324 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.324 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.324 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.324 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dps]="0"' 00:09:38.324 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dps]=0 00:09:38.324 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.324 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.324 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.324 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nmic]="0"' 00:09:38.324 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nmic]=0 00:09:38.324 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.324 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.324 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.324 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[rescap]="0"' 00:09:38.324 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[rescap]=0 00:09:38.324 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.324 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.324 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.324 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[fpi]="0"' 00:09:38.324 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[fpi]=0 00:09:38.324 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.324 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.324 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:38.324 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme0n1[dlfeat]="1"' 00:09:38.324 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dlfeat]=1 00:09:38.324 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.324 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.324 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.324 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawun]="0"' 00:09:38.324 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nawun]=0 00:09:38.324 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.324 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.324 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.324 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawupf]="0"' 00:09:38.324 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nawupf]=0 00:09:38.324 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.324 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.324 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.324 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nacwu]="0"' 00:09:38.324 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nacwu]=0 00:09:38.324 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.324 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.324 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.324 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabsn]="0"' 00:09:38.324 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabsn]=0 00:09:38.324 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.324 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.324 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.324 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabo]="0"' 00:09:38.324 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabo]=0 00:09:38.324 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.324 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.324 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.324 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabspf]="0"' 00:09:38.324 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabspf]=0 00:09:38.324 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.324 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.325 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.325 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[noiob]="0"' 00:09:38.325 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[noiob]=0 00:09:38.325 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.325 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.325 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.325 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmcap]="0"' 00:09:38.325 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nvmcap]=0 00:09:38.325 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.325 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.325 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.325 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwg]="0"' 00:09:38.325 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npwg]=0 00:09:38.325 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
00:09:38.325 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.325 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.325 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwa]="0"' 00:09:38.325 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npwa]=0 00:09:38.325 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.325 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.325 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.325 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npdg]="0"' 00:09:38.325 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npdg]=0 00:09:38.325 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.325 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.325 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.325 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npda]="0"' 00:09:38.325 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npda]=0 00:09:38.325 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.325 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.325 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.325 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nows]="0"' 00:09:38.325 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nows]=0 00:09:38.325 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.325 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.325 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:38.325 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mssrl]="128"' 00:09:38.325 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mssrl]=128 00:09:38.325 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.325 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.325 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:38.325 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mcl]="128"' 00:09:38.325 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mcl]=128 00:09:38.325 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.325 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.325 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:38.325 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[msrc]="127"' 00:09:38.325 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[msrc]=127 00:09:38.325 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.325 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.325 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.325 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nulbaf]="0"' 00:09:38.325 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0 00:09:38.325 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.325 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.325 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.325 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[anagrpid]="0"' 00:09:38.325 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0 00:09:38.325 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.325 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.325 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
00:09:38.325 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsattr]="0"' 00:09:38.325 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0 00:09:38.325 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.325 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.325 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.325 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmsetid]="0"' 00:09:38.325 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0 00:09:38.325 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.325 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.325 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.325 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[endgid]="0"' 00:09:38.325 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[endgid]=0 00:09:38.325 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.325 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.325 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:38.325 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nguid]="00000000000000000000000000000000"' 00:09:38.325 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nguid]=00000000000000000000000000000000 00:09:38.325 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.325 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.325 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:38.325 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[eui64]="0000000000000000"' 00:09:38.325 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[eui64]=0000000000000000 00:09:38.325 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.325 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.325 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:38.325 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:38.325 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:38.325 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.325 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.325 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:38.325 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:38.325 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:38.325 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.325 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.325 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:38.325 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:38.325 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:38.325 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.325 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.325 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:38.325 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:38.325 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:38.325 04:57:16 
nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.325 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.325 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:38.325 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:38.325 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:38.325 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.325 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.325 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:38.325 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:38.325 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:38.325 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.325 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.325 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:38.325 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:38.325 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:38.325 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.325 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.325 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:38.325 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:38.325 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:38.325 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.325 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.325 04:57:16 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1 00:09:38.325 04:57:16 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 00:09:38.325 04:57:16 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:09:38.325 04:57:16 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:11.0 00:09:38.325 04:57:16 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:09:38.325 04:57:16 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:38.325 04:57:16 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]] 00:09:38.325 04:57:16 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:10.0 00:09:38.325 04:57:16 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:10.0 00:09:38.325 04:57:16 nvme_scc -- scripts/common.sh@18 -- # local i 00:09:38.325 04:57:16 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:09:38.326 04:57:16 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:38.326 04:57:16 nvme_scc -- scripts/common.sh@27 -- # return 0 00:09:38.326 04:57:16 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme1 00:09:38.326 04:57:16 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1 00:09:38.326 04:57:16 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme1 reg val 00:09:38.326 04:57:16 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:38.326 04:57:16 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme1=()' 00:09:38.326 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.326 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.326 04:57:16 nvme_scc -- 
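[Annotation] At this point the trace has finished cataloguing nvme0: its namespace is linked into _ctrl_ns, and the controller is recorded in the global tables ctrls, nvmes, bdfs, and ordered_ctrls before the loop moves on to nvme1 behind the pci_can_use gate (whose filter list is empty in this run, so it returns 0). A sketch of that enumeration step; the PCI_ALLOWED/PCI_BLOCKED variable names and the sysfs BDF lookup are assumptions, only the table names come from the trace:

    declare -A ctrls nvmes bdfs
    declare -a ordered_ctrls

    for ctrl in /sys/class/nvme/nvme*; do
        [[ -e $ctrl ]] || continue
        ctrl_dev=${ctrl##*/}                              # e.g. nvme0
        pci=$(basename "$(readlink -f "$ctrl/device")")   # assumed BDF lookup
        # pci_can_use-style gate: allow unless a filter list says otherwise
        [[ -n ${PCI_ALLOWED:-} && ! " $PCI_ALLOWED " =~ " $pci " ]] && continue
        [[ " ${PCI_BLOCKED:-} " =~ " $pci " ]] && continue

        ctrls["$ctrl_dev"]=$ctrl_dev
        nvmes["$ctrl_dev"]=${ctrl_dev}_ns          # name of its namespace map
        bdfs["$ctrl_dev"]=$pci                     # 0000:00:11.0 for nvme0 here
        ordered_ctrls[${ctrl_dev/nvme/}]=$ctrl_dev
    done
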
nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1 00:09:38.326 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:38.326 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.326 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.326 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:38.326 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vid]="0x1b36"' 00:09:38.326 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36 00:09:38.326 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.326 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.326 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:38.326 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ssvid]="0x1af4"' 00:09:38.326 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4 00:09:38.326 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.326 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.326 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:09:38.326 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sn]="12340 "' 00:09:38.326 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sn]='12340 ' 00:09:38.326 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.326 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.326 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:38.326 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl "' 00:09:38.326 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl ' 00:09:38.326 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.326 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.326 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:38.326 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fr]="8.0.0 "' 00:09:38.326 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 ' 00:09:38.326 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.326 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.326 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:38.326 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rab]="6"' 00:09:38.326 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rab]=6 00:09:38.326 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.326 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.326 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:38.326 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ieee]="525400"' 00:09:38.326 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ieee]=525400 00:09:38.326 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.326 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.326 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.326 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cmic]="0"' 00:09:38.326 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cmic]=0 00:09:38.326 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.326 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.326 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:38.326 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mdts]="7"' 00:09:38.326 04:57:16 nvme_scc -- 
nvme/functions.sh@23 -- # nvme1[mdts]=7 00:09:38.326 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.326 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.326 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.326 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cntlid]="0"' 00:09:38.326 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cntlid]=0 00:09:38.326 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.326 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.326 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:38.326 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ver]="0x10400"' 00:09:38.326 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400 00:09:38.326 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.326 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.326 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.326 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3r]="0"' 00:09:38.326 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0 00:09:38.326 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.326 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.326 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.326 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3e]="0"' 00:09:38.326 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0 00:09:38.326 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.326 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.326 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:38.326 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oaes]="0x100"' 00:09:38.326 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100 00:09:38.326 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.326 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.326 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:09:38.326 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ctratt]="0x8000"' 00:09:38.326 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000 00:09:38.326 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.326 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.326 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.326 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rrls]="0"' 00:09:38.326 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rrls]=0 00:09:38.326 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.326 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.326 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:38.326 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cntrltype]="1"' 00:09:38.326 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1 00:09:38.326 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.326 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.326 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:38.326 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:38.326 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000 00:09:38.326 
04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.326 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.326 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.326 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt1]="0"' 00:09:38.326 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt1]=0 00:09:38.326 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.326 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.326 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.326 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt2]="0"' 00:09:38.326 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt2]=0 00:09:38.326 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.326 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.326 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.326 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt3]="0"' 00:09:38.326 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt3]=0 00:09:38.326 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.326 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.326 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.326 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nvmsr]="0"' 00:09:38.326 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0 00:09:38.326 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.326 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.326 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.326 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vwci]="0"' 00:09:38.326 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vwci]=0 00:09:38.327 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.327 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.327 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.327 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mec]="0"' 00:09:38.327 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mec]=0 00:09:38.327 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.327 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.327 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:38.327 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oacs]="0x12a"' 00:09:38.327 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a 00:09:38.327 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.327 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.327 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:38.327 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[acl]="3"' 00:09:38.327 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme1[acl]=3 00:09:38.327 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.327 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.327 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:38.327 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[aerl]="3"' 00:09:38.327 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme1[aerl]=3 00:09:38.327 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.327 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.327 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 
00:09:38.327 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[frmw]="0x3"' 00:09:38.327 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3 00:09:38.327 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.327 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.327 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:38.327 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[lpa]="0x7"' 00:09:38.327 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7 00:09:38.327 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.327 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.327 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.327 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[elpe]="0"' 00:09:38.327 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme1[elpe]=0 00:09:38.327 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.327 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.327 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.327 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[npss]="0"' 00:09:38.327 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme1[npss]=0 00:09:38.327 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.327 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.327 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.327 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[avscc]="0"' 00:09:38.327 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme1[avscc]=0 00:09:38.327 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.327 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.327 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.327 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[apsta]="0"' 00:09:38.327 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme1[apsta]=0 00:09:38.327 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.327 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.327 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:38.327 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[wctemp]="343"' 00:09:38.327 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme1[wctemp]=343 00:09:38.327 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.327 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.327 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:38.327 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cctemp]="373"' 00:09:38.327 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cctemp]=373 00:09:38.327 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.327 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.327 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.327 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mtfa]="0"' 00:09:38.327 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mtfa]=0 00:09:38.327 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.327 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.327 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.327 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmpre]="0"' 00:09:38.327 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmpre]=0 00:09:38.327 04:57:16 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:09:38.327 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.327 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.327 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmmin]="0"' 00:09:38.327 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmmin]=0 00:09:38.327 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.327 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.327 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.327 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[tnvmcap]="0"' 00:09:38.327 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme1[tnvmcap]=0 00:09:38.327 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.327 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.327 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.327 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[unvmcap]="0"' 00:09:38.327 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0 00:09:38.327 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.327 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.327 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.327 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rpmbs]="0"' 00:09:38.327 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0 00:09:38.327 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.327 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.327 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.327 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[edstt]="0"' 00:09:38.327 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme1[edstt]=0 00:09:38.327 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.327 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.327 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.327 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[dsto]="0"' 00:09:38.327 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme1[dsto]=0 00:09:38.327 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.327 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.327 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.327 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fwug]="0"' 00:09:38.327 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fwug]=0 00:09:38.327 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.327 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.327 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.327 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[kas]="0"' 00:09:38.327 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme1[kas]=0 00:09:38.327 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.327 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.327 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.327 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hctma]="0"' 00:09:38.327 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hctma]=0 00:09:38.327 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.327 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.327 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.327 04:57:16 
nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mntmt]="0"' 00:09:38.327 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mntmt]=0 00:09:38.327 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.327 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.327 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.327 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mxtmt]="0"' 00:09:38.327 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0 00:09:38.327 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.327 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.327 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.327 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sanicap]="0"' 00:09:38.327 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sanicap]=0 00:09:38.327 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.327 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.327 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.327 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmminds]="0"' 00:09:38.327 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmminds]=0 00:09:38.327 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.327 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.327 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.327 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmmaxd]="0"' 00:09:38.327 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0 00:09:38.327 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.327 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.327 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.327 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nsetidmax]="0"' 00:09:38.327 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0 00:09:38.327 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.327 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.327 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.328 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[endgidmax]="0"' 00:09:38.328 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0 00:09:38.328 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.328 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.328 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.328 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anatt]="0"' 00:09:38.328 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anatt]=0 00:09:38.328 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.328 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.328 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.328 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anacap]="0"' 00:09:38.328 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anacap]=0 00:09:38.328 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.328 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.328 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.328 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anagrpmax]="0"' 00:09:38.328 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0 00:09:38.328 04:57:16 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:09:38.328 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.328 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.328 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nanagrpid]="0"' 00:09:38.328 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0 00:09:38.328 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.328 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.328 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.328 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[pels]="0"' 00:09:38.328 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme1[pels]=0 00:09:38.328 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.328 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.328 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.328 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[domainid]="0"' 00:09:38.328 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme1[domainid]=0 00:09:38.328 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.328 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.328 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.328 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[megcap]="0"' 00:09:38.328 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme1[megcap]=0 00:09:38.328 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.328 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.328 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:38.328 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sqes]="0x66"' 00:09:38.328 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66 00:09:38.328 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.328 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.328 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:38.328 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cqes]="0x44"' 00:09:38.328 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44 00:09:38.328 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.328 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.328 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.328 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxcmd]="0"' 00:09:38.328 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0 00:09:38.328 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.328 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.328 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:38.328 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nn]="256"' 00:09:38.328 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nn]=256 00:09:38.328 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.328 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.328 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:38.328 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oncs]="0x15d"' 00:09:38.328 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d 00:09:38.328 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.328 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.328 04:57:16 nvme_scc -- nvme/functions.sh@22 -- 
# [[ -n 0 ]] 00:09:38.328 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fuses]="0"' 00:09:38.328 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fuses]=0 00:09:38.328 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.328 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.328 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.328 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fna]="0"' 00:09:38.328 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fna]=0 00:09:38.328 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.328 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.328 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:38.328 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vwc]="0x7"' 00:09:38.328 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7 00:09:38.328 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.328 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.328 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.328 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[awun]="0"' 00:09:38.328 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme1[awun]=0 00:09:38.328 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.328 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.328 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.328 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[awupf]="0"' 00:09:38.328 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme1[awupf]=0 00:09:38.328 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.328 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.328 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.328 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[icsvscc]="0"' 00:09:38.328 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0 00:09:38.328 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.328 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.328 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.328 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nwpc]="0"' 00:09:38.328 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nwpc]=0 00:09:38.328 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.328 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.328 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.328 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[acwu]="0"' 00:09:38.328 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme1[acwu]=0 00:09:38.328 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.328 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.328 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:38.328 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ocfs]="0x3"' 00:09:38.328 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3 00:09:38.328 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.328 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.328 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:38.328 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sgls]="0x1"' 00:09:38.328 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1 00:09:38.328 04:57:16 nvme_scc 
-- nvme/functions.sh@21 -- # IFS=: 00:09:38.328 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.328 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.328 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mnan]="0"' 00:09:38.328 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mnan]=0 00:09:38.328 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.328 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.328 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.328 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxdna]="0"' 00:09:38.328 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxdna]=0 00:09:38.328 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.328 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.328 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.328 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxcna]="0"' 00:09:38.328 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxcna]=0 00:09:38.328 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.328 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.328 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:09:38.328 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[subnqn]="nqn.2019-08.org.qemu:12340"' 00:09:38.328 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12340 00:09:38.328 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.328 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.328 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.328 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ioccsz]="0"' 00:09:38.328 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0 00:09:38.328 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.328 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.328 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.328 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[iorcsz]="0"' 00:09:38.328 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0 00:09:38.328 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.328 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.328 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.328 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[icdoff]="0"' 00:09:38.328 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme1[icdoff]=0 00:09:38.328 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.328 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.328 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.329 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fcatt]="0"' 00:09:38.329 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fcatt]=0 00:09:38.329 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.329 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.329 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.329 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[msdbd]="0"' 00:09:38.329 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme1[msdbd]=0 00:09:38.329 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.329 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 
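The block above is one pattern repeated per field: nvme_get (functions.sh@16-23) pipes `/usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1` through an `IFS=: read -r reg val` loop, skips entries with an empty value, and evals each pair into a global associative array (nvme1). A minimal standalone sketch of that loop, assuming a stock nvme-cli on PATH; parse_id_output is a hypothetical stand-in for nvme_get, not SPDK's name:

    #!/usr/bin/env bash
    # Minimal sketch of the capture loop traced above (functions.sh@16-23).
    # parse_id_output is a hypothetical stand-in for SPDK's nvme_get.
    parse_id_output() {
        local ref=$1 dev=$2 reg val
        local -gA "$ref=()"                 # global map, e.g. nvme1=()
        while IFS=: read -r reg val; do
            [[ -n $val ]] || continue       # skip banner and blank lines
            reg=${reg//[[:space:]]/}        # trim padding around the key
            val=${val# }                    # drop the single leading space
            eval "${ref}[${reg}]=\"${val}\""   # e.g. nvme1[oncs]="0x15d"
        done < <(nvme id-ctrl "$dev")
    }

    parse_id_output nvme1 /dev/nvme1
    echo "${nvme1[oncs]}"                   # 0x15d, as captured above

Quoting the value in the eval is what keeps multi-word fields intact, e.g. mn ("QEMU NVMe Ctrl ") and the ps0 power-state line, whose embedded colons land in val because read assigns the remainder of the line to its last variable.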
00:09:38.329 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.329 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ofcs]="0"' 00:09:38.329 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ofcs]=0 00:09:38.329 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.329 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.329 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:38.329 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:38.329 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:38.329 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.329 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.329 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:38.329 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:38.329 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:38.329 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.329 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.329 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:38.329 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[active_power_workload]="-"' 00:09:38.329 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=- 00:09:38.329 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.329 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.329 04:57:16 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns 00:09:38.329 04:57:16 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:09:38.329 04:57:16 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]] 00:09:38.329 04:57:16 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme1n1 00:09:38.329 04:57:16 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1 00:09:38.329 04:57:16 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme1n1 reg val 00:09:38.329 04:57:16 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:38.329 04:57:16 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()' 00:09:38.329 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.329 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.329 04:57:16 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1 00:09:38.329 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:38.329 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.329 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.329 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:38.329 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsze]="0x17a17a"' 00:09:38.329 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsze]=0x17a17a 00:09:38.329 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.329 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.329 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:38.329 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[ncap]="0x17a17a"' 00:09:38.329 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # 
nvme1n1[ncap]=0x17a17a 00:09:38.329 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.329 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.329 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:38.329 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nuse]="0x17a17a"' 00:09:38.329 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nuse]=0x17a17a 00:09:38.329 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.329 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.329 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:38.329 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsfeat]="0x14"' 00:09:38.329 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsfeat]=0x14 00:09:38.329 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.329 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.329 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:38.329 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nlbaf]="7"' 00:09:38.329 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nlbaf]=7 00:09:38.329 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.329 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.329 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:38.329 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[flbas]="0x7"' 00:09:38.329 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[flbas]=0x7 00:09:38.329 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.329 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.329 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:38.329 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mc]="0x3"' 00:09:38.329 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mc]=0x3 00:09:38.329 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.329 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.329 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:38.329 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dpc]="0x1f"' 00:09:38.329 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dpc]=0x1f 00:09:38.329 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.329 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.329 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.329 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dps]="0"' 00:09:38.329 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dps]=0 00:09:38.329 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.329 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.329 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.329 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nmic]="0"' 00:09:38.329 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nmic]=0 00:09:38.329 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.329 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.329 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.329 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[rescap]="0"' 00:09:38.329 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[rescap]=0 00:09:38.329 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.329 04:57:16 nvme_scc -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:38.329 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.329 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[fpi]="0"' 00:09:38.329 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[fpi]=0 00:09:38.329 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.329 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.329 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:38.329 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dlfeat]="1"' 00:09:38.329 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dlfeat]=1 00:09:38.329 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.329 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.329 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.329 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawun]="0"' 00:09:38.329 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nawun]=0 00:09:38.329 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.329 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.329 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.329 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawupf]="0"' 00:09:38.329 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nawupf]=0 00:09:38.329 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.329 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.329 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.329 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nacwu]="0"' 00:09:38.329 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nacwu]=0 00:09:38.329 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.329 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.329 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.329 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabsn]="0"' 00:09:38.329 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabsn]=0 00:09:38.329 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.329 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.329 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.329 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabo]="0"' 00:09:38.329 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabo]=0 00:09:38.329 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.329 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.329 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.329 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabspf]="0"' 00:09:38.329 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabspf]=0 00:09:38.329 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.329 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.329 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.329 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[noiob]="0"' 00:09:38.329 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[noiob]=0 00:09:38.329 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.329 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.329 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.330 04:57:16 nvme_scc -- nvme/functions.sh@23 -- 
# eval 'nvme1n1[nvmcap]="0"' 00:09:38.330 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nvmcap]=0 00:09:38.330 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.330 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.330 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.330 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwg]="0"' 00:09:38.330 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npwg]=0 00:09:38.330 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.330 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.330 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.330 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwa]="0"' 00:09:38.330 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npwa]=0 00:09:38.330 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.330 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.330 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.330 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npdg]="0"' 00:09:38.330 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npdg]=0 00:09:38.330 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.330 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.330 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.330 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npda]="0"' 00:09:38.330 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npda]=0 00:09:38.330 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.330 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.330 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.330 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nows]="0"' 00:09:38.330 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nows]=0 00:09:38.330 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.330 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.330 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:38.330 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mssrl]="128"' 00:09:38.330 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mssrl]=128 00:09:38.330 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.330 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.330 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:38.330 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mcl]="128"' 00:09:38.330 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mcl]=128 00:09:38.330 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.330 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.330 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:38.330 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[msrc]="127"' 00:09:38.330 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[msrc]=127 00:09:38.330 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.330 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.330 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.330 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nulbaf]="0"' 00:09:38.330 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nulbaf]=0 00:09:38.330 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # 
IFS=: 00:09:38.330 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.330 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.330 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[anagrpid]="0"' 00:09:38.330 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[anagrpid]=0 00:09:38.330 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.330 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.330 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.330 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsattr]="0"' 00:09:38.330 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsattr]=0 00:09:38.330 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.330 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.330 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.330 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmsetid]="0"' 00:09:38.330 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nvmsetid]=0 00:09:38.330 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.330 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.330 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.330 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[endgid]="0"' 00:09:38.330 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[endgid]=0 00:09:38.330 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.330 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.330 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:38.330 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nguid]="00000000000000000000000000000000"' 00:09:38.330 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nguid]=00000000000000000000000000000000 00:09:38.330 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.330 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.330 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:38.330 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[eui64]="0000000000000000"' 00:09:38.330 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[eui64]=0000000000000000 00:09:38.330 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.330 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.330 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:38.330 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:38.330 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:38.330 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.330 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.330 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:38.330 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:38.330 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:38.330 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.330 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.330 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:38.330 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:38.330 
04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:38.330 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.330 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.330 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:38.330 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:38.330 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:38.330 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.330 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.330 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:09:38.330 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:09:38.330 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:09:38.330 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.330 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.330 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:38.330 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:38.330 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:38.330 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.330 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.330 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:38.330 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:38.330 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:38.330 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.330 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.330 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:09:38.330 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:09:38.330 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:09:38.330 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.330 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.330 04:57:16 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1 00:09:38.330 04:57:16 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1 00:09:38.330 04:57:16 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns 00:09:38.330 04:57:16 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:10.0 00:09:38.330 04:57:16 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1 00:09:38.330 04:57:16 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:38.330 04:57:16 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]] 00:09:38.330 04:57:16 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:12.0 00:09:38.330 04:57:16 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:12.0 00:09:38.330 04:57:16 nvme_scc -- scripts/common.sh@18 -- # local i 00:09:38.330 04:57:16 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:09:38.330 04:57:16 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:38.330 04:57:16 nvme_scc -- scripts/common.sh@27 -- # return 0 00:09:38.330 04:57:16 nvme_scc -- 
nvme/functions.sh@51 -- # ctrl_dev=nvme2 00:09:38.330 04:57:16 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2 00:09:38.330 04:57:16 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2 reg val 00:09:38.330 04:57:16 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:38.330 04:57:16 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2=()' 00:09:38.330 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.330 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.331 04:57:16 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2 00:09:38.331 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:38.331 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.331 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.331 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:38.331 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vid]="0x1b36"' 00:09:38.331 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vid]=0x1b36 00:09:38.331 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.331 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.331 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:38.331 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ssvid]="0x1af4"' 00:09:38.331 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ssvid]=0x1af4 00:09:38.331 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.331 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.331 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12342 ]] 00:09:38.331 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sn]="12342 "' 00:09:38.331 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sn]='12342 ' 00:09:38.331 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.331 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.331 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:38.331 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mn]="QEMU NVMe Ctrl "' 00:09:38.331 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mn]='QEMU NVMe Ctrl ' 00:09:38.331 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.331 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.331 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:38.331 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fr]="8.0.0 "' 00:09:38.331 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fr]='8.0.0 ' 00:09:38.331 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.331 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.331 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:38.331 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rab]="6"' 00:09:38.331 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rab]=6 00:09:38.331 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.331 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.331 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:38.331 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ieee]="525400"' 00:09:38.331 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ieee]=525400 00:09:38.331 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.331 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.331 04:57:16 
nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.331 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cmic]="0"' 00:09:38.331 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cmic]=0 00:09:38.331 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.331 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.331 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:38.331 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mdts]="7"' 00:09:38.331 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mdts]=7 00:09:38.331 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.331 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.331 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.331 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cntlid]="0"' 00:09:38.331 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cntlid]=0 00:09:38.331 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.331 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.331 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:38.331 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ver]="0x10400"' 00:09:38.331 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ver]=0x10400 00:09:38.331 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.331 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.331 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.331 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3r]="0"' 00:09:38.331 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rtd3r]=0 00:09:38.331 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.331 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.331 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.331 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3e]="0"' 00:09:38.331 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rtd3e]=0 00:09:38.331 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.331 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.331 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:38.331 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oaes]="0x100"' 00:09:38.331 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oaes]=0x100 00:09:38.331 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.331 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.331 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:09:38.331 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ctratt]="0x8000"' 00:09:38.331 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ctratt]=0x8000 00:09:38.331 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.331 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.331 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.331 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rrls]="0"' 00:09:38.331 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rrls]=0 00:09:38.331 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.331 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.331 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:38.331 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cntrltype]="1"' 00:09:38.331 04:57:16 nvme_scc 
-- nvme/functions.sh@23 -- # nvme2[cntrltype]=1 00:09:38.331 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.331 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.331 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:38.331 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:38.331 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fguid]=00000000-0000-0000-0000-000000000000 00:09:38.331 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.331 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.331 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.331 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt1]="0"' 00:09:38.331 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt1]=0 00:09:38.331 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.331 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.331 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.331 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt2]="0"' 00:09:38.331 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt2]=0 00:09:38.331 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.331 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.331 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.331 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt3]="0"' 00:09:38.331 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt3]=0 00:09:38.331 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.331 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.331 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.331 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nvmsr]="0"' 00:09:38.331 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nvmsr]=0 00:09:38.331 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.331 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.331 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.331 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vwci]="0"' 00:09:38.331 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vwci]=0 00:09:38.331 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.331 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.331 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.331 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mec]="0"' 00:09:38.331 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mec]=0 00:09:38.331 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.331 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.331 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:38.331 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oacs]="0x12a"' 00:09:38.331 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oacs]=0x12a 00:09:38.331 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.331 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.331 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:38.331 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[acl]="3"' 00:09:38.331 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2[acl]=3 00:09:38.331 04:57:16 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:09:38.331 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.331 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:38.331 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[aerl]="3"' 00:09:38.331 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2[aerl]=3 00:09:38.331 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.332 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.332 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:38.332 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[frmw]="0x3"' 00:09:38.332 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2[frmw]=0x3 00:09:38.332 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.332 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.332 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:38.332 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[lpa]="0x7"' 00:09:38.332 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2[lpa]=0x7 00:09:38.332 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.332 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.332 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.332 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[elpe]="0"' 00:09:38.332 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2[elpe]=0 00:09:38.332 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.332 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.332 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.332 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[npss]="0"' 00:09:38.332 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2[npss]=0 00:09:38.332 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.332 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.332 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.332 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[avscc]="0"' 00:09:38.332 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2[avscc]=0 00:09:38.332 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.332 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.332 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.332 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[apsta]="0"' 00:09:38.332 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2[apsta]=0 00:09:38.332 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.332 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.332 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:38.332 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[wctemp]="343"' 00:09:38.332 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2[wctemp]=343 00:09:38.332 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.332 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.332 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:38.332 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cctemp]="373"' 00:09:38.332 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cctemp]=373 00:09:38.332 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.332 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.332 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
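A few entries up, after nvme1n1's id-ns fields, functions.sh@58-63 filed the parsed devices into the bookkeeping maps and the @47 loop advanced to nvme2; the empty `[[ =~ 0000:00:12.0 ]]` test at scripts/common.sh@21 is consistent with an empty allow/block list, so every BDF passes pci_can_use. A condensed sketch of that walk; the array names match the trace, but the body is simplified rather than SPDK's verbatim code, it reuses nvme_get and pci_can_use from the traced scripts, and reading the BDF from $ctrl/address is an assumption:

    # Condensed sketch of the controller walk (functions.sh@47-63).
    declare -A ctrls nvmes bdfs
    declare -a ordered_ctrls

    for ctrl in /sys/class/nvme/nvme*; do
        [[ -e $ctrl ]] || continue
        ctrl_dev=${ctrl##*/}               # nvme1, nvme2, ...
        pci=$(<"$ctrl/address")            # PCIe BDF, e.g. 0000:00:12.0 (assumed source)
        pci_can_use "$pci" || continue     # honors PCI_ALLOWED/PCI_BLOCKED filters

        nvme_get "$ctrl_dev" id-ctrl "/dev/$ctrl_dev"
        for ns in "$ctrl/${ctrl_dev}n"*; do   # @54: each namespace of this ctrl
            [[ -e $ns ]] || continue
            nvme_get "${ns##*/}" id-ns "/dev/${ns##*/}"
        done

        ctrls[$ctrl_dev]=$ctrl_dev            # @60
        nvmes[$ctrl_dev]=${ctrl_dev}_ns       # @61: name of the per-ctrl ns map
        bdfs[$ctrl_dev]=$pci                  # @62: bdfs[nvme1]=0000:00:10.0
        ordered_ctrls[${ctrl_dev/nvme/}]=$ctrl_dev   # @63
    done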
00:09:38.332 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mtfa]="0"' 00:09:38.332 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mtfa]=0 00:09:38.332 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.332 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.332 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.332 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmpre]="0"' 00:09:38.332 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmpre]=0 00:09:38.332 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.332 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.332 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.332 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmmin]="0"' 00:09:38.332 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmmin]=0 00:09:38.332 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.332 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.332 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.332 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[tnvmcap]="0"' 00:09:38.332 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2[tnvmcap]=0 00:09:38.332 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.332 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.332 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.332 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[unvmcap]="0"' 00:09:38.332 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2[unvmcap]=0 00:09:38.332 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.332 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.332 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.332 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rpmbs]="0"' 00:09:38.332 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rpmbs]=0 00:09:38.332 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.332 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.332 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.332 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[edstt]="0"' 00:09:38.332 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2[edstt]=0 00:09:38.332 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.332 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.332 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.332 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[dsto]="0"' 00:09:38.332 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2[dsto]=0 00:09:38.332 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.332 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.332 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.332 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fwug]="0"' 00:09:38.332 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fwug]=0 00:09:38.332 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.332 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.332 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.332 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[kas]="0"' 00:09:38.332 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2[kas]=0 00:09:38.332 04:57:16 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:09:38.332 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.332 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.332 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hctma]="0"' 00:09:38.332 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hctma]=0 00:09:38.332 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.332 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.332 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.332 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mntmt]="0"' 00:09:38.332 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mntmt]=0 00:09:38.332 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.332 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.332 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.332 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mxtmt]="0"' 00:09:38.332 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mxtmt]=0 00:09:38.332 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.332 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.332 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.332 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sanicap]="0"' 00:09:38.332 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sanicap]=0 00:09:38.332 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.332 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.332 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.332 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmminds]="0"' 00:09:38.332 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmminds]=0 00:09:38.332 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.332 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.332 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.332 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmmaxd]="0"' 00:09:38.332 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmmaxd]=0 00:09:38.332 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.332 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.332 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.332 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nsetidmax]="0"' 00:09:38.332 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nsetidmax]=0 00:09:38.332 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.332 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.332 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.332 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[endgidmax]="0"' 00:09:38.332 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2[endgidmax]=0 00:09:38.332 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.332 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.332 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.332 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anatt]="0"' 00:09:38.332 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anatt]=0 00:09:38.332 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.332 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.332 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
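Worth noting from the nvme1n1 id-ns capture above: flbas=0x7 selects lbaf7 ("ms:64 lbads:12 rp:0 (in use)"), and the block size follows as 2^lbads. A short sketch to recover it from the parsed map; lba_block_size is my name for illustration, not an SPDK helper:

    # Decode the in-use LBA format from a parsed id-ns map.
    # The low nibble of flbas indexes lbaf0..lbaf15; block size is 2^lbads.
    lba_block_size() {
        local -n ns=$1                          # nameref: nvme1n1, ...
        local fmt=$(( ${ns[flbas]} & 0xf ))     # 0x7 -> format 7
        [[ ${ns[lbaf$fmt]} =~ lbads:([0-9]+) ]] || return 1
        echo $(( 1 << BASH_REMATCH[1] ))        # lbads:12 -> 4096
    }

    lba_block_size nvme1n1                      # 4096-byte blocks (with ms:64 metadata)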
00:09:38.332 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anacap]="0"' 00:09:38.332 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anacap]=0 00:09:38.332 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.332 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.332 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.332 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anagrpmax]="0"' 00:09:38.332 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anagrpmax]=0 00:09:38.332 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.332 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.332 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.333 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nanagrpid]="0"' 00:09:38.333 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nanagrpid]=0 00:09:38.333 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.333 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.333 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.333 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[pels]="0"' 00:09:38.333 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2[pels]=0 00:09:38.333 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.333 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.333 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.333 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[domainid]="0"' 00:09:38.333 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2[domainid]=0 00:09:38.333 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.333 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.333 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.333 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[megcap]="0"' 00:09:38.333 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2[megcap]=0 00:09:38.333 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.333 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.333 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:38.333 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sqes]="0x66"' 00:09:38.333 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sqes]=0x66 00:09:38.333 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.333 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.333 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:38.333 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cqes]="0x44"' 00:09:38.333 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cqes]=0x44 00:09:38.333 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.333 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.333 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.333 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxcmd]="0"' 00:09:38.333 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxcmd]=0 00:09:38.333 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.333 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.333 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:38.333 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nn]="256"' 00:09:38.333 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nn]=256 
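The sqes=0x66 and cqes=0x44 bytes captured just above pack two log2 sizes each: per the NVMe Identify Controller layout, the low nibble is the required queue-entry size and the high nibble the maximum. Decoding the values from this trace:

  sqes=0x66 cqes=0x44                             # values recorded above
  printf 'SQ entry: min %dB max %dB\n' $((2 ** (sqes & 0xF))) $((2 ** (sqes >> 4)))
  printf 'CQ entry: min %dB max %dB\n' $((2 ** (cqes & 0xF))) $((2 ** (cqes >> 4)))
  # -> 64-byte submission and 16-byte completion entries, the standard fixed sizes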
00:09:38.333 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.333 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.333 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:38.333 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oncs]="0x15d"' 00:09:38.333 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oncs]=0x15d 00:09:38.333 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.333 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.333 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.333 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fuses]="0"' 00:09:38.333 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fuses]=0 00:09:38.333 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.333 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.333 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.333 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fna]="0"' 00:09:38.333 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fna]=0 00:09:38.333 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.333 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.333 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:38.333 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vwc]="0x7"' 00:09:38.333 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vwc]=0x7 00:09:38.333 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.333 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.333 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.333 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[awun]="0"' 00:09:38.333 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2[awun]=0 00:09:38.333 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.333 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.333 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.333 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[awupf]="0"' 00:09:38.333 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2[awupf]=0 00:09:38.333 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.333 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.333 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.333 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[icsvscc]="0"' 00:09:38.333 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2[icsvscc]=0 00:09:38.333 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.333 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.333 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.333 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nwpc]="0"' 00:09:38.333 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nwpc]=0 00:09:38.333 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.333 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.333 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.333 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[acwu]="0"' 00:09:38.333 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2[acwu]=0 00:09:38.333 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.333 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.333 04:57:16 nvme_scc -- nvme/functions.sh@22 -- 
# [[ -n 0x3 ]] 00:09:38.333 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ocfs]="0x3"' 00:09:38.333 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ocfs]=0x3 00:09:38.333 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.333 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.333 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:38.333 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sgls]="0x1"' 00:09:38.333 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sgls]=0x1 00:09:38.333 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.333 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.333 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.333 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mnan]="0"' 00:09:38.333 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mnan]=0 00:09:38.333 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.333 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.333 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.333 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxdna]="0"' 00:09:38.333 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxdna]=0 00:09:38.333 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.333 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.333 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.333 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxcna]="0"' 00:09:38.333 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxcna]=0 00:09:38.333 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.333 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.333 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12342 ]] 00:09:38.333 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[subnqn]="nqn.2019-08.org.qemu:12342"' 00:09:38.333 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12342 00:09:38.333 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.333 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.333 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.333 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ioccsz]="0"' 00:09:38.333 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0 00:09:38.333 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.333 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.333 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.333 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[iorcsz]="0"' 00:09:38.333 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2[iorcsz]=0 00:09:38.333 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.333 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.333 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.333 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[icdoff]="0"' 00:09:38.333 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2[icdoff]=0 00:09:38.333 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.333 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.333 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.333 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fcatt]="0"' 00:09:38.333 
04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fcatt]=0 00:09:38.333 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.333 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.333 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.333 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[msdbd]="0"' 00:09:38.333 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2[msdbd]=0 00:09:38.333 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.333 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.333 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.333 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ofcs]="0"' 00:09:38.333 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ofcs]=0 00:09:38.333 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.333 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.333 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:38.333 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:38.333 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:38.333 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.333 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.334 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:38.334 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:38.334 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:38.334 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.334 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.334 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:38.334 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[active_power_workload]="-"' 00:09:38.334 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=- 00:09:38.334 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.334 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.334 04:57:16 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns 00:09:38.334 04:57:16 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:09:38.334 04:57:16 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]] 00:09:38.334 04:57:16 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n1 00:09:38.334 04:57:16 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1 00:09:38.334 04:57:16 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n1 reg val 00:09:38.334 04:57:16 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:38.334 04:57:16 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n1=()' 00:09:38.334 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.334 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.334 04:57:16 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1 00:09:38.334 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:38.334 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.334 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.334 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 
0x100000 ]] 00:09:38.334 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsze]="0x100000"' 00:09:38.334 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsze]=0x100000 00:09:38.334 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.334 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.334 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:38.334 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[ncap]="0x100000"' 00:09:38.334 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[ncap]=0x100000 00:09:38.334 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.334 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.334 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:38.334 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nuse]="0x100000"' 00:09:38.334 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nuse]=0x100000 00:09:38.334 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.334 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.334 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:38.334 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsfeat]="0x14"' 00:09:38.334 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsfeat]=0x14 00:09:38.334 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.334 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.334 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:38.334 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nlbaf]="7"' 00:09:38.334 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nlbaf]=7 00:09:38.334 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.334 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.334 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:38.334 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[flbas]="0x4"' 00:09:38.334 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[flbas]=0x4 00:09:38.334 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.334 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.334 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:38.334 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mc]="0x3"' 00:09:38.334 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mc]=0x3 00:09:38.334 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.334 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.334 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:38.334 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[dpc]="0x1f"' 00:09:38.334 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dpc]=0x1f 00:09:38.334 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.334 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.334 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.334 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[dps]="0"' 00:09:38.334 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dps]=0 00:09:38.334 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.334 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.334 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.334 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nmic]="0"' 
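The nvme2n1 fields just captured are enough to size the namespace: flbas=0x4 selects LBA format 4 (bits 3:0), which the lbaf4 line further down in this trace reports as lbads:12, i.e. 4096-byte blocks, and nsze=0x100000 is the block count. Back-of-envelope, in the same shell:

  nsze=0x100000 lbads=12                          # nsze from above; lbads from the lbaf4 line below
  echo "blocks=$((nsze)) block_bytes=$((1 << lbads))"
  echo "capacity=$((nsze * (1 << lbads) >> 30)) GiB"   # 1048576 * 4096 = 4 GiB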
00:09:38.334 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nmic]=0 00:09:38.334 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.334 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.334 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.334 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[rescap]="0"' 00:09:38.334 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[rescap]=0 00:09:38.334 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.334 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.334 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.334 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[fpi]="0"' 00:09:38.334 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[fpi]=0 00:09:38.334 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.334 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.334 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:38.334 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[dlfeat]="1"' 00:09:38.334 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dlfeat]=1 00:09:38.334 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.334 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.334 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.334 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawun]="0"' 00:09:38.334 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nawun]=0 00:09:38.334 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.334 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.334 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.334 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawupf]="0"' 00:09:38.334 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nawupf]=0 00:09:38.334 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.334 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.334 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.334 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nacwu]="0"' 00:09:38.334 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nacwu]=0 00:09:38.334 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.334 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.334 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.334 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabsn]="0"' 00:09:38.334 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabsn]=0 00:09:38.334 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.334 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.334 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.334 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabo]="0"' 00:09:38.334 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabo]=0 00:09:38.334 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.334 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.334 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.334 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabspf]="0"' 00:09:38.334 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabspf]=0 00:09:38.334 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.334 04:57:16 
nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.334 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.334 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[noiob]="0"' 00:09:38.334 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[noiob]=0 00:09:38.334 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.334 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.334 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.334 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmcap]="0"' 00:09:38.334 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nvmcap]=0 00:09:38.334 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.334 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.334 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.334 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwg]="0"' 00:09:38.334 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npwg]=0 00:09:38.334 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.334 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.334 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.334 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwa]="0"' 00:09:38.334 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npwa]=0 00:09:38.334 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.334 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.334 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.334 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npdg]="0"' 00:09:38.334 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npdg]=0 00:09:38.334 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.334 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.334 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.334 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npda]="0"' 00:09:38.334 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npda]=0 00:09:38.334 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.334 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.334 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.334 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nows]="0"' 00:09:38.334 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nows]=0 00:09:38.334 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.334 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.335 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:38.335 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mssrl]="128"' 00:09:38.335 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mssrl]=128 00:09:38.335 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.335 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.335 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:38.335 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mcl]="128"' 00:09:38.335 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mcl]=128 00:09:38.335 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.335 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.335 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:38.335 04:57:16 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme2n1[msrc]="127"' 00:09:38.335 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[msrc]=127 00:09:38.335 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.335 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.335 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.335 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nulbaf]="0"' 00:09:38.335 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nulbaf]=0 00:09:38.335 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.335 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.335 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.335 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[anagrpid]="0"' 00:09:38.335 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[anagrpid]=0 00:09:38.335 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.335 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.335 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.335 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsattr]="0"' 00:09:38.335 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsattr]=0 00:09:38.335 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.335 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.335 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.335 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmsetid]="0"' 00:09:38.335 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nvmsetid]=0 00:09:38.335 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.335 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.335 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.335 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[endgid]="0"' 00:09:38.335 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[endgid]=0 00:09:38.335 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.335 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.335 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:38.335 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nguid]="00000000000000000000000000000000"' 00:09:38.335 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nguid]=00000000000000000000000000000000 00:09:38.335 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.335 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.335 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:38.335 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[eui64]="0000000000000000"' 00:09:38.335 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[eui64]=0000000000000000 00:09:38.335 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.335 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.335 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:38.335 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:38.335 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:38.335 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.335 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.335 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 
ms:8 lbads:9 rp:0 ]] 00:09:38.335 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:38.335 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:38.335 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.335 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.335 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:38.335 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:38.335 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:38.335 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.335 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.335 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:38.335 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:38.335 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:38.335 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.335 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.335 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:38.335 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:38.335 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:38.335 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.335 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.335 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:38.335 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:38.335 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:38.335 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.335 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.335 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:38.335 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:38.335 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:38.335 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.335 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.335 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:38.335 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:38.335 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:38.335 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.335 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.335 04:57:16 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1 00:09:38.335 04:57:16 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:09:38.335 04:57:16 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n2 ]] 00:09:38.335 04:57:16 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n2 00:09:38.335 04:57:16 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n2 id-ns /dev/nvme2n2 00:09:38.335 04:57:16 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n2 reg val 00:09:38.335 04:57:16 nvme_scc -- 
nvme/functions.sh@18 -- # shift 00:09:38.335 04:57:16 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n2=()' 00:09:38.335 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.335 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.335 04:57:16 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2 00:09:38.335 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:38.335 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.335 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.335 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:38.335 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsze]="0x100000"' 00:09:38.335 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsze]=0x100000 00:09:38.335 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.335 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.335 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:38.335 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[ncap]="0x100000"' 00:09:38.335 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[ncap]=0x100000 00:09:38.335 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.335 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.335 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:38.335 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nuse]="0x100000"' 00:09:38.335 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nuse]=0x100000 00:09:38.335 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.335 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.335 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:38.335 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsfeat]="0x14"' 00:09:38.335 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsfeat]=0x14 00:09:38.335 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.335 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.335 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:38.335 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nlbaf]="7"' 00:09:38.335 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nlbaf]=7 00:09:38.335 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.335 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.335 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:38.335 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[flbas]="0x4"' 00:09:38.336 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[flbas]=0x4 00:09:38.336 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.336 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.336 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:38.336 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mc]="0x3"' 00:09:38.336 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mc]=0x3 00:09:38.336 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.336 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.336 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:38.336 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dpc]="0x1f"' 00:09:38.336 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dpc]=0x1f 00:09:38.336 04:57:16 nvme_scc 
-- nvme/functions.sh@21 -- # IFS=: 00:09:38.336 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.336 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.336 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dps]="0"' 00:09:38.336 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dps]=0 00:09:38.336 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.336 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.336 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.336 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nmic]="0"' 00:09:38.336 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nmic]=0 00:09:38.336 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.336 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.336 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.336 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[rescap]="0"' 00:09:38.336 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[rescap]=0 00:09:38.336 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.336 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.336 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.336 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[fpi]="0"' 00:09:38.336 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[fpi]=0 00:09:38.336 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.336 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.336 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:38.336 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dlfeat]="1"' 00:09:38.336 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dlfeat]=1 00:09:38.336 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.336 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.336 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.336 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawun]="0"' 00:09:38.336 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nawun]=0 00:09:38.336 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.336 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.336 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.336 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawupf]="0"' 00:09:38.336 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nawupf]=0 00:09:38.336 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.336 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.336 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.336 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nacwu]="0"' 00:09:38.336 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nacwu]=0 00:09:38.336 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.336 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.336 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.336 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabsn]="0"' 00:09:38.336 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabsn]=0 00:09:38.336 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.336 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.336 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ 
-n 0 ]] 00:09:38.336 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabo]="0"' 00:09:38.336 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabo]=0 00:09:38.336 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.336 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.336 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.336 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabspf]="0"' 00:09:38.336 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabspf]=0 00:09:38.336 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.336 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.336 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.336 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[noiob]="0"' 00:09:38.336 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[noiob]=0 00:09:38.336 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.336 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.336 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.336 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmcap]="0"' 00:09:38.336 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nvmcap]=0 00:09:38.336 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.336 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.336 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.336 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwg]="0"' 00:09:38.336 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npwg]=0 00:09:38.336 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.336 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.336 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.336 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwa]="0"' 00:09:38.336 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npwa]=0 00:09:38.336 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.336 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.336 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.336 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npdg]="0"' 00:09:38.336 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npdg]=0 00:09:38.336 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.336 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.336 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.336 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npda]="0"' 00:09:38.336 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npda]=0 00:09:38.336 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.336 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.336 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.336 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nows]="0"' 00:09:38.336 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nows]=0 00:09:38.336 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.336 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.336 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:38.336 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mssrl]="128"' 00:09:38.336 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mssrl]=128 
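Once a namespace array is filled, functions.sh@58 files it into _ctrl_ns under its numeric index (the nvme2n1 registration already appears above; nvme2n2's follows below). The index comes from a longest-prefix strip, shown here on a sysfs path from this trace:

  ns=/sys/class/nvme/nvme2/nvme2n2
  echo "${ns##*n}"                                # strips through the last 'n' -> 2
  declare -A _ctrl_ns=()
  _ctrl_ns[${ns##*n}]=nvme2n2                     # index 2 -> name of the populated array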
00:09:38.336 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.336 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.336 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:38.336 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mcl]="128"' 00:09:38.336 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mcl]=128 00:09:38.336 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.336 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.336 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:38.336 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[msrc]="127"' 00:09:38.336 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[msrc]=127 00:09:38.336 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.336 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.336 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.336 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nulbaf]="0"' 00:09:38.336 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nulbaf]=0 00:09:38.336 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.336 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.336 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.336 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[anagrpid]="0"' 00:09:38.336 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[anagrpid]=0 00:09:38.336 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.336 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.336 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.336 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsattr]="0"' 00:09:38.336 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsattr]=0 00:09:38.336 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.336 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.336 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.336 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmsetid]="0"' 00:09:38.336 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nvmsetid]=0 00:09:38.336 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.336 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.336 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.336 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[endgid]="0"' 00:09:38.336 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[endgid]=0 00:09:38.336 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.336 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.336 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:38.336 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nguid]="00000000000000000000000000000000"' 00:09:38.336 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nguid]=00000000000000000000000000000000 00:09:38.336 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.336 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.336 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:38.336 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[eui64]="0000000000000000"' 00:09:38.336 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[eui64]=0000000000000000 
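The nguid and eui64 recorded just above are all zeros: this QEMU-emulated namespace (subnqn nqn.2019-08.org.qemu:12342 per the controller block above) advertises no persistent unique identifier. A script that needs a stable ID has to detect that and fall back to something else, e.g.:

  nguid=00000000000000000000000000000000          # values captured above
  eui64=0000000000000000
  if [[ $nguid =~ ^0+$ && $eui64 =~ ^0+$ ]]; then
      echo "no unique namespace ID reported"      # fall back to subnqn + NSID
  fi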
00:09:38.336 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.336 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.336 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:38.337 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:38.337 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:38.337 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.337 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.337 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:38.337 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:38.337 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:38.337 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.337 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.337 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:38.337 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:38.337 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:38.337 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.337 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.337 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:38.337 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:38.337 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:38.337 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.337 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.337 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:38.337 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:38.337 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:38.337 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.337 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.337 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:38.337 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:38.337 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:38.337 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.337 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.337 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:38.337 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:38.337 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:38.337 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.337 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.337 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:38.337 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:38.337 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:38.337 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.337 
04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.337 04:57:16 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n2 00:09:38.337 04:57:16 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:09:38.337 04:57:16 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n3 ]] 00:09:38.337 04:57:16 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n3 00:09:38.337 04:57:16 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n3 id-ns /dev/nvme2n3 00:09:38.337 04:57:16 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n3 reg val 00:09:38.337 04:57:16 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:38.337 04:57:16 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n3=()' 00:09:38.337 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.337 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.337 04:57:16 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3 00:09:38.337 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:38.337 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.337 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.337 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:38.337 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsze]="0x100000"' 00:09:38.337 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsze]=0x100000 00:09:38.337 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.337 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.337 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:38.337 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[ncap]="0x100000"' 00:09:38.337 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[ncap]=0x100000 00:09:38.337 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.337 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.337 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:38.337 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nuse]="0x100000"' 00:09:38.337 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nuse]=0x100000 00:09:38.337 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.337 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.337 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:38.337 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsfeat]="0x14"' 00:09:38.337 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsfeat]=0x14 00:09:38.337 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.337 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.337 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:38.337 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nlbaf]="7"' 00:09:38.337 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nlbaf]=7 00:09:38.337 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.337 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.337 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:38.337 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[flbas]="0x4"' 00:09:38.337 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[flbas]=0x4 00:09:38.337 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.337 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.337 
04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:38.337 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mc]="0x3"' 00:09:38.337 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mc]=0x3 00:09:38.337 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.337 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.337 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:38.337 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dpc]="0x1f"' 00:09:38.337 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dpc]=0x1f 00:09:38.337 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.337 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.337 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.337 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dps]="0"' 00:09:38.337 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dps]=0 00:09:38.337 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.337 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.337 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.337 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nmic]="0"' 00:09:38.337 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nmic]=0 00:09:38.337 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.337 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.337 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.337 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[rescap]="0"' 00:09:38.337 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[rescap]=0 00:09:38.337 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.337 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.337 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.337 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[fpi]="0"' 00:09:38.337 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[fpi]=0 00:09:38.337 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.337 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.337 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:38.337 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dlfeat]="1"' 00:09:38.337 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dlfeat]=1 00:09:38.337 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.337 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.337 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.337 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawun]="0"' 00:09:38.337 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nawun]=0 00:09:38.337 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.337 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.337 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.337 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawupf]="0"' 00:09:38.337 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nawupf]=0 00:09:38.337 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:38.337 04:57:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.337 04:57:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.337 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nacwu]="0"' 00:09:38.337 04:57:16 
nvme_scc -- nvme/functions.sh@23 -- # (condensed) nvme2n3 id-ns fields, all read as 0: nacwu nabsn nabo nabspf noiob nvmcap npwg npwa npdg npda nows nulbaf anagrpid nsattr nvmsetid endgid 00:09:38.337-00:09:38.338 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # (condensed) nvme2n3 copy limits: mssrl=128 mcl=128 msrc=127
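Those three fields bound what the Simple Copy test can ask of this namespace: mssrl caps the length of one source range, mcl caps the total copy length, and msrc is 0's-based per the NVMe spec, so 127 means up to 128 source ranges. A sanity check for a hypothetical request, as a sketch using the values parsed above:

# Check a hypothetical 64-block, single-range copy against the
# namespace copy limits (mssrl/mcl in logical blocks; msrc 0's-based).
mssrl=128 mcl=128 msrc=127
blocks=64 ranges=1
if (( blocks <= mssrl && blocks <= mcl && ranges <= msrc + 1 )); then
  echo "copy request fits within nvme2n3's limits"
fi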
00:09:38.338 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # (condensed) nvme2n3 identifiers: nguid=00000000000000000000000000000000 eui64=0000000000000000 00:09:38.338 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # (condensed) nvme2n3 LBA formats: lbaf0='ms:0 lbads:9 rp:0' lbaf1='ms:8 lbads:9 rp:0' lbaf2='ms:16 lbads:9 rp:0' lbaf3='ms:64 lbads:9 rp:0' lbaf4='ms:0 lbads:12 rp:0 (in use)' lbaf5='ms:8 lbads:12 rp:0'
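Each lbafN descriptor carries the metadata size (ms), the LBA data size as a power of two (lbads), and a relative-performance hint (rp). lbaf4 is marked in use, so this namespace runs 4096-byte blocks with no metadata. A minimal decode of the string format shown above:

# Recover the logical block size from the in-use LBA format string;
# lbads is log2 of the data size, so lbads:12 means 4096-byte blocks.
lbaf='ms:0 lbads:12 rp:0 (in use)'
if [[ $lbaf =~ lbads:([0-9]+) ]]; then
  echo "block size: $((1 << BASH_REMATCH[1])) bytes"   # -> 4096
fi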
00:09:38.338 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # (condensed) nvme2n3 LBA formats: lbaf6='ms:16 lbads:12 rp:0' lbaf7='ms:64 lbads:12 rp:0' 00:09:38.338 04:57:16 nvme_scc -- nvme/functions.sh@58-63 -- # (condensed) nvme2 registered: _ctrl_ns[3]=nvme2n3, ctrls[nvme2]=nvme2, nvmes[nvme2]=nvme2_ns, bdfs[nvme2]=0000:00:12.0, ordered_ctrls[2]=nvme2 00:09:38.338 04:57:16 nvme_scc -- nvme/functions.sh@47-52 -- # (condensed) found /sys/class/nvme/nvme3 at pci=0000:00:13.0; pci_can_use ok; ctrl_dev=nvme3; nvme_get nvme3 id-ctrl /dev/nvme3 via /usr/local/src/nvme-cli/nvme 00:09:38.338 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # (condensed) nvme3 id-ctrl: vid=0x1b36 ssvid=0x1af4
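Every nvme_get call in this trace follows the same idiom: run nvme-cli's id-ctrl (or id-ns), split each "reg : val" output line on the colon, and eval the pair into a global associative array named after the controller or namespace. A simplified sketch of that loop (not the exact functions.sh code):

# Parse nvme-cli "reg : val" lines into a named global associative array.
nvme_get() {
  local ref=$1 reg val
  shift
  local -gA "$ref=()"
  while IFS=: read -r reg val; do
    [[ -n $val ]] || continue            # skip lines with no value
    reg=${reg//[[:space:]]/}             # strip padding around the key
    eval "${ref}[$reg]=\"${val# }\""
  done < <(nvme "$@")
}
nvme_get nvme3 id-ctrl /dev/nvme3        # fills ${nvme3[vid]}, ${nvme3[sn]}, ...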
00:09:38.339 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # (condensed) nvme3 id-ctrl: sn='12343 ' mn='QEMU NVMe Ctrl ' fr='8.0.0 ' rab=6 ieee=525400 cmic=0x2 mdts=7 cntlid=0 ver=0x10400 00:09:38.339 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # (condensed) nvme3 id-ctrl: oaes=0x100 ctratt=0x88010 cntrltype=1 fguid=00000000-0000-0000-0000-000000000000 oacs=0x12a acl=3 aerl=3 frmw=0x3 lpa=0x7 00:09:38.340 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # (condensed) nvme3 id-ctrl: wctemp=343 cctemp=373 endgidmax=1 sqes=0x66 cqes=0x44 nn=256 oncs=0x15d vwc=0x7 ocfs=0x3 sgls=0x1 00:09:38.341 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # (condensed) nvme3 id-ctrl: subnqn=nqn.2019-08.org.qemu:fdp-subsys3 ps0='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0'; all remaining fields (rtd3r rtd3e rrls crdt1-3 nvmsr vwci mec elpe npss avscc apsta mtfa hmpre hmmin tnvmcap unvmcap rpmbs edstt dsto fwug kas hctma mntmt mxtmt sanicap hmminds hmmaxd nsetidmax anatt anacap anagrpmax nanagrpid pels domainid megcap maxcmd fuses fna awun awupf icsvscc nwpc acwu mnan maxdna maxcna ioccsz iorcsz icdoff fcatt msdbd ofcs) read as 0
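Two of those values matter for the copy tests: oncs=0x15d is the optional-command bitmap checked next, and mdts=7 caps any single transfer at 2^7 units of the controller's minimum memory page size. Assuming the common 4 KiB minimum page (not shown in this trace), the cap works out as:

# MDTS limits a single transfer to (2 ^ mdts) * CAP.MPSMIN bytes.
mdts=7
echo "max transfer: $(( (4096 << mdts) / 1024 )) KiB"   # -> 512 KiB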
00:09:38.341 04:57:16 nvme_scc -- nvme/functions.sh@23 -- # (condensed) nvme3 id-ctrl power-state tail: rwt='0 rwl:0 idle_power:- active_power:-' active_power_workload=- 00:09:38.341 04:57:16 nvme_scc -- nvme/functions.sh@53-63 -- # (condensed) nvme3 registered: _ctrl_ns=nvme3_ns, ctrls[nvme3]=nvme3, nvmes[nvme3]=nvme3_ns, bdfs[nvme3]=0000:00:13.0, ordered_ctrls[3]=nvme3; scan complete with 4 controllers 00:09:38.341 04:57:16 nvme_scc -- nvme/nvme_scc.sh@17 -- # get_ctrl_with_feature scc 00:09:38.342 04:57:16 nvme_scc -- nvme/functions.sh@198-199 -- # (condensed) ctrl_has_scc run for nvme1, nvme0, nvme3 and nvme2 in turn: each reads oncs=0x15d and (( oncs & 1 << 8 )) passes, so all four controllers support Simple Copy 00:09:38.342 04:57:16 nvme_scc -- nvme/functions.sh@208 -- # echo nvme1 00:09:38.342 04:57:16 nvme_scc -- nvme/functions.sh@209 -- # return 0 00:09:38.342 04:57:16 nvme_scc -- nvme/nvme_scc.sh@17 -- # ctrl=nvme1 00:09:38.342 04:57:16 nvme_scc -- nvme/nvme_scc.sh@17 -- # bdf=0000:00:10.0 00:09:38.342 04:57:16 nvme_scc -- nvme/nvme_scc.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:09:38.913 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:39.485 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:09:39.485 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:09:39.485 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:09:39.485 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic
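The whole scc feature scan reduces to one bit test per controller: ONCS bit 8 advertises the Simple Copy command, every controller reported oncs=0x15d (which has that bit set), so all four qualify and nvme1, the lowest-numbered one, is returned. The check in isolation:

# ONCS bit 8 = Simple Copy support; 0x15d = 0b1_0101_1101, bit 8 set.
oncs=0x15d
if (( oncs & 1 << 8 )); then
  echo "controller supports Simple Copy"
fi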
00:09:39.746 04:57:17 nvme_scc -- nvme/nvme_scc.sh@21 -- # run_test nvme_simple_copy /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0' 00:09:39.746 ************************************ 00:09:39.746 START TEST nvme_simple_copy 00:09:39.746 ************************************ 00:09:40.008 Initializing NVMe Controllers 00:09:40.008 Attaching to 0000:00:10.0 00:09:40.008 Controller supports SCC. Attached to 0000:00:10.0 00:09:40.008 Namespace ID: 1 size: 6GB 00:09:40.008 Initialization complete. 00:09:40.008 Controller QEMU NVMe Ctrl (12340 ) 00:09:40.008 Controller PCI vendor:6966 PCI subsystem vendor:6900 00:09:40.008 Namespace Block Size:4096 00:09:40.008 Writing LBAs 0 to 63 with Random Data 00:09:40.008 Copied LBAs from 0 - 63 to the Destination LBA 256 00:09:40.008 LBAs matching Written Data: 64 00:09:40.008 real 0m0.243s user 0m0.090s sys 0m0.052s 00:09:40.008 ************************************ 00:09:40.008 END TEST nvme_simple_copy 00:09:40.008 ************************************ 00:09:40.008 real 0m8.087s user 0m1.078s sys 0m1.564s 00:09:40.008 ************************************ 00:09:40.008 END TEST nvme_scc 00:09:40.008 ************************************ 00:09:40.008 04:57:18 -- spdk/autotest.sh@219/@222/@225 -- # (condensed) three disabled test branches skipped 00:09:40.008 04:57:18 -- spdk/autotest.sh@228 -- # [[ 1 -eq 1 ]] 00:09:40.008 04:57:18 -- spdk/autotest.sh@229 -- # run_test nvme_fdp test/nvme/nvme_fdp.sh 00:09:40.008 ************************************ 00:09:40.008 START TEST nvme_fdp 00:09:40.008 ************************************ 00:09:40.008 * Looking for test storage...
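The simple_copy binary prints its own verification above: it writes LBAs 0 through 63 with random data, issues one Simple Copy to destination LBA 256, reads that region back, and "LBAs matching Written Data: 64" confirms every block round-tripped. Outside the harness, the same capability check could be run by hand; a sketch (assumes nvme-cli's JSON output exposes oncs as a decimal field):

# Hand-check Simple Copy support before running the test.
oncs=$(nvme id-ctrl /dev/nvme1 -o json | jq -r '.oncs')
(( oncs & 1 << 8 )) && echo "/dev/nvme1 supports Simple Copy"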
00:09:40.008 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:09:40.270 04:57:18 nvme_fdp -- common/autotest_common.sh@1680-1682 -- # (condensed) lcov --version parsed; lt 1.15 2 → scripts/common.sh cmp_versions 1.15 '<' 2 splits both strings on separators, compares field by field (1 < 2) and returns 0, so the legacy lcov option set is selected 00:09:40.270 04:57:18 nvme_fdp -- common/autotest_common.sh@1694 -- # (condensed) export LCOV_OPTS='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1'
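The cmp_versions trace is the stock shell pattern for dotted version strings: split both on separators, then compare numerically field by field until they differ. Reduced to essentials (illustrative; the traced code also handles '-' and ':' separators and suffixes):

# Return 0 if version $1 sorts before version $2; missing fields count as 0.
version_lt() {
  local -a a b
  local i
  IFS=. read -ra a <<< "$1"
  IFS=. read -ra b <<< "$2"
  for (( i = 0; i < ${#a[@]} || i < ${#b[@]}; i++ )); do
    (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
    (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
  done
  return 1   # equal is not less-than
}
version_lt 1.15 2 && echo "lcov 1.15 predates 2; using legacy --rc options"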
00:09:40.270 04:57:18 nvme_fdp -- common/autotest_common.sh@1695 -- # (condensed) export LCOV='lcov' with the same --rc option set 00:09:40.270 04:57:18 nvme_fdp -- cuse/common.sh@9 -- # (condensed) source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh; rootdir resolved to /home/vagrant/spdk_repo/spdk; scripts/common.sh sourced (shopt -s extglob) 00:09:40.270 04:57:18 nvme_fdp -- paths/export.sh@2-6 -- # (condensed) /etc/opt/spdk-pkgdep/paths/export.sh prepends /opt/protoc/21.7/bin, /opt/go/1.21.1/bin and /opt/golangci/1.54.2/bin to PATH and exports it; those segments now appear four times over from repeated sourcing
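The duplicated PATH segments are harmless but noisy: export.sh prepends unconditionally each time a suite sources it. An idempotent prepend would keep PATH stable (illustrative helper, not what export.sh actually does):

# Prepend a directory to PATH only if it is not already present.
path_prepend() {
  case ":$PATH:" in
    *":$1:"*) ;;              # already on PATH, leave it alone
    *) PATH=$1:$PATH ;;
  esac
}
path_prepend /opt/go/1.21.1/bin
export PATH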
00:09:40.270 04:57:18 nvme_fdp -- nvme/functions.sh@10 -- # ctrls=() 00:09:40.270 04:57:18 nvme_fdp -- nvme/functions.sh@10 -- # declare -A ctrls 00:09:40.271 04:57:18 nvme_fdp -- nvme/functions.sh@11 -- # nvmes=() 00:09:40.271 04:57:18 nvme_fdp -- nvme/functions.sh@11 -- # declare -A nvmes 00:09:40.271 04:57:18 nvme_fdp -- nvme/functions.sh@12 -- # bdfs=() 00:09:40.271 04:57:18 nvme_fdp -- nvme/functions.sh@12 -- # declare -A bdfs 00:09:40.271 04:57:18 nvme_fdp -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:09:40.271 04:57:18 nvme_fdp -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:09:40.271 04:57:18 nvme_fdp -- nvme/functions.sh@14 -- # nvme_name= 00:09:40.271 04:57:18 nvme_fdp -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:40.271 04:57:18 nvme_fdp -- nvme/nvme_fdp.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:09:40.533 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:40.794 Waiting for block devices as requested 00:09:40.794 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:09:40.794 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:09:41.056 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:09:41.056 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:09:46.360 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:09:46.360 04:57:24 nvme_fdp -- nvme/nvme_fdp.sh@12 -- # scan_nvme_ctrls 00:09:46.360 04:57:24 nvme_fdp -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:09:46.360 04:57:24 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:46.360 04:57:24 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:09:46.360 04:57:24 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:11.0 00:09:46.360 04:57:24 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:11.0 00:09:46.360 04:57:24 nvme_fdp -- scripts/common.sh@18 -- # local i 00:09:46.360 04:57:24 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:09:46.360 04:57:24 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:46.360 04:57:24 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:09:46.360 04:57:24 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:09:46.360 04:57:24 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:09:46.360 04:57:24 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:09:46.360 04:57:24 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:46.360 04:57:24 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:09:46.360 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.360 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.360 04:57:24 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:09:46.360 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:46.360 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.360 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.360 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:46.360 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:09:46.360 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 00:09:46.360 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.360 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.360 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 
00:09:46.360 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:09:46.360 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:09:46.360 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.360 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.360 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 00:09:46.360 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12341 "' 00:09:46.360 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sn]='12341 ' 00:09:46.360 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.360 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.360 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:46.360 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:09:46.360 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:09:46.360 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.360 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.360 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:46.360 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:09:46.360 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:09:46.360 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.360 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.360 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:46.360 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:09:46.360 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:09:46.360 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.360 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.360 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:46.360 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:09:46.360 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:09:46.360 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.360 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.360 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.360 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 00:09:46.360 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cmic]=0 00:09:46.360 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.360 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.360 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:46.360 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:09:46.360 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:09:46.360 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.360 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.360 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.360 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:09:46.360 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:09:46.360 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.360 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.360 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:46.360 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:09:46.360 04:57:24 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:09:46.360 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.360 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.360 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.360 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:09:46.360 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:09:46.360 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.360 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.360 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.360 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:09:46.360 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:09:46.360 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.360 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.360 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:46.360 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:09:46.360 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:09:46.360 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.360 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.360 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:09:46.360 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"' 00:09:46.360 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000 00:09:46.360 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.360 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.360 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.360 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:09:46.360 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:09:46.360 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.360 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.360 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:46.360 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:09:46.360 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:09:46.360 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.360 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.360 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:46.360 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:46.360 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:09:46.360 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.360 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.360 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.360 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:09:46.360 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:09:46.360 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.361 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.361 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.361 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:09:46.361 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:09:46.361 04:57:24 
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.361 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.361 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.361 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:09:46.361 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:09:46.361 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.361 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.361 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.361 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:09:46.361 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:09:46.361 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.361 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.361 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.361 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:09:46.361 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:09:46.361 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.361 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.361 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.361 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:09:46.361 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:09:46.361 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.361 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.361 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:46.361 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:09:46.361 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:09:46.361 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.361 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.361 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:46.361 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:09:46.361 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:09:46.361 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.361 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.361 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:46.361 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:09:46.361 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:09:46.361 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.361 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.361 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:46.361 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:09:46.361 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:09:46.361 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.361 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.361 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:46.361 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:09:46.361 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 00:09:46.361 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.361 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.361 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
00:09:46.361 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:09:46.361 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:09:46.361 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.361 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.361 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.361 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:09:46.361 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:09:46.361 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.361 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.361 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.361 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:09:46.361 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:09:46.361 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.361 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.361 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.361 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:09:46.361 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:09:46.361 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.361 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.361 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:46.361 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:09:46.361 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:09:46.361 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.361 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.361 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:46.361 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:09:46.361 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:09:46.361 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.361 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.361 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.361 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:09:46.361 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:09:46.361 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.361 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.361 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.361 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:09:46.361 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:09:46.361 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.361 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.361 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.361 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:09:46.361 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:09:46.361 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.361 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.361 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.361 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:09:46.361 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:09:46.361 04:57:24 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:09:46.361 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.361 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.361 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:09:46.361 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:09:46.361 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.361 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.361 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.361 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:09:46.361 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:09:46.361 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.361 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.361 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.361 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:09:46.361 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:09:46.361 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.361 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.361 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.361 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:09:46.361 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:09:46.361 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.361 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.361 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.361 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:09:46.361 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:09:46.361 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.361 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.361 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.361 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:09:46.361 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:09:46.361 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.361 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.361 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.361 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:09:46.361 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:09:46.361 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.361 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.361 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.361 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:09:46.361 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:09:46.361 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.361 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.361 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.361 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:09:46.361 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:09:46.361 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.361 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.361 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.361 04:57:24 nvme_fdp 
-- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:09:46.361 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:09:46.361 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.361 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.361 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.361 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:09:46.361 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:09:46.361 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.361 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.361 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.362 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:09:46.362 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:09:46.362 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.362 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.362 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.362 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:09:46.362 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:09:46.362 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.362 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.362 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.362 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"' 00:09:46.362 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0 00:09:46.362 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.362 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.362 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.362 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:09:46.362 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:09:46.362 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.362 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.362 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.362 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:09:46.362 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:09:46.362 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.362 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.362 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.362 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:09:46.362 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:09:46.362 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.362 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.362 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.362 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:09:46.362 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:09:46.362 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.362 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.362 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.362 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:09:46.362 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:09:46.362 04:57:24 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:09:46.362 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.362 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.362 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:09:46.362 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:09:46.362 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.362 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.362 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.362 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:09:46.362 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:09:46.362 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.362 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.362 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:46.362 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:09:46.362 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:09:46.362 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.362 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.362 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:46.362 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:09:46.362 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:09:46.362 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.362 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.362 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.362 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:09:46.362 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:09:46.362 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.362 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.362 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:46.362 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:09:46.362 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:09:46.362 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.362 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.362 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:46.362 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:09:46.362 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:09:46.362 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.362 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.362 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.362 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:09:46.362 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:09:46.362 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.362 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.362 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.362 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:09:46.362 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fna]=0 00:09:46.362 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.362 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.362 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 
0x7 ]] 00:09:46.362 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:09:46.362 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:09:46.362 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.362 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.362 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.362 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 00:09:46.362 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:09:46.362 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.362 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.362 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.362 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:09:46.362 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:09:46.362 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.362 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.362 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.362 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:09:46.362 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:09:46.362 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.362 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.362 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.362 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:09:46.362 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:09:46.362 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.362 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.362 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.362 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:09:46.362 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:09:46.362 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.362 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.362 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:46.362 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:09:46.362 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:09:46.362 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.362 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.362 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:46.362 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:09:46.362 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:09:46.362 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.362 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.362 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.362 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:09:46.362 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:09:46.362 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.362 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.362 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.362 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:09:46.362 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:09:46.362 04:57:24 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:09:46.362 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.362 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.362 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:09:46.362 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:09:46.362 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.362 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.362 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12341 ]] 00:09:46.362 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:12341"' 00:09:46.362 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12341 00:09:46.362 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.362 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.362 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.362 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:09:46.362 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:09:46.362 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.362 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.362 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.362 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:09:46.362 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:09:46.362 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.362 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.362 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.362 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:09:46.363 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:09:46.363 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.363 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.363 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.363 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:09:46.363 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:09:46.363 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.363 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.363 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.363 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:09:46.363 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:09:46.363 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.363 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.363 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.363 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:09:46.363 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:09:46.363 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.363 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.363 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:46.363 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:46.363 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:46.363 
04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.363 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.363 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:46.363 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:46.363 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:46.363 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.363 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.363 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:46.363 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:09:46.363 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:09:46.363 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.363 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.363 04:57:24 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:09:46.363 04:57:24 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:09:46.363 04:57:24 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]] 00:09:46.363 04:57:24 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme0n1 00:09:46.363 04:57:24 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1 00:09:46.363 04:57:24 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val 00:09:46.363 04:57:24 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:46.363 04:57:24 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()' 00:09:46.363 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.363 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.363 04:57:24 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1 00:09:46.363 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:46.363 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.363 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.363 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:46.363 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsze]="0x140000"' 00:09:46.363 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x140000 00:09:46.363 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.363 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.363 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:46.363 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[ncap]="0x140000"' 00:09:46.363 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x140000 00:09:46.363 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.363 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.363 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:46.363 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nuse]="0x140000"' 00:09:46.363 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x140000 00:09:46.363 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.363 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.363 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:46.363 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsfeat]="0x14"' 00:09:46.363 04:57:24 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0x14 00:09:46.363 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.363 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.363 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:46.363 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nlbaf]="7"' 00:09:46.363 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nlbaf]=7 00:09:46.363 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.363 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.363 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:46.363 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[flbas]="0x4"' 00:09:46.363 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[flbas]=0x4 00:09:46.363 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.363 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.363 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:46.363 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mc]="0x3"' 00:09:46.363 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mc]=0x3 00:09:46.363 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.363 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.363 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:46.363 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dpc]="0x1f"' 00:09:46.363 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dpc]=0x1f 00:09:46.363 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.363 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.363 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.363 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dps]="0"' 00:09:46.363 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dps]=0 00:09:46.363 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.363 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.363 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.363 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nmic]="0"' 00:09:46.363 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nmic]=0 00:09:46.363 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.363 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.363 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.363 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[rescap]="0"' 00:09:46.363 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[rescap]=0 00:09:46.363 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.363 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.363 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.363 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[fpi]="0"' 00:09:46.363 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[fpi]=0 00:09:46.363 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.363 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.363 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:46.363 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dlfeat]="1"' 00:09:46.363 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dlfeat]=1 00:09:46.363 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.363 04:57:24 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:46.363 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.363 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawun]="0"' 00:09:46.363 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nawun]=0 00:09:46.363 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.363 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.363 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.363 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawupf]="0"' 00:09:46.363 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nawupf]=0 00:09:46.363 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.363 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.363 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.363 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nacwu]="0"' 00:09:46.363 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nacwu]=0 00:09:46.363 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.363 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.363 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.363 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabsn]="0"' 00:09:46.363 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabsn]=0 00:09:46.363 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.363 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.363 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.363 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabo]="0"' 00:09:46.363 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabo]=0 00:09:46.363 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.363 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.363 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.363 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabspf]="0"' 00:09:46.363 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabspf]=0 00:09:46.363 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.363 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.363 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.363 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[noiob]="0"' 00:09:46.363 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[noiob]=0 00:09:46.363 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.363 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.363 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.363 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmcap]="0"' 00:09:46.363 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nvmcap]=0 00:09:46.363 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.363 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.363 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.364 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwg]="0"' 00:09:46.364 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npwg]=0 00:09:46.364 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.364 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.364 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.364 04:57:24 nvme_fdp -- nvme/functions.sh@23 
-- # eval 'nvme0n1[npwa]="0"' 00:09:46.364 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npwa]=0 00:09:46.364 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.364 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.364 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.364 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npdg]="0"' 00:09:46.364 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npdg]=0 00:09:46.364 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.364 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.364 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.364 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npda]="0"' 00:09:46.364 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npda]=0 00:09:46.364 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.364 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.364 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.364 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nows]="0"' 00:09:46.364 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nows]=0 00:09:46.364 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.364 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.364 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:46.364 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mssrl]="128"' 00:09:46.364 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mssrl]=128 00:09:46.364 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.364 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.364 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:46.364 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mcl]="128"' 00:09:46.364 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mcl]=128 00:09:46.364 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.364 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.364 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:46.364 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[msrc]="127"' 00:09:46.364 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[msrc]=127 00:09:46.364 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.364 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.364 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.364 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nulbaf]="0"' 00:09:46.364 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0 00:09:46.364 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.364 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.364 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.364 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[anagrpid]="0"' 00:09:46.364 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0 00:09:46.364 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.364 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.364 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.364 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsattr]="0"' 00:09:46.364 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0 00:09:46.364 04:57:24 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:09:46.364 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.364 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.364 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmsetid]="0"' 00:09:46.364 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0 00:09:46.364 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.364 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.364 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.364 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[endgid]="0"' 00:09:46.364 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[endgid]=0 00:09:46.364 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.364 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.364 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:46.364 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nguid]="00000000000000000000000000000000"' 00:09:46.364 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nguid]=00000000000000000000000000000000 00:09:46.364 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.364 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.364 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:46.364 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[eui64]="0000000000000000"' 00:09:46.364 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[eui64]=0000000000000000 00:09:46.364 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.364 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.364 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:46.364 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:46.364 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:46.364 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.364 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.364 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:46.364 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:46.364 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:46.364 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.364 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.364 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:46.364 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:46.364 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:46.364 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.364 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.364 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:46.364 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:46.364 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:46.364 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.364 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.364 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 
lbads:12 rp:0 (in use) ]] 00:09:46.364 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:46.364 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:46.364 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.364 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.364 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:46.364 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:46.364 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:46.364 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.364 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.364 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:46.364 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:46.364 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:46.364 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.364 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.364 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:46.364 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:46.364 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:46.364 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.364 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.364 04:57:24 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1 00:09:46.364 04:57:24 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 00:09:46.364 04:57:24 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:09:46.364 04:57:24 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:11.0 00:09:46.364 04:57:24 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:09:46.364 04:57:24 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:46.364 04:57:24 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]] 00:09:46.364 04:57:24 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:10.0 00:09:46.364 04:57:24 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:10.0 00:09:46.364 04:57:24 nvme_fdp -- scripts/common.sh@18 -- # local i 00:09:46.364 04:57:24 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:09:46.365 04:57:24 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:46.365 04:57:24 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:09:46.365 04:57:24 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme1 00:09:46.365 04:57:24 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1 00:09:46.365 04:57:24 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme1 reg val 00:09:46.365 04:57:24 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:46.365 04:57:24 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme1=()' 00:09:46.365 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.365 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.365 04:57:24 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1 00:09:46.365 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:46.365 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # 
IFS=: 00:09:46.365 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.365 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:46.365 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vid]="0x1b36"' 00:09:46.365 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36 00:09:46.365 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.365 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.365 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:46.365 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ssvid]="0x1af4"' 00:09:46.365 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4 00:09:46.365 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.365 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.365 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:09:46.365 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sn]="12340 "' 00:09:46.365 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sn]='12340 ' 00:09:46.365 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.365 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.365 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:46.365 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl "' 00:09:46.365 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl ' 00:09:46.365 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.365 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.365 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:46.365 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fr]="8.0.0 "' 00:09:46.365 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 ' 00:09:46.365 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.365 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.365 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:46.365 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rab]="6"' 00:09:46.365 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rab]=6 00:09:46.365 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.365 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.365 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:46.365 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ieee]="525400"' 00:09:46.365 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ieee]=525400 00:09:46.365 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.365 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.365 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.365 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cmic]="0"' 00:09:46.365 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cmic]=0 00:09:46.365 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.365 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.365 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:46.365 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mdts]="7"' 00:09:46.365 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mdts]=7 00:09:46.365 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.365 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.365 04:57:24 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.365 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cntlid]="0"' 00:09:46.365 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cntlid]=0 00:09:46.365 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.365 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.365 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:46.365 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ver]="0x10400"' 00:09:46.365 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400 00:09:46.365 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.365 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.365 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.365 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3r]="0"' 00:09:46.365 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0 00:09:46.365 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.365 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.365 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.365 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3e]="0"' 00:09:46.365 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0 00:09:46.365 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.365 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.365 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:46.365 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oaes]="0x100"' 00:09:46.365 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100 00:09:46.365 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.365 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.365 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:09:46.365 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ctratt]="0x8000"' 00:09:46.365 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000 00:09:46.365 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.365 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.365 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.365 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rrls]="0"' 00:09:46.365 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rrls]=0 00:09:46.365 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.365 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.365 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:46.365 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cntrltype]="1"' 00:09:46.365 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1 00:09:46.365 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.365 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.365 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:46.365 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:46.365 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000 00:09:46.365 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.365 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.365 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.365 
04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt1]="0"' 00:09:46.365 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt1]=0 00:09:46.365 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.365 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.365 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.365 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt2]="0"' 00:09:46.365 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt2]=0 00:09:46.365 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.365 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.365 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.365 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt3]="0"' 00:09:46.365 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt3]=0 00:09:46.365 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.365 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.365 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.365 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nvmsr]="0"' 00:09:46.365 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0 00:09:46.365 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.365 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.365 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.365 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vwci]="0"' 00:09:46.365 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vwci]=0 00:09:46.365 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.365 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.365 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.365 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mec]="0"' 00:09:46.365 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mec]=0 00:09:46.365 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.365 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.365 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:46.365 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oacs]="0x12a"' 00:09:46.365 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a 00:09:46.365 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.365 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.365 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:46.365 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[acl]="3"' 00:09:46.365 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[acl]=3 00:09:46.365 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.365 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.365 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:46.365 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[aerl]="3"' 00:09:46.365 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[aerl]=3 00:09:46.365 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.365 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.365 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:46.365 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[frmw]="0x3"' 00:09:46.365 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3 00:09:46.365 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- 
# IFS=: 00:09:46.366 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.366 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:46.366 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[lpa]="0x7"' 00:09:46.366 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7 00:09:46.366 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.366 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.366 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.366 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[elpe]="0"' 00:09:46.366 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[elpe]=0 00:09:46.366 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.366 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.366 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.366 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[npss]="0"' 00:09:46.366 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[npss]=0 00:09:46.366 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.366 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.366 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.366 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[avscc]="0"' 00:09:46.366 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[avscc]=0 00:09:46.366 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.366 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.366 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.366 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[apsta]="0"' 00:09:46.366 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[apsta]=0 00:09:46.366 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.366 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.366 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:46.366 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[wctemp]="343"' 00:09:46.366 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[wctemp]=343 00:09:46.366 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.366 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.366 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:46.366 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cctemp]="373"' 00:09:46.366 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cctemp]=373 00:09:46.366 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.366 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.366 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.366 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mtfa]="0"' 00:09:46.366 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mtfa]=0 00:09:46.366 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.366 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.366 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.366 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmpre]="0"' 00:09:46.366 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmpre]=0 00:09:46.366 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.366 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.366 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.366 04:57:24 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme1[hmmin]="0"' 00:09:46.366 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmmin]=0 00:09:46.366 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.366 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.366 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.366 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[tnvmcap]="0"' 00:09:46.366 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[tnvmcap]=0 00:09:46.366 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.366 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.366 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.366 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[unvmcap]="0"' 00:09:46.366 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0 00:09:46.366 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.366 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.366 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.366 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rpmbs]="0"' 00:09:46.366 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0 00:09:46.366 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.366 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.366 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.366 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[edstt]="0"' 00:09:46.366 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[edstt]=0 00:09:46.366 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.366 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.366 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.366 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[dsto]="0"' 00:09:46.366 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[dsto]=0 00:09:46.366 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.366 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.366 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.366 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fwug]="0"' 00:09:46.366 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fwug]=0 00:09:46.366 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.366 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.366 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.366 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[kas]="0"' 00:09:46.366 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[kas]=0 00:09:46.366 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.366 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.366 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.366 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hctma]="0"' 00:09:46.366 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hctma]=0 00:09:46.366 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.366 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.366 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.366 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mntmt]="0"' 00:09:46.366 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mntmt]=0 00:09:46.366 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.366 
04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.366 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.366 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mxtmt]="0"' 00:09:46.366 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0 00:09:46.366 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.366 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.366 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.366 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sanicap]="0"' 00:09:46.366 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sanicap]=0 00:09:46.366 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.366 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.366 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.366 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmminds]="0"' 00:09:46.366 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmminds]=0 00:09:46.366 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.366 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.366 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.366 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmmaxd]="0"' 00:09:46.366 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0 00:09:46.366 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.366 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.366 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.366 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nsetidmax]="0"' 00:09:46.366 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0 00:09:46.366 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.366 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.366 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.366 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[endgidmax]="0"' 00:09:46.366 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0 00:09:46.366 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.366 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.366 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.366 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anatt]="0"' 00:09:46.366 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anatt]=0 00:09:46.366 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.366 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.366 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.366 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anacap]="0"' 00:09:46.366 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anacap]=0 00:09:46.366 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.366 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.366 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.366 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anagrpmax]="0"' 00:09:46.366 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0 00:09:46.366 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.366 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.366 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.366 04:57:24 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme1[nanagrpid]="0"' 00:09:46.366 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0 00:09:46.366 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.366 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.366 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.366 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[pels]="0"' 00:09:46.366 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[pels]=0 00:09:46.366 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.366 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.366 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.366 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[domainid]="0"' 00:09:46.366 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[domainid]=0 00:09:46.366 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.366 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.366 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.366 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[megcap]="0"' 00:09:46.367 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[megcap]=0 00:09:46.367 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.367 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.367 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:46.367 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sqes]="0x66"' 00:09:46.367 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66 00:09:46.367 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.367 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.367 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:46.367 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cqes]="0x44"' 00:09:46.367 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44 00:09:46.367 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.367 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.367 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.367 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxcmd]="0"' 00:09:46.367 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0 00:09:46.367 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.367 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.367 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:46.367 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nn]="256"' 00:09:46.367 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nn]=256 00:09:46.367 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.367 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.367 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:46.367 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oncs]="0x15d"' 00:09:46.367 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d 00:09:46.367 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.367 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.367 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.367 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fuses]="0"' 00:09:46.367 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fuses]=0 00:09:46.367 04:57:24 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:09:46.367 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.367 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.367 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fna]="0"' 00:09:46.367 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fna]=0 00:09:46.367 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.367 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.367 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:46.367 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vwc]="0x7"' 00:09:46.367 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7 00:09:46.367 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.367 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.367 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.367 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[awun]="0"' 00:09:46.367 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[awun]=0 00:09:46.367 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.367 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.367 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.367 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[awupf]="0"' 00:09:46.367 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[awupf]=0 00:09:46.367 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.367 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.367 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.367 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[icsvscc]="0"' 00:09:46.367 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0 00:09:46.367 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.367 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.367 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.367 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nwpc]="0"' 00:09:46.367 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nwpc]=0 00:09:46.367 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.367 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.367 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.367 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[acwu]="0"' 00:09:46.367 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[acwu]=0 00:09:46.367 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.367 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.367 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:46.367 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ocfs]="0x3"' 00:09:46.367 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3 00:09:46.367 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.367 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.367 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:46.367 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sgls]="0x1"' 00:09:46.367 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1 00:09:46.367 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.367 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.367 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.367 04:57:24 
nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mnan]="0"' 00:09:46.367 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mnan]=0 00:09:46.367 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.367 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.367 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.367 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxdna]="0"' 00:09:46.367 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxdna]=0 00:09:46.367 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.367 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.367 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.367 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxcna]="0"' 00:09:46.367 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxcna]=0 00:09:46.367 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.367 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.367 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:09:46.367 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[subnqn]="nqn.2019-08.org.qemu:12340"' 00:09:46.367 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12340 00:09:46.367 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.367 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.367 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.367 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ioccsz]="0"' 00:09:46.367 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0 00:09:46.367 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.367 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.367 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.367 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[iorcsz]="0"' 00:09:46.367 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0 00:09:46.367 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.367 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.367 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.367 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[icdoff]="0"' 00:09:46.367 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[icdoff]=0 00:09:46.367 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.367 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.367 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.367 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fcatt]="0"' 00:09:46.367 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fcatt]=0 00:09:46.367 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.367 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.367 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.367 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[msdbd]="0"' 00:09:46.367 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[msdbd]=0 00:09:46.367 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.367 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.367 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.367 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ofcs]="0"' 00:09:46.367 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # 
nvme1[ofcs]=0 00:09:46.367 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.367 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.367 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:46.367 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:46.367 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:46.367 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.367 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.367 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:46.367 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:46.367 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:46.367 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.367 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.367 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:46.367 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[active_power_workload]="-"' 00:09:46.367 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=- 00:09:46.367 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.367 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.367 04:57:24 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns 00:09:46.367 04:57:24 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:09:46.367 04:57:24 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]] 00:09:46.367 04:57:24 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme1n1 00:09:46.367 04:57:24 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1 00:09:46.367 04:57:24 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme1n1 reg val 00:09:46.367 04:57:24 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:46.367 04:57:24 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()' 00:09:46.367 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.367 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.368 04:57:24 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1 00:09:46.368 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:46.368 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.368 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.368 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:46.368 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsze]="0x17a17a"' 00:09:46.368 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsze]=0x17a17a 00:09:46.368 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.368 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.368 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:46.368 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[ncap]="0x17a17a"' 00:09:46.368 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[ncap]=0x17a17a 00:09:46.368 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.368 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.368 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 
0x17a17a ]] 00:09:46.368 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nuse]="0x17a17a"' 00:09:46.368 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nuse]=0x17a17a 00:09:46.368 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.368 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.368 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:46.368 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsfeat]="0x14"' 00:09:46.368 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsfeat]=0x14 00:09:46.368 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.368 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.368 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:46.368 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nlbaf]="7"' 00:09:46.368 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nlbaf]=7 00:09:46.368 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.368 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.368 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:46.368 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[flbas]="0x7"' 00:09:46.368 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[flbas]=0x7 00:09:46.368 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.368 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.368 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:46.368 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mc]="0x3"' 00:09:46.368 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mc]=0x3 00:09:46.368 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.368 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.368 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:46.368 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dpc]="0x1f"' 00:09:46.368 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dpc]=0x1f 00:09:46.368 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.368 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.368 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.368 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dps]="0"' 00:09:46.368 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dps]=0 00:09:46.368 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.368 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.368 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.368 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nmic]="0"' 00:09:46.368 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nmic]=0 00:09:46.368 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.368 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.368 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.368 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[rescap]="0"' 00:09:46.368 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[rescap]=0 00:09:46.368 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.368 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.368 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.368 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[fpi]="0"' 00:09:46.368 04:57:24 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme1n1[fpi]=0 00:09:46.368 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.368 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.368 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:46.368 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dlfeat]="1"' 00:09:46.368 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dlfeat]=1 00:09:46.368 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.368 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.368 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.368 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawun]="0"' 00:09:46.368 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nawun]=0 00:09:46.368 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.368 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.368 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.368 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawupf]="0"' 00:09:46.368 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nawupf]=0 00:09:46.368 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.368 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.368 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.368 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nacwu]="0"' 00:09:46.368 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nacwu]=0 00:09:46.368 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.368 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.368 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.368 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabsn]="0"' 00:09:46.368 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabsn]=0 00:09:46.368 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.368 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.368 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.368 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabo]="0"' 00:09:46.368 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabo]=0 00:09:46.368 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.368 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.368 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.368 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabspf]="0"' 00:09:46.368 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabspf]=0 00:09:46.368 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.368 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.368 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.368 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[noiob]="0"' 00:09:46.368 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[noiob]=0 00:09:46.368 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.368 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.368 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.368 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmcap]="0"' 00:09:46.368 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nvmcap]=0 00:09:46.368 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.368 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- 
# read -r reg val 00:09:46.368 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.368 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwg]="0"' 00:09:46.368 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npwg]=0 00:09:46.368 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.368 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.368 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.368 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwa]="0"' 00:09:46.368 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npwa]=0 00:09:46.368 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.368 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.368 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.368 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npdg]="0"' 00:09:46.368 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npdg]=0 00:09:46.368 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.368 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.368 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.368 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npda]="0"' 00:09:46.368 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npda]=0 00:09:46.368 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.368 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.368 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.368 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nows]="0"' 00:09:46.368 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nows]=0 00:09:46.368 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.368 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.368 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:46.368 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mssrl]="128"' 00:09:46.368 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mssrl]=128 00:09:46.368 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.368 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.368 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:46.368 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mcl]="128"' 00:09:46.368 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mcl]=128 00:09:46.368 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.368 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.368 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:46.368 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[msrc]="127"' 00:09:46.368 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[msrc]=127 00:09:46.368 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.368 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.368 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.368 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nulbaf]="0"' 00:09:46.368 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nulbaf]=0 00:09:46.368 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.368 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.368 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.368 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme1n1[anagrpid]="0"' 00:09:46.369 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[anagrpid]=0 00:09:46.369 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.369 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.369 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.369 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsattr]="0"' 00:09:46.369 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsattr]=0 00:09:46.369 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.369 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.369 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.369 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmsetid]="0"' 00:09:46.369 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nvmsetid]=0 00:09:46.369 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.369 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.369 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.369 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[endgid]="0"' 00:09:46.369 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[endgid]=0 00:09:46.369 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.369 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.369 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:46.369 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nguid]="00000000000000000000000000000000"' 00:09:46.369 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nguid]=00000000000000000000000000000000 00:09:46.369 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.369 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.369 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:46.369 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[eui64]="0000000000000000"' 00:09:46.369 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[eui64]=0000000000000000 00:09:46.369 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.369 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.369 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:46.369 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:46.369 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:46.369 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.369 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.369 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:46.369 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:46.369 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:46.369 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.369 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.369 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:46.369 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:46.369 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:46.369 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.369 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r 
reg val 00:09:46.369 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:46.369 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:46.369 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:46.369 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.369 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.369 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:09:46.369 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:09:46.369 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:09:46.369 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.369 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.369 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:46.369 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:46.369 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:46.369 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.369 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.369 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:46.369 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:46.369 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:46.369 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.369 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.369 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:09:46.369 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:09:46.369 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:09:46.369 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.369 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.369 04:57:24 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1 00:09:46.369 04:57:24 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1 00:09:46.369 04:57:24 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns 00:09:46.369 04:57:24 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:10.0 00:09:46.369 04:57:24 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1 00:09:46.369 04:57:24 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:46.369 04:57:24 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]] 00:09:46.369 04:57:24 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:12.0 00:09:46.369 04:57:24 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:12.0 00:09:46.369 04:57:24 nvme_fdp -- scripts/common.sh@18 -- # local i 00:09:46.369 04:57:24 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:09:46.369 04:57:24 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:46.369 04:57:24 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:09:46.369 04:57:24 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme2 00:09:46.369 04:57:24 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2 00:09:46.369 04:57:24 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2 reg val 00:09:46.369 
04:57:24 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:46.369 04:57:24 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2=()' 00:09:46.369 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.369 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.369 04:57:24 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2 00:09:46.369 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:46.369 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.369 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.369 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:46.369 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vid]="0x1b36"' 00:09:46.369 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vid]=0x1b36 00:09:46.369 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.369 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.369 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:46.369 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ssvid]="0x1af4"' 00:09:46.369 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ssvid]=0x1af4 00:09:46.369 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.369 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.369 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12342 ]] 00:09:46.369 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sn]="12342 "' 00:09:46.369 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sn]='12342 ' 00:09:46.369 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.369 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.369 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:46.369 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mn]="QEMU NVMe Ctrl "' 00:09:46.369 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mn]='QEMU NVMe Ctrl ' 00:09:46.369 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.369 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.369 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:46.369 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fr]="8.0.0 "' 00:09:46.369 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fr]='8.0.0 ' 00:09:46.369 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.369 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.369 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:46.369 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rab]="6"' 00:09:46.369 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rab]=6 00:09:46.369 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.369 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.369 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:46.369 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ieee]="525400"' 00:09:46.369 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ieee]=525400 00:09:46.369 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.369 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.370 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.370 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cmic]="0"' 00:09:46.370 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cmic]=0 00:09:46.370 04:57:24 nvme_fdp 
-- nvme/functions.sh@21 -- # IFS=: 00:09:46.370 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.370 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:46.370 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mdts]="7"' 00:09:46.370 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mdts]=7 00:09:46.370 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.370 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.370 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.370 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cntlid]="0"' 00:09:46.370 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cntlid]=0 00:09:46.370 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.370 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.370 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:46.370 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ver]="0x10400"' 00:09:46.370 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ver]=0x10400 00:09:46.370 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.370 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.370 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.370 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3r]="0"' 00:09:46.370 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rtd3r]=0 00:09:46.370 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.370 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.370 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.370 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3e]="0"' 00:09:46.370 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rtd3e]=0 00:09:46.370 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.370 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.370 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:46.370 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oaes]="0x100"' 00:09:46.370 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oaes]=0x100 00:09:46.370 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.370 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.370 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:09:46.370 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ctratt]="0x8000"' 00:09:46.370 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ctratt]=0x8000 00:09:46.370 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.370 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.370 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.370 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rrls]="0"' 00:09:46.370 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rrls]=0 00:09:46.370 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.370 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.370 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:46.370 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cntrltype]="1"' 00:09:46.370 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cntrltype]=1 00:09:46.370 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.370 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.370 04:57:24 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:46.370 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:46.370 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fguid]=00000000-0000-0000-0000-000000000000 00:09:46.370 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.370 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.370 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.370 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt1]="0"' 00:09:46.370 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt1]=0 00:09:46.370 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.370 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.370 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.370 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt2]="0"' 00:09:46.370 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt2]=0 00:09:46.370 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.370 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.370 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.370 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt3]="0"' 00:09:46.370 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt3]=0 00:09:46.370 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.370 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.370 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.370 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nvmsr]="0"' 00:09:46.370 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nvmsr]=0 00:09:46.370 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.370 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.370 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.370 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vwci]="0"' 00:09:46.370 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vwci]=0 00:09:46.370 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.370 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.370 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.370 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mec]="0"' 00:09:46.370 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mec]=0 00:09:46.370 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.370 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.370 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:46.370 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oacs]="0x12a"' 00:09:46.370 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oacs]=0x12a 00:09:46.370 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.370 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.370 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:46.370 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[acl]="3"' 00:09:46.370 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[acl]=3 00:09:46.370 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.370 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.370 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:46.370 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme2[aerl]="3"' 00:09:46.370 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[aerl]=3 00:09:46.370 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.370 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.370 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:46.370 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[frmw]="0x3"' 00:09:46.370 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[frmw]=0x3 00:09:46.370 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.370 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.370 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:46.370 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[lpa]="0x7"' 00:09:46.370 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[lpa]=0x7 00:09:46.370 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.370 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.370 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.370 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[elpe]="0"' 00:09:46.370 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[elpe]=0 00:09:46.370 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.370 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.370 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.370 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[npss]="0"' 00:09:46.370 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[npss]=0 00:09:46.370 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.370 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.370 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.370 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[avscc]="0"' 00:09:46.370 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[avscc]=0 00:09:46.370 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.370 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.370 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.370 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[apsta]="0"' 00:09:46.370 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[apsta]=0 00:09:46.370 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.370 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.370 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:46.370 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[wctemp]="343"' 00:09:46.370 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[wctemp]=343 00:09:46.370 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.370 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.370 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:46.370 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cctemp]="373"' 00:09:46.370 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cctemp]=373 00:09:46.370 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.370 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.370 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.370 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mtfa]="0"' 00:09:46.370 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mtfa]=0 00:09:46.370 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.370 04:57:24 nvme_fdp 
-- nvme/functions.sh@21 -- # read -r reg val 00:09:46.370 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.370 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmpre]="0"' 00:09:46.370 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmpre]=0 00:09:46.370 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.370 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.370 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.370 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmmin]="0"' 00:09:46.370 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmmin]=0 00:09:46.370 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.370 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.370 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.371 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[tnvmcap]="0"' 00:09:46.371 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[tnvmcap]=0 00:09:46.371 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.371 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.371 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.371 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[unvmcap]="0"' 00:09:46.371 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[unvmcap]=0 00:09:46.371 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.371 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.371 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.371 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rpmbs]="0"' 00:09:46.371 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rpmbs]=0 00:09:46.371 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.371 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.371 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.371 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[edstt]="0"' 00:09:46.371 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[edstt]=0 00:09:46.371 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.371 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.371 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.371 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[dsto]="0"' 00:09:46.371 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[dsto]=0 00:09:46.371 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.371 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.371 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.371 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fwug]="0"' 00:09:46.371 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fwug]=0 00:09:46.371 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.371 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.371 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.371 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[kas]="0"' 00:09:46.371 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[kas]=0 00:09:46.371 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.371 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.371 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.371 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hctma]="0"' 
00:09:46.371 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hctma]=0 00:09:46.371 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.371 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.371 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.371 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mntmt]="0"' 00:09:46.371 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mntmt]=0 00:09:46.371 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.371 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.371 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.371 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mxtmt]="0"' 00:09:46.371 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mxtmt]=0 00:09:46.371 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.371 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.371 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.371 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sanicap]="0"' 00:09:46.371 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sanicap]=0 00:09:46.371 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.371 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.371 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.371 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmminds]="0"' 00:09:46.371 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmminds]=0 00:09:46.371 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.371 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.371 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.371 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmmaxd]="0"' 00:09:46.371 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmmaxd]=0 00:09:46.371 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.371 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.371 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.371 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nsetidmax]="0"' 00:09:46.371 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nsetidmax]=0 00:09:46.371 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.371 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.371 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.371 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[endgidmax]="0"' 00:09:46.371 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[endgidmax]=0 00:09:46.371 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.371 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.371 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.371 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anatt]="0"' 00:09:46.371 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anatt]=0 00:09:46.371 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.371 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.371 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.371 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anacap]="0"' 00:09:46.371 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anacap]=0 00:09:46.371 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.371 04:57:24 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:46.371 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.371 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anagrpmax]="0"' 00:09:46.371 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anagrpmax]=0 00:09:46.371 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.371 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.371 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.371 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nanagrpid]="0"' 00:09:46.371 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nanagrpid]=0 00:09:46.371 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.371 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.371 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.371 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[pels]="0"' 00:09:46.371 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[pels]=0 00:09:46.371 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.371 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.371 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.371 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[domainid]="0"' 00:09:46.371 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[domainid]=0 00:09:46.371 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.371 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.371 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.371 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[megcap]="0"' 00:09:46.371 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[megcap]=0 00:09:46.371 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.371 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.371 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:46.371 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sqes]="0x66"' 00:09:46.371 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sqes]=0x66 00:09:46.371 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.371 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.371 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:46.371 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cqes]="0x44"' 00:09:46.371 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cqes]=0x44 00:09:46.371 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.371 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.371 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.371 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxcmd]="0"' 00:09:46.371 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxcmd]=0 00:09:46.371 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.371 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.371 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:46.371 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nn]="256"' 00:09:46.371 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nn]=256 00:09:46.371 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.371 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.371 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:46.371 04:57:24 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme2[oncs]="0x15d"' 00:09:46.371 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oncs]=0x15d 00:09:46.371 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.371 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.371 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.371 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fuses]="0"' 00:09:46.371 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fuses]=0 00:09:46.371 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.371 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.371 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.371 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fna]="0"' 00:09:46.371 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fna]=0 00:09:46.371 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.371 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.371 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:46.371 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vwc]="0x7"' 00:09:46.371 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vwc]=0x7 00:09:46.371 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.371 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.371 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.371 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[awun]="0"' 00:09:46.371 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[awun]=0 00:09:46.371 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.371 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.371 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.371 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[awupf]="0"' 00:09:46.371 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[awupf]=0 00:09:46.371 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.372 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.372 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.372 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[icsvscc]="0"' 00:09:46.372 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[icsvscc]=0 00:09:46.372 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.372 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.372 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.372 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nwpc]="0"' 00:09:46.372 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nwpc]=0 00:09:46.372 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.372 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.372 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.372 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[acwu]="0"' 00:09:46.372 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[acwu]=0 00:09:46.372 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.372 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.372 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:46.372 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ocfs]="0x3"' 00:09:46.372 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ocfs]=0x3 00:09:46.372 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 
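The trace above is the nvme/functions.sh nvme_get helper walking the output of `nvme id-ctrl /dev/nvme2`: with IFS set to `:`, each `reg val` pair is read and eval'd into the nvme2 associative array (functions.sh@21-23). A minimal standalone sketch of that same pattern follows; it is hypothetical, not the verbatim SPDK helper, and assumes nvme-cli's `nvme` binary is on PATH and /dev/nvme2 exists:

#!/usr/bin/env bash
# Sketch of the parse loop seen in this trace: split nvme-cli's
# "field : value" lines on the first colon and store them in a
# bash associative array keyed by field name.
declare -A ctrl
while IFS=: read -r reg val; do
    [[ -n $val ]] || continue         # skip lines without a "reg : val" pair
    # trim the whitespace nvme-cli pads around names and values
    reg="${reg#"${reg%%[![:space:]]*}"}"; reg="${reg%"${reg##*[![:space:]]}"}"
    val="${val#"${val%%[![:space:]]*}"}"; val="${val%"${val##*[![:space:]]}"}"
    ctrl[$reg]=$val
done < <(nvme id-ctrl /dev/nvme2)     # assumed device node, as in this run
echo "oacs=${ctrl[oacs]} subnqn=${ctrl[subnqn]}"

Note that `read -r reg val` assigns everything after the first colon to val, which is why subnqn values such as nqn.2019-08.org.qemu:12342 keep their embedded colon intact in the array.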
00:09:46.372 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.372 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:46.372 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sgls]="0x1"' 00:09:46.372 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sgls]=0x1 00:09:46.372 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.372 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.372 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.372 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mnan]="0"' 00:09:46.372 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mnan]=0 00:09:46.372 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.372 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.372 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.372 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxdna]="0"' 00:09:46.372 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxdna]=0 00:09:46.372 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.372 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.372 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.372 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxcna]="0"' 00:09:46.372 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxcna]=0 00:09:46.372 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.372 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.372 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12342 ]] 00:09:46.372 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[subnqn]="nqn.2019-08.org.qemu:12342"' 00:09:46.372 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12342 00:09:46.372 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.372 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.372 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.372 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ioccsz]="0"' 00:09:46.372 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0 00:09:46.372 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.372 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.372 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.372 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[iorcsz]="0"' 00:09:46.372 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[iorcsz]=0 00:09:46.372 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.372 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.372 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.372 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[icdoff]="0"' 00:09:46.372 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[icdoff]=0 00:09:46.372 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.372 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.372 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.372 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fcatt]="0"' 00:09:46.372 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fcatt]=0 00:09:46.372 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.372 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.372 04:57:24 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.372 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[msdbd]="0"' 00:09:46.372 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[msdbd]=0 00:09:46.372 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.372 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.372 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.372 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ofcs]="0"' 00:09:46.372 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ofcs]=0 00:09:46.372 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.372 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.372 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:46.372 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:46.372 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:46.372 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.372 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.372 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:46.372 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:46.372 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:46.372 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.372 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.372 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:46.372 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[active_power_workload]="-"' 00:09:46.372 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=- 00:09:46.372 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.372 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.372 04:57:24 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns 00:09:46.372 04:57:24 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:09:46.372 04:57:24 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]] 00:09:46.372 04:57:24 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n1 00:09:46.372 04:57:24 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1 00:09:46.372 04:57:24 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n1 reg val 00:09:46.372 04:57:24 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:46.372 04:57:24 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n1=()' 00:09:46.372 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.372 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.372 04:57:24 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1 00:09:46.372 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:46.372 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.372 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.372 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:46.372 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsze]="0x100000"' 00:09:46.372 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsze]=0x100000 00:09:46.372 04:57:24 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:09:46.372 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.372 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:46.372 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[ncap]="0x100000"' 00:09:46.372 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[ncap]=0x100000 00:09:46.372 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.372 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.372 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:46.372 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nuse]="0x100000"' 00:09:46.372 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nuse]=0x100000 00:09:46.372 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.372 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.372 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:46.372 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsfeat]="0x14"' 00:09:46.372 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsfeat]=0x14 00:09:46.372 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.372 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.372 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:46.372 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nlbaf]="7"' 00:09:46.372 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nlbaf]=7 00:09:46.372 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.372 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.372 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:46.372 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[flbas]="0x4"' 00:09:46.372 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[flbas]=0x4 00:09:46.372 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.372 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.372 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:46.372 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mc]="0x3"' 00:09:46.372 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mc]=0x3 00:09:46.372 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.372 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.372 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:46.372 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dpc]="0x1f"' 00:09:46.372 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dpc]=0x1f 00:09:46.372 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.372 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.372 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.372 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dps]="0"' 00:09:46.372 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dps]=0 00:09:46.372 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.372 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.372 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.373 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nmic]="0"' 00:09:46.373 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nmic]=0 00:09:46.373 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.373 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
00:09:46.373 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.373 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[rescap]="0"' 00:09:46.373 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[rescap]=0 00:09:46.373 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.373 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.373 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.373 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[fpi]="0"' 00:09:46.373 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[fpi]=0 00:09:46.373 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.373 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.373 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:46.373 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dlfeat]="1"' 00:09:46.373 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dlfeat]=1 00:09:46.373 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.373 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.373 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.373 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawun]="0"' 00:09:46.373 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nawun]=0 00:09:46.373 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.373 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.373 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.373 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawupf]="0"' 00:09:46.373 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nawupf]=0 00:09:46.373 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.373 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.373 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.373 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nacwu]="0"' 00:09:46.373 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nacwu]=0 00:09:46.373 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.373 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.373 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.373 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabsn]="0"' 00:09:46.373 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabsn]=0 00:09:46.373 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.373 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.373 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.373 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabo]="0"' 00:09:46.373 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabo]=0 00:09:46.373 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.373 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.373 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.373 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabspf]="0"' 00:09:46.373 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabspf]=0 00:09:46.373 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.373 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.373 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.373 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[noiob]="0"' 
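At functions.sh@53-57 the trace switched from the controller to its namespaces: _ctrl_ns is bound to nvme2_ns, and `for ns in "$ctrl/${ctrl##*/}n"*` walks the controller's sysfs children (nvme2n1, nvme2n2, nvme2n3), running `nvme id-ns` on each node. An illustrative sketch of that walk, under the same sysfs-path assumptions as this run (not the verbatim helper):

#!/usr/bin/env bash
# Sketch of the namespace enumeration visible at functions.sh@54-57:
# every /sys/class/nvme/nvme2/nvme2n* child gets its own id-ns pass.
ctrl=/sys/class/nvme/nvme2
for ns in "$ctrl/${ctrl##*/}n"*; do
    [[ -e $ns ]] || continue          # glob may not match if the ctrl has no namespaces
    ns_dev=${ns##*/}                  # e.g. nvme2n1
    echo "identifying /dev/$ns_dev"
    nvme id-ns "/dev/$ns_dev" | head -n 5
done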
00:09:46.373 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[noiob]=0 00:09:46.373 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.373 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.373 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.373 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmcap]="0"' 00:09:46.373 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nvmcap]=0 00:09:46.373 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.373 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.373 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.373 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwg]="0"' 00:09:46.373 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npwg]=0 00:09:46.373 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.373 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.373 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.373 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwa]="0"' 00:09:46.373 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npwa]=0 00:09:46.373 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.373 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.373 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.373 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npdg]="0"' 00:09:46.373 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npdg]=0 00:09:46.373 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.373 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.373 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.373 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npda]="0"' 00:09:46.373 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npda]=0 00:09:46.373 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.373 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.373 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.373 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nows]="0"' 00:09:46.373 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nows]=0 00:09:46.373 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.373 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.373 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:46.373 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mssrl]="128"' 00:09:46.373 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mssrl]=128 00:09:46.373 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.373 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.373 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:46.373 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mcl]="128"' 00:09:46.373 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mcl]=128 00:09:46.373 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.373 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.373 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:46.373 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[msrc]="127"' 00:09:46.373 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[msrc]=127 00:09:46.373 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.373 04:57:24 
nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.373 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.373 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nulbaf]="0"' 00:09:46.373 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nulbaf]=0 00:09:46.373 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.373 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.373 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.373 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[anagrpid]="0"' 00:09:46.373 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[anagrpid]=0 00:09:46.373 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.373 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.373 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.373 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsattr]="0"' 00:09:46.373 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsattr]=0 00:09:46.373 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.373 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.373 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.373 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmsetid]="0"' 00:09:46.373 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nvmsetid]=0 00:09:46.373 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.373 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.373 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.373 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[endgid]="0"' 00:09:46.373 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[endgid]=0 00:09:46.373 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.373 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.373 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:46.373 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nguid]="00000000000000000000000000000000"' 00:09:46.373 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nguid]=00000000000000000000000000000000 00:09:46.373 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.373 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.373 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:46.373 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[eui64]="0000000000000000"' 00:09:46.373 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[eui64]=0000000000000000 00:09:46.373 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.373 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.373 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:46.373 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:46.373 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:46.373 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.373 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.373 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:46.373 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:46.373 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf1]='ms:8 lbads:9 rp:0 
' 00:09:46.373 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.373 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.373 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:46.373 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:46.373 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:46.373 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.373 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.373 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:46.373 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:46.373 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:46.373 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.373 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.374 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:46.374 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:46.374 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:46.374 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.374 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.374 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:46.374 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:46.374 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:46.374 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.374 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.374 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:46.374 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:46.374 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:46.374 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.374 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.374 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:46.374 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:46.374 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:46.374 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.374 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.374 04:57:24 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1 00:09:46.374 04:57:24 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:09:46.374 04:57:24 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n2 ]] 00:09:46.374 04:57:24 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n2 00:09:46.374 04:57:24 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n2 id-ns /dev/nvme2n2 00:09:46.374 04:57:24 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n2 reg val 00:09:46.374 04:57:24 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:46.374 04:57:24 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n2=()' 00:09:46.374 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.374 04:57:24 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:46.374 04:57:24 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2 00:09:46.374 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:46.374 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.374 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.374 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:46.374 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsze]="0x100000"' 00:09:46.374 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsze]=0x100000 00:09:46.374 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.374 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.374 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:46.374 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[ncap]="0x100000"' 00:09:46.374 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[ncap]=0x100000 00:09:46.374 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.374 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.374 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:46.374 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nuse]="0x100000"' 00:09:46.374 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nuse]=0x100000 00:09:46.374 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.374 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.374 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:46.374 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsfeat]="0x14"' 00:09:46.374 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsfeat]=0x14 00:09:46.374 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.374 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.374 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:46.374 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nlbaf]="7"' 00:09:46.374 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nlbaf]=7 00:09:46.374 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.374 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.374 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:46.374 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[flbas]="0x4"' 00:09:46.374 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[flbas]=0x4 00:09:46.374 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.374 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.374 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:46.374 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mc]="0x3"' 00:09:46.374 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mc]=0x3 00:09:46.374 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.374 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.374 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:46.374 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dpc]="0x1f"' 00:09:46.374 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dpc]=0x1f 00:09:46.374 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.374 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.374 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.374 04:57:24 nvme_fdp 
-- nvme/functions.sh@23 -- # eval 'nvme2n2[dps]="0"' 00:09:46.374 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dps]=0 00:09:46.374 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.374 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.374 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.374 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nmic]="0"' 00:09:46.374 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nmic]=0 00:09:46.374 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.374 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.374 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.374 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[rescap]="0"' 00:09:46.374 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[rescap]=0 00:09:46.374 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.374 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.374 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.374 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[fpi]="0"' 00:09:46.374 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[fpi]=0 00:09:46.374 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.374 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.374 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:46.374 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dlfeat]="1"' 00:09:46.374 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dlfeat]=1 00:09:46.374 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.374 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.374 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.374 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawun]="0"' 00:09:46.374 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nawun]=0 00:09:46.374 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.374 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.374 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.374 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawupf]="0"' 00:09:46.374 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nawupf]=0 00:09:46.374 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.374 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.374 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.374 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nacwu]="0"' 00:09:46.374 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nacwu]=0 00:09:46.374 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.374 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.374 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.374 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabsn]="0"' 00:09:46.374 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabsn]=0 00:09:46.374 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.374 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.374 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.374 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabo]="0"' 00:09:46.374 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabo]=0 00:09:46.374 04:57:24 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:09:46.374 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.374 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.374 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabspf]="0"' 00:09:46.374 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabspf]=0 00:09:46.374 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.374 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.374 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.374 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[noiob]="0"' 00:09:46.374 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[noiob]=0 00:09:46.374 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.375 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.375 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.375 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmcap]="0"' 00:09:46.375 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nvmcap]=0 00:09:46.375 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.375 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.375 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.375 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwg]="0"' 00:09:46.375 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npwg]=0 00:09:46.375 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.375 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.375 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.375 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwa]="0"' 00:09:46.375 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npwa]=0 00:09:46.375 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.375 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.375 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.375 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npdg]="0"' 00:09:46.375 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npdg]=0 00:09:46.375 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.375 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.375 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.375 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npda]="0"' 00:09:46.375 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npda]=0 00:09:46.375 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.375 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.375 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.375 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nows]="0"' 00:09:46.375 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nows]=0 00:09:46.375 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.375 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.375 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:46.375 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mssrl]="128"' 00:09:46.375 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mssrl]=128 00:09:46.375 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.375 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.375 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ 
-n 128 ]] 00:09:46.375 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mcl]="128"' 00:09:46.375 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mcl]=128 00:09:46.375 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.375 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.375 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:46.375 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[msrc]="127"' 00:09:46.375 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[msrc]=127 00:09:46.375 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.375 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.375 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.375 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nulbaf]="0"' 00:09:46.375 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nulbaf]=0 00:09:46.375 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.375 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.375 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.375 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[anagrpid]="0"' 00:09:46.375 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[anagrpid]=0 00:09:46.375 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.375 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.375 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.375 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsattr]="0"' 00:09:46.375 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsattr]=0 00:09:46.375 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.375 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.375 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.375 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmsetid]="0"' 00:09:46.375 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nvmsetid]=0 00:09:46.375 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.375 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.375 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.375 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[endgid]="0"' 00:09:46.375 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[endgid]=0 00:09:46.375 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.375 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.375 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:46.375 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nguid]="00000000000000000000000000000000"' 00:09:46.375 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nguid]=00000000000000000000000000000000 00:09:46.375 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.375 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.375 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:46.375 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[eui64]="0000000000000000"' 00:09:46.375 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[eui64]=0000000000000000 00:09:46.375 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.375 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.375 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 
lbads:9 rp:0 ]] 00:09:46.375 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:46.375 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:46.375 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.375 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.375 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:46.375 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:46.375 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:46.375 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.375 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.375 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:46.375 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:46.375 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:46.375 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.375 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.375 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:46.375 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:46.375 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:46.375 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.375 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.375 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:46.375 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:46.375 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:46.375 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.375 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.375 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:46.375 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:46.375 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:46.375 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.375 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.375 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:46.375 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:46.375 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:46.375 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.375 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.375 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:46.375 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:46.375 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:46.375 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.375 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.375 04:57:24 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n2 00:09:46.375 04:57:24 nvme_fdp -- nvme/functions.sh@54 -- # for 
ns in "$ctrl/${ctrl##*/}n"* 00:09:46.375 04:57:24 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n3 ]] 00:09:46.375 04:57:24 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n3 00:09:46.375 04:57:24 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n3 id-ns /dev/nvme2n3 00:09:46.375 04:57:24 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n3 reg val 00:09:46.375 04:57:24 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:46.375 04:57:24 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n3=()' 00:09:46.375 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.375 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.375 04:57:24 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3 00:09:46.375 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:46.375 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.375 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.375 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:46.375 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsze]="0x100000"' 00:09:46.375 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsze]=0x100000 00:09:46.375 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.375 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.375 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:46.375 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[ncap]="0x100000"' 00:09:46.375 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[ncap]=0x100000 00:09:46.375 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.375 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.375 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:46.375 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nuse]="0x100000"' 00:09:46.375 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nuse]=0x100000 00:09:46.376 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.376 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.376 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:46.376 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsfeat]="0x14"' 00:09:46.376 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsfeat]=0x14 00:09:46.376 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.376 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.376 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:46.376 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nlbaf]="7"' 00:09:46.376 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nlbaf]=7 00:09:46.376 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.376 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.376 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:46.376 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[flbas]="0x4"' 00:09:46.376 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[flbas]=0x4 00:09:46.376 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.376 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.376 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:46.376 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mc]="0x3"' 00:09:46.376 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # 
nvme2n3[mc]=0x3 00:09:46.376 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.376 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.376 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:46.376 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dpc]="0x1f"' 00:09:46.376 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dpc]=0x1f 00:09:46.376 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.376 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.376 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.376 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dps]="0"' 00:09:46.376 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dps]=0 00:09:46.376 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.376 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.376 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.376 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nmic]="0"' 00:09:46.376 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nmic]=0 00:09:46.376 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.376 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.376 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.376 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[rescap]="0"' 00:09:46.376 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[rescap]=0 00:09:46.376 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.376 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.376 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.376 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[fpi]="0"' 00:09:46.376 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[fpi]=0 00:09:46.376 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.376 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.376 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:46.376 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dlfeat]="1"' 00:09:46.376 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dlfeat]=1 00:09:46.376 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.376 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.376 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.376 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawun]="0"' 00:09:46.376 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nawun]=0 00:09:46.376 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.376 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.376 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.376 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawupf]="0"' 00:09:46.376 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nawupf]=0 00:09:46.376 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.376 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.376 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.376 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nacwu]="0"' 00:09:46.376 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nacwu]=0 00:09:46.376 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.376 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
00:09:46.376 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.376 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabsn]="0"' 00:09:46.376 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabsn]=0 00:09:46.376 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.376 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.376 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.376 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabo]="0"' 00:09:46.376 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabo]=0 00:09:46.376 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.376 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.376 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.376 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabspf]="0"' 00:09:46.376 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabspf]=0 00:09:46.376 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.376 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.376 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.376 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[noiob]="0"' 00:09:46.376 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[noiob]=0 00:09:46.376 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.376 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.376 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.376 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmcap]="0"' 00:09:46.376 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nvmcap]=0 00:09:46.376 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.376 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.376 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.376 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwg]="0"' 00:09:46.376 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npwg]=0 00:09:46.376 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.376 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.376 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.376 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwa]="0"' 00:09:46.376 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npwa]=0 00:09:46.376 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.376 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.376 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.376 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npdg]="0"' 00:09:46.376 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npdg]=0 00:09:46.376 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.376 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.376 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.376 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npda]="0"' 00:09:46.376 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npda]=0 00:09:46.376 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.376 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.376 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.376 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nows]="0"' 00:09:46.376 
04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nows]=0 00:09:46.376 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.376 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.376 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:46.376 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mssrl]="128"' 00:09:46.376 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mssrl]=128 00:09:46.376 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.376 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.376 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:46.376 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mcl]="128"' 00:09:46.376 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mcl]=128 00:09:46.376 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.376 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.376 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:46.376 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[msrc]="127"' 00:09:46.376 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[msrc]=127 00:09:46.376 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.376 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.376 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.376 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nulbaf]="0"' 00:09:46.376 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nulbaf]=0 00:09:46.376 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.376 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.376 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.376 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[anagrpid]="0"' 00:09:46.376 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[anagrpid]=0 00:09:46.376 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.376 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.376 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.376 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsattr]="0"' 00:09:46.376 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsattr]=0 00:09:46.376 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.376 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.376 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.376 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmsetid]="0"' 00:09:46.376 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nvmsetid]=0 00:09:46.376 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.376 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.376 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.376 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[endgid]="0"' 00:09:46.376 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[endgid]=0 00:09:46.376 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.376 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.377 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:46.377 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nguid]="00000000000000000000000000000000"' 00:09:46.377 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # 
nvme2n3[nguid]=00000000000000000000000000000000 00:09:46.377 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.377 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.377 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:46.377 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[eui64]="0000000000000000"' 00:09:46.377 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[eui64]=0000000000000000 00:09:46.377 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.377 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.377 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:46.377 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:46.377 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:46.377 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.377 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.377 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:46.377 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:46.377 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:46.377 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.377 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.377 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:46.377 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:46.377 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:46.377 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.377 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.377 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:46.377 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:46.377 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:46.377 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.377 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.377 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:46.377 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:46.377 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:46.377 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.377 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.377 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:46.377 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:46.377 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:46.377 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.377 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.377 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:46.377 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:46.377 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:46.377 04:57:24 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:09:46.377 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.377 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:46.377 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:46.377 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:46.377 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.377 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.377 04:57:24 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n3 00:09:46.377 04:57:24 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2 00:09:46.377 04:57:24 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns 00:09:46.377 04:57:24 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:12.0 00:09:46.377 04:57:24 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2 00:09:46.377 04:57:24 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:46.377 04:57:24 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]] 00:09:46.377 04:57:24 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:13.0 00:09:46.377 04:57:24 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:13.0 00:09:46.377 04:57:24 nvme_fdp -- scripts/common.sh@18 -- # local i 00:09:46.377 04:57:24 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:09:46.377 04:57:24 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:46.377 04:57:24 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:09:46.377 04:57:24 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme3 00:09:46.377 04:57:24 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3 00:09:46.377 04:57:24 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme3 reg val 00:09:46.377 04:57:24 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:46.377 04:57:24 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme3=()' 00:09:46.377 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.377 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.377 04:57:24 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3 00:09:46.377 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:46.377 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.377 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.377 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:46.377 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vid]="0x1b36"' 00:09:46.377 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36 00:09:46.377 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.377 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.377 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:46.377 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"' 00:09:46.377 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4 00:09:46.377 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.377 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.377 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12343 ]] 00:09:46.377 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sn]="12343 "' 00:09:46.377 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sn]='12343 ' 00:09:46.377 04:57:24 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:09:46.377 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.377 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:46.377 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl "' 00:09:46.377 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl ' 00:09:46.377 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.377 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.377 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:46.377 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0 "' 00:09:46.377 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 ' 00:09:46.377 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.377 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.377 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:46.377 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"' 00:09:46.377 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rab]=6 00:09:46.377 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.377 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.377 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:46.377 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ieee]="525400"' 00:09:46.377 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ieee]=525400 00:09:46.377 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.377 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.377 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x2 ]] 00:09:46.377 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0x2"' 00:09:46.377 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cmic]=0x2 00:09:46.377 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.377 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.377 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:46.377 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"' 00:09:46.377 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mdts]=7 00:09:46.377 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.377 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.377 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.377 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"' 00:09:46.377 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cntlid]=0 00:09:46.377 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.377 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.377 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:46.377 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"' 00:09:46.377 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400 00:09:46.377 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.377 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.377 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.377 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3r]="0"' 00:09:46.377 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0 00:09:46.377 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.377 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.377 04:57:24 
nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.377 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"' 00:09:46.377 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0 00:09:46.377 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.377 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.377 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:46.377 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"' 00:09:46.377 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oaes]=0x100 00:09:46.377 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.377 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.377 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x88010 ]] 00:09:46.377 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x88010"' 00:09:46.377 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x88010 00:09:46.378 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.378 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.378 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.378 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"' 00:09:46.378 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rrls]=0 00:09:46.378 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.378 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.378 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:46.378 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"' 00:09:46.378 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1 00:09:46.378 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.378 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.378 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:46.378 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:46.378 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 00:09:46.378 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.378 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.378 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.378 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"' 00:09:46.378 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt1]=0 00:09:46.378 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.378 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.378 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.378 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt2]="0"' 00:09:46.378 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt2]=0 00:09:46.378 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.378 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.378 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.378 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt3]="0"' 00:09:46.378 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt3]=0 00:09:46.378 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.378 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.378 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.378 
04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nvmsr]="0"' 00:09:46.378 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0 00:09:46.378 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.378 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.378 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.378 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vwci]="0"' 00:09:46.378 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vwci]=0 00:09:46.378 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.378 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.378 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.378 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mec]="0"' 00:09:46.378 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mec]=0 00:09:46.378 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.378 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.378 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:46.378 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oacs]="0x12a"' 00:09:46.378 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a 00:09:46.378 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.378 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.378 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:46.378 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[acl]="3"' 00:09:46.378 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[acl]=3 00:09:46.378 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.378 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.378 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:46.378 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[aerl]="3"' 00:09:46.378 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[aerl]=3 00:09:46.378 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.378 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.378 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:46.378 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[frmw]="0x3"' 00:09:46.378 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[frmw]=0x3 00:09:46.378 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.378 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.378 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:46.378 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"' 00:09:46.378 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7 00:09:46.378 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.378 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.378 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.378 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"' 00:09:46.378 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[elpe]=0 00:09:46.378 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.378 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.378 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.378 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[npss]="0"' 00:09:46.378 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[npss]=0 00:09:46.378 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # 
IFS=: 00:09:46.378 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.378 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.378 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"' 00:09:46.378 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[avscc]=0 00:09:46.378 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.378 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.378 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.378 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"' 00:09:46.378 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[apsta]=0 00:09:46.378 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.378 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.378 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:46.378 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"' 00:09:46.378 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 00:09:46.378 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.378 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.378 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:46.378 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"' 00:09:46.378 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cctemp]=373 00:09:46.378 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.378 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.378 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.378 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"' 00:09:46.378 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 00:09:46.378 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.378 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.378 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.378 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"' 00:09:46.378 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmpre]=0 00:09:46.378 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.378 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.378 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.378 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmmin]="0"' 00:09:46.378 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 00:09:46.378 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.378 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.378 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.378 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[tnvmcap]="0"' 00:09:46.378 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 00:09:46.378 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.378 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.378 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.378 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 00:09:46.378 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 00:09:46.378 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.378 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.378 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.378 04:57:24 nvme_fdp 
-- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:09:46.378 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:09:46.378 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.378 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.378 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.378 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:09:46.378 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:09:46.378 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.378 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.378 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.378 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:09:46.378 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[dsto]=0 00:09:46.378 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.378 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.378 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.378 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:09:46.378 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:09:46.378 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.378 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.378 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.378 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:09:46.378 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:09:46.378 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.378 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.378 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.378 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:09:46.378 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:09:46.378 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.378 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.379 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.379 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:09:46.379 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:09:46.379 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.379 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.379 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.379 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:09:46.379 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:09:46.379 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.379 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.379 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.379 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:09:46.379 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:09:46.379 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.379 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.379 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.379 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmminds]="0"' 00:09:46.379 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:09:46.379 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.379 
04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.379 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.379 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:09:46.379 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:09:46.379 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.379 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.379 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.379 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:09:46.379 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 00:09:46.379 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.379 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.379 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:46.379 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="1"' 00:09:46.379 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[endgidmax]=1 00:09:46.379 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.379 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.379 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.379 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:09:46.379 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:09:46.379 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.379 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.379 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.379 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:09:46.379 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:09:46.379 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.379 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.379 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.379 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:09:46.379 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:09:46.379 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.379 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.379 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.379 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nanagrpid]="0"' 00:09:46.379 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:09:46.379 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.379 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.379 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.379 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:09:46.379 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:09:46.379 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.379 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.379 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.379 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:09:46.379 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:09:46.379 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.379 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.379 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.379 04:57:24 nvme_fdp 
-- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 00:09:46.379 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:09:46.379 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.379 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.379 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:46.379 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:09:46.379 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:09:46.379 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.379 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.379 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:46.379 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:09:46.379 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:09:46.379 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.379 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.379 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.379 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:09:46.379 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:09:46.379 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.379 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.379 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:46.379 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:09:46.379 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:09:46.379 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.379 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.379 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:46.379 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:09:46.379 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:09:46.379 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.379 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.379 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.379 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:09:46.379 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:09:46.379 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.379 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.379 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.379 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:09:46.379 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:09:46.379 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.379 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.379 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:46.379 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:09:46.379 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:09:46.379 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.379 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.379 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.379 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:09:46.379 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:09:46.379 04:57:24 nvme_fdp -- nvme/functions.sh@21 
-- # IFS=: 00:09:46.379 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.379 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.379 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:09:46.379 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:09:46.379 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.379 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.379 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.379 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:09:46.379 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[icsvscc]=0 00:09:46.379 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.379 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.379 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.379 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:09:46.379 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:09:46.379 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.379 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.379 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.379 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:09:46.379 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:09:46.379 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.379 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.379 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:46.379 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:09:46.380 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:09:46.380 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.380 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.380 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:46.380 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:09:46.380 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:09:46.641 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.641 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.641 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.641 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mnan]="0"' 00:09:46.641 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:09:46.641 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.641 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.641 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.641 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:09:46.641 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:09:46.641 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.641 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.641 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.641 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:09:46.641 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:09:46.641 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.641 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.641 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 
00:09:46.641 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:09:46.641 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:09:46.641 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.641 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.641 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.641 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:09:46.641 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:09:46.641 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.641 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.641 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.641 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:09:46.641 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:09:46.641 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.641 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.641 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.641 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:09:46.641 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:09:46.641 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.641 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.641 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.641 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:09:46.641 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:09:46.641 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.641 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.641 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.641 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:09:46.641 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:09:46.641 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.641 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.641 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.641 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:09:46.641 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 00:09:46.641 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.641 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.641 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:46.641 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:46.641 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:46.641 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.641 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.641 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:46.641 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:46.641 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:46.641 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.641 04:57:24 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:46.641 04:57:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:46.641 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:09:46.641 04:57:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:09:46.641 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:46.641 04:57:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.641 04:57:24 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 00:09:46.641 04:57:24 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:09:46.641 04:57:24 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns 00:09:46.642 04:57:24 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0 00:09:46.642 04:57:24 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:09:46.642 04:57:24 nvme_fdp -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:09:46.642 04:57:24 nvme_fdp -- nvme/nvme_fdp.sh@13 -- # get_ctrl_with_feature fdp 00:09:46.642 04:57:24 nvme_fdp -- nvme/functions.sh@204 -- # local _ctrls feature=fdp 00:09:46.642 04:57:24 nvme_fdp -- nvme/functions.sh@206 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:09:46.642 04:57:24 nvme_fdp -- nvme/functions.sh@206 -- # get_ctrls_with_feature fdp 00:09:46.642 04:57:24 nvme_fdp -- nvme/functions.sh@192 -- # (( 4 == 0 )) 00:09:46.642 04:57:24 nvme_fdp -- nvme/functions.sh@194 -- # local ctrl feature=fdp 00:09:46.642 04:57:24 nvme_fdp -- nvme/functions.sh@196 -- # type -t ctrl_has_fdp 00:09:46.642 04:57:24 nvme_fdp -- nvme/functions.sh@196 -- # [[ function == function ]] 00:09:46.642 04:57:24 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:46.642 04:57:24 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme1 00:09:46.642 04:57:24 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme1 ctratt 00:09:46.642 04:57:24 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme1 00:09:46.642 04:57:24 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme1 00:09:46.642 04:57:24 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme1 ctratt 00:09:46.642 04:57:24 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=ctratt 00:09:46.642 04:57:24 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:09:46.642 04:57:24 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:09:46.642 04:57:24 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:09:46.642 04:57:24 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:09:46.642 04:57:24 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:09:46.642 04:57:24 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:09:46.642 04:57:24 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:46.642 04:57:24 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme0 00:09:46.642 04:57:24 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme0 ctratt 00:09:46.642 04:57:24 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme0 00:09:46.642 04:57:24 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme0 00:09:46.642 04:57:24 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme0 ctratt 00:09:46.642 04:57:24 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=ctratt 00:09:46.642 04:57:24 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:09:46.642 04:57:24 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:09:46.642 04:57:24 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 
00:09:46.642 04:57:24 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:09:46.642 04:57:24 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:09:46.642 04:57:24 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:09:46.642 04:57:24 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:46.642 04:57:24 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme3 00:09:46.642 04:57:24 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme3 ctratt 00:09:46.642 04:57:24 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme3 00:09:46.642 04:57:24 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme3 00:09:46.642 04:57:24 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme3 ctratt 00:09:46.642 04:57:24 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=ctratt 00:09:46.642 04:57:24 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme3 ]] 00:09:46.642 04:57:24 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3 00:09:46.642 04:57:24 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x88010 ]] 00:09:46.642 04:57:24 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x88010 00:09:46.642 04:57:24 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x88010 00:09:46.642 04:57:24 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:09:46.642 04:57:24 nvme_fdp -- nvme/functions.sh@199 -- # echo nvme3 00:09:46.642 04:57:24 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:46.642 04:57:24 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme2 00:09:46.642 04:57:24 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme2 ctratt 00:09:46.642 04:57:24 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme2 00:09:46.642 04:57:24 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme2 00:09:46.642 04:57:24 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme2 ctratt 00:09:46.642 04:57:24 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=ctratt 00:09:46.642 04:57:24 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme2 ]] 00:09:46.642 04:57:24 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2 00:09:46.642 04:57:24 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:09:46.642 04:57:24 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:09:46.642 04:57:24 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:09:46.642 04:57:24 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:09:46.642 04:57:24 nvme_fdp -- nvme/functions.sh@207 -- # (( 1 > 0 )) 00:09:46.642 04:57:24 nvme_fdp -- nvme/functions.sh@208 -- # echo nvme3 00:09:46.642 04:57:24 nvme_fdp -- nvme/functions.sh@209 -- # return 0 00:09:46.642 04:57:24 nvme_fdp -- nvme/nvme_fdp.sh@13 -- # ctrl=nvme3 00:09:46.642 04:57:24 nvme_fdp -- nvme/nvme_fdp.sh@14 -- # bdf=0000:00:13.0 00:09:46.642 04:57:24 nvme_fdp -- nvme/nvme_fdp.sh@16 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:09:46.903 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:47.470 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:09:47.470 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:09:47.470 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:09:47.470 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:09:47.730 04:57:25 nvme_fdp -- nvme/nvme_fdp.sh@18 -- # run_test nvme_flexible_data_placement /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0' 00:09:47.730 04:57:25 nvme_fdp -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:09:47.730 04:57:25 
nvme_fdp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:47.730 04:57:25 nvme_fdp -- common/autotest_common.sh@10 -- # set +x 00:09:47.730 ************************************ 00:09:47.730 START TEST nvme_flexible_data_placement 00:09:47.730 ************************************ 00:09:47.730 04:57:25 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0' 00:09:47.990 Initializing NVMe Controllers 00:09:47.990 Attaching to 0000:00:13.0 00:09:47.990 Controller supports FDP Attached to 0000:00:13.0 00:09:47.990 Namespace ID: 1 Endurance Group ID: 1 00:09:47.990 Initialization complete. 00:09:47.990 00:09:47.990 ================================== 00:09:47.990 == FDP tests for Namespace: #01 == 00:09:47.990 ================================== 00:09:47.990 00:09:47.990 Get Feature: FDP: 00:09:47.990 ================= 00:09:47.990 Enabled: Yes 00:09:47.990 FDP configuration Index: 0 00:09:47.990 00:09:47.990 FDP configurations log page 00:09:47.990 =========================== 00:09:47.990 Number of FDP configurations: 1 00:09:47.990 Version: 0 00:09:47.990 Size: 112 00:09:47.990 FDP Configuration Descriptor: 0 00:09:47.990 Descriptor Size: 96 00:09:47.990 Reclaim Group Identifier format: 2 00:09:47.990 FDP Volatile Write Cache: Not Present 00:09:47.990 FDP Configuration: Valid 00:09:47.990 Vendor Specific Size: 0 00:09:47.990 Number of Reclaim Groups: 2 00:09:47.990 Number of Reclaim Unit Handles: 8 00:09:47.990 Max Placement Identifiers: 128 00:09:47.990 Number of Namespaces Supported: 256 00:09:47.990 Reclaim Unit Nominal Size: 6000000 bytes 00:09:47.990 Estimated Reclaim Unit Time Limit: Not Reported 00:09:47.990 RUH Desc #000: RUH Type: Initially Isolated 00:09:47.990 RUH Desc #001: RUH Type: Initially Isolated 00:09:47.990 RUH Desc #002: RUH Type: Initially Isolated 00:09:47.990 RUH Desc #003: RUH Type: Initially Isolated 00:09:47.990 RUH Desc #004: RUH Type: Initially Isolated 00:09:47.990 RUH Desc #005: RUH Type: Initially Isolated 00:09:47.990 RUH Desc #006: RUH Type: Initially Isolated 00:09:47.990 RUH Desc #007: RUH Type: Initially Isolated 00:09:47.990 00:09:47.990 FDP reclaim unit handle usage log page 00:09:47.990 ====================================== 00:09:47.990 Number of Reclaim Unit Handles: 8 00:09:47.990 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:09:47.990 RUH Usage Desc #001: RUH Attributes: Unused 00:09:47.990 RUH Usage Desc #002: RUH Attributes: Unused 00:09:47.990 RUH Usage Desc #003: RUH Attributes: Unused 00:09:47.990 RUH Usage Desc #004: RUH Attributes: Unused 00:09:47.990 RUH Usage Desc #005: RUH Attributes: Unused 00:09:47.990 RUH Usage Desc #006: RUH Attributes: Unused 00:09:47.990 RUH Usage Desc #007: RUH Attributes: Unused 00:09:47.990 00:09:47.990 FDP statistics log page 00:09:47.990 ======================= 00:09:47.990 Host bytes with metadata written: 1847590912 00:09:47.990 Media bytes with metadata written: 1848610816 00:09:47.990 Media bytes erased: 0 00:09:47.990 00:09:47.990 FDP Reclaim unit handle status 00:09:47.990 ============================== 00:09:47.990 Number of RUHS descriptors: 2 00:09:47.990 RUHS Desc: #0000 PID: 0x0000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x0000000000003e00 00:09:47.990 RUHS Desc: #0001 PID: 0x4000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x0000000000006000 00:09:47.990 00:09:47.990 FDP write on placement id: 0 success 00:09:47.990 00:09:47.990 Set Feature: Enabling FDP events on Placement handle:
#0 Success 00:09:47.990 00:09:47.990 IO mgmt send: RUH update for Placement ID: #0 Success 00:09:47.990 00:09:47.990 Get Feature: FDP Events for Placement handle: #0 00:09:47.990 ======================== 00:09:47.990 Number of FDP Events: 6 00:09:47.990 FDP Event: #0 Type: RU Not Written to Capacity Enabled: Yes 00:09:47.990 FDP Event: #1 Type: RU Time Limit Exceeded Enabled: Yes 00:09:47.990 FDP Event: #2 Type: Ctrlr Reset Modified RUH's Enabled: Yes 00:09:47.990 FDP Event: #3 Type: Invalid Placement Identifier Enabled: Yes 00:09:47.990 FDP Event: #4 Type: Media Reallocated Enabled: No 00:09:47.990 FDP Event: #5 Type: Implicitly modified RUH Enabled: No 00:09:47.990 00:09:47.990 FDP events log page 00:09:47.990 =================== 00:09:47.990 Number of FDP events: 1 00:09:47.990 FDP Event #0: 00:09:47.990 Event Type: RU Not Written to Capacity 00:09:47.990 Placement Identifier: Valid 00:09:47.990 NSID: Valid 00:09:47.990 Location: Valid 00:09:47.990 Placement Identifier: 0 00:09:47.990 Event Timestamp: 6 00:09:47.990 Namespace Identifier: 1 00:09:47.990 Reclaim Group Identifier: 0 00:09:47.990 Reclaim Unit Handle Identifier: 0 00:09:47.990 00:09:47.990 FDP test passed 00:09:47.990 00:09:47.990 real 0m0.209s 00:09:47.990 user 0m0.057s 00:09:47.990 sys 0m0.049s 00:09:47.990 ************************************ 00:09:47.990 END TEST nvme_flexible_data_placement 00:09:47.990 04:57:25 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:47.990 04:57:25 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@10 -- # set +x 00:09:47.990 ************************************ 00:09:47.990 00:09:47.990 real 0m7.910s 00:09:47.990 user 0m1.062s 00:09:47.990 sys 0m1.589s 00:09:47.990 ************************************ 00:09:47.990 END TEST nvme_fdp 00:09:47.990 ************************************ 00:09:47.990 04:57:26 nvme_fdp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:47.990 04:57:26 nvme_fdp -- common/autotest_common.sh@10 -- # set +x 00:09:47.990 04:57:26 -- spdk/autotest.sh@232 -- # [[ '' -eq 1 ]] 00:09:47.990 04:57:26 -- spdk/autotest.sh@236 -- # run_test nvme_rpc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:09:47.990 04:57:26 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:47.990 04:57:26 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:47.990 04:57:26 -- common/autotest_common.sh@10 -- # set +x 00:09:47.990 ************************************ 00:09:47.990 START TEST nvme_rpc 00:09:47.990 ************************************ 00:09:47.990 04:57:26 nvme_rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:09:47.990 * Looking for test storage... 
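The controller selection traced above reduces to a single test: CTRATT bit 19, the Flexible Data Placement attribute, which is why nvme3 (ctratt=0x88010) was picked while the 0x8000 controllers were skipped. A minimal standalone sketch of the same check, assuming nvme-cli is installed and using /dev/nvme0 as an illustrative device node:

    #!/usr/bin/env bash
    dev=/dev/nvme0   # hypothetical device node; substitute the controller under test

    # CTRATT (Controller Attributes) comes from Identify Controller data.
    ctratt=$(nvme id-ctrl "$dev" | awk '/^ctratt/ {print $3}')

    # Bit 19 of CTRATT advertises Flexible Data Placement support.
    if (( ctratt & 1 << 19 )); then
        echo "$dev supports FDP (ctratt=$ctratt)"
    else
        echo "$dev does not support FDP (ctratt=$ctratt)"
    fi

The harness's ctrl_has_fdp performs exactly this arithmetic, only against the register values it cached when the controllers were first enumerated.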
00:09:47.990 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:09:47.990 04:57:26 nvme_rpc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:09:47.990 04:57:26 nvme_rpc -- common/autotest_common.sh@1681 -- # lcov --version 00:09:47.990 04:57:26 nvme_rpc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:09:48.250 04:57:26 nvme_rpc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:09:48.250 04:57:26 nvme_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:48.250 04:57:26 nvme_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:48.250 04:57:26 nvme_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:48.250 04:57:26 nvme_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:09:48.250 04:57:26 nvme_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:09:48.250 04:57:26 nvme_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:09:48.250 04:57:26 nvme_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:09:48.250 04:57:26 nvme_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:09:48.250 04:57:26 nvme_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:09:48.250 04:57:26 nvme_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:09:48.250 04:57:26 nvme_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:48.250 04:57:26 nvme_rpc -- scripts/common.sh@344 -- # case "$op" in 00:09:48.250 04:57:26 nvme_rpc -- scripts/common.sh@345 -- # : 1 00:09:48.250 04:57:26 nvme_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:48.250 04:57:26 nvme_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:48.250 04:57:26 nvme_rpc -- scripts/common.sh@365 -- # decimal 1 00:09:48.250 04:57:26 nvme_rpc -- scripts/common.sh@353 -- # local d=1 00:09:48.250 04:57:26 nvme_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:48.250 04:57:26 nvme_rpc -- scripts/common.sh@355 -- # echo 1 00:09:48.250 04:57:26 nvme_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:09:48.250 04:57:26 nvme_rpc -- scripts/common.sh@366 -- # decimal 2 00:09:48.250 04:57:26 nvme_rpc -- scripts/common.sh@353 -- # local d=2 00:09:48.250 04:57:26 nvme_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:48.250 04:57:26 nvme_rpc -- scripts/common.sh@355 -- # echo 2 00:09:48.250 04:57:26 nvme_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:09:48.250 04:57:26 nvme_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:48.250 04:57:26 nvme_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:48.250 04:57:26 nvme_rpc -- scripts/common.sh@368 -- # return 0 00:09:48.250 04:57:26 nvme_rpc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:48.250 04:57:26 nvme_rpc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:09:48.250 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:48.250 --rc genhtml_branch_coverage=1 00:09:48.250 --rc genhtml_function_coverage=1 00:09:48.250 --rc genhtml_legend=1 00:09:48.250 --rc geninfo_all_blocks=1 00:09:48.250 --rc geninfo_unexecuted_blocks=1 00:09:48.250 00:09:48.250 ' 00:09:48.250 04:57:26 nvme_rpc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:09:48.250 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:48.250 --rc genhtml_branch_coverage=1 00:09:48.250 --rc genhtml_function_coverage=1 00:09:48.250 --rc genhtml_legend=1 00:09:48.250 --rc geninfo_all_blocks=1 00:09:48.250 --rc geninfo_unexecuted_blocks=1 00:09:48.250 00:09:48.250 ' 00:09:48.250 04:57:26 nvme_rpc -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 
00:09:48.250 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:48.250 --rc genhtml_branch_coverage=1 00:09:48.250 --rc genhtml_function_coverage=1 00:09:48.250 --rc genhtml_legend=1 00:09:48.250 --rc geninfo_all_blocks=1 00:09:48.250 --rc geninfo_unexecuted_blocks=1 00:09:48.250 00:09:48.250 ' 00:09:48.250 04:57:26 nvme_rpc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:09:48.250 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:48.251 --rc genhtml_branch_coverage=1 00:09:48.251 --rc genhtml_function_coverage=1 00:09:48.251 --rc genhtml_legend=1 00:09:48.251 --rc geninfo_all_blocks=1 00:09:48.251 --rc geninfo_unexecuted_blocks=1 00:09:48.251 00:09:48.251 ' 00:09:48.251 04:57:26 nvme_rpc -- nvme/nvme_rpc.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:48.251 04:57:26 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # get_first_nvme_bdf 00:09:48.251 04:57:26 nvme_rpc -- common/autotest_common.sh@1507 -- # bdfs=() 00:09:48.251 04:57:26 nvme_rpc -- common/autotest_common.sh@1507 -- # local bdfs 00:09:48.251 04:57:26 nvme_rpc -- common/autotest_common.sh@1508 -- # bdfs=($(get_nvme_bdfs)) 00:09:48.251 04:57:26 nvme_rpc -- common/autotest_common.sh@1508 -- # get_nvme_bdfs 00:09:48.251 04:57:26 nvme_rpc -- common/autotest_common.sh@1496 -- # bdfs=() 00:09:48.251 04:57:26 nvme_rpc -- common/autotest_common.sh@1496 -- # local bdfs 00:09:48.251 04:57:26 nvme_rpc -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:09:48.251 04:57:26 nvme_rpc -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:09:48.251 04:57:26 nvme_rpc -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:09:48.251 04:57:26 nvme_rpc -- common/autotest_common.sh@1498 -- # (( 4 == 0 )) 00:09:48.251 04:57:26 nvme_rpc -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:09:48.251 04:57:26 nvme_rpc -- common/autotest_common.sh@1510 -- # echo 0000:00:10.0 00:09:48.251 04:57:26 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # bdf=0000:00:10.0 00:09:48.251 04:57:26 nvme_rpc -- nvme/nvme_rpc.sh@16 -- # spdk_tgt_pid=77581 00:09:48.251 04:57:26 nvme_rpc -- nvme/nvme_rpc.sh@17 -- # trap 'kill -9 ${spdk_tgt_pid}; exit 1' SIGINT SIGTERM EXIT 00:09:48.251 04:57:26 nvme_rpc -- nvme/nvme_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:09:48.251 04:57:26 nvme_rpc -- nvme/nvme_rpc.sh@19 -- # waitforlisten 77581 00:09:48.251 04:57:26 nvme_rpc -- common/autotest_common.sh@831 -- # '[' -z 77581 ']' 00:09:48.251 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:48.251 04:57:26 nvme_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:48.251 04:57:26 nvme_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:48.251 04:57:26 nvme_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:48.251 04:57:26 nvme_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:48.251 04:57:26 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:48.251 [2024-12-06 04:57:26.434339] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:09:48.251 [2024-12-06 04:57:26.434503] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77581 ] 00:09:48.511 [2024-12-06 04:57:26.574405] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:48.511 [2024-12-06 04:57:26.649041] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:09:48.511 [2024-12-06 04:57:26.649132] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:49.079 04:57:27 nvme_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:49.079 04:57:27 nvme_rpc -- common/autotest_common.sh@864 -- # return 0 00:09:49.079 04:57:27 nvme_rpc -- nvme/nvme_rpc.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0 00:09:49.337 Nvme0n1 00:09:49.337 04:57:27 nvme_rpc -- nvme/nvme_rpc.sh@27 -- # '[' -f non_existing_file ']' 00:09:49.337 04:57:27 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_apply_firmware non_existing_file Nvme0n1 00:09:49.595 request: 00:09:49.595 { 00:09:49.595 "bdev_name": "Nvme0n1", 00:09:49.595 "filename": "non_existing_file", 00:09:49.595 "method": "bdev_nvme_apply_firmware", 00:09:49.595 "req_id": 1 00:09:49.595 } 00:09:49.595 Got JSON-RPC error response 00:09:49.595 response: 00:09:49.595 { 00:09:49.595 "code": -32603, 00:09:49.595 "message": "open file failed." 00:09:49.595 } 00:09:49.595 04:57:27 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # rv=1 00:09:49.595 04:57:27 nvme_rpc -- nvme/nvme_rpc.sh@33 -- # '[' -z 1 ']' 00:09:49.595 04:57:27 nvme_rpc -- nvme/nvme_rpc.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:09:49.854 04:57:27 nvme_rpc -- nvme/nvme_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:09:49.854 04:57:27 nvme_rpc -- nvme/nvme_rpc.sh@40 -- # killprocess 77581 00:09:49.854 04:57:27 nvme_rpc -- common/autotest_common.sh@950 -- # '[' -z 77581 ']' 00:09:49.854 04:57:27 nvme_rpc -- common/autotest_common.sh@954 -- # kill -0 77581 00:09:49.854 04:57:27 nvme_rpc -- common/autotest_common.sh@955 -- # uname 00:09:49.854 04:57:27 nvme_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:49.854 04:57:27 nvme_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 77581 00:09:49.854 04:57:27 nvme_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:49.854 killing process with pid 77581 00:09:49.854 04:57:27 nvme_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:49.854 04:57:27 nvme_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 77581' 00:09:49.854 04:57:27 nvme_rpc -- common/autotest_common.sh@969 -- # kill 77581 00:09:49.854 04:57:27 nvme_rpc -- common/autotest_common.sh@974 -- # wait 77581 00:09:50.112 00:09:50.112 real 0m2.206s 00:09:50.112 user 0m4.023s 00:09:50.112 sys 0m0.697s 00:09:50.112 04:57:28 nvme_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:50.112 ************************************ 00:09:50.112 END TEST nvme_rpc 00:09:50.112 ************************************ 00:09:50.112 04:57:28 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:50.370 04:57:28 -- spdk/autotest.sh@237 -- # run_test nvme_rpc_timeouts /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:09:50.370 04:57:28 -- common/autotest_common.sh@1101 -- # '[' 2 -le 
1 ']' 00:09:50.370 04:57:28 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:50.370 04:57:28 -- common/autotest_common.sh@10 -- # set +x 00:09:50.370 ************************************ 00:09:50.370 START TEST nvme_rpc_timeouts 00:09:50.370 ************************************ 00:09:50.370 04:57:28 nvme_rpc_timeouts -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:09:50.370 * Looking for test storage... 00:09:50.370 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:09:50.370 04:57:28 nvme_rpc_timeouts -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:09:50.370 04:57:28 nvme_rpc_timeouts -- common/autotest_common.sh@1681 -- # lcov --version 00:09:50.370 04:57:28 nvme_rpc_timeouts -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:09:50.370 04:57:28 nvme_rpc_timeouts -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:09:50.370 04:57:28 nvme_rpc_timeouts -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:50.370 04:57:28 nvme_rpc_timeouts -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:50.370 04:57:28 nvme_rpc_timeouts -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:50.370 04:57:28 nvme_rpc_timeouts -- scripts/common.sh@336 -- # IFS=.-: 00:09:50.370 04:57:28 nvme_rpc_timeouts -- scripts/common.sh@336 -- # read -ra ver1 00:09:50.370 04:57:28 nvme_rpc_timeouts -- scripts/common.sh@337 -- # IFS=.-: 00:09:50.370 04:57:28 nvme_rpc_timeouts -- scripts/common.sh@337 -- # read -ra ver2 00:09:50.370 04:57:28 nvme_rpc_timeouts -- scripts/common.sh@338 -- # local 'op=<' 00:09:50.370 04:57:28 nvme_rpc_timeouts -- scripts/common.sh@340 -- # ver1_l=2 00:09:50.370 04:57:28 nvme_rpc_timeouts -- scripts/common.sh@341 -- # ver2_l=1 00:09:50.370 04:57:28 nvme_rpc_timeouts -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:50.370 04:57:28 nvme_rpc_timeouts -- scripts/common.sh@344 -- # case "$op" in 00:09:50.370 04:57:28 nvme_rpc_timeouts -- scripts/common.sh@345 -- # : 1 00:09:50.370 04:57:28 nvme_rpc_timeouts -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:50.370 04:57:28 nvme_rpc_timeouts -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:50.370 04:57:28 nvme_rpc_timeouts -- scripts/common.sh@365 -- # decimal 1 00:09:50.370 04:57:28 nvme_rpc_timeouts -- scripts/common.sh@353 -- # local d=1 00:09:50.370 04:57:28 nvme_rpc_timeouts -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:50.370 04:57:28 nvme_rpc_timeouts -- scripts/common.sh@355 -- # echo 1 00:09:50.370 04:57:28 nvme_rpc_timeouts -- scripts/common.sh@365 -- # ver1[v]=1 00:09:50.370 04:57:28 nvme_rpc_timeouts -- scripts/common.sh@366 -- # decimal 2 00:09:50.370 04:57:28 nvme_rpc_timeouts -- scripts/common.sh@353 -- # local d=2 00:09:50.370 04:57:28 nvme_rpc_timeouts -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:50.370 04:57:28 nvme_rpc_timeouts -- scripts/common.sh@355 -- # echo 2 00:09:50.370 04:57:28 nvme_rpc_timeouts -- scripts/common.sh@366 -- # ver2[v]=2 00:09:50.370 04:57:28 nvme_rpc_timeouts -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:50.370 04:57:28 nvme_rpc_timeouts -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:50.370 04:57:28 nvme_rpc_timeouts -- scripts/common.sh@368 -- # return 0 00:09:50.370 04:57:28 nvme_rpc_timeouts -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:50.370 04:57:28 nvme_rpc_timeouts -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:09:50.370 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:50.370 --rc genhtml_branch_coverage=1 00:09:50.370 --rc genhtml_function_coverage=1 00:09:50.370 --rc genhtml_legend=1 00:09:50.370 --rc geninfo_all_blocks=1 00:09:50.370 --rc geninfo_unexecuted_blocks=1 00:09:50.370 00:09:50.370 ' 00:09:50.370 04:57:28 nvme_rpc_timeouts -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:09:50.370 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:50.370 --rc genhtml_branch_coverage=1 00:09:50.370 --rc genhtml_function_coverage=1 00:09:50.370 --rc genhtml_legend=1 00:09:50.370 --rc geninfo_all_blocks=1 00:09:50.370 --rc geninfo_unexecuted_blocks=1 00:09:50.370 00:09:50.370 ' 00:09:50.370 04:57:28 nvme_rpc_timeouts -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:09:50.370 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:50.370 --rc genhtml_branch_coverage=1 00:09:50.370 --rc genhtml_function_coverage=1 00:09:50.370 --rc genhtml_legend=1 00:09:50.370 --rc geninfo_all_blocks=1 00:09:50.370 --rc geninfo_unexecuted_blocks=1 00:09:50.370 00:09:50.370 ' 00:09:50.370 04:57:28 nvme_rpc_timeouts -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:09:50.370 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:50.370 --rc genhtml_branch_coverage=1 00:09:50.370 --rc genhtml_function_coverage=1 00:09:50.370 --rc genhtml_legend=1 00:09:50.370 --rc geninfo_all_blocks=1 00:09:50.370 --rc geninfo_unexecuted_blocks=1 00:09:50.370 00:09:50.370 ' 00:09:50.370 04:57:28 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@19 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:50.370 04:57:28 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@21 -- # tmpfile_default_settings=/tmp/settings_default_77635 00:09:50.370 04:57:28 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@22 -- # tmpfile_modified_settings=/tmp/settings_modified_77635 00:09:50.370 04:57:28 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@25 -- # spdk_tgt_pid=77667 00:09:50.370 04:57:28 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@26 -- # trap 'kill -9 ${spdk_tgt_pid}; rm -f ${tmpfile_default_settings} ${tmpfile_modified_settings} ; exit 1' SIGINT SIGTERM EXIT 
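The waitforlisten call that follows blocks until the freshly started spdk_tgt answers on its JSON-RPC socket. A minimal sketch of that polling pattern, assuming rpc.py's rpc_get_methods call and an illustrative retry budget:

    # Start the target, then poll the RPC socket until it responds.
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 &
    spdk_tgt_pid=$!

    rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    for ((i = 0; i < 100; i++)); do
        if "$rpc_py" -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null; then
            echo "spdk_tgt (pid $spdk_tgt_pid) is listening"
            break
        fi
        sleep 0.5
    done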
00:09:50.370 04:57:28 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@27 -- # waitforlisten 77667 00:09:50.370 04:57:28 nvme_rpc_timeouts -- common/autotest_common.sh@831 -- # '[' -z 77667 ']' 00:09:50.370 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:50.370 04:57:28 nvme_rpc_timeouts -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:50.370 04:57:28 nvme_rpc_timeouts -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:50.370 04:57:28 nvme_rpc_timeouts -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:50.370 04:57:28 nvme_rpc_timeouts -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:50.370 04:57:28 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:09:50.371 04:57:28 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:09:50.371 [2024-12-06 04:57:28.600430] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:09:50.371 [2024-12-06 04:57:28.600561] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77667 ] 00:09:50.628 [2024-12-06 04:57:28.734876] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:50.628 [2024-12-06 04:57:28.778332] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:09:50.628 [2024-12-06 04:57:28.778394] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:51.567 04:57:29 nvme_rpc_timeouts -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:51.567 04:57:29 nvme_rpc_timeouts -- common/autotest_common.sh@864 -- # return 0 00:09:51.567 Checking default timeout settings: 00:09:51.567 04:57:29 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@29 -- # echo Checking default timeout settings: 00:09:51.567 04:57:29 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:09:51.567 Making settings changes with rpc: 00:09:51.567 04:57:29 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@32 -- # echo Making settings changes with rpc: 00:09:51.567 04:57:29 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_set_options --timeout-us=12000000 --timeout-admin-us=24000000 --action-on-timeout=abort 00:09:51.828 Check default vs. modified settings: 00:09:51.828 04:57:30 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@36 -- # echo Check default vs. 
modified settings: 00:09:51.828 04:57:30 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:09:52.398 04:57:30 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@38 -- # settings_to_check='action_on_timeout timeout_us timeout_admin_us' 00:09:52.398 04:57:30 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:09:52.398 04:57:30 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep action_on_timeout /tmp/settings_default_77635 00:09:52.398 04:57:30 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:09:52.398 04:57:30 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:09:52.398 04:57:30 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=none 00:09:52.398 04:57:30 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep action_on_timeout /tmp/settings_modified_77635 00:09:52.398 04:57:30 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:09:52.398 04:57:30 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:09:52.398 04:57:30 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=abort 00:09:52.398 Setting action_on_timeout is changed as expected. 00:09:52.398 04:57:30 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' none == abort ']' 00:09:52.398 04:57:30 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting action_on_timeout is changed as expected. 00:09:52.398 04:57:30 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:09:52.398 04:57:30 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:09:52.398 04:57:30 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:09:52.398 04:57:30 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_us /tmp/settings_default_77635 00:09:52.398 04:57:30 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:09:52.398 04:57:30 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:09:52.398 04:57:30 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_us /tmp/settings_modified_77635 00:09:52.398 04:57:30 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:09:52.398 Setting timeout_us is changed as expected. 00:09:52.398 04:57:30 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=12000000 00:09:52.398 04:57:30 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 12000000 ']' 00:09:52.398 04:57:30 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_us is changed as expected. 
00:09:52.398 04:57:30 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:09:52.398 04:57:30 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_admin_us /tmp/settings_default_77635 00:09:52.398 04:57:30 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:09:52.398 04:57:30 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:09:52.398 04:57:30 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:09:52.398 04:57:30 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:09:52.398 04:57:30 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_admin_us /tmp/settings_modified_77635 00:09:52.398 04:57:30 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:09:52.398 Setting timeout_admin_us is changed as expected. 00:09:52.398 04:57:30 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=24000000 00:09:52.398 04:57:30 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 24000000 ']' 00:09:52.398 04:57:30 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_admin_us is changed as expected. 00:09:52.398 04:57:30 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@52 -- # trap - SIGINT SIGTERM EXIT 00:09:52.398 04:57:30 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@53 -- # rm -f /tmp/settings_default_77635 /tmp/settings_modified_77635 00:09:52.398 04:57:30 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@54 -- # killprocess 77667 00:09:52.398 04:57:30 nvme_rpc_timeouts -- common/autotest_common.sh@950 -- # '[' -z 77667 ']' 00:09:52.398 04:57:30 nvme_rpc_timeouts -- common/autotest_common.sh@954 -- # kill -0 77667 00:09:52.398 04:57:30 nvme_rpc_timeouts -- common/autotest_common.sh@955 -- # uname 00:09:52.398 04:57:30 nvme_rpc_timeouts -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:52.398 04:57:30 nvme_rpc_timeouts -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 77667 00:09:52.398 04:57:30 nvme_rpc_timeouts -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:52.398 killing process with pid 77667 00:09:52.398 04:57:30 nvme_rpc_timeouts -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:52.398 04:57:30 nvme_rpc_timeouts -- common/autotest_common.sh@968 -- # echo 'killing process with pid 77667' 00:09:52.398 04:57:30 nvme_rpc_timeouts -- common/autotest_common.sh@969 -- # kill 77667 00:09:52.398 04:57:30 nvme_rpc_timeouts -- common/autotest_common.sh@974 -- # wait 77667 00:09:52.659 RPC TIMEOUT SETTING TEST PASSED. 00:09:52.659 04:57:30 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@56 -- # echo RPC TIMEOUT SETTING TEST PASSED. 
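Each "changed as expected" line above comes from a plain before/after diff: the defaults are snapshotted with save_config, bdev_nvme_set_options is applied, a second snapshot is taken, and the two are compared field by field. The same round trip condensed into a sketch, with illustrative file names:

    rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    "$rpc_py" save_config > /tmp/settings_default        # snapshot the defaults
    "$rpc_py" bdev_nvme_set_options --timeout-us=12000000 \
        --timeout-admin-us=24000000 --action-on-timeout=abort
    "$rpc_py" save_config > /tmp/settings_modified       # snapshot after the change

    for setting in action_on_timeout timeout_us timeout_admin_us; do
        before=$(grep "$setting" /tmp/settings_default | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
        after=$(grep "$setting" /tmp/settings_modified | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
        [[ $before != "$after" ]] && echo "Setting $setting is changed as expected."
    done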
00:09:52.659 00:09:52.659 real 0m2.408s 00:09:52.659 user 0m4.788s 00:09:52.659 sys 0m0.553s 00:09:52.659 04:57:30 nvme_rpc_timeouts -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:52.659 ************************************ 00:09:52.659 END TEST nvme_rpc_timeouts 00:09:52.659 ************************************ 00:09:52.659 04:57:30 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:09:52.659 04:57:30 -- spdk/autotest.sh@239 -- # uname -s 00:09:52.659 04:57:30 -- spdk/autotest.sh@239 -- # '[' Linux = Linux ']' 00:09:52.659 04:57:30 -- spdk/autotest.sh@240 -- # run_test sw_hotplug /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:09:52.659 04:57:30 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:52.659 04:57:30 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:52.659 04:57:30 -- common/autotest_common.sh@10 -- # set +x 00:09:52.659 ************************************ 00:09:52.659 START TEST sw_hotplug 00:09:52.659 ************************************ 00:09:52.659 04:57:30 sw_hotplug -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:09:52.921 * Looking for test storage... 00:09:52.921 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:09:52.921 04:57:30 sw_hotplug -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:09:52.921 04:57:30 sw_hotplug -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:09:52.921 04:57:30 sw_hotplug -- common/autotest_common.sh@1681 -- # lcov --version 00:09:52.921 04:57:30 sw_hotplug -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:09:52.921 04:57:30 sw_hotplug -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:52.921 04:57:30 sw_hotplug -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:52.921 04:57:30 sw_hotplug -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:52.921 04:57:30 sw_hotplug -- scripts/common.sh@336 -- # IFS=.-: 00:09:52.921 04:57:30 sw_hotplug -- scripts/common.sh@336 -- # read -ra ver1 00:09:52.921 04:57:30 sw_hotplug -- scripts/common.sh@337 -- # IFS=.-: 00:09:52.921 04:57:30 sw_hotplug -- scripts/common.sh@337 -- # read -ra ver2 00:09:52.921 04:57:30 sw_hotplug -- scripts/common.sh@338 -- # local 'op=<' 00:09:52.921 04:57:30 sw_hotplug -- scripts/common.sh@340 -- # ver1_l=2 00:09:52.921 04:57:30 sw_hotplug -- scripts/common.sh@341 -- # ver2_l=1 00:09:52.921 04:57:30 sw_hotplug -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:52.921 04:57:30 sw_hotplug -- scripts/common.sh@344 -- # case "$op" in 00:09:52.921 04:57:30 sw_hotplug -- scripts/common.sh@345 -- # : 1 00:09:52.921 04:57:30 sw_hotplug -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:52.921 04:57:30 sw_hotplug -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:52.921 04:57:30 sw_hotplug -- scripts/common.sh@365 -- # decimal 1 00:09:52.921 04:57:30 sw_hotplug -- scripts/common.sh@353 -- # local d=1 00:09:52.921 04:57:30 sw_hotplug -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:52.921 04:57:30 sw_hotplug -- scripts/common.sh@355 -- # echo 1 00:09:52.921 04:57:30 sw_hotplug -- scripts/common.sh@365 -- # ver1[v]=1 00:09:52.921 04:57:30 sw_hotplug -- scripts/common.sh@366 -- # decimal 2 00:09:52.921 04:57:30 sw_hotplug -- scripts/common.sh@353 -- # local d=2 00:09:52.921 04:57:30 sw_hotplug -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:52.921 04:57:30 sw_hotplug -- scripts/common.sh@355 -- # echo 2 00:09:52.921 04:57:30 sw_hotplug -- scripts/common.sh@366 -- # ver2[v]=2 00:09:52.921 04:57:30 sw_hotplug -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:52.921 04:57:30 sw_hotplug -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:52.921 04:57:30 sw_hotplug -- scripts/common.sh@368 -- # return 0 00:09:52.921 04:57:30 sw_hotplug -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:52.921 04:57:30 sw_hotplug -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:09:52.921 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:52.921 --rc genhtml_branch_coverage=1 00:09:52.921 --rc genhtml_function_coverage=1 00:09:52.921 --rc genhtml_legend=1 00:09:52.921 --rc geninfo_all_blocks=1 00:09:52.921 --rc geninfo_unexecuted_blocks=1 00:09:52.921 00:09:52.921 ' 00:09:52.921 04:57:30 sw_hotplug -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:09:52.921 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:52.921 --rc genhtml_branch_coverage=1 00:09:52.921 --rc genhtml_function_coverage=1 00:09:52.921 --rc genhtml_legend=1 00:09:52.921 --rc geninfo_all_blocks=1 00:09:52.921 --rc geninfo_unexecuted_blocks=1 00:09:52.921 00:09:52.921 ' 00:09:52.921 04:57:30 sw_hotplug -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:09:52.921 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:52.921 --rc genhtml_branch_coverage=1 00:09:52.921 --rc genhtml_function_coverage=1 00:09:52.921 --rc genhtml_legend=1 00:09:52.921 --rc geninfo_all_blocks=1 00:09:52.921 --rc geninfo_unexecuted_blocks=1 00:09:52.921 00:09:52.921 ' 00:09:52.921 04:57:30 sw_hotplug -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:09:52.921 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:52.922 --rc genhtml_branch_coverage=1 00:09:52.922 --rc genhtml_function_coverage=1 00:09:52.922 --rc genhtml_legend=1 00:09:52.922 --rc geninfo_all_blocks=1 00:09:52.922 --rc geninfo_unexecuted_blocks=1 00:09:52.922 00:09:52.922 ' 00:09:52.922 04:57:30 sw_hotplug -- nvme/sw_hotplug.sh@129 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:09:53.182 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:53.442 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:09:53.442 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:09:53.442 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:09:53.442 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:09:53.442 04:57:31 sw_hotplug -- nvme/sw_hotplug.sh@131 -- # hotplug_wait=6 00:09:53.442 04:57:31 sw_hotplug -- nvme/sw_hotplug.sh@132 -- # hotplug_events=3 00:09:53.442 04:57:31 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvmes=($(nvme_in_userspace)) 
00:09:53.442 04:57:31 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvme_in_userspace 00:09:53.442 04:57:31 sw_hotplug -- scripts/common.sh@312 -- # local bdf bdfs 00:09:53.442 04:57:31 sw_hotplug -- scripts/common.sh@313 -- # local nvmes 00:09:53.442 04:57:31 sw_hotplug -- scripts/common.sh@315 -- # [[ -n '' ]] 00:09:53.442 04:57:31 sw_hotplug -- scripts/common.sh@318 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:09:53.442 04:57:31 sw_hotplug -- scripts/common.sh@318 -- # iter_pci_class_code 01 08 02 00:09:53.442 04:57:31 sw_hotplug -- scripts/common.sh@298 -- # local bdf= 00:09:53.442 04:57:31 sw_hotplug -- scripts/common.sh@300 -- # iter_all_pci_class_code 01 08 02 00:09:53.442 04:57:31 sw_hotplug -- scripts/common.sh@233 -- # local class 00:09:53.442 04:57:31 sw_hotplug -- scripts/common.sh@234 -- # local subclass 00:09:53.442 04:57:31 sw_hotplug -- scripts/common.sh@235 -- # local progif 00:09:53.442 04:57:31 sw_hotplug -- scripts/common.sh@236 -- # printf %02x 1 00:09:53.442 04:57:31 sw_hotplug -- scripts/common.sh@236 -- # class=01 00:09:53.442 04:57:31 sw_hotplug -- scripts/common.sh@237 -- # printf %02x 8 00:09:53.442 04:57:31 sw_hotplug -- scripts/common.sh@237 -- # subclass=08 00:09:53.442 04:57:31 sw_hotplug -- scripts/common.sh@238 -- # printf %02x 2 00:09:53.442 04:57:31 sw_hotplug -- scripts/common.sh@238 -- # progif=02 00:09:53.442 04:57:31 sw_hotplug -- scripts/common.sh@240 -- # hash lspci 00:09:53.442 04:57:31 sw_hotplug -- scripts/common.sh@241 -- # '[' 02 '!=' 00 ']' 00:09:53.442 04:57:31 sw_hotplug -- scripts/common.sh@242 -- # lspci -mm -n -D 00:09:53.442 04:57:31 sw_hotplug -- scripts/common.sh@243 -- # grep -i -- -p02 00:09:53.442 04:57:31 sw_hotplug -- scripts/common.sh@244 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:09:53.442 04:57:31 sw_hotplug -- scripts/common.sh@245 -- # tr -d '"' 00:09:53.442 04:57:31 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:09:53.442 04:57:31 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:10.0 00:09:53.442 04:57:31 sw_hotplug -- scripts/common.sh@18 -- # local i 00:09:53.442 04:57:31 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:09:53.442 04:57:31 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:53.442 04:57:31 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:09:53.442 04:57:31 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:10.0 00:09:53.442 04:57:31 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:09:53.442 04:57:31 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:11.0 00:09:53.442 04:57:31 sw_hotplug -- scripts/common.sh@18 -- # local i 00:09:53.442 04:57:31 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:09:53.442 04:57:31 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:53.442 04:57:31 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:09:53.442 04:57:31 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:11.0 00:09:53.442 04:57:31 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:09:53.442 04:57:31 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:12.0 00:09:53.442 04:57:31 sw_hotplug -- scripts/common.sh@18 -- # local i 00:09:53.442 04:57:31 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:09:53.442 04:57:31 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:53.442 04:57:31 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:09:53.442 04:57:31 sw_hotplug -- 
scripts/common.sh@302 -- # echo 0000:00:12.0 00:09:53.442 04:57:31 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:09:53.442 04:57:31 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:13.0 00:09:53.442 04:57:31 sw_hotplug -- scripts/common.sh@18 -- # local i 00:09:53.442 04:57:31 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:09:53.442 04:57:31 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:53.442 04:57:31 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:09:53.442 04:57:31 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:13.0 00:09:53.442 04:57:31 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:09:53.442 04:57:31 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:09:53.442 04:57:31 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:09:53.442 04:57:31 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:09:53.442 04:57:31 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:09:53.442 04:57:31 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:09:53.442 04:57:31 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:09:53.442 04:57:31 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:09:53.442 04:57:31 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:09:53.442 04:57:31 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:09:53.442 04:57:31 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:09:53.442 04:57:31 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:12.0 ]] 00:09:53.442 04:57:31 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:09:53.442 04:57:31 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:09:53.442 04:57:31 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:09:53.442 04:57:31 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:09:53.442 04:57:31 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:13.0 ]] 00:09:53.442 04:57:31 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:09:53.442 04:57:31 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:09:53.442 04:57:31 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:09:53.442 04:57:31 sw_hotplug -- scripts/common.sh@328 -- # (( 4 )) 00:09:53.442 04:57:31 sw_hotplug -- scripts/common.sh@329 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:09:53.442 04:57:31 sw_hotplug -- nvme/sw_hotplug.sh@134 -- # nvme_count=2 00:09:53.442 04:57:31 sw_hotplug -- nvme/sw_hotplug.sh@135 -- # nvmes=("${nvmes[@]::nvme_count}") 00:09:53.442 04:57:31 sw_hotplug -- nvme/sw_hotplug.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:09:53.702 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:53.964 Waiting for block devices as requested 00:09:53.964 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:09:54.227 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:09:54.227 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:09:54.227 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:09:59.537 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:09:59.537 04:57:37 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # PCI_ALLOWED='0000:00:10.0 0000:00:11.0' 00:09:59.537 04:57:37 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # 
/home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:09:59.798 0000:00:03.0 (1af4 1001): Skipping denied controller at 0000:00:03.0 00:09:59.798 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:59.798 0000:00:12.0 (1b36 0010): Skipping denied controller at 0000:00:12.0 00:10:00.371 0000:00:13.0 (1b36 0010): Skipping denied controller at 0000:00:13.0 00:10:00.371 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:10:00.371 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:10:00.632 04:57:38 sw_hotplug -- nvme/sw_hotplug.sh@143 -- # xtrace_disable 00:10:00.632 04:57:38 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:00.632 04:57:38 sw_hotplug -- nvme/sw_hotplug.sh@148 -- # run_hotplug 00:10:00.632 04:57:38 sw_hotplug -- nvme/sw_hotplug.sh@77 -- # trap 'killprocess $hotplug_pid; exit 1' SIGINT SIGTERM EXIT 00:10:00.632 04:57:38 sw_hotplug -- nvme/sw_hotplug.sh@85 -- # hotplug_pid=78517 00:10:00.632 04:57:38 sw_hotplug -- nvme/sw_hotplug.sh@87 -- # debug_remove_attach_helper 3 6 false 00:10:00.632 04:57:38 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:10:00.632 04:57:38 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 false 00:10:00.632 04:57:38 sw_hotplug -- common/autotest_common.sh@707 -- # local cmd_es=0 00:10:00.632 04:57:38 sw_hotplug -- common/autotest_common.sh@709 -- # [[ -t 0 ]] 00:10:00.632 04:57:38 sw_hotplug -- common/autotest_common.sh@709 -- # exec 00:10:00.632 04:57:38 sw_hotplug -- common/autotest_common.sh@711 -- # local time=0 TIMEFORMAT=%2R 00:10:00.632 04:57:38 sw_hotplug -- nvme/sw_hotplug.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/examples/hotplug -i 0 -t 0 -n 6 -r 6 -l warning 00:10:00.632 04:57:38 sw_hotplug -- common/autotest_common.sh@717 -- # remove_attach_helper 3 6 false 00:10:00.632 04:57:38 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:10:00.632 04:57:38 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:10:00.632 04:57:38 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=false 00:10:00.632 04:57:38 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:10:00.632 04:57:38 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:10:00.893 Initializing NVMe Controllers 00:10:00.893 Attaching to 0000:00:10.0 00:10:00.893 Attaching to 0000:00:11.0 00:10:00.893 Attached to 0000:00:11.0 00:10:00.893 Attached to 0000:00:10.0 00:10:00.893 Initialization complete. Starting I/O... 
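For reference, the nvme_in_userspace scan traced earlier (scripts/common.sh@242 through @326) is just lspci filtered by class code: class 01 (mass storage), subclass 08 (non-volatile memory), prog-if 02 (NVM Express). A condensed sketch of that enumeration, keeping only functions for which a kernel nvme driver node exists:

    # NVMe functions are PCI class/subclass/prog-if 01/08/02.
    lspci -mm -n -D | grep -i -- -p02 |
        awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' | tr -d '"' |
        while read -r bdf; do
            # Same existence test as scripts/common.sh@322.
            [[ -e /sys/bus/pci/drivers/nvme/$bdf ]] && echo "$bdf"
        done

The harness additionally honors PCI_ALLOWED and denied-controller lists via pci_can_use, which is why only 0000:00:10.0 and 0000:00:11.0 take part in the hotplug run; this sketch skips that filtering.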
00:10:00.893 QEMU NVMe Ctrl (12341 ): 0 I/Os completed (+0) 00:10:00.893 QEMU NVMe Ctrl (12340 ): 0 I/Os completed (+0) 00:10:00.893 00:10:01.838 QEMU NVMe Ctrl (12341 ): 2588 I/Os completed (+2588) 00:10:01.838 QEMU NVMe Ctrl (12340 ): 2592 I/Os completed (+2592) 00:10:01.838 00:10:02.785 QEMU NVMe Ctrl (12341 ): 5540 I/Os completed (+2952) 00:10:02.785 QEMU NVMe Ctrl (12340 ): 5557 I/Os completed (+2965) 00:10:02.785 00:10:03.729 QEMU NVMe Ctrl (12341 ): 8339 I/Os completed (+2799) 00:10:03.729 QEMU NVMe Ctrl (12340 ): 8358 I/Os completed (+2801) 00:10:03.729 00:10:04.704 QEMU NVMe Ctrl (12341 ): 11567 I/Os completed (+3228) 00:10:04.704 QEMU NVMe Ctrl (12340 ): 11605 I/Os completed (+3247) 00:10:04.704 00:10:06.092 QEMU NVMe Ctrl (12341 ): 14699 I/Os completed (+3132) 00:10:06.092 QEMU NVMe Ctrl (12340 ): 14737 I/Os completed (+3132) 00:10:06.092 00:10:06.664 04:57:44 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:10:06.664 04:57:44 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:06.664 04:57:44 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:06.664 [2024-12-06 04:57:44.740093] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 00:10:06.664 Controller removed: QEMU NVMe Ctrl (12340 ) 00:10:06.664 [2024-12-06 04:57:44.741107] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:06.664 [2024-12-06 04:57:44.741236] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:06.664 [2024-12-06 04:57:44.741288] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:06.664 [2024-12-06 04:57:44.741330] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:06.664 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:10:06.664 [2024-12-06 04:57:44.742462] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:06.664 [2024-12-06 04:57:44.742502] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:06.664 [2024-12-06 04:57:44.742513] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:06.664 [2024-12-06 04:57:44.742526] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:06.664 04:57:44 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:06.664 04:57:44 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:06.664 [2024-12-06 04:57:44.760142] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
00:10:06.664 Controller removed: QEMU NVMe Ctrl (12341 ) 00:10:06.664 [2024-12-06 04:57:44.760914] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:06.664 [2024-12-06 04:57:44.760943] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:06.664 [2024-12-06 04:57:44.760958] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:06.664 [2024-12-06 04:57:44.760971] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:06.664 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:10:06.664 [2024-12-06 04:57:44.761861] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:06.664 [2024-12-06 04:57:44.761888] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:06.664 [2024-12-06 04:57:44.761902] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:06.664 [2024-12-06 04:57:44.761912] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:06.664 04:57:44 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:10:06.665 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor 00:10:06.665 EAL: Scan for (pci) bus failed. 00:10:06.665 04:57:44 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:10:06.665 04:57:44 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:06.665 04:57:44 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:06.665 04:57:44 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:10:06.926 00:10:06.926 04:57:44 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:10:06.926 04:57:44 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:06.926 04:57:44 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:06.926 04:57:44 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:06.926 04:57:44 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:10:06.926 Attaching to 0000:00:10.0 00:10:06.926 Attached to 0000:00:10.0 00:10:06.926 04:57:45 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:10:06.926 04:57:45 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:06.926 04:57:45 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:10:06.926 Attaching to 0000:00:11.0 00:10:06.926 Attached to 0000:00:11.0 00:10:07.866 QEMU NVMe Ctrl (12340 ): 4368 I/Os completed (+4368) 00:10:07.866 QEMU NVMe Ctrl (12341 ): 3992 I/Os completed (+3992) 00:10:07.866 00:10:08.802 QEMU NVMe Ctrl (12340 ): 8704 I/Os completed (+4336) 00:10:08.802 QEMU NVMe Ctrl (12341 ): 8315 I/Os completed (+4323) 00:10:08.802 00:10:09.736 QEMU NVMe Ctrl (12340 ): 13258 I/Os completed (+4554) 00:10:09.736 QEMU NVMe Ctrl (12341 ): 13289 I/Os completed (+4974) 00:10:09.736 00:10:11.120 QEMU NVMe Ctrl (12340 ): 16603 I/Os completed (+3345) 00:10:11.120 QEMU NVMe Ctrl (12341 ): 16624 I/Os completed (+3335) 00:10:11.120 00:10:11.693 QEMU NVMe Ctrl (12340 ): 19426 I/Os completed (+2823) 00:10:11.693 QEMU NVMe Ctrl (12341 ): 19487 I/Os completed (+2863) 00:10:11.693 00:10:13.080 QEMU NVMe Ctrl (12340 ): 22073 I/Os completed (+2647) 00:10:13.080 QEMU NVMe Ctrl (12341 ): 22141 I/Os completed (+2654) 00:10:13.080 00:10:14.021 QEMU NVMe Ctrl (12340 ): 25287 I/Os completed (+3214) 00:10:14.021 QEMU NVMe Ctrl (12341 ): 25348 I/Os completed (+3207) 
00:10:14.021 00:10:14.961 QEMU NVMe Ctrl (12340 ): 29107 I/Os completed (+3820) 00:10:14.961 QEMU NVMe Ctrl (12341 ): 29215 I/Os completed (+3867) 00:10:14.961 00:10:15.906 QEMU NVMe Ctrl (12340 ): 32265 I/Os completed (+3158) 00:10:15.906 QEMU NVMe Ctrl (12341 ): 32477 I/Os completed (+3262) 00:10:15.906 00:10:16.857 QEMU NVMe Ctrl (12340 ): 35421 I/Os completed (+3156) 00:10:16.857 QEMU NVMe Ctrl (12341 ): 35710 I/Os completed (+3233) 00:10:16.857 00:10:17.799 QEMU NVMe Ctrl (12340 ): 38301 I/Os completed (+2880) 00:10:17.799 QEMU NVMe Ctrl (12341 ): 38600 I/Os completed (+2890) 00:10:17.799 00:10:18.744 QEMU NVMe Ctrl (12340 ): 42685 I/Os completed (+4384) 00:10:18.744 QEMU NVMe Ctrl (12341 ): 42984 I/Os completed (+4384) 00:10:18.744 00:10:19.005 04:57:57 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:10:19.005 04:57:57 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:10:19.005 04:57:57 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:19.005 04:57:57 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:19.005 [2024-12-06 04:57:57.035366] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 00:10:19.005 Controller removed: QEMU NVMe Ctrl (12340 ) 00:10:19.005 [2024-12-06 04:57:57.036283] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:19.005 [2024-12-06 04:57:57.036343] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:19.005 [2024-12-06 04:57:57.036368] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:19.005 [2024-12-06 04:57:57.036398] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:19.005 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:10:19.005 [2024-12-06 04:57:57.037529] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:19.005 [2024-12-06 04:57:57.037619] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:19.005 [2024-12-06 04:57:57.037644] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:19.005 [2024-12-06 04:57:57.037705] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:19.005 04:57:57 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:19.005 04:57:57 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:19.005 [2024-12-06 04:57:57.057494] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
00:10:19.005 Controller removed: QEMU NVMe Ctrl (12341 ) 00:10:19.005 [2024-12-06 04:57:57.058342] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:19.005 [2024-12-06 04:57:57.058389] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:19.005 [2024-12-06 04:57:57.058415] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:19.005 [2024-12-06 04:57:57.058438] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:19.005 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:10:19.005 [2024-12-06 04:57:57.059342] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:19.005 [2024-12-06 04:57:57.059379] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:19.005 [2024-12-06 04:57:57.059405] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:19.005 [2024-12-06 04:57:57.059424] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:19.005 04:57:57 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:10:19.005 04:57:57 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:10:19.005 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor 00:10:19.005 EAL: Scan for (pci) bus failed. 00:10:19.005 04:57:57 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:19.005 04:57:57 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:19.005 04:57:57 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:10:19.267 04:57:57 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:10:19.267 04:57:57 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:19.267 04:57:57 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:19.267 04:57:57 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:19.267 04:57:57 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:10:19.267 Attaching to 0000:00:10.0 00:10:19.267 Attached to 0000:00:10.0 00:10:19.267 04:57:57 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:10:19.267 04:57:57 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:19.267 04:57:57 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:10:19.267 Attaching to 0000:00:11.0 00:10:19.267 Attached to 0000:00:11.0 00:10:19.841 QEMU NVMe Ctrl (12340 ): 2963 I/Os completed (+2963) 00:10:19.841 QEMU NVMe Ctrl (12341 ): 2600 I/Os completed (+2600) 00:10:19.841 00:10:20.786 QEMU NVMe Ctrl (12340 ): 7435 I/Os completed (+4472) 00:10:20.786 QEMU NVMe Ctrl (12341 ): 7072 I/Os completed (+4472) 00:10:20.786 00:10:21.765 QEMU NVMe Ctrl (12340 ): 11939 I/Os completed (+4504) 00:10:21.765 QEMU NVMe Ctrl (12341 ): 11576 I/Os completed (+4504) 00:10:21.765 00:10:22.710 QEMU NVMe Ctrl (12340 ): 15851 I/Os completed (+3912) 00:10:22.710 QEMU NVMe Ctrl (12341 ): 15501 I/Os completed (+3925) 00:10:22.710 00:10:24.094 QEMU NVMe Ctrl (12340 ): 19488 I/Os completed (+3637) 00:10:24.094 QEMU NVMe Ctrl (12341 ): 19138 I/Os completed (+3637) 00:10:24.094 00:10:24.705 QEMU NVMe Ctrl (12340 ): 23954 I/Os completed (+4466) 00:10:24.705 QEMU NVMe Ctrl (12341 ): 23606 I/Os completed (+4468) 00:10:24.705 00:10:26.087 QEMU NVMe Ctrl (12340 ): 28394 I/Os completed (+4440) 00:10:26.087 QEMU NVMe Ctrl (12341 ): 28048 I/Os completed (+4442) 00:10:26.087 
00:10:27.027 QEMU NVMe Ctrl (12340 ): 32842 I/Os completed (+4448) 00:10:27.027 QEMU NVMe Ctrl (12341 ): 32497 I/Os completed (+4449) 00:10:27.027 00:10:27.967 QEMU NVMe Ctrl (12340 ): 37298 I/Os completed (+4456) 00:10:27.967 QEMU NVMe Ctrl (12341 ): 36961 I/Os completed (+4464) 00:10:27.967 00:10:28.911 QEMU NVMe Ctrl (12340 ): 40350 I/Os completed (+3052) 00:10:28.911 QEMU NVMe Ctrl (12341 ): 40012 I/Os completed (+3051) 00:10:28.911 00:10:29.856 QEMU NVMe Ctrl (12340 ): 43117 I/Os completed (+2767) 00:10:29.856 QEMU NVMe Ctrl (12341 ): 42797 I/Os completed (+2785) 00:10:29.856 00:10:30.800 QEMU NVMe Ctrl (12340 ): 45660 I/Os completed (+2543) 00:10:30.800 QEMU NVMe Ctrl (12341 ): 45357 I/Os completed (+2560) 00:10:30.800 00:10:31.373 04:58:09 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:10:31.373 04:58:09 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:10:31.373 04:58:09 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:31.373 04:58:09 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:31.373 [2024-12-06 04:58:09.341950] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 00:10:31.373 Controller removed: QEMU NVMe Ctrl (12340 ) 00:10:31.373 [2024-12-06 04:58:09.343234] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:31.373 [2024-12-06 04:58:09.343311] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:31.373 [2024-12-06 04:58:09.343331] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:31.373 [2024-12-06 04:58:09.343355] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:31.373 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:10:31.373 [2024-12-06 04:58:09.345537] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:31.373 [2024-12-06 04:58:09.345610] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:31.373 [2024-12-06 04:58:09.345628] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:31.373 [2024-12-06 04:58:09.345646] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:31.373 04:58:09 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:31.373 04:58:09 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:31.373 [2024-12-06 04:58:09.369541] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
00:10:31.373 Controller removed: QEMU NVMe Ctrl (12341 ) 00:10:31.373 [2024-12-06 04:58:09.371497] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:31.373 [2024-12-06 04:58:09.371580] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:31.373 [2024-12-06 04:58:09.371615] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:31.373 [2024-12-06 04:58:09.371643] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:31.373 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:10:31.373 [2024-12-06 04:58:09.374054] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:31.373 [2024-12-06 04:58:09.374114] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:31.373 [2024-12-06 04:58:09.374132] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:31.373 [2024-12-06 04:58:09.374145] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:31.373 04:58:09 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:10:31.373 04:58:09 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:10:31.373 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor 00:10:31.373 EAL: Scan for (pci) bus failed. 00:10:31.373 04:58:09 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:31.373 04:58:09 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:31.373 04:58:09 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:10:31.373 04:58:09 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:10:31.373 04:58:09 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:31.373 04:58:09 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:31.373 04:58:09 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:31.373 04:58:09 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:10:31.373 Attaching to 0000:00:10.0 00:10:31.373 Attached to 0000:00:10.0 00:10:31.634 04:58:09 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:10:31.634 04:58:09 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:31.634 04:58:09 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:10:31.634 Attaching to 0000:00:11.0 00:10:31.634 Attached to 0000:00:11.0 00:10:31.634 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:10:31.634 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:10:31.634 [2024-12-06 04:58:09.668068] rpc.c: 409:spdk_rpc_close: *WARNING*: spdk_rpc_close: deprecated feature spdk_rpc_close is deprecated to be removed in v24.09 00:10:43.869 04:58:21 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:10:43.869 04:58:21 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:10:43.869 04:58:21 sw_hotplug -- common/autotest_common.sh@717 -- # time=42.93 00:10:43.869 04:58:21 sw_hotplug -- common/autotest_common.sh@718 -- # echo 42.93 00:10:43.869 04:58:21 sw_hotplug -- common/autotest_common.sh@720 -- # return 0 00:10:43.869 04:58:21 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=42.93 00:10:43.869 04:58:21 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 42.93 2 00:10:43.869 remove_attach_helper took 42.93s to complete (handling 2 nvme drive(s)) 04:58:21 sw_hotplug -- 
nvme/sw_hotplug.sh@91 -- # sleep 6 00:10:50.454 04:58:27 sw_hotplug -- nvme/sw_hotplug.sh@93 -- # kill -0 78517 00:10:50.454 /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh: line 93: kill: (78517) - No such process 00:10:50.454 04:58:27 sw_hotplug -- nvme/sw_hotplug.sh@95 -- # wait 78517 00:10:50.454 04:58:27 sw_hotplug -- nvme/sw_hotplug.sh@102 -- # trap - SIGINT SIGTERM EXIT 00:10:50.454 04:58:27 sw_hotplug -- nvme/sw_hotplug.sh@151 -- # tgt_run_hotplug 00:10:50.454 04:58:27 sw_hotplug -- nvme/sw_hotplug.sh@107 -- # local dev 00:10:50.454 04:58:27 sw_hotplug -- nvme/sw_hotplug.sh@110 -- # spdk_tgt_pid=79065 00:10:50.454 04:58:27 sw_hotplug -- nvme/sw_hotplug.sh@112 -- # trap 'killprocess ${spdk_tgt_pid}; echo 1 > /sys/bus/pci/rescan; exit 1' SIGINT SIGTERM EXIT 00:10:50.454 04:58:27 sw_hotplug -- nvme/sw_hotplug.sh@113 -- # waitforlisten 79065 00:10:50.454 04:58:27 sw_hotplug -- common/autotest_common.sh@831 -- # '[' -z 79065 ']' 00:10:50.454 04:58:27 sw_hotplug -- nvme/sw_hotplug.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:10:50.454 04:58:27 sw_hotplug -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:50.454 04:58:27 sw_hotplug -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:50.454 04:58:27 sw_hotplug -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:50.454 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:50.454 04:58:27 sw_hotplug -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:50.454 04:58:27 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:50.454 [2024-12-06 04:58:27.763200] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
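[Editor's note] Two shell idioms are worth noting in the block above. `kill -0 78517` sends no signal at all; it only tests whether the PID still exists, so the "No such process" message means the previous helper exited cleanly. And the @112 trap guarantees the SPDK target is killed and the hot-removed devices restored on any exit path. A compressed sketch of the same pattern (`$old_pid` is a hypothetical placeholder):

```bash
# Signal 0 = existence probe only.
if ! kill -0 "$old_pid" 2>/dev/null; then
    echo "process $old_pid is already gone"
fi

# Cleanup trap exactly as traced at sw_hotplug.sh@112: killprocess is the
# autotest helper; the rescan write undoes the earlier sysfs removals.
trap 'killprocess ${spdk_tgt_pid}; echo 1 > /sys/bus/pci/rescan; exit 1' SIGINT SIGTERM EXIT
```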
00:10:50.454 [2024-12-06 04:58:27.763640] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79065 ] 00:10:50.454 [2024-12-06 04:58:27.902835] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:50.454 [2024-12-06 04:58:27.976601] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:10:50.454 04:58:28 sw_hotplug -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:50.454 04:58:28 sw_hotplug -- common/autotest_common.sh@864 -- # return 0 00:10:50.454 04:58:28 sw_hotplug -- nvme/sw_hotplug.sh@115 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:10:50.454 04:58:28 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:50.454 04:58:28 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:50.454 04:58:28 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:50.454 04:58:28 sw_hotplug -- nvme/sw_hotplug.sh@117 -- # debug_remove_attach_helper 3 6 true 00:10:50.454 04:58:28 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:10:50.454 04:58:28 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true 00:10:50.454 04:58:28 sw_hotplug -- common/autotest_common.sh@707 -- # local cmd_es=0 00:10:50.454 04:58:28 sw_hotplug -- common/autotest_common.sh@709 -- # [[ -t 0 ]] 00:10:50.454 04:58:28 sw_hotplug -- common/autotest_common.sh@709 -- # exec 00:10:50.454 04:58:28 sw_hotplug -- common/autotest_common.sh@711 -- # local time=0 TIMEFORMAT=%2R 00:10:50.455 04:58:28 sw_hotplug -- common/autotest_common.sh@717 -- # remove_attach_helper 3 6 true 00:10:50.455 04:58:28 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:10:50.455 04:58:28 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:10:50.455 04:58:28 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true 00:10:50.455 04:58:28 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:10:50.455 04:58:28 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:10:57.063 04:58:34 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:10:57.063 04:58:34 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:57.063 04:58:34 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:57.063 04:58:34 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:57.063 04:58:34 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:57.063 04:58:34 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:10:57.063 04:58:34 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:10:57.063 04:58:34 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:10:57.063 04:58:34 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:10:57.063 04:58:34 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:10:57.063 04:58:34 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:57.063 04:58:34 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:57.063 04:58:34 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:57.063 04:58:34 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:57.063 04:58:34 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:10:57.063 04:58:34 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:10:57.063 [2024-12-06 04:58:34.718539] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: 
[0000:00:10.0] in failed state. 00:10:57.063 [2024-12-06 04:58:34.719604] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:57.063 [2024-12-06 04:58:34.719745] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:10:57.063 [2024-12-06 04:58:34.719765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:57.063 [2024-12-06 04:58:34.719778] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:57.063 [2024-12-06 04:58:34.719787] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:10:57.063 [2024-12-06 04:58:34.719794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:57.063 [2024-12-06 04:58:34.719804] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:57.063 [2024-12-06 04:58:34.719810] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:10:57.063 [2024-12-06 04:58:34.719818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:57.063 [2024-12-06 04:58:34.719825] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:57.063 [2024-12-06 04:58:34.719832] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:10:57.063 [2024-12-06 04:58:34.719839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:57.063 04:58:35 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:10:57.063 04:58:35 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:10:57.063 04:58:35 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:10:57.063 04:58:35 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:10:57.063 04:58:35 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:10:57.063 04:58:35 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:57.063 04:58:35 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:57.063 04:58:35 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:57.063 04:58:35 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:57.063 04:58:35 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:10:57.063 04:58:35 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:10:57.321 [2024-12-06 04:58:35.318542] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
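[Editor's note] The `helper_time=42.93` summaries earlier (and `45.19`/`44.60` later) come from the timing wrapper traced at autotest_common.sh@707-720: with `TIMEFORMAT=%2R`, bash's `time` keyword prints only the elapsed wall-clock seconds. A rough sketch under that assumption; the real helper has extra plumbing (it preserves the timed command's output, which this simplification discards):

```bash
timing_cmd() {
    local TIMEFORMAT=%2R elapsed
    # bash writes the `time` report to stderr; capture just that while
    # throwing away the timed command's own stdout/stderr.
    elapsed=$( { time "$@" >/dev/null 2>&1; } 2>&1 )
    echo "$elapsed"
}

helper_time=$(timing_cmd remove_attach_helper 3 6 true)
printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))\n' \
    "$helper_time" 2
```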
00:10:57.321 [2024-12-06 04:58:35.319586] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:57.321 [2024-12-06 04:58:35.319617] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:10:57.321 [2024-12-06 04:58:35.319627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:57.321 [2024-12-06 04:58:35.319637] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:57.321 [2024-12-06 04:58:35.319645] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:10:57.321 [2024-12-06 04:58:35.319653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:57.321 [2024-12-06 04:58:35.319660] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:57.321 [2024-12-06 04:58:35.319676] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:10:57.321 [2024-12-06 04:58:35.319683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:57.321 [2024-12-06 04:58:35.319694] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:57.321 [2024-12-06 04:58:35.319700] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:10:57.321 [2024-12-06 04:58:35.319708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:57.579 04:58:35 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:10:57.579 04:58:35 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:10:57.579 04:58:35 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:10:57.579 04:58:35 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:57.579 04:58:35 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:10:57.579 04:58:35 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:57.579 04:58:35 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:10:57.579 04:58:35 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:57.579 04:58:35 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:57.579 04:58:35 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:10:57.579 04:58:35 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:10:57.838 04:58:35 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:57.838 04:58:35 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:57.838 04:58:35 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:10:57.838 04:58:35 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:10:57.838 04:58:35 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:57.838 04:58:35 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:57.838 04:58:35 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:57.838 04:58:35 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 
00:10:57.838 04:58:36 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:10:57.838 04:58:36 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:57.838 04:58:36 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:11:10.034 04:58:48 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:11:10.034 04:58:48 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:11:10.034 04:58:48 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:11:10.034 04:58:48 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:10.034 04:58:48 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:10.034 04:58:48 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:10.034 04:58:48 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:10.034 04:58:48 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:10.034 04:58:48 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:10.034 04:58:48 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:10.034 04:58:48 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:10.034 04:58:48 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:10.034 04:58:48 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:10.034 04:58:48 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:10.034 04:58:48 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:10.034 04:58:48 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:11:10.034 04:58:48 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:10.034 04:58:48 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:10.034 04:58:48 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:10.034 04:58:48 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:10.034 04:58:48 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:10.034 04:58:48 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:10.034 04:58:48 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:10.034 [2024-12-06 04:58:48.118742] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 
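[Editor's note] The @12-13 trace that repeats throughout this section is the `bdev_bdfs` helper: it asks the running target for its bdevs over RPC and extracts the unique backing PCI addresses. A simplified version of what the trace shows; the real script feeds jq through process substitution (hence the `/dev/fd/63` in the log), and `scripts/rpc.py` as the `rpc_cmd` backend is an assumption:

```bash
bdev_bdfs() {
    # List PCI addresses backing the current NVMe bdevs, de-duplicated.
    scripts/rpc.py bdev_get_bdevs \
        | jq -r '.[].driver_specific.nvme[].pci_address' \
        | sort -u
}

bdfs=($(bdev_bdfs))   # e.g. (0000:00:10.0 0000:00:11.0)
echo "${#bdfs[@]} nvme-backed bdev(s) present"
```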
00:11:10.034 [2024-12-06 04:58:48.119790] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:10.034 [2024-12-06 04:58:48.119820] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:10.034 [2024-12-06 04:58:48.119833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:10.034 [2024-12-06 04:58:48.119844] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:10.034 [2024-12-06 04:58:48.119853] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:10.034 [2024-12-06 04:58:48.119860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:10.034 [2024-12-06 04:58:48.119868] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:10.034 [2024-12-06 04:58:48.119874] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:10.034 [2024-12-06 04:58:48.119882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:10.034 [2024-12-06 04:58:48.119889] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:10.034 [2024-12-06 04:58:48.119897] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:10.034 [2024-12-06 04:58:48.119904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:10.034 04:58:48 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:10.034 04:58:48 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:11:10.034 04:58:48 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:10.601 [2024-12-06 04:58:48.618747] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
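[Editor's note] The `(( 2 > 0 ))` / `sleep 0.5` / "Still waiting for %s to be gone" cadence in the trace is a poll loop: after removal, the test keeps querying `bdev_bdfs` until the target reports no NVMe-backed bdevs left. Reconstructed from the @50-51 trace:

```bash
# Poll until the hot-removed controllers disappear from the target's
# bdev list; each pass logs which BDFs are still present.
bdfs=($(bdev_bdfs))
while (( ${#bdfs[@]} > 0 )); do
    printf 'Still waiting for %s to be gone\n' "${bdfs[@]}"
    sleep 0.5
    bdfs=($(bdev_bdfs))
done
```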
00:11:10.601 [2024-12-06 04:58:48.619734] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:10.601 [2024-12-06 04:58:48.619766] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:10.601 [2024-12-06 04:58:48.619775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:10.601 [2024-12-06 04:58:48.619791] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:10.601 [2024-12-06 04:58:48.619799] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:10.601 [2024-12-06 04:58:48.619808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:10.601 [2024-12-06 04:58:48.619814] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:10.601 [2024-12-06 04:58:48.619824] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:10.601 [2024-12-06 04:58:48.619830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:10.601 [2024-12-06 04:58:48.619838] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:10.601 [2024-12-06 04:58:48.619844] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:10.601 [2024-12-06 04:58:48.619852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:10.601 04:58:48 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:11:10.601 04:58:48 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:10.601 04:58:48 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:10.601 04:58:48 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:10.601 04:58:48 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:10.601 04:58:48 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:10.601 04:58:48 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:10.601 04:58:48 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:10.602 04:58:48 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:10.602 04:58:48 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:11:10.602 04:58:48 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:11:10.602 04:58:48 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:10.602 04:58:48 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:10.602 04:58:48 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:11:10.602 04:58:48 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:11:10.860 04:58:48 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:10.860 04:58:48 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:10.860 04:58:48 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:10.860 04:58:48 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:11:10.860 04:58:48 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:11:10.860 04:58:48 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:10.860 04:58:48 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:11:23.062 04:59:00 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:11:23.062 04:59:00 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:11:23.062 04:59:00 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:11:23.062 04:59:00 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:23.062 04:59:00 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:23.062 04:59:00 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:23.062 04:59:00 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:23.062 04:59:00 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:23.062 04:59:00 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:23.062 04:59:00 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:23.062 04:59:00 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:23.062 04:59:00 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:23.062 04:59:00 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:23.062 04:59:00 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:23.062 04:59:00 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:23.062 04:59:00 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:11:23.062 04:59:00 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:23.062 04:59:00 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:23.062 04:59:00 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:23.062 04:59:00 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:23.062 04:59:00 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:23.062 04:59:00 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:23.062 04:59:00 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:23.062 [2024-12-06 04:59:01.018965] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 
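[Editor's note] The backslash-laden comparison at sw_hotplug.sh@71 above is an artifact of `set -x`: when bash traces a `[[ lhs == rhs ]]` test, it escapes every character on the pattern side so the trace is unambiguous as a literal match. The underlying check simply verifies that both controllers reattached:

```bash
# What the escaped @71 trace line actually executes: compare the sorted
# list of reattached BDFs against the expected pair.
expected="0000:00:10.0 0000:00:11.0"
bdfs=($(bdev_bdfs))
[[ "${bdfs[*]}" == "$expected" ]] && echo "both controllers are back"
```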
00:11:23.062 [2024-12-06 04:59:01.020017] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:23.062 [2024-12-06 04:59:01.020046] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:23.062 [2024-12-06 04:59:01.020061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:23.062 [2024-12-06 04:59:01.020073] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:23.062 [2024-12-06 04:59:01.020082] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:23.062 [2024-12-06 04:59:01.020089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:23.062 04:59:01 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:23.062 [2024-12-06 04:59:01.020097] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:23.062 [2024-12-06 04:59:01.020104] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:23.062 [2024-12-06 04:59:01.020113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:23.062 [2024-12-06 04:59:01.020119] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:23.062 [2024-12-06 04:59:01.020126] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:23.062 [2024-12-06 04:59:01.020133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:23.062 04:59:01 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:11:23.062 04:59:01 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:23.321 [2024-12-06 04:59:01.418973] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
00:11:23.321 [2024-12-06 04:59:01.419971] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:23.321 [2024-12-06 04:59:01.420004] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:23.321 [2024-12-06 04:59:01.420014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:23.321 [2024-12-06 04:59:01.420025] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:23.321 [2024-12-06 04:59:01.420032] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:23.321 [2024-12-06 04:59:01.420042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:23.321 [2024-12-06 04:59:01.420048] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:23.321 [2024-12-06 04:59:01.420056] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:23.321 [2024-12-06 04:59:01.420063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:23.321 [2024-12-06 04:59:01.420070] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:23.321 [2024-12-06 04:59:01.420077] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:23.321 [2024-12-06 04:59:01.420084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:23.321 04:59:01 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:11:23.321 04:59:01 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:23.321 04:59:01 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:23.321 04:59:01 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:23.321 04:59:01 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:23.321 04:59:01 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:23.321 04:59:01 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:23.321 04:59:01 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:23.321 04:59:01 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:23.581 04:59:01 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:11:23.581 04:59:01 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:11:23.581 04:59:01 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:23.581 04:59:01 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:23.581 04:59:01 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:11:23.581 04:59:01 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:11:23.581 04:59:01 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:23.581 04:59:01 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:23.581 04:59:01 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:23.581 04:59:01 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:11:23.581 04:59:01 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:11:23.582 04:59:01 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:23.582 04:59:01 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:11:35.799 04:59:13 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:11:35.799 04:59:13 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:11:35.799 04:59:13 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:11:35.799 04:59:13 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:35.799 04:59:13 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:35.799 04:59:13 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:35.799 04:59:13 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.799 04:59:13 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:35.799 04:59:13 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.799 04:59:13 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:35.799 04:59:13 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:35.799 04:59:13 sw_hotplug -- common/autotest_common.sh@717 -- # time=45.19 00:11:35.799 04:59:13 sw_hotplug -- common/autotest_common.sh@718 -- # echo 45.19 00:11:35.799 04:59:13 sw_hotplug -- common/autotest_common.sh@720 -- # return 0 00:11:35.799 04:59:13 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=45.19 00:11:35.799 04:59:13 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 45.19 2 00:11:35.799 remove_attach_helper took 45.19s to complete (handling 2 nvme drive(s)) 04:59:13 sw_hotplug -- nvme/sw_hotplug.sh@119 -- # rpc_cmd bdev_nvme_set_hotplug -d 00:11:35.799 04:59:13 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.799 04:59:13 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:35.799 04:59:13 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.799 04:59:13 sw_hotplug -- nvme/sw_hotplug.sh@120 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:11:35.799 04:59:13 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.799 04:59:13 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:35.799 04:59:13 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.799 04:59:13 sw_hotplug -- nvme/sw_hotplug.sh@122 -- # debug_remove_attach_helper 3 6 true 00:11:35.799 04:59:13 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:11:35.799 04:59:13 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true 00:11:35.799 04:59:13 sw_hotplug -- common/autotest_common.sh@707 -- # local cmd_es=0 00:11:35.799 04:59:13 sw_hotplug -- common/autotest_common.sh@709 -- # [[ -t 0 ]] 00:11:35.799 04:59:13 sw_hotplug -- common/autotest_common.sh@709 -- # exec 00:11:35.799 04:59:13 sw_hotplug -- common/autotest_common.sh@711 -- # local time=0 TIMEFORMAT=%2R 00:11:35.799 04:59:13 sw_hotplug -- common/autotest_common.sh@717 -- # remove_attach_helper 3 6 true 00:11:35.799 04:59:13 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:11:35.799 04:59:13 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:11:35.799 04:59:13 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true 00:11:35.799 04:59:13 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:11:35.799 04:59:13 sw_hotplug -- 
nvme/sw_hotplug.sh@36 -- # sleep 6 00:11:42.357 04:59:19 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:42.357 04:59:19 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:42.357 04:59:19 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:42.357 04:59:19 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:42.357 04:59:19 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:42.357 04:59:19 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:11:42.357 04:59:19 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:42.357 04:59:19 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:42.357 04:59:19 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:42.357 04:59:19 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:42.357 04:59:19 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:42.357 04:59:19 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:42.357 04:59:19 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:42.357 04:59:19 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:42.357 04:59:19 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:11:42.357 04:59:19 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:42.357 [2024-12-06 04:59:19.941556] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 00:11:42.357 [2024-12-06 04:59:19.942343] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:42.357 [2024-12-06 04:59:19.942370] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:42.357 [2024-12-06 04:59:19.942382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:42.357 [2024-12-06 04:59:19.942395] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:42.357 [2024-12-06 04:59:19.942404] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:42.357 [2024-12-06 04:59:19.942411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:42.357 [2024-12-06 04:59:19.942419] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:42.357 [2024-12-06 04:59:19.942426] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:42.357 [2024-12-06 04:59:19.942436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:42.357 [2024-12-06 04:59:19.942443] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:42.357 [2024-12-06 04:59:19.942451] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:42.357 [2024-12-06 04:59:19.942457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:42.357 [2024-12-06 04:59:20.341558] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
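[Editor's note] For the target-side variant of the test, sw_hotplug.sh@119-120 in the trace above flips SPDK's own hotplug monitor off and back on over RPC, so the target itself notices surprise removals and insertions. Assuming the stock `scripts/rpc.py` front end for `rpc_cmd`:

```bash
# Disable, then re-enable, bdev_nvme's PCIe hotplug monitor (the exact
# RPC names are taken verbatim from the trace).
scripts/rpc.py bdev_nvme_set_hotplug -d
scripts/rpc.py bdev_nvme_set_hotplug -e
```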
00:11:42.357 [2024-12-06 04:59:20.342307] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:42.357 [2024-12-06 04:59:20.342340] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:42.357 [2024-12-06 04:59:20.342350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:42.358 [2024-12-06 04:59:20.342361] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:42.358 [2024-12-06 04:59:20.342369] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:42.358 [2024-12-06 04:59:20.342377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:42.358 [2024-12-06 04:59:20.342384] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:42.358 [2024-12-06 04:59:20.342392] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:42.358 [2024-12-06 04:59:20.342398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:42.358 [2024-12-06 04:59:20.342406] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:42.358 [2024-12-06 04:59:20.342412] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:42.358 [2024-12-06 04:59:20.342422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:42.358 04:59:20 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:11:42.358 04:59:20 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:42.358 04:59:20 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:42.358 04:59:20 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:42.358 04:59:20 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:42.358 04:59:20 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:42.358 04:59:20 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:42.358 04:59:20 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:42.358 04:59:20 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:42.358 04:59:20 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:11:42.358 04:59:20 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:11:42.358 04:59:20 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:42.358 04:59:20 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:42.358 04:59:20 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:11:42.616 04:59:20 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:11:42.616 04:59:20 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:42.616 04:59:20 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:42.616 04:59:20 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:42.616 04:59:20 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:11:42.616 04:59:20 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:11:42.616 04:59:20 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:42.616 04:59:20 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:11:54.996 04:59:32 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:11:54.996 04:59:32 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:11:54.996 04:59:32 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:11:54.996 04:59:32 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:54.996 04:59:32 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:54.996 04:59:32 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:54.996 04:59:32 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.996 04:59:32 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:54.996 04:59:32 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.996 04:59:32 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:54.996 04:59:32 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:54.996 04:59:32 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:54.996 04:59:32 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:54.996 04:59:32 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:54.996 04:59:32 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:54.996 [2024-12-06 04:59:32.741775] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 00:11:54.996 [2024-12-06 04:59:32.742597] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:54.996 [2024-12-06 04:59:32.742628] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:54.996 [2024-12-06 04:59:32.742640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:54.996 [2024-12-06 04:59:32.742652] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:54.996 [2024-12-06 04:59:32.742661] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:54.996 [2024-12-06 04:59:32.742678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:54.996 [2024-12-06 04:59:32.742687] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:54.997 [2024-12-06 04:59:32.742693] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:54.997 [2024-12-06 04:59:32.742701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:54.997 [2024-12-06 04:59:32.742708] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:54.997 [2024-12-06 04:59:32.742716] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:54.997 [2024-12-06 04:59:32.742722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) 
qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:54.997 04:59:32 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:11:54.997 04:59:32 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:54.997 04:59:32 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:54.997 04:59:32 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:54.997 04:59:32 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:54.997 04:59:32 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.997 04:59:32 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:54.997 04:59:32 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:54.997 04:59:32 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.997 04:59:32 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:11:54.997 04:59:32 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:55.255 [2024-12-06 04:59:33.241782] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 00:11:55.255 [2024-12-06 04:59:33.242510] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:55.255 [2024-12-06 04:59:33.242544] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:55.255 [2024-12-06 04:59:33.242554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:55.255 [2024-12-06 04:59:33.242566] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:55.255 [2024-12-06 04:59:33.242573] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:55.255 [2024-12-06 04:59:33.242582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:55.255 [2024-12-06 04:59:33.242588] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:55.255 [2024-12-06 04:59:33.242596] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:55.255 [2024-12-06 04:59:33.242603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:55.255 [2024-12-06 04:59:33.242611] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:55.255 [2024-12-06 04:59:33.242617] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:55.255 [2024-12-06 04:59:33.242625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:55.255 04:59:33 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:11:55.255 04:59:33 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:55.255 04:59:33 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:55.255 04:59:33 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:55.255 04:59:33 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:55.255 04:59:33 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd 
bdev_get_bdevs 00:11:55.256 04:59:33 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:55.256 04:59:33 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:55.256 04:59:33 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:55.256 04:59:33 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:11:55.256 04:59:33 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:11:55.256 04:59:33 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:55.256 04:59:33 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:55.256 04:59:33 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:11:55.256 04:59:33 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:11:55.256 04:59:33 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:55.256 04:59:33 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:55.256 04:59:33 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:55.256 04:59:33 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:11:55.513 04:59:33 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:11:55.514 04:59:33 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:55.514 04:59:33 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:12:07.709 04:59:45 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:12:07.709 04:59:45 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:12:07.709 04:59:45 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:12:07.709 04:59:45 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:07.709 04:59:45 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:07.709 04:59:45 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:07.709 04:59:45 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:07.709 04:59:45 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:07.709 04:59:45 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:07.709 04:59:45 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:12:07.709 04:59:45 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:12:07.709 04:59:45 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:07.709 04:59:45 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:07.709 04:59:45 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:07.709 04:59:45 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:07.709 04:59:45 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:12:07.709 04:59:45 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:12:07.709 04:59:45 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:12:07.709 04:59:45 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:07.709 04:59:45 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:07.709 04:59:45 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:07.709 04:59:45 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:07.710 04:59:45 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:07.710 04:59:45 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:07.710 [2024-12-06 04:59:45.641980] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 
00:12:07.710 [2024-12-06 04:59:45.642755] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:07.710 [2024-12-06 04:59:45.642785] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:07.710 [2024-12-06 04:59:45.642797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:07.710 [2024-12-06 04:59:45.642808] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:07.710 [2024-12-06 04:59:45.642819] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:07.710 [2024-12-06 04:59:45.642826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:07.710 [2024-12-06 04:59:45.642835] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:07.710 [2024-12-06 04:59:45.642842] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:07.710 [2024-12-06 04:59:45.642850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:07.710 [2024-12-06 04:59:45.642856] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:07.710 [2024-12-06 04:59:45.642863] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:07.710 [2024-12-06 04:59:45.642870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:07.710 04:59:45 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:12:07.710 04:59:45 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:12:07.967 [2024-12-06 04:59:46.041986] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
00:12:07.967 [2024-12-06 04:59:46.042717] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:07.967 [2024-12-06 04:59:46.042748] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:07.967 [2024-12-06 04:59:46.042758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:07.967 [2024-12-06 04:59:46.042770] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:07.967 [2024-12-06 04:59:46.042777] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:07.967 [2024-12-06 04:59:46.042785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:07.967 [2024-12-06 04:59:46.042792] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:07.967 [2024-12-06 04:59:46.042802] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:07.967 [2024-12-06 04:59:46.042808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:07.967 [2024-12-06 04:59:46.042815] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:07.967 [2024-12-06 04:59:46.042821] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:07.967 [2024-12-06 04:59:46.042829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:07.967 04:59:46 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:12:07.967 04:59:46 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:12:07.967 04:59:46 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:12:07.967 04:59:46 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:07.967 04:59:46 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:07.967 04:59:46 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:07.967 04:59:46 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:07.967 04:59:46 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:07.967 04:59:46 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:07.967 04:59:46 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:12:07.967 04:59:46 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:12:08.223 04:59:46 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:08.223 04:59:46 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:08.223 04:59:46 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:12:08.223 04:59:46 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:12:08.223 04:59:46 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:08.223 04:59:46 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:08.223 04:59:46 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:08.223 04:59:46 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:12:08.223 04:59:46 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:12:08.223 04:59:46 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:08.224 04:59:46 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:12:20.417 04:59:58 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:12:20.417 04:59:58 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:12:20.417 04:59:58 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:12:20.417 04:59:58 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:20.417 04:59:58 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:20.417 04:59:58 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:20.417 04:59:58 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:20.417 04:59:58 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:20.417 04:59:58 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:20.417 04:59:58 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:12:20.417 04:59:58 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:12:20.417 04:59:58 sw_hotplug -- common/autotest_common.sh@717 -- # time=44.60 00:12:20.417 04:59:58 sw_hotplug -- common/autotest_common.sh@718 -- # echo 44.60 00:12:20.417 04:59:58 sw_hotplug -- common/autotest_common.sh@720 -- # return 0 00:12:20.417 04:59:58 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=44.60 00:12:20.417 04:59:58 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 44.60 2 00:12:20.417 remove_attach_helper took 44.60s to complete (handling 2 nvme drive(s)) 04:59:58 sw_hotplug -- nvme/sw_hotplug.sh@124 -- # trap - SIGINT SIGTERM EXIT 00:12:20.417 04:59:58 sw_hotplug -- nvme/sw_hotplug.sh@125 -- # killprocess 79065 00:12:20.417 04:59:58 sw_hotplug -- common/autotest_common.sh@950 -- # '[' -z 79065 ']' 00:12:20.417 04:59:58 sw_hotplug -- common/autotest_common.sh@954 -- # kill -0 79065 00:12:20.417 04:59:58 sw_hotplug -- common/autotest_common.sh@955 -- # uname 00:12:20.417 04:59:58 sw_hotplug -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:20.417 04:59:58 sw_hotplug -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 79065 00:12:20.417 04:59:58 sw_hotplug -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:20.417 04:59:58 sw_hotplug -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:20.417 killing process with pid 79065 00:12:20.417 04:59:58 sw_hotplug -- common/autotest_common.sh@968 -- # echo 'killing process with pid 79065' 00:12:20.417 04:59:58 sw_hotplug -- common/autotest_common.sh@969 -- # kill 79065 00:12:20.417 04:59:58 sw_hotplug -- common/autotest_common.sh@974 -- # wait 79065 00:12:20.679 04:59:58 sw_hotplug -- nvme/sw_hotplug.sh@154 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:12:20.939 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:12:21.509 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:12:21.509 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:12:21.509 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:12:21.509 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:12:21.509 00:12:21.509 real 2m28.796s 00:12:21.509 user 1m49.083s 00:12:21.509 sys 0m18.182s 00:12:21.509 04:59:59 sw_hotplug -- 
common/autotest_common.sh@1126 -- # xtrace_disable 00:12:21.509 ************************************ 00:12:21.509 END TEST sw_hotplug 00:12:21.509 ************************************ 00:12:21.509 04:59:59 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:21.509 04:59:59 -- spdk/autotest.sh@243 -- # [[ 1 -eq 1 ]] 00:12:21.509 04:59:59 -- spdk/autotest.sh@244 -- # run_test nvme_xnvme /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:12:21.509 04:59:59 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:12:21.510 04:59:59 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:21.510 04:59:59 -- common/autotest_common.sh@10 -- # set +x 00:12:21.510 ************************************ 00:12:21.510 START TEST nvme_xnvme 00:12:21.510 ************************************ 00:12:21.510 04:59:59 nvme_xnvme -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:12:21.771 * Looking for test storage... 00:12:21.771 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:12:21.771 04:59:59 nvme_xnvme -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:12:21.771 04:59:59 nvme_xnvme -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:12:21.771 04:59:59 nvme_xnvme -- common/autotest_common.sh@1681 -- # lcov --version 00:12:21.771 04:59:59 nvme_xnvme -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:12:21.771 04:59:59 nvme_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:21.771 04:59:59 nvme_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:21.771 04:59:59 nvme_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:21.771 04:59:59 nvme_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:12:21.771 04:59:59 nvme_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:12:21.771 04:59:59 nvme_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:12:21.771 04:59:59 nvme_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:12:21.771 04:59:59 nvme_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:12:21.771 04:59:59 nvme_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:12:21.771 04:59:59 nvme_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:12:21.771 04:59:59 nvme_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:21.771 04:59:59 nvme_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:12:21.771 04:59:59 nvme_xnvme -- scripts/common.sh@345 -- # : 1 00:12:21.771 04:59:59 nvme_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:21.771 04:59:59 nvme_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:21.771 04:59:59 nvme_xnvme -- scripts/common.sh@365 -- # decimal 1 00:12:21.771 04:59:59 nvme_xnvme -- scripts/common.sh@353 -- # local d=1 00:12:21.771 04:59:59 nvme_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:21.771 04:59:59 nvme_xnvme -- scripts/common.sh@355 -- # echo 1 00:12:21.771 04:59:59 nvme_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:12:21.771 04:59:59 nvme_xnvme -- scripts/common.sh@366 -- # decimal 2 00:12:21.771 04:59:59 nvme_xnvme -- scripts/common.sh@353 -- # local d=2 00:12:21.771 04:59:59 nvme_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:21.771 04:59:59 nvme_xnvme -- scripts/common.sh@355 -- # echo 2 00:12:21.771 04:59:59 nvme_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:12:21.771 04:59:59 nvme_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:21.771 04:59:59 nvme_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:21.771 04:59:59 nvme_xnvme -- scripts/common.sh@368 -- # return 0 00:12:21.771 04:59:59 nvme_xnvme -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:21.771 04:59:59 nvme_xnvme -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:12:21.771 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:21.771 --rc genhtml_branch_coverage=1 00:12:21.771 --rc genhtml_function_coverage=1 00:12:21.771 --rc genhtml_legend=1 00:12:21.771 --rc geninfo_all_blocks=1 00:12:21.771 --rc geninfo_unexecuted_blocks=1 00:12:21.771 00:12:21.771 ' 00:12:21.771 04:59:59 nvme_xnvme -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:12:21.771 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:21.771 --rc genhtml_branch_coverage=1 00:12:21.771 --rc genhtml_function_coverage=1 00:12:21.771 --rc genhtml_legend=1 00:12:21.771 --rc geninfo_all_blocks=1 00:12:21.771 --rc geninfo_unexecuted_blocks=1 00:12:21.771 00:12:21.771 ' 00:12:21.771 04:59:59 nvme_xnvme -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:12:21.771 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:21.771 --rc genhtml_branch_coverage=1 00:12:21.771 --rc genhtml_function_coverage=1 00:12:21.771 --rc genhtml_legend=1 00:12:21.771 --rc geninfo_all_blocks=1 00:12:21.771 --rc geninfo_unexecuted_blocks=1 00:12:21.771 00:12:21.771 ' 00:12:21.771 04:59:59 nvme_xnvme -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:12:21.771 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:21.771 --rc genhtml_branch_coverage=1 00:12:21.771 --rc genhtml_function_coverage=1 00:12:21.771 --rc genhtml_legend=1 00:12:21.771 --rc geninfo_all_blocks=1 00:12:21.771 --rc geninfo_unexecuted_blocks=1 00:12:21.771 00:12:21.771 ' 00:12:21.771 04:59:59 nvme_xnvme -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:21.771 04:59:59 nvme_xnvme -- scripts/common.sh@15 -- # shopt -s extglob 00:12:21.771 04:59:59 nvme_xnvme -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:21.771 04:59:59 nvme_xnvme -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:21.771 04:59:59 nvme_xnvme -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:21.771 04:59:59 nvme_xnvme -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:21.771 04:59:59 nvme_xnvme -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:21.771 04:59:59 nvme_xnvme -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:21.771 04:59:59 nvme_xnvme -- paths/export.sh@5 -- # export PATH 00:12:21.771 04:59:59 nvme_xnvme -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:21.771 04:59:59 nvme_xnvme -- xnvme/xnvme.sh@85 -- # run_test xnvme_to_malloc_dd_copy malloc_to_xnvme_copy 00:12:21.771 04:59:59 nvme_xnvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:12:21.771 04:59:59 nvme_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:21.771 04:59:59 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:21.771 ************************************ 00:12:21.771 START TEST xnvme_to_malloc_dd_copy 00:12:21.771 ************************************ 00:12:21.771 04:59:59 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@1125 -- # malloc_to_xnvme_copy 00:12:21.771 04:59:59 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@14 -- # init_null_blk gb=1 00:12:21.772 04:59:59 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@186 -- # [[ -e /sys/module/null_blk ]] 00:12:21.772 04:59:59 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@186 -- # modprobe null_blk gb=1 00:12:21.772 04:59:59 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@187 -- # return 00:12:21.772 04:59:59 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@16 -- # local mbdev0=malloc0 mbdev0_bs=512 00:12:21.772 04:59:59 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@17 -- # xnvme_io=() 00:12:21.772 04:59:59 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@17 -- # local xnvme0=null0 xnvme0_dev xnvme_io 00:12:21.772 04:59:59 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@18 -- # local io 00:12:21.772 04:59:59 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@20 -- # xnvme_io+=(libaio) 00:12:21.772 04:59:59 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@21 -- # xnvme_io+=(io_uring) 00:12:21.772 04:59:59 
nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@25 -- # mbdev0_b=2097152 00:12:21.772 04:59:59 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@26 -- # xnvme0_dev=/dev/nullb0 00:12:21.772 04:59:59 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@28 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='2097152' ['block_size']='512') 00:12:21.772 04:59:59 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@28 -- # local -A method_bdev_malloc_create_0 00:12:21.772 04:59:59 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@34 -- # method_bdev_xnvme_create_0=() 00:12:21.772 04:59:59 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@34 -- # local -A method_bdev_xnvme_create_0 00:12:21.772 04:59:59 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@35 -- # method_bdev_xnvme_create_0["name"]=null0 00:12:21.772 04:59:59 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@36 -- # method_bdev_xnvme_create_0["filename"]=/dev/nullb0 00:12:21.772 04:59:59 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@38 -- # for io in "${xnvme_io[@]}" 00:12:21.772 04:59:59 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@39 -- # method_bdev_xnvme_create_0["io_mechanism"]=libaio 00:12:21.772 04:59:59 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=null0 --json /dev/fd/62 00:12:21.772 04:59:59 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # gen_conf 00:12:21.772 04:59:59 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:12:21.772 04:59:59 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:12:21.772 { 00:12:21.772 "subsystems": [ 00:12:21.772 { 00:12:21.772 "subsystem": "bdev", 00:12:21.772 "config": [ 00:12:21.772 { 00:12:21.772 "params": { 00:12:21.772 "block_size": 512, 00:12:21.772 "num_blocks": 2097152, 00:12:21.772 "name": "malloc0" 00:12:21.772 }, 00:12:21.772 "method": "bdev_malloc_create" 00:12:21.772 }, 00:12:21.772 { 00:12:21.772 "params": { 00:12:21.772 "io_mechanism": "libaio", 00:12:21.772 "filename": "/dev/nullb0", 00:12:21.772 "name": "null0" 00:12:21.772 }, 00:12:21.772 "method": "bdev_xnvme_create" 00:12:21.772 }, 00:12:21.772 { 00:12:21.772 "method": "bdev_wait_for_examine" 00:12:21.772 } 00:12:21.772 ] 00:12:21.772 } 00:12:21.772 ] 00:12:21.772 } 00:12:21.772 [2024-12-06 04:59:59.956073] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
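The gen_conf/"--json /dev/fd/62" plumbing traced above hands the JSON printed in the log to spdk_dd over an anonymous file descriptor. An equivalent standalone invocation, assuming the same binary path as this run (a sketch, not the test script itself):

    # The config below is the one echoed in the trace; <(...) is what
    # /dev/fd/62 resolves to when the caller uses process substitution.
    spdk_dd=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
    conf='{
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            { "params": { "block_size": 512, "num_blocks": 2097152, "name": "malloc0" },
              "method": "bdev_malloc_create" },
            { "params": { "io_mechanism": "libaio", "filename": "/dev/nullb0", "name": "null0" },
              "method": "bdev_xnvme_create" },
            { "method": "bdev_wait_for_examine" }
          ]
        }
      ]
    }'
    "$spdk_dd" --ib=malloc0 --ob=null0 --json <(printf '%s' "$conf")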
00:12:21.772 [2024-12-06 04:59:59.956202] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80431 ] 00:12:22.033 [2024-12-06 05:00:00.094103] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:22.033 [2024-12-06 05:00:00.142885] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:12:23.418  [2024-12-06T05:00:02.590Z] Copying: 221/1024 [MB] (221 MBps) [2024-12-06T05:00:03.523Z] Copying: 453/1024 [MB] (231 MBps) [2024-12-06T05:00:04.455Z] Copying: 757/1024 [MB] (304 MBps) [2024-12-06T05:00:04.714Z] Copying: 1024/1024 [MB] (average 264 MBps) 00:12:26.482 00:12:26.482 05:00:04 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=null0 --ob=malloc0 --json /dev/fd/62 00:12:26.482 05:00:04 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # gen_conf 00:12:26.482 05:00:04 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:12:26.482 05:00:04 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:12:26.482 { 00:12:26.482 "subsystems": [ 00:12:26.482 { 00:12:26.482 "subsystem": "bdev", 00:12:26.482 "config": [ 00:12:26.482 { 00:12:26.482 "params": { 00:12:26.482 "block_size": 512, 00:12:26.482 "num_blocks": 2097152, 00:12:26.482 "name": "malloc0" 00:12:26.482 }, 00:12:26.482 "method": "bdev_malloc_create" 00:12:26.482 }, 00:12:26.482 { 00:12:26.482 "params": { 00:12:26.482 "io_mechanism": "libaio", 00:12:26.482 "filename": "/dev/nullb0", 00:12:26.482 "name": "null0" 00:12:26.482 }, 00:12:26.482 "method": "bdev_xnvme_create" 00:12:26.482 }, 00:12:26.482 { 00:12:26.482 "method": "bdev_wait_for_examine" 00:12:26.482 } 00:12:26.482 ] 00:12:26.482 } 00:12:26.482 ] 00:12:26.482 } 00:12:26.743 [2024-12-06 05:00:04.723053] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
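Each io mechanism is exercised in both directions: xnvme.sh@42 copies malloc0 into null0, then xnvme.sh@47 copies null0 back into malloc0 with the same generated config. Schematically (names as traced; gen_conf is the test's own config emitter):

    # Forward and reverse legs of one copy pass, per the trace above.
    "$spdk_dd" --ib=malloc0 --ob=null0   --json <(gen_conf)   # xnvme.sh@42
    "$spdk_dd" --ib=null0   --ob=malloc0 --json <(gen_conf)   # xnvme.sh@47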
00:12:26.743 [2024-12-06 05:00:04.723172] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80492 ] 00:12:26.743 [2024-12-06 05:00:04.858157] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:26.743 [2024-12-06 05:00:04.895596] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:12:28.128  [2024-12-06T05:00:07.302Z] Copying: 223/1024 [MB] (223 MBps) [2024-12-06T05:00:08.241Z] Copying: 466/1024 [MB] (242 MBps) [2024-12-06T05:00:09.189Z] Copying: 740/1024 [MB] (273 MBps) [2024-12-06T05:00:09.450Z] Copying: 996/1024 [MB] (256 MBps) [2024-12-06T05:00:09.710Z] Copying: 1024/1024 [MB] (average 250 MBps) 00:12:31.478 00:12:31.740 05:00:09 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@38 -- # for io in "${xnvme_io[@]}" 00:12:31.740 05:00:09 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@39 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring 00:12:31.740 05:00:09 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=null0 --json /dev/fd/62 00:12:31.740 05:00:09 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # gen_conf 00:12:31.740 05:00:09 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:12:31.740 05:00:09 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:12:31.740 { 00:12:31.740 "subsystems": [ 00:12:31.740 { 00:12:31.740 "subsystem": "bdev", 00:12:31.740 "config": [ 00:12:31.740 { 00:12:31.740 "params": { 00:12:31.740 "block_size": 512, 00:12:31.740 "num_blocks": 2097152, 00:12:31.740 "name": "malloc0" 00:12:31.740 }, 00:12:31.740 "method": "bdev_malloc_create" 00:12:31.740 }, 00:12:31.740 { 00:12:31.740 "params": { 00:12:31.740 "io_mechanism": "io_uring", 00:12:31.740 "filename": "/dev/nullb0", 00:12:31.740 "name": "null0" 00:12:31.740 }, 00:12:31.740 "method": "bdev_xnvme_create" 00:12:31.740 }, 00:12:31.740 { 00:12:31.740 "method": "bdev_wait_for_examine" 00:12:31.740 } 00:12:31.740 ] 00:12:31.740 } 00:12:31.740 ] 00:12:31.740 } 00:12:31.740 [2024-12-06 05:00:09.770779] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:12:31.740 [2024-12-06 05:00:09.770892] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80557 ] 00:12:31.740 [2024-12-06 05:00:09.904629] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:31.740 [2024-12-06 05:00:09.947449] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:12:33.127  [2024-12-06T05:00:12.302Z] Copying: 309/1024 [MB] (309 MBps) [2024-12-06T05:00:13.315Z] Copying: 619/1024 [MB] (310 MBps) [2024-12-06T05:00:13.574Z] Copying: 930/1024 [MB] (310 MBps) [2024-12-06T05:00:14.146Z] Copying: 1024/1024 [MB] (average 310 MBps) 00:12:35.914 00:12:35.914 05:00:13 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=null0 --ob=malloc0 --json /dev/fd/62 00:12:35.914 05:00:13 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # gen_conf 00:12:35.914 05:00:13 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:12:35.914 05:00:13 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:12:35.914 { 00:12:35.914 "subsystems": [ 00:12:35.914 { 00:12:35.914 "subsystem": "bdev", 00:12:35.914 "config": [ 00:12:35.914 { 00:12:35.914 "params": { 00:12:35.914 "block_size": 512, 00:12:35.914 "num_blocks": 2097152, 00:12:35.914 "name": "malloc0" 00:12:35.914 }, 00:12:35.914 "method": "bdev_malloc_create" 00:12:35.914 }, 00:12:35.914 { 00:12:35.914 "params": { 00:12:35.914 "io_mechanism": "io_uring", 00:12:35.914 "filename": "/dev/nullb0", 00:12:35.914 "name": "null0" 00:12:35.914 }, 00:12:35.914 "method": "bdev_xnvme_create" 00:12:35.914 }, 00:12:35.914 { 00:12:35.914 "method": "bdev_wait_for_examine" 00:12:35.914 } 00:12:35.914 ] 00:12:35.914 } 00:12:35.914 ] 00:12:35.914 } 00:12:35.914 [2024-12-06 05:00:14.041115] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:12:35.914 [2024-12-06 05:00:14.041228] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80611 ] 00:12:36.175 [2024-12-06 05:00:14.179514] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:36.175 [2024-12-06 05:00:14.227921] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:12:37.558  [2024-12-06T05:00:16.731Z] Copying: 311/1024 [MB] (311 MBps) [2024-12-06T05:00:17.668Z] Copying: 624/1024 [MB] (312 MBps) [2024-12-06T05:00:17.929Z] Copying: 936/1024 [MB] (312 MBps) [2024-12-06T05:00:18.497Z] Copying: 1024/1024 [MB] (average 312 MBps) 00:12:40.265 00:12:40.265 05:00:18 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@52 -- # remove_null_blk 00:12:40.265 05:00:18 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@191 -- # modprobe -r null_blk 00:12:40.265 00:12:40.265 real 0m18.427s 00:12:40.265 user 0m15.268s 00:12:40.265 sys 0m2.649s 00:12:40.265 05:00:18 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:40.265 05:00:18 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:12:40.265 ************************************ 00:12:40.265 END TEST xnvme_to_malloc_dd_copy 00:12:40.265 ************************************ 00:12:40.265 05:00:18 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:12:40.265 05:00:18 nvme_xnvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:12:40.265 05:00:18 nvme_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:40.265 05:00:18 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:40.265 ************************************ 00:12:40.265 START TEST xnvme_bdevperf 00:12:40.265 ************************************ 00:12:40.265 05:00:18 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1125 -- # xnvme_bdevperf 00:12:40.265 05:00:18 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@57 -- # init_null_blk gb=1 00:12:40.265 05:00:18 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@186 -- # [[ -e /sys/module/null_blk ]] 00:12:40.265 05:00:18 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@186 -- # modprobe null_blk gb=1 00:12:40.265 05:00:18 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@187 -- # return 00:12:40.265 05:00:18 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@59 -- # xnvme_io=() 00:12:40.265 05:00:18 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@59 -- # local xnvme0=null0 xnvme0_dev xnvme_io 00:12:40.265 05:00:18 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@60 -- # local io 00:12:40.265 05:00:18 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@62 -- # xnvme_io+=(libaio) 00:12:40.265 05:00:18 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@63 -- # xnvme_io+=(io_uring) 00:12:40.265 05:00:18 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@65 -- # xnvme0_dev=/dev/nullb0 00:12:40.265 05:00:18 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@67 -- # method_bdev_xnvme_create_0=() 00:12:40.265 05:00:18 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@67 -- # local -A method_bdev_xnvme_create_0 00:12:40.265 05:00:18 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@68 -- # method_bdev_xnvme_create_0["name"]=null0 00:12:40.265 05:00:18 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@69 -- # method_bdev_xnvme_create_0["filename"]=/dev/nullb0 00:12:40.265 05:00:18 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@71 -- # for io in "${xnvme_io[@]}" 00:12:40.265 
05:00:18 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@72 -- # method_bdev_xnvme_create_0["io_mechanism"]=libaio 00:12:40.265 05:00:18 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T null0 -o 4096 00:12:40.265 05:00:18 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # gen_conf 00:12:40.265 05:00:18 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:12:40.265 05:00:18 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:12:40.265 { 00:12:40.265 "subsystems": [ 00:12:40.265 { 00:12:40.265 "subsystem": "bdev", 00:12:40.265 "config": [ 00:12:40.265 { 00:12:40.265 "params": { 00:12:40.265 "io_mechanism": "libaio", 00:12:40.265 "filename": "/dev/nullb0", 00:12:40.265 "name": "null0" 00:12:40.265 }, 00:12:40.265 "method": "bdev_xnvme_create" 00:12:40.265 }, 00:12:40.265 { 00:12:40.265 "method": "bdev_wait_for_examine" 00:12:40.265 } 00:12:40.265 ] 00:12:40.265 } 00:12:40.265 ] 00:12:40.265 } 00:12:40.265 [2024-12-06 05:00:18.434467] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:12:40.265 [2024-12-06 05:00:18.434567] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80688 ] 00:12:40.525 [2024-12-06 05:00:18.566065] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:40.525 [2024-12-06 05:00:18.608752] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:12:40.525 Running I/O for 5 seconds... 00:12:42.484 203072.00 IOPS, 793.25 MiB/s [2024-12-06T05:00:22.099Z] 202912.00 IOPS, 792.62 MiB/s [2024-12-06T05:00:23.043Z] 203114.67 IOPS, 793.42 MiB/s [2024-12-06T05:00:23.987Z] 203152.00 IOPS, 793.56 MiB/s 00:12:45.755 Latency(us) 00:12:45.755 [2024-12-06T05:00:23.987Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:45.755 Job: null0 (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:12:45.755 null0 : 5.00 203164.85 793.61 0.00 0.00 312.82 115.79 1531.27 00:12:45.755 [2024-12-06T05:00:23.987Z] =================================================================================================================== 00:12:45.755 [2024-12-06T05:00:23.987Z] Total : 203164.85 793.61 0.00 0.00 312.82 115.79 1531.27 00:12:45.755 05:00:23 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@71 -- # for io in "${xnvme_io[@]}" 00:12:45.755 05:00:23 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@72 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring 00:12:45.755 05:00:23 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T null0 -o 4096 00:12:45.755 05:00:23 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # gen_conf 00:12:45.755 05:00:23 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:12:45.755 05:00:23 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:12:45.755 { 00:12:45.755 "subsystems": [ 00:12:45.755 { 00:12:45.755 "subsystem": "bdev", 00:12:45.755 "config": [ 00:12:45.755 { 00:12:45.755 "params": { 00:12:45.755 "io_mechanism": "io_uring", 00:12:45.755 "filename": "/dev/nullb0", 00:12:45.755 "name": "null0" 00:12:45.755 }, 00:12:45.755 "method": "bdev_xnvme_create" 00:12:45.755 }, 00:12:45.755 { 00:12:45.755 "method": 
"bdev_wait_for_examine" 00:12:45.755 } 00:12:45.755 ] 00:12:45.755 } 00:12:45.755 ] 00:12:45.755 } 00:12:45.755 [2024-12-06 05:00:23.961608] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:12:45.755 [2024-12-06 05:00:23.961742] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80752 ] 00:12:46.016 [2024-12-06 05:00:24.092730] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:46.016 [2024-12-06 05:00:24.133640] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:12:46.016 Running I/O for 5 seconds... 00:12:48.341 234944.00 IOPS, 917.75 MiB/s [2024-12-06T05:00:27.533Z] 234944.00 IOPS, 917.75 MiB/s [2024-12-06T05:00:28.475Z] 234944.00 IOPS, 917.75 MiB/s [2024-12-06T05:00:29.417Z] 234928.00 IOPS, 917.69 MiB/s 00:12:51.185 Latency(us) 00:12:51.185 [2024-12-06T05:00:29.417Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:51.185 Job: null0 (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:12:51.185 null0 : 5.00 234854.78 917.40 0.00 0.00 270.22 154.39 1487.16 00:12:51.185 [2024-12-06T05:00:29.417Z] =================================================================================================================== 00:12:51.185 [2024-12-06T05:00:29.417Z] Total : 234854.78 917.40 0.00 0.00 270.22 154.39 1487.16 00:12:51.185 05:00:29 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@82 -- # remove_null_blk 00:12:51.185 05:00:29 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@191 -- # modprobe -r null_blk 00:12:51.447 00:12:51.447 real 0m11.092s 00:12:51.447 user 0m8.742s 00:12:51.447 sys 0m2.109s 00:12:51.447 05:00:29 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:51.447 05:00:29 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:12:51.447 ************************************ 00:12:51.447 END TEST xnvme_bdevperf 00:12:51.447 ************************************ 00:12:51.447 00:12:51.447 real 0m29.794s 00:12:51.447 user 0m24.130s 00:12:51.447 sys 0m4.876s 00:12:51.447 05:00:29 nvme_xnvme -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:51.447 ************************************ 00:12:51.447 END TEST nvme_xnvme 00:12:51.447 ************************************ 00:12:51.447 05:00:29 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:51.447 05:00:29 -- spdk/autotest.sh@245 -- # run_test blockdev_xnvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:12:51.447 05:00:29 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:12:51.447 05:00:29 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:51.447 05:00:29 -- common/autotest_common.sh@10 -- # set +x 00:12:51.447 ************************************ 00:12:51.447 START TEST blockdev_xnvme 00:12:51.447 ************************************ 00:12:51.447 05:00:29 blockdev_xnvme -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:12:51.447 * Looking for test storage... 
00:12:51.447 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:12:51.447 05:00:29 blockdev_xnvme -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:12:51.447 05:00:29 blockdev_xnvme -- common/autotest_common.sh@1681 -- # lcov --version 00:12:51.447 05:00:29 blockdev_xnvme -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:12:51.447 05:00:29 blockdev_xnvme -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:12:51.447 05:00:29 blockdev_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:51.447 05:00:29 blockdev_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:51.447 05:00:29 blockdev_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:51.447 05:00:29 blockdev_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:12:51.447 05:00:29 blockdev_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:12:51.447 05:00:29 blockdev_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:12:51.447 05:00:29 blockdev_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:12:51.447 05:00:29 blockdev_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:12:51.447 05:00:29 blockdev_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:12:51.447 05:00:29 blockdev_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:12:51.447 05:00:29 blockdev_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:51.447 05:00:29 blockdev_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:12:51.447 05:00:29 blockdev_xnvme -- scripts/common.sh@345 -- # : 1 00:12:51.447 05:00:29 blockdev_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:51.447 05:00:29 blockdev_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:51.447 05:00:29 blockdev_xnvme -- scripts/common.sh@365 -- # decimal 1 00:12:51.447 05:00:29 blockdev_xnvme -- scripts/common.sh@353 -- # local d=1 00:12:51.447 05:00:29 blockdev_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:51.447 05:00:29 blockdev_xnvme -- scripts/common.sh@355 -- # echo 1 00:12:51.707 05:00:29 blockdev_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:12:51.707 05:00:29 blockdev_xnvme -- scripts/common.sh@366 -- # decimal 2 00:12:51.707 05:00:29 blockdev_xnvme -- scripts/common.sh@353 -- # local d=2 00:12:51.707 05:00:29 blockdev_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:51.707 05:00:29 blockdev_xnvme -- scripts/common.sh@355 -- # echo 2 00:12:51.707 05:00:29 blockdev_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:12:51.707 05:00:29 blockdev_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:51.707 05:00:29 blockdev_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:51.707 05:00:29 blockdev_xnvme -- scripts/common.sh@368 -- # return 0 00:12:51.707 05:00:29 blockdev_xnvme -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:51.707 05:00:29 blockdev_xnvme -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:12:51.707 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:51.707 --rc genhtml_branch_coverage=1 00:12:51.707 --rc genhtml_function_coverage=1 00:12:51.707 --rc genhtml_legend=1 00:12:51.707 --rc geninfo_all_blocks=1 00:12:51.707 --rc geninfo_unexecuted_blocks=1 00:12:51.707 00:12:51.707 ' 00:12:51.707 05:00:29 blockdev_xnvme -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:12:51.708 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:51.708 --rc genhtml_branch_coverage=1 00:12:51.708 --rc genhtml_function_coverage=1 00:12:51.708 --rc genhtml_legend=1 
00:12:51.708 --rc geninfo_all_blocks=1 00:12:51.708 --rc geninfo_unexecuted_blocks=1 00:12:51.708 00:12:51.708 ' 00:12:51.708 05:00:29 blockdev_xnvme -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:12:51.708 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:51.708 --rc genhtml_branch_coverage=1 00:12:51.708 --rc genhtml_function_coverage=1 00:12:51.708 --rc genhtml_legend=1 00:12:51.708 --rc geninfo_all_blocks=1 00:12:51.708 --rc geninfo_unexecuted_blocks=1 00:12:51.708 00:12:51.708 ' 00:12:51.708 05:00:29 blockdev_xnvme -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:12:51.708 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:51.708 --rc genhtml_branch_coverage=1 00:12:51.708 --rc genhtml_function_coverage=1 00:12:51.708 --rc genhtml_legend=1 00:12:51.708 --rc geninfo_all_blocks=1 00:12:51.708 --rc geninfo_unexecuted_blocks=1 00:12:51.708 00:12:51.708 ' 00:12:51.708 05:00:29 blockdev_xnvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:12:51.708 05:00:29 blockdev_xnvme -- bdev/nbd_common.sh@6 -- # set -e 00:12:51.708 05:00:29 blockdev_xnvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:12:51.708 05:00:29 blockdev_xnvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:12:51.708 05:00:29 blockdev_xnvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:12:51.708 05:00:29 blockdev_xnvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:12:51.708 05:00:29 blockdev_xnvme -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:12:51.708 05:00:29 blockdev_xnvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:12:51.708 05:00:29 blockdev_xnvme -- bdev/blockdev.sh@20 -- # : 00:12:51.708 05:00:29 blockdev_xnvme -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:12:51.708 05:00:29 blockdev_xnvme -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:12:51.708 05:00:29 blockdev_xnvme -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:12:51.708 05:00:29 blockdev_xnvme -- bdev/blockdev.sh@673 -- # uname -s 00:12:51.708 05:00:29 blockdev_xnvme -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:12:51.708 05:00:29 blockdev_xnvme -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:12:51.708 05:00:29 blockdev_xnvme -- bdev/blockdev.sh@681 -- # test_type=xnvme 00:12:51.708 05:00:29 blockdev_xnvme -- bdev/blockdev.sh@682 -- # crypto_device= 00:12:51.708 05:00:29 blockdev_xnvme -- bdev/blockdev.sh@683 -- # dek= 00:12:51.708 05:00:29 blockdev_xnvme -- bdev/blockdev.sh@684 -- # env_ctx= 00:12:51.708 05:00:29 blockdev_xnvme -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:12:51.708 05:00:29 blockdev_xnvme -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:12:51.708 05:00:29 blockdev_xnvme -- bdev/blockdev.sh@689 -- # [[ xnvme == bdev ]] 00:12:51.708 05:00:29 blockdev_xnvme -- bdev/blockdev.sh@689 -- # [[ xnvme == crypto_* ]] 00:12:51.708 05:00:29 blockdev_xnvme -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:12:51.708 05:00:29 blockdev_xnvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=80894 00:12:51.708 05:00:29 blockdev_xnvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:12:51.708 05:00:29 blockdev_xnvme -- bdev/blockdev.sh@49 -- # waitforlisten 80894 00:12:51.708 05:00:29 blockdev_xnvme -- common/autotest_common.sh@831 -- # '[' -z 80894 ']' 00:12:51.708 05:00:29 blockdev_xnvme -- common/autotest_common.sh@835 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:12:51.708 05:00:29 blockdev_xnvme -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:51.708 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:51.708 05:00:29 blockdev_xnvme -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:51.708 05:00:29 blockdev_xnvme -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:51.708 05:00:29 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:51.708 05:00:29 blockdev_xnvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:12:51.708 [2024-12-06 05:00:29.764028] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:12:51.708 [2024-12-06 05:00:29.764155] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80894 ] 00:12:51.708 [2024-12-06 05:00:29.897116] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:51.968 [2024-12-06 05:00:29.938297] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:12:52.540 05:00:30 blockdev_xnvme -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:52.540 05:00:30 blockdev_xnvme -- common/autotest_common.sh@864 -- # return 0 00:12:52.540 05:00:30 blockdev_xnvme -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:12:52.540 05:00:30 blockdev_xnvme -- bdev/blockdev.sh@728 -- # setup_xnvme_conf 00:12:52.540 05:00:30 blockdev_xnvme -- bdev/blockdev.sh@88 -- # local io_mechanism=io_uring 00:12:52.540 05:00:30 blockdev_xnvme -- bdev/blockdev.sh@89 -- # local nvme nvmes 00:12:52.540 05:00:30 blockdev_xnvme -- bdev/blockdev.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:12:52.801 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:12:53.062 Waiting for block devices as requested 00:12:53.062 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:12:53.062 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:12:53.062 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:12:53.325 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:12:58.615 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:12:58.615 05:00:36 blockdev_xnvme -- bdev/blockdev.sh@92 -- # get_zoned_devs 00:12:58.615 05:00:36 blockdev_xnvme -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:12:58.615 05:00:36 blockdev_xnvme -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:12:58.615 05:00:36 blockdev_xnvme -- common/autotest_common.sh@1656 -- # local nvme bdf 00:12:58.615 05:00:36 blockdev_xnvme -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:12:58.615 05:00:36 blockdev_xnvme -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:12:58.615 05:00:36 blockdev_xnvme -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:12:58.615 05:00:36 blockdev_xnvme -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:12:58.615 05:00:36 blockdev_xnvme -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:12:58.615 05:00:36 blockdev_xnvme -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:12:58.615 05:00:36 blockdev_xnvme -- common/autotest_common.sh@1659 -- # 
is_block_zoned nvme1n1 00:12:58.615 05:00:36 blockdev_xnvme -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:12:58.615 05:00:36 blockdev_xnvme -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:12:58.615 05:00:36 blockdev_xnvme -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:12:58.615 05:00:36 blockdev_xnvme -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:12:58.615 05:00:36 blockdev_xnvme -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n1 00:12:58.615 05:00:36 blockdev_xnvme -- common/autotest_common.sh@1648 -- # local device=nvme2n1 00:12:58.615 05:00:36 blockdev_xnvme -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:12:58.615 05:00:36 blockdev_xnvme -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:12:58.615 05:00:36 blockdev_xnvme -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:12:58.615 05:00:36 blockdev_xnvme -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n2 00:12:58.615 05:00:36 blockdev_xnvme -- common/autotest_common.sh@1648 -- # local device=nvme2n2 00:12:58.615 05:00:36 blockdev_xnvme -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:12:58.615 05:00:36 blockdev_xnvme -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:12:58.615 05:00:36 blockdev_xnvme -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:12:58.615 05:00:36 blockdev_xnvme -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n3 00:12:58.615 05:00:36 blockdev_xnvme -- common/autotest_common.sh@1648 -- # local device=nvme2n3 00:12:58.615 05:00:36 blockdev_xnvme -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:12:58.615 05:00:36 blockdev_xnvme -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:12:58.615 05:00:36 blockdev_xnvme -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:12:58.615 05:00:36 blockdev_xnvme -- common/autotest_common.sh@1659 -- # is_block_zoned nvme3c3n1 00:12:58.615 05:00:36 blockdev_xnvme -- common/autotest_common.sh@1648 -- # local device=nvme3c3n1 00:12:58.615 05:00:36 blockdev_xnvme -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:12:58.615 05:00:36 blockdev_xnvme -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:12:58.615 05:00:36 blockdev_xnvme -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:12:58.615 05:00:36 blockdev_xnvme -- common/autotest_common.sh@1659 -- # is_block_zoned nvme3n1 00:12:58.615 05:00:36 blockdev_xnvme -- common/autotest_common.sh@1648 -- # local device=nvme3n1 00:12:58.615 05:00:36 blockdev_xnvme -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:12:58.615 05:00:36 blockdev_xnvme -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:12:58.615 05:00:36 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:12:58.615 05:00:36 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme0n1 ]] 00:12:58.615 05:00:36 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:12:58.615 05:00:36 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:12:58.615 05:00:36 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:12:58.615 05:00:36 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme1n1 ]] 00:12:58.615 05:00:36 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:12:58.615 05:00:36 
blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:12:58.615 05:00:36 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:12:58.615 05:00:36 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n1 ]] 00:12:58.615 05:00:36 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:12:58.615 05:00:36 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:12:58.615 05:00:36 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:12:58.616 05:00:36 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n2 ]] 00:12:58.616 05:00:36 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:12:58.616 05:00:36 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:12:58.616 05:00:36 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:12:58.616 05:00:36 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n3 ]] 00:12:58.616 05:00:36 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:12:58.616 05:00:36 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:12:58.616 05:00:36 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:12:58.616 05:00:36 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme3n1 ]] 00:12:58.616 05:00:36 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:12:58.616 05:00:36 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:12:58.616 05:00:36 blockdev_xnvme -- bdev/blockdev.sh@99 -- # (( 6 > 0 )) 00:12:58.616 05:00:36 blockdev_xnvme -- bdev/blockdev.sh@100 -- # rpc_cmd 00:12:58.616 05:00:36 blockdev_xnvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:58.616 05:00:36 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:58.616 05:00:36 blockdev_xnvme -- bdev/blockdev.sh@100 -- # printf '%s\n' 'bdev_xnvme_create /dev/nvme0n1 nvme0n1 io_uring' 'bdev_xnvme_create /dev/nvme1n1 nvme1n1 io_uring' 'bdev_xnvme_create /dev/nvme2n1 nvme2n1 io_uring' 'bdev_xnvme_create /dev/nvme2n2 nvme2n2 io_uring' 'bdev_xnvme_create /dev/nvme2n3 nvme2n3 io_uring' 'bdev_xnvme_create /dev/nvme3n1 nvme3n1 io_uring' 00:12:58.616 nvme0n1 00:12:58.616 nvme1n1 00:12:58.616 nvme2n1 00:12:58.616 nvme2n2 00:12:58.616 nvme2n3 00:12:58.616 nvme3n1 00:12:58.616 05:00:36 blockdev_xnvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:58.616 05:00:36 blockdev_xnvme -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:12:58.616 05:00:36 blockdev_xnvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:58.616 05:00:36 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:58.616 05:00:36 blockdev_xnvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:58.616 05:00:36 blockdev_xnvme -- bdev/blockdev.sh@739 -- # cat 00:12:58.616 05:00:36 blockdev_xnvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:12:58.616 05:00:36 blockdev_xnvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:58.616 05:00:36 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:58.616 05:00:36 blockdev_xnvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:58.616 05:00:36 blockdev_xnvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:12:58.616 05:00:36 blockdev_xnvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:58.616 05:00:36 
blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:58.616 05:00:36 blockdev_xnvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:58.616 05:00:36 blockdev_xnvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:12:58.616 05:00:36 blockdev_xnvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:58.616 05:00:36 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:58.616 05:00:36 blockdev_xnvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:58.616 05:00:36 blockdev_xnvme -- bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:12:58.616 05:00:36 blockdev_xnvme -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:12:58.616 05:00:36 blockdev_xnvme -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:12:58.616 05:00:36 blockdev_xnvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:58.616 05:00:36 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:58.616 05:00:36 blockdev_xnvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:58.616 05:00:36 blockdev_xnvme -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:12:58.616 05:00:36 blockdev_xnvme -- bdev/blockdev.sh@748 -- # jq -r .name 00:12:58.616 05:00:36 blockdev_xnvme -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "090de698-7e08-4aee-b780-3ba72b5085c5"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "090de698-7e08-4aee-b780-3ba72b5085c5",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "72cc078e-f333-4eb5-b7a1-a527298d5d9d"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "72cc078e-f333-4eb5-b7a1-a527298d5d9d",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "0636f4d6-2185-4169-88b9-6d45b932ff7f"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "0636f4d6-2185-4169-88b9-6d45b932ff7f",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' 
"write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n2",' ' "aliases": [' ' "603359e2-d078-455e-a2e6-1830862c2534"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "603359e2-d078-455e-a2e6-1830862c2534",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n3",' ' "aliases": [' ' "411839e2-85b7-4b59-be79-bad870f11b9c"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "411839e2-85b7-4b59-be79-bad870f11b9c",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "9aba69a0-ff7d-42fa-a2b7-1b4ffcf81767"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "9aba69a0-ff7d-42fa-a2b7-1b4ffcf81767",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' 00:12:58.616 05:00:36 blockdev_xnvme -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:12:58.616 05:00:36 blockdev_xnvme -- bdev/blockdev.sh@751 -- # hello_world_bdev=nvme0n1 00:12:58.616 05:00:36 blockdev_xnvme -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:12:58.616 05:00:36 blockdev_xnvme -- bdev/blockdev.sh@753 -- # killprocess 80894 
00:12:58.616 05:00:36 blockdev_xnvme -- common/autotest_common.sh@950 -- # '[' -z 80894 ']' 00:12:58.616 05:00:36 blockdev_xnvme -- common/autotest_common.sh@954 -- # kill -0 80894 00:12:58.616 05:00:36 blockdev_xnvme -- common/autotest_common.sh@955 -- # uname 00:12:58.616 05:00:36 blockdev_xnvme -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:58.616 05:00:36 blockdev_xnvme -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 80894 00:12:58.616 killing process with pid 80894 00:12:58.616 05:00:36 blockdev_xnvme -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:58.616 05:00:36 blockdev_xnvme -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:58.616 05:00:36 blockdev_xnvme -- common/autotest_common.sh@968 -- # echo 'killing process with pid 80894' 00:12:58.616 05:00:36 blockdev_xnvme -- common/autotest_common.sh@969 -- # kill 80894 00:12:58.616 05:00:36 blockdev_xnvme -- common/autotest_common.sh@974 -- # wait 80894 00:12:58.903 05:00:36 blockdev_xnvme -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:12:58.903 05:00:36 blockdev_xnvme -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:12:58.903 05:00:36 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:12:58.903 05:00:36 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:58.903 05:00:36 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:58.903 ************************************ 00:12:58.903 START TEST bdev_hello_world 00:12:58.903 ************************************ 00:12:58.903 05:00:36 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:12:58.903 [2024-12-06 05:00:37.013883] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:12:58.903 [2024-12-06 05:00:37.013995] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81237 ] 00:12:59.164 [2024-12-06 05:00:37.148361] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:59.164 [2024-12-06 05:00:37.190243] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:12:59.164 [2024-12-06 05:00:37.368267] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:12:59.164 [2024-12-06 05:00:37.368432] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev nvme0n1 00:12:59.164 [2024-12-06 05:00:37.368475] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:12:59.164 [2024-12-06 05:00:37.370522] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:12:59.164 [2024-12-06 05:00:37.370890] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:12:59.164 [2024-12-06 05:00:37.370910] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:12:59.164 [2024-12-06 05:00:37.371219] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
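
hello_bdev is the SPDK example app traced above: it opens the bdev named by -b, writes a buffer, reads it back, and logs the "Hello World!" round trip seen in the NOTICE lines. The invocation pattern, copied from the trace (the contents of bdev.json are not shown in this log; it has to define the nvme*n1 xNVMe bdevs):

    # Sketch: re-run the hello_world example by hand from the SPDK build tree.
    cd /home/vagrant/spdk_repo/spdk
    ./build/examples/hello_bdev \
        --json test/bdev/bdev.json \
        -b nvme0n1
    # Expected: NOTICE lines for the write completion, then the read-back string.
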
00:12:59.164 00:12:59.164 [2024-12-06 05:00:37.371248] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:12:59.425 00:12:59.425 ************************************ 00:12:59.425 END TEST bdev_hello_world 00:12:59.425 ************************************ 00:12:59.425 real 0m0.586s 00:12:59.425 user 0m0.319s 00:12:59.425 sys 0m0.150s 00:12:59.425 05:00:37 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:59.425 05:00:37 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:12:59.425 05:00:37 blockdev_xnvme -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:12:59.425 05:00:37 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:12:59.425 05:00:37 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:59.425 05:00:37 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:59.425 ************************************ 00:12:59.425 START TEST bdev_bounds 00:12:59.425 ************************************ 00:12:59.425 05:00:37 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@1125 -- # bdev_bounds '' 00:12:59.425 Process bdevio pid: 81268 00:12:59.425 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:59.425 05:00:37 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=81268 00:12:59.425 05:00:37 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:12:59.425 05:00:37 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:12:59.425 05:00:37 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 81268' 00:12:59.425 05:00:37 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 81268 00:12:59.425 05:00:37 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@831 -- # '[' -z 81268 ']' 00:12:59.425 05:00:37 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:59.425 05:00:37 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:59.425 05:00:37 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:59.425 05:00:37 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:59.425 05:00:37 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:12:59.686 [2024-12-06 05:00:37.665229] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
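
bdev_bounds starts SPDK's bdevio app in wait-for-RPC mode and then drives the CUnit suites externally; the listing that follows is bdevio's report. A sketch of that two-step pattern, with the flags and the /var/tmp/spdk.sock socket taken from the trace:

    # Step 1: launch bdevio waiting for an RPC trigger (-w), same bdev config.
    ./test/bdev/bdevio/bdevio -w -s 0 --json test/bdev/bdev.json &

    # Step 2: once it listens on /var/tmp/spdk.sock, fire the per-bdev test matrix.
    ./test/bdev/bdevio/tests.py perform_tests
    # Each registered bdev gets the same 23-test suite (6 suites x 23 = 138 below).
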
00:12:59.686 [2024-12-06 05:00:37.665470] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81268 ]
00:12:59.686 [2024-12-06 05:00:37.798831] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3
00:12:59.686 [2024-12-06 05:00:37.841377] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1
00:12:59.686 [2024-12-06 05:00:37.841646] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:12:59.686 [2024-12-06 05:00:37.841712] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2
00:13:00.628 05:00:38 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:13:00.628 05:00:38 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@864 -- # return 0
00:13:00.628 05:00:38 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests
00:13:00.628 I/O targets:
00:13:00.628 nvme0n1: 1310720 blocks of 4096 bytes (5120 MiB)
00:13:00.628 nvme1n1: 1548666 blocks of 4096 bytes (6050 MiB)
00:13:00.628 nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB)
00:13:00.628 nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB)
00:13:00.628 nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB)
00:13:00.628 nvme3n1: 262144 blocks of 4096 bytes (1024 MiB)
00:13:00.628
00:13:00.628
00:13:00.628 CUnit - A unit testing framework for C - Version 2.1-3
00:13:00.628 http://cunit.sourceforge.net/
00:13:00.628
00:13:00.628
00:13:00.629 Suite: bdevio tests on: nvme3n1
00:13:00.629 Test: blockdev write read block ...passed
00:13:00.629 Test: blockdev write zeroes read block ...passed
00:13:00.629 Test: blockdev write zeroes read no split ...passed
00:13:00.629 Test: blockdev write zeroes read split ...passed
00:13:00.629 Test: blockdev write zeroes read split partial ...passed
00:13:00.629 Test: blockdev reset ...passed
00:13:00.629 Test: blockdev write read 8 blocks ...passed
00:13:00.629 Test: blockdev write read size > 128k ...passed
00:13:00.629 Test: blockdev write read invalid size ...passed
00:13:00.629 Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:13:00.629 Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:13:00.629 Test: blockdev write read max offset ...passed
00:13:00.629 Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:13:00.629 Test: blockdev writev readv 8 blocks ...passed
00:13:00.629 Test: blockdev writev readv 30 x 1block ...passed
00:13:00.629 Test: blockdev writev readv block ...passed
00:13:00.629 Test: blockdev writev readv size > 128k ...passed
00:13:00.629 Test: blockdev writev readv size > 128k in two iovs ...passed
00:13:00.629 Test: blockdev comparev and writev ...passed
00:13:00.629 Test: blockdev nvme passthru rw ...passed
00:13:00.629 Test: blockdev nvme passthru vendor specific ...passed
00:13:00.629 Test: blockdev nvme admin passthru ...passed
00:13:00.629 Test: blockdev copy ...passed
00:13:00.629 Suite: bdevio tests on: nvme2n3
00:13:00.629 Test: blockdev write read block ...passed
00:13:00.629 Test: blockdev write zeroes read block ...passed
00:13:00.629 Test: blockdev write zeroes read no split ...passed
00:13:00.629 Test: blockdev write zeroes read split ...passed
00:13:00.629 Test: blockdev write zeroes read split partial ...passed
00:13:00.629 Test: blockdev reset ...passed
00:13:00.629 Test: blockdev write read 8 blocks ...passed
00:13:00.629 Test: blockdev write read size > 128k ...passed
00:13:00.629 Test: blockdev write read invalid size ...passed
00:13:00.629 Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:13:00.629 Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:13:00.629 Test: blockdev write read max offset ...passed
00:13:00.629 Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:13:00.629 Test: blockdev writev readv 8 blocks ...passed
00:13:00.629 Test: blockdev writev readv 30 x 1block ...passed
00:13:00.629 Test: blockdev writev readv block ...passed
00:13:00.629 Test: blockdev writev readv size > 128k ...passed
00:13:00.629 Test: blockdev writev readv size > 128k in two iovs ...passed
00:13:00.629 Test: blockdev comparev and writev ...passed
00:13:00.629 Test: blockdev nvme passthru rw ...passed
00:13:00.629 Test: blockdev nvme passthru vendor specific ...passed
00:13:00.629 Test: blockdev nvme admin passthru ...passed
00:13:00.629 Test: blockdev copy ...passed
00:13:00.629 Suite: bdevio tests on: nvme2n2
00:13:00.629 Test: blockdev write read block ...passed
00:13:00.629 Test: blockdev write zeroes read block ...passed
00:13:00.629 Test: blockdev write zeroes read no split ...passed
00:13:00.629 Test: blockdev write zeroes read split ...passed
00:13:00.629 Test: blockdev write zeroes read split partial ...passed
00:13:00.629 Test: blockdev reset ...passed
00:13:00.629 Test: blockdev write read 8 blocks ...passed
00:13:00.629 Test: blockdev write read size > 128k ...passed
00:13:00.629 Test: blockdev write read invalid size ...passed
00:13:00.629 Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:13:00.629 Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:13:00.629 Test: blockdev write read max offset ...passed
00:13:00.629 Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:13:00.629 Test: blockdev writev readv 8 blocks ...passed
00:13:00.629 Test: blockdev writev readv 30 x 1block ...passed
00:13:00.629 Test: blockdev writev readv block ...passed
00:13:00.629 Test: blockdev writev readv size > 128k ...passed
00:13:00.629 Test: blockdev writev readv size > 128k in two iovs ...passed
00:13:00.629 Test: blockdev comparev and writev ...passed
00:13:00.629 Test: blockdev nvme passthru rw ...passed
00:13:00.629 Test: blockdev nvme passthru vendor specific ...passed
00:13:00.629 Test: blockdev nvme admin passthru ...passed
00:13:00.629 Test: blockdev copy ...passed
00:13:00.629 Suite: bdevio tests on: nvme2n1
00:13:00.629 Test: blockdev write read block ...passed
00:13:00.629 Test: blockdev write zeroes read block ...passed
00:13:00.629 Test: blockdev write zeroes read no split ...passed
00:13:00.629 Test: blockdev write zeroes read split ...passed
00:13:00.629 Test: blockdev write zeroes read split partial ...passed
00:13:00.629 Test: blockdev reset ...passed
00:13:00.629 Test: blockdev write read 8 blocks ...passed
00:13:00.629 Test: blockdev write read size > 128k ...passed
00:13:00.629 Test: blockdev write read invalid size ...passed
00:13:00.629 Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:13:00.629 Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:13:00.629 Test: blockdev write read max offset ...passed
00:13:00.629 Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:13:00.629 Test: blockdev writev readv 8 blocks ...passed
00:13:00.629 Test: blockdev writev readv 30 x 1block ...passed
00:13:00.629 Test: blockdev writev readv block ...passed
00:13:00.629 Test: blockdev writev readv size > 128k ...passed
00:13:00.629 Test: blockdev writev readv size > 128k in two iovs ...passed
00:13:00.629 Test: blockdev comparev and writev ...passed
00:13:00.629 Test: blockdev nvme passthru rw ...passed
00:13:00.629 Test: blockdev nvme passthru vendor specific ...passed
00:13:00.629 Test: blockdev nvme admin passthru ...passed
00:13:00.629 Test: blockdev copy ...passed
00:13:00.629 Suite: bdevio tests on: nvme1n1
00:13:00.629 Test: blockdev write read block ...passed
00:13:00.629 Test: blockdev write zeroes read block ...passed
00:13:00.629 Test: blockdev write zeroes read no split ...passed
00:13:00.629 Test: blockdev write zeroes read split ...passed
00:13:00.629 Test: blockdev write zeroes read split partial ...passed
00:13:00.629 Test: blockdev reset ...passed
00:13:00.629 Test: blockdev write read 8 blocks ...passed
00:13:00.629 Test: blockdev write read size > 128k ...passed
00:13:00.629 Test: blockdev write read invalid size ...passed
00:13:00.629 Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:13:00.629 Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:13:00.629 Test: blockdev write read max offset ...passed
00:13:00.630 Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:13:00.630 Test: blockdev writev readv 8 blocks ...passed
00:13:00.630 Test: blockdev writev readv 30 x 1block ...passed
00:13:00.630 Test: blockdev writev readv block ...passed
00:13:00.630 Test: blockdev writev readv size > 128k ...passed
00:13:00.630 Test: blockdev writev readv size > 128k in two iovs ...passed
00:13:00.630 Test: blockdev comparev and writev ...passed
00:13:00.630 Test: blockdev nvme passthru rw ...passed
00:13:00.630 Test: blockdev nvme passthru vendor specific ...passed
00:13:00.630 Test: blockdev nvme admin passthru ...passed
00:13:00.630 Test: blockdev copy ...passed
00:13:00.630 Suite: bdevio tests on: nvme0n1
00:13:00.630 Test: blockdev write read block ...passed
00:13:00.630 Test: blockdev write zeroes read block ...passed
00:13:00.630 Test: blockdev write zeroes read no split ...passed
00:13:00.630 Test: blockdev write zeroes read split ...passed
00:13:00.630 Test: blockdev write zeroes read split partial ...passed
00:13:00.630 Test: blockdev reset ...passed
00:13:00.630 Test: blockdev write read 8 blocks ...passed
00:13:00.630 Test: blockdev write read size > 128k ...passed
00:13:00.630 Test: blockdev write read invalid size ...passed
00:13:00.630 Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:13:00.630 Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:13:00.630 Test: blockdev write read max offset ...passed
00:13:00.630 Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:13:00.630 Test: blockdev writev readv 8 blocks ...passed
00:13:00.630 Test: blockdev writev readv 30 x 1block ...passed
00:13:00.630 Test: blockdev writev readv block ...passed
00:13:00.630 Test: blockdev writev readv size > 128k ...passed
00:13:00.630 Test: blockdev writev readv size > 128k in two iovs ...passed
00:13:00.630 Test: blockdev comparev and writev ...passed
00:13:00.630 Test: blockdev nvme passthru rw ...passed
00:13:00.630 Test: blockdev nvme passthru vendor specific ...passed
00:13:00.630 Test: blockdev nvme admin passthru ...passed
00:13:00.630 Test: blockdev copy ...passed
00:13:00.630
00:13:00.630 Run Summary:    Type  Total    Ran Passed Failed Inactive
00:13:00.630 suites      6      6    n/a      0        0
00:13:00.630 tests     138    138    138      0        0
00:13:00.630 asserts   780    780    780      0      n/a
00:13:00.630
00:13:00.630 Elapsed time = 0.465 seconds
00:13:00.630 0
00:13:00.630 05:00:38 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 81268
00:13:00.630 05:00:38 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@950 -- # '[' -z 81268 ']'
00:13:00.630 05:00:38 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@954 -- # kill -0 81268
00:13:00.630 05:00:38 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@955 -- # uname
00:13:00.630 05:00:38 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:13:00.630 05:00:38 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 81268
00:13:00.630 05:00:38 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:13:00.630 05:00:38 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:13:00.630 05:00:38 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@968 -- # echo 'killing process with pid 81268'
killing process with pid 81268
05:00:38 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@969 -- # kill 81268
05:00:38 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@974 -- # wait 81268
00:13:00.890 05:00:39 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT
00:13:00.890
00:13:00.890 real 0m1.425s
00:13:00.890 user 0m3.535s
00:13:00.890 sys 0m0.266s
00:13:00.890 05:00:39 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@1126 -- # xtrace_disable
00:13:00.890 05:00:39 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x
00:13:00.890 ************************************
00:13:00.890 END TEST bdev_bounds
00:13:00.890 ************************************
00:13:00.890 05:00:39 blockdev_xnvme -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' ''
00:13:00.890 05:00:39 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']'
00:13:00.890 05:00:39 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable
00:13:00.890 05:00:39 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x
00:13:00.890 ************************************
00:13:00.890 START TEST bdev_nbd
00:13:00.890 ************************************
00:13:00.890 05:00:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@1125 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' ''
00:13:00.890 05:00:39 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s
00:13:00.890 05:00:39 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]]
00:13:00.890 05:00:39 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:13:00.890 05:00:39 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json
00:13:00.890 05:00:39 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1')
00:13:00.890 05:00:39 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all
00:13:00.890 05:00:39 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=6
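
killprocess, traced here for the bdevio pid 81268 (and earlier for pid 80894), is the harness's guarded teardown: bail out on an empty pid, confirm the process still exists, inspect its name, then kill and reap it. A condensed sketch of that flow; the sudo branch is simplified relative to the real helper in common/autotest_common.sh:

    # Sketch of the @950-@974 killprocess sequence above.
    killprocess() {
        local pid=$1
        [ -z "$pid" ] && return 1                          # @950: no pid given
        kill -0 "$pid" 2>/dev/null || return 0             # @954: already gone
        if [ "$(uname)" = Linux ]; then                    # @955
            local process_name
            process_name=$(ps --no-headers -o comm= "$pid")   # @956: reactor_0 here
            if [ "$process_name" = sudo ]; then            # @960
                :  # the real helper retargets the child of sudo (not shown)
            fi
        fi
        echo "killing process with pid $pid"               # @968
        kill "$pid" && wait "$pid"                         # @969 / @974
    }
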
00:13:00.890 05:00:39 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:13:00.890 05:00:39 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:13:00.890 05:00:39 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:13:00.890 05:00:39 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=6 00:13:00.890 05:00:39 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:13:00.890 05:00:39 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:13:00.890 05:00:39 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:13:00.890 05:00:39 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:13:00.890 05:00:39 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=81312 00:13:00.890 05:00:39 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:13:00.890 05:00:39 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 81312 /var/tmp/spdk-nbd.sock 00:13:00.890 05:00:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@831 -- # '[' -z 81312 ']' 00:13:00.890 05:00:39 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:13:00.890 05:00:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:13:00.890 05:00:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:00.890 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:13:00.890 05:00:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:13:00.890 05:00:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:00.890 05:00:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:13:01.150 [2024-12-06 05:00:39.162540] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
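
From here nbd_function_test attaches each bdev to a kernel NBD node through the dedicated /var/tmp/spdk-nbd.sock RPC server and probes every node with a single 4 KiB O_DIRECT read (the waitfornbd and dd sequences that follow). One start-probe-stop round, sketched under the assumptions that the nbd module is loaded (the @308 check above) and that the bdev_svc app owns the socket; the poll interval and scratch path are this sketch's choices:

    SOCK=/var/tmp/spdk-nbd.sock
    # Attach the bdev to a specific NBD node.
    ./scripts/rpc.py -s "$SOCK" nbd_start_disk nvme0n1 /dev/nbd0

    # waitfornbd: poll until the kernel has registered the device...
    for ((i = 1; i <= 20; i++)); do
        grep -q -w nbd0 /proc/partitions && break
        sleep 0.1
    done

    # ...then prove it is readable: one 4096-byte direct-I/O block, as in the trace.
    dd if=/dev/nbd0 of=/tmp/nbdtest bs=4096 count=1 iflag=direct
    [ "$(stat -c %s /tmp/nbdtest)" -eq 4096 ] && echo OK
    rm -f /tmp/nbdtest

    # Detach again (mirrors the nbd_stop_disk calls later in the trace).
    ./scripts/rpc.py -s "$SOCK" nbd_stop_disk /dev/nbd0
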
00:13:01.150 [2024-12-06 05:00:39.162656] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:01.150 [2024-12-06 05:00:39.296926] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:01.150 [2024-12-06 05:00:39.336897] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:13:02.090 05:00:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:02.090 05:00:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@864 -- # return 0 00:13:02.090 05:00:40 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' 00:13:02.090 05:00:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:02.090 05:00:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:13:02.090 05:00:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:13:02.090 05:00:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' 00:13:02.090 05:00:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:02.090 05:00:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:13:02.090 05:00:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:13:02.090 05:00:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:13:02.090 05:00:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:13:02.090 05:00:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:13:02.090 05:00:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:13:02.090 05:00:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 00:13:02.090 05:00:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:13:02.090 05:00:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:13:02.090 05:00:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:13:02.090 05:00:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:13:02.090 05:00:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:13:02.090 05:00:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:13:02.090 05:00:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:13:02.090 05:00:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:13:02.090 05:00:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:13:02.090 05:00:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:13:02.090 05:00:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:13:02.090 05:00:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:02.090 
1+0 records in 00:13:02.090 1+0 records out 00:13:02.090 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000660502 s, 6.2 MB/s 00:13:02.090 05:00:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:02.090 05:00:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:13:02.090 05:00:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:02.090 05:00:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:13:02.090 05:00:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:13:02.090 05:00:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:13:02.090 05:00:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:13:02.090 05:00:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 00:13:02.350 05:00:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:13:02.350 05:00:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:13:02.350 05:00:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:13:02.350 05:00:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:13:02.350 05:00:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:13:02.350 05:00:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:13:02.350 05:00:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:13:02.350 05:00:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:13:02.350 05:00:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:13:02.350 05:00:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:13:02.350 05:00:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:13:02.350 05:00:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:02.350 1+0 records in 00:13:02.350 1+0 records out 00:13:02.350 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00122487 s, 3.3 MB/s 00:13:02.350 05:00:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:02.350 05:00:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:13:02.350 05:00:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:02.350 05:00:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:13:02.350 05:00:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:13:02.350 05:00:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:13:02.350 05:00:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:13:02.350 05:00:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 00:13:02.609 05:00:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:13:02.609 05:00:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:13:02.609 05:00:40 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:13:02.609 05:00:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd2 00:13:02.609 05:00:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:13:02.610 05:00:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:13:02.610 05:00:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:13:02.610 05:00:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd2 /proc/partitions 00:13:02.610 05:00:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:13:02.610 05:00:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:13:02.610 05:00:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:13:02.610 05:00:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:02.610 1+0 records in 00:13:02.610 1+0 records out 00:13:02.610 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000993603 s, 4.1 MB/s 00:13:02.610 05:00:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:02.610 05:00:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:13:02.610 05:00:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:02.610 05:00:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:13:02.610 05:00:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:13:02.610 05:00:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:13:02.610 05:00:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:13:02.610 05:00:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n2 00:13:02.870 05:00:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:13:02.870 05:00:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:13:02.870 05:00:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:13:02.870 05:00:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd3 00:13:02.870 05:00:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:13:02.870 05:00:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:13:02.870 05:00:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:13:02.870 05:00:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd3 /proc/partitions 00:13:02.870 05:00:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:13:02.870 05:00:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:13:02.870 05:00:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:13:02.870 05:00:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:02.870 1+0 records in 00:13:02.870 1+0 records out 00:13:02.870 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000920717 s, 4.4 MB/s 00:13:02.870 05:00:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # 
stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:02.870 05:00:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:13:02.870 05:00:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:02.870 05:00:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:13:02.870 05:00:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:13:02.870 05:00:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:13:02.870 05:00:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:13:02.870 05:00:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n3 00:13:03.131 05:00:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:13:03.131 05:00:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:13:03.131 05:00:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:13:03.131 05:00:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd4 00:13:03.131 05:00:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:13:03.131 05:00:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:13:03.131 05:00:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:13:03.131 05:00:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd4 /proc/partitions 00:13:03.131 05:00:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:13:03.131 05:00:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:13:03.131 05:00:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:13:03.131 05:00:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:03.131 1+0 records in 00:13:03.131 1+0 records out 00:13:03.131 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00104722 s, 3.9 MB/s 00:13:03.131 05:00:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:03.131 05:00:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:13:03.131 05:00:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:03.131 05:00:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:13:03.131 05:00:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:13:03.131 05:00:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:13:03.131 05:00:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:13:03.131 05:00:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 00:13:03.392 05:00:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:13:03.392 05:00:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:13:03.392 05:00:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:13:03.392 05:00:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd5 00:13:03.392 05:00:41 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:13:03.392 05:00:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:13:03.392 05:00:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:13:03.392 05:00:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd5 /proc/partitions 00:13:03.392 05:00:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:13:03.392 05:00:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:13:03.392 05:00:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:13:03.392 05:00:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:03.392 1+0 records in 00:13:03.392 1+0 records out 00:13:03.392 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000881971 s, 4.6 MB/s 00:13:03.392 05:00:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:03.392 05:00:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:13:03.392 05:00:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:03.392 05:00:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:13:03.392 05:00:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:13:03.392 05:00:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:13:03.392 05:00:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:13:03.392 05:00:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:13:03.653 05:00:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:13:03.653 { 00:13:03.653 "nbd_device": "/dev/nbd0", 00:13:03.653 "bdev_name": "nvme0n1" 00:13:03.653 }, 00:13:03.653 { 00:13:03.653 "nbd_device": "/dev/nbd1", 00:13:03.653 "bdev_name": "nvme1n1" 00:13:03.653 }, 00:13:03.653 { 00:13:03.653 "nbd_device": "/dev/nbd2", 00:13:03.653 "bdev_name": "nvme2n1" 00:13:03.653 }, 00:13:03.653 { 00:13:03.653 "nbd_device": "/dev/nbd3", 00:13:03.653 "bdev_name": "nvme2n2" 00:13:03.653 }, 00:13:03.653 { 00:13:03.653 "nbd_device": "/dev/nbd4", 00:13:03.653 "bdev_name": "nvme2n3" 00:13:03.653 }, 00:13:03.653 { 00:13:03.653 "nbd_device": "/dev/nbd5", 00:13:03.653 "bdev_name": "nvme3n1" 00:13:03.653 } 00:13:03.653 ]' 00:13:03.653 05:00:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:13:03.653 05:00:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:13:03.653 05:00:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:13:03.653 { 00:13:03.653 "nbd_device": "/dev/nbd0", 00:13:03.653 "bdev_name": "nvme0n1" 00:13:03.653 }, 00:13:03.653 { 00:13:03.653 "nbd_device": "/dev/nbd1", 00:13:03.653 "bdev_name": "nvme1n1" 00:13:03.653 }, 00:13:03.653 { 00:13:03.653 "nbd_device": "/dev/nbd2", 00:13:03.653 "bdev_name": "nvme2n1" 00:13:03.653 }, 00:13:03.653 { 00:13:03.653 "nbd_device": "/dev/nbd3", 00:13:03.653 "bdev_name": "nvme2n2" 00:13:03.653 }, 00:13:03.653 { 00:13:03.653 "nbd_device": "/dev/nbd4", 00:13:03.653 "bdev_name": "nvme2n3" 00:13:03.653 }, 00:13:03.653 { 00:13:03.653 "nbd_device": 
"/dev/nbd5", 00:13:03.653 "bdev_name": "nvme3n1" 00:13:03.653 } 00:13:03.653 ]' 00:13:03.653 05:00:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:13:03.653 05:00:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:03.653 05:00:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:13:03.653 05:00:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:03.653 05:00:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:13:03.653 05:00:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:03.653 05:00:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:13:03.653 05:00:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:03.653 05:00:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:03.653 05:00:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:03.653 05:00:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:03.653 05:00:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:03.654 05:00:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:03.654 05:00:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:03.654 05:00:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:03.654 05:00:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:03.654 05:00:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:13:03.915 05:00:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:13:03.915 05:00:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:13:03.915 05:00:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:13:03.915 05:00:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:03.915 05:00:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:03.915 05:00:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:13:03.915 05:00:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:03.915 05:00:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:03.915 05:00:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:03.915 05:00:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:13:04.176 05:00:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:13:04.176 05:00:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:13:04.176 05:00:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:13:04.176 05:00:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:04.176 05:00:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:04.176 05:00:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep 
-q -w nbd2 /proc/partitions 00:13:04.176 05:00:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:04.176 05:00:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:04.176 05:00:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:04.176 05:00:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:13:04.436 05:00:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:13:04.436 05:00:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:13:04.436 05:00:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:13:04.436 05:00:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:04.436 05:00:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:04.436 05:00:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:13:04.436 05:00:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:04.436 05:00:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:04.436 05:00:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:04.436 05:00:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:13:04.697 05:00:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:13:04.697 05:00:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:13:04.697 05:00:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:13:04.697 05:00:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:04.697 05:00:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:04.697 05:00:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:13:04.697 05:00:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:04.697 05:00:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:04.697 05:00:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:04.697 05:00:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:13:04.697 05:00:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:13:04.697 05:00:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:13:04.697 05:00:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:13:04.697 05:00:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:04.697 05:00:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:04.697 05:00:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:13:04.697 05:00:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:04.697 05:00:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:04.697 05:00:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:13:04.697 05:00:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:04.697 05:00:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:13:04.957 05:00:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:13:04.958 05:00:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:13:04.958 05:00:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:13:04.958 05:00:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:13:04.958 05:00:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:13:04.958 05:00:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:13:04.958 05:00:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:13:04.958 05:00:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:13:04.958 05:00:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:13:04.958 05:00:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:13:04.958 05:00:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:13:04.958 05:00:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:13:04.958 05:00:43 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:13:04.958 05:00:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:04.958 05:00:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:13:04.958 05:00:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:13:04.958 05:00:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:13:04.958 05:00:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:13:04.958 05:00:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:13:04.958 05:00:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:04.958 05:00:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:13:04.958 05:00:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:04.958 05:00:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:13:04.958 05:00:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:04.958 05:00:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:13:04.958 05:00:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:04.958 05:00:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:13:04.958 05:00:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 /dev/nbd0 00:13:05.219 /dev/nbd0 00:13:05.219 05:00:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:05.219 05:00:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:05.219 05:00:43 blockdev_xnvme.bdev_nbd -- 
common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:13:05.219 05:00:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:13:05.219 05:00:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:13:05.219 05:00:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:13:05.219 05:00:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:13:05.219 05:00:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:13:05.219 05:00:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:13:05.219 05:00:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:13:05.219 05:00:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:05.219 1+0 records in 00:13:05.219 1+0 records out 00:13:05.219 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00100459 s, 4.1 MB/s 00:13:05.219 05:00:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:05.219 05:00:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:13:05.219 05:00:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:05.219 05:00:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:13:05.219 05:00:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:13:05.219 05:00:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:05.219 05:00:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:13:05.219 05:00:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 /dev/nbd1 00:13:05.479 /dev/nbd1 00:13:05.480 05:00:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:13:05.480 05:00:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:13:05.480 05:00:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:13:05.480 05:00:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:13:05.480 05:00:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:13:05.480 05:00:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:13:05.480 05:00:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:13:05.480 05:00:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:13:05.480 05:00:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:13:05.480 05:00:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:13:05.480 05:00:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:05.480 1+0 records in 00:13:05.480 1+0 records out 00:13:05.480 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00123388 s, 3.3 MB/s 00:13:05.480 05:00:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:05.480 05:00:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:13:05.480 05:00:43 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:05.480 05:00:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:13:05.480 05:00:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:13:05.480 05:00:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:05.480 05:00:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:13:05.480 05:00:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 /dev/nbd10 00:13:05.740 /dev/nbd10 00:13:05.740 05:00:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:13:05.740 05:00:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:13:05.740 05:00:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd10 00:13:05.740 05:00:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:13:05.741 05:00:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:13:05.741 05:00:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:13:05.741 05:00:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd10 /proc/partitions 00:13:05.741 05:00:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:13:05.741 05:00:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:13:05.741 05:00:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:13:05.741 05:00:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:05.741 1+0 records in 00:13:05.741 1+0 records out 00:13:05.741 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00076207 s, 5.4 MB/s 00:13:05.741 05:00:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:05.741 05:00:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:13:05.741 05:00:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:05.741 05:00:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:13:05.741 05:00:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:13:05.741 05:00:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:05.741 05:00:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:13:05.741 05:00:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n2 /dev/nbd11 00:13:06.002 /dev/nbd11 00:13:06.002 05:00:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:13:06.002 05:00:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:13:06.002 05:00:44 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd11 00:13:06.002 05:00:44 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:13:06.002 05:00:44 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:13:06.002 05:00:44 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:13:06.002 05:00:44 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd11 /proc/partitions 00:13:06.002 05:00:44 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:13:06.002 05:00:44 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:13:06.002 05:00:44 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:13:06.002 05:00:44 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:06.002 1+0 records in 00:13:06.002 1+0 records out 00:13:06.002 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00113577 s, 3.6 MB/s 00:13:06.002 05:00:44 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:06.002 05:00:44 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:13:06.002 05:00:44 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:06.002 05:00:44 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:13:06.002 05:00:44 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:13:06.002 05:00:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:06.002 05:00:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:13:06.002 05:00:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n3 /dev/nbd12 00:13:06.263 /dev/nbd12 00:13:06.263 05:00:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:13:06.263 05:00:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:13:06.263 05:00:44 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd12 00:13:06.263 05:00:44 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:13:06.263 05:00:44 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:13:06.263 05:00:44 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:13:06.263 05:00:44 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd12 /proc/partitions 00:13:06.263 05:00:44 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:13:06.263 05:00:44 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:13:06.263 05:00:44 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:13:06.263 05:00:44 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:06.263 1+0 records in 00:13:06.263 1+0 records out 00:13:06.263 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00102901 s, 4.0 MB/s 00:13:06.263 05:00:44 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:06.263 05:00:44 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:13:06.263 05:00:44 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:06.263 05:00:44 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:13:06.263 05:00:44 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:13:06.263 05:00:44 blockdev_xnvme.bdev_nbd -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:06.263 05:00:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:13:06.263 05:00:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 /dev/nbd13 00:13:06.525 /dev/nbd13 00:13:06.525 05:00:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:13:06.525 05:00:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:13:06.525 05:00:44 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd13 00:13:06.525 05:00:44 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:13:06.525 05:00:44 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:13:06.525 05:00:44 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:13:06.525 05:00:44 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd13 /proc/partitions 00:13:06.525 05:00:44 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:13:06.525 05:00:44 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:13:06.525 05:00:44 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:13:06.525 05:00:44 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:06.525 1+0 records in 00:13:06.525 1+0 records out 00:13:06.525 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00125109 s, 3.3 MB/s 00:13:06.525 05:00:44 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:06.525 05:00:44 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:13:06.525 05:00:44 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:06.525 05:00:44 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:13:06.525 05:00:44 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:13:06.525 05:00:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:06.525 05:00:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:13:06.525 05:00:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:13:06.525 05:00:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:06.525 05:00:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:13:06.786 05:00:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:13:06.786 { 00:13:06.786 "nbd_device": "/dev/nbd0", 00:13:06.786 "bdev_name": "nvme0n1" 00:13:06.786 }, 00:13:06.786 { 00:13:06.786 "nbd_device": "/dev/nbd1", 00:13:06.786 "bdev_name": "nvme1n1" 00:13:06.786 }, 00:13:06.786 { 00:13:06.786 "nbd_device": "/dev/nbd10", 00:13:06.786 "bdev_name": "nvme2n1" 00:13:06.786 }, 00:13:06.786 { 00:13:06.786 "nbd_device": "/dev/nbd11", 00:13:06.786 "bdev_name": "nvme2n2" 00:13:06.786 }, 00:13:06.786 { 00:13:06.786 "nbd_device": "/dev/nbd12", 00:13:06.786 "bdev_name": "nvme2n3" 00:13:06.786 }, 00:13:06.786 { 00:13:06.786 "nbd_device": "/dev/nbd13", 00:13:06.786 "bdev_name": "nvme3n1" 00:13:06.786 } 00:13:06.786 ]' 00:13:06.786 05:00:44 blockdev_xnvme.bdev_nbd 
-- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:13:06.786 05:00:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:13:06.786 { 00:13:06.786 "nbd_device": "/dev/nbd0", 00:13:06.786 "bdev_name": "nvme0n1" 00:13:06.786 }, 00:13:06.786 { 00:13:06.786 "nbd_device": "/dev/nbd1", 00:13:06.786 "bdev_name": "nvme1n1" 00:13:06.786 }, 00:13:06.786 { 00:13:06.786 "nbd_device": "/dev/nbd10", 00:13:06.786 "bdev_name": "nvme2n1" 00:13:06.786 }, 00:13:06.786 { 00:13:06.786 "nbd_device": "/dev/nbd11", 00:13:06.786 "bdev_name": "nvme2n2" 00:13:06.786 }, 00:13:06.786 { 00:13:06.786 "nbd_device": "/dev/nbd12", 00:13:06.786 "bdev_name": "nvme2n3" 00:13:06.786 }, 00:13:06.786 { 00:13:06.786 "nbd_device": "/dev/nbd13", 00:13:06.786 "bdev_name": "nvme3n1" 00:13:06.786 } 00:13:06.786 ]' 00:13:06.786 05:00:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:13:06.786 /dev/nbd1 00:13:06.786 /dev/nbd10 00:13:06.786 /dev/nbd11 00:13:06.786 /dev/nbd12 00:13:06.786 /dev/nbd13' 00:13:06.786 05:00:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:13:06.786 /dev/nbd1 00:13:06.786 /dev/nbd10 00:13:06.786 /dev/nbd11 00:13:06.786 /dev/nbd12 00:13:06.786 /dev/nbd13' 00:13:06.786 05:00:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:13:06.786 05:00:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=6 00:13:06.786 05:00:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 6 00:13:06.786 05:00:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=6 00:13:06.786 05:00:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:13:06.786 05:00:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:13:06.786 05:00:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:13:06.786 05:00:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:13:06.786 05:00:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:13:06.786 05:00:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:13:06.786 05:00:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:13:06.786 05:00:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:13:06.786 256+0 records in 00:13:06.786 256+0 records out 00:13:06.786 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.012269 s, 85.5 MB/s 00:13:06.786 05:00:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:06.786 05:00:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:13:07.059 256+0 records in 00:13:07.059 256+0 records out 00:13:07.059 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.242588 s, 4.3 MB/s 00:13:07.059 05:00:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:07.059 05:00:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:13:07.321 256+0 records in 00:13:07.321 256+0 records out 00:13:07.321 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.309165 s, 
3.4 MB/s 00:13:07.321 05:00:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:07.321 05:00:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:13:07.585 256+0 records in 00:13:07.585 256+0 records out 00:13:07.585 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.248157 s, 4.2 MB/s 00:13:07.585 05:00:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:07.585 05:00:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:13:07.915 256+0 records in 00:13:07.915 256+0 records out 00:13:07.915 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.222975 s, 4.7 MB/s 00:13:07.915 05:00:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:07.915 05:00:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:13:07.915 256+0 records in 00:13:07.915 256+0 records out 00:13:07.915 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.205444 s, 5.1 MB/s 00:13:07.915 05:00:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:07.915 05:00:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:13:08.207 256+0 records in 00:13:08.207 256+0 records out 00:13:08.207 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.249623 s, 4.2 MB/s 00:13:08.207 05:00:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:13:08.207 05:00:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:13:08.207 05:00:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:13:08.207 05:00:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:13:08.207 05:00:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:13:08.207 05:00:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:13:08.207 05:00:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:13:08.207 05:00:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:08.207 05:00:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:13:08.207 05:00:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:08.207 05:00:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:13:08.207 05:00:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:08.207 05:00:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:13:08.207 05:00:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:08.207 05:00:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:13:08.207 
05:00:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:08.207 05:00:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:13:08.207 05:00:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:08.207 05:00:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:13:08.207 05:00:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:13:08.207 05:00:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:13:08.207 05:00:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:08.207 05:00:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:13:08.207 05:00:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:08.207 05:00:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:13:08.207 05:00:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:08.207 05:00:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:13:08.469 05:00:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:08.469 05:00:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:08.469 05:00:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:08.469 05:00:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:08.469 05:00:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:08.469 05:00:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:08.469 05:00:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:08.469 05:00:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:08.469 05:00:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:08.469 05:00:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:13:08.730 05:00:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:13:08.730 05:00:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:13:08.730 05:00:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:13:08.730 05:00:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:08.730 05:00:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:08.730 05:00:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:13:08.730 05:00:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:08.730 05:00:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:08.730 05:00:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:08.730 05:00:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk 
/dev/nbd10 00:13:08.991 05:00:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:13:08.991 05:00:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:13:08.991 05:00:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:13:08.991 05:00:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:08.991 05:00:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:08.991 05:00:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:13:08.991 05:00:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:08.991 05:00:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:08.991 05:00:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:08.991 05:00:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:13:09.252 05:00:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:13:09.252 05:00:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:13:09.252 05:00:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:13:09.252 05:00:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:09.252 05:00:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:09.252 05:00:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:13:09.252 05:00:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:09.252 05:00:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:09.252 05:00:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:09.252 05:00:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:13:09.514 05:00:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:13:09.514 05:00:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:13:09.514 05:00:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:13:09.514 05:00:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:09.514 05:00:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:09.514 05:00:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:13:09.514 05:00:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:09.514 05:00:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:09.514 05:00:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:09.514 05:00:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:13:09.514 05:00:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:13:09.775 05:00:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:13:09.775 05:00:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:13:09.775 05:00:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:09.775 05:00:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:09.775 
05:00:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:13:09.775 05:00:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:09.775 05:00:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:09.775 05:00:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:13:09.775 05:00:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:09.775 05:00:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:13:09.775 05:00:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:13:09.775 05:00:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:13:09.775 05:00:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:13:10.038 05:00:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:13:10.038 05:00:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:13:10.038 05:00:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:13:10.038 05:00:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:13:10.038 05:00:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:13:10.038 05:00:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:13:10.038 05:00:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:13:10.038 05:00:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:13:10.038 05:00:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:13:10.038 05:00:48 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:13:10.038 05:00:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:10.038 05:00:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:13:10.038 05:00:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:13:10.038 malloc_lvol_verify 00:13:10.038 05:00:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:13:10.300 61174c1e-f384-43c3-9a77-dea68c5b3352 00:13:10.300 05:00:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:13:10.563 10201058-fd2e-47cd-82c6-cb2ad0390748 00:13:10.563 05:00:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:13:10.825 /dev/nbd0 00:13:10.825 05:00:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:13:10.825 05:00:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:13:10.825 05:00:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:13:10.825 05:00:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:13:10.825 05:00:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:13:10.825 mke2fs 1.47.0 (5-Feb-2023) 00:13:10.825 Discarding device blocks: 0/4096 
done 00:13:10.825 Creating filesystem with 4096 1k blocks and 1024 inodes 00:13:10.825 00:13:10.825 Allocating group tables: 0/1 done 00:13:10.825 Writing inode tables: 0/1 done 00:13:10.825 Creating journal (1024 blocks): done 00:13:10.825 Writing superblocks and filesystem accounting information: 0/1 done 00:13:10.825 00:13:10.825 05:00:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:13:10.825 05:00:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:10.825 05:00:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:13:10.825 05:00:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:10.825 05:00:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:13:10.825 05:00:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:10.825 05:00:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:13:11.087 05:00:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:11.087 05:00:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:11.087 05:00:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:11.087 05:00:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:11.087 05:00:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:11.087 05:00:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:11.087 05:00:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:11.087 05:00:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:11.087 05:00:49 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 81312 00:13:11.087 05:00:49 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@950 -- # '[' -z 81312 ']' 00:13:11.087 05:00:49 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@954 -- # kill -0 81312 00:13:11.087 05:00:49 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@955 -- # uname 00:13:11.087 05:00:49 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:11.087 05:00:49 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 81312 00:13:11.087 05:00:49 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:11.087 05:00:49 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:11.087 killing process with pid 81312 00:13:11.087 05:00:49 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@968 -- # echo 'killing process with pid 81312' 00:13:11.087 05:00:49 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@969 -- # kill 81312 00:13:11.087 05:00:49 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@974 -- # wait 81312 00:13:11.350 ************************************ 00:13:11.350 END TEST bdev_nbd 00:13:11.350 ************************************ 00:13:11.350 05:00:49 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:13:11.350 00:13:11.350 real 0m10.337s 00:13:11.350 user 0m13.946s 00:13:11.350 sys 0m3.721s 00:13:11.350 05:00:49 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:11.350 05:00:49 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 
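For reference: the bdev_nbd suite that ends above exercises SPDK's NBD export path end to end. Each bdev is attached to a /dev/nbdX node over the RPC socket, polled until the kernel lists it in /proc/partitions, probed with a direct-I/O dd read, filled from a 1 MiB random file and read back with cmp, then detached; a final pass repeats the attach against an lvol (bdev_malloc_create, bdev_lvol_create_lvstore, bdev_lvol_create) and runs mkfs.ext4 on the exported device. A condensed sketch of the core loop follows, assuming an SPDK app already serving RPCs on /var/tmp/spdk-nbd.sock and a loaded kernel nbd module; paths and RPC names are taken from the trace, while the retry loop is a simplification of the real waitfornbd helper:

  #!/usr/bin/env bash
  set -euo pipefail
  rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
  bdevs=(nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1)
  nbds=(/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13)

  # 1 MiB of random data to push through each NBD device and read back.
  dd if=/dev/urandom of=/tmp/nbdrandtest bs=4096 count=256

  for i in "${!bdevs[@]}"; do
      $rpc nbd_start_disk "${bdevs[$i]}" "${nbds[$i]}"
      # Wait for the kernel to publish the device; the real helper also
      # bounds this at ~20 attempts before giving up.
      for _ in $(seq 1 20); do
          grep -q -w "$(basename "${nbds[$i]}")" /proc/partitions && break
          sleep 0.1
      done
      dd if=/tmp/nbdrandtest of="${nbds[$i]}" bs=4096 count=256 oflag=direct  # write the pattern
      cmp -b -n 1M /tmp/nbdrandtest "${nbds[$i]}"                             # verify it reads back intact
      $rpc nbd_stop_disk "${nbds[$i]}"
  done
  $rpc nbd_get_disks   # prints [] once every export is torn down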
00:13:11.350 05:00:49 blockdev_xnvme -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:13:11.350 05:00:49 blockdev_xnvme -- bdev/blockdev.sh@763 -- # '[' xnvme = nvme ']' 00:13:11.350 05:00:49 blockdev_xnvme -- bdev/blockdev.sh@763 -- # '[' xnvme = gpt ']' 00:13:11.350 05:00:49 blockdev_xnvme -- bdev/blockdev.sh@767 -- # run_test bdev_fio fio_test_suite '' 00:13:11.350 05:00:49 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:13:11.350 05:00:49 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:11.350 05:00:49 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:11.350 ************************************ 00:13:11.350 START TEST bdev_fio 00:13:11.350 ************************************ 00:13:11.350 05:00:49 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1125 -- # fio_test_suite '' 00:13:11.350 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:13:11.350 05:00:49 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@330 -- # local env_context 00:13:11.350 05:00:49 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@334 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:13:11.350 05:00:49 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@335 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:13:11.350 05:00:49 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # echo '' 00:13:11.350 05:00:49 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # sed s/--env-context=// 00:13:11.350 05:00:49 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # env_context= 00:13:11.350 05:00:49 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@339 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:13:11.350 05:00:49 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1280 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:13:11.350 05:00:49 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1281 -- # local workload=verify 00:13:11.350 05:00:49 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1282 -- # local bdev_type=AIO 00:13:11.350 05:00:49 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1283 -- # local env_context= 00:13:11.350 05:00:49 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1284 -- # local fio_dir=/usr/src/fio 00:13:11.350 05:00:49 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1286 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:13:11.350 05:00:49 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1291 -- # '[' -z verify ']' 00:13:11.350 05:00:49 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -n '' ']' 00:13:11.350 05:00:49 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1299 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:13:11.350 05:00:49 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1301 -- # cat 00:13:11.350 05:00:49 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1313 -- # '[' verify == verify ']' 00:13:11.350 05:00:49 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1314 -- # cat 00:13:11.350 05:00:49 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1323 -- # '[' AIO == AIO ']' 00:13:11.350 05:00:49 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1324 -- # /usr/src/fio/fio --version 00:13:11.350 05:00:49 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1324 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 00:13:11.350 05:00:49 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1325 -- # echo serialize_overlap=1 00:13:11.350 05:00:49 blockdev_xnvme.bdev_fio -- 
bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:13:11.350 05:00:49 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme0n1]' 00:13:11.350 05:00:49 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme0n1 00:13:11.350 05:00:49 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:13:11.350 05:00:49 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme1n1]' 00:13:11.350 05:00:49 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme1n1 00:13:11.350 05:00:49 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:13:11.350 05:00:49 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme2n1]' 00:13:11.350 05:00:49 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme2n1 00:13:11.350 05:00:49 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:13:11.350 05:00:49 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme2n2]' 00:13:11.350 05:00:49 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme2n2 00:13:11.350 05:00:49 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:13:11.350 05:00:49 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme2n3]' 00:13:11.350 05:00:49 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme2n3 00:13:11.350 05:00:49 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:13:11.350 05:00:49 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme3n1]' 00:13:11.350 05:00:49 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme3n1 00:13:11.350 05:00:49 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@346 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:13:11.351 05:00:49 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@348 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:13:11.351 05:00:49 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1101 -- # '[' 11 -le 1 ']' 00:13:11.351 05:00:49 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:11.351 05:00:49 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:13:11.612 ************************************ 00:13:11.612 START TEST bdev_fio_rw_verify 00:13:11.612 ************************************ 00:13:11.612 05:00:49 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1125 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:13:11.612 05:00:49 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 
--aux-path=/home/vagrant/spdk_repo/spdk/../output 00:13:11.612 05:00:49 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:13:11.612 05:00:49 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:13:11.612 05:00:49 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # local sanitizers 00:13:11.612 05:00:49 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:11.612 05:00:49 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # shift 00:13:11.612 05:00:49 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # local asan_lib= 00:13:11.612 05:00:49 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:13:11.612 05:00:49 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:11.612 05:00:49 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # grep libasan 00:13:11.612 05:00:49 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:13:11.612 05:00:49 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:13:11.612 05:00:49 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:13:11.612 05:00:49 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # break 00:13:11.612 05:00:49 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:13:11.612 05:00:49 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:13:11.612 job_nvme0n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:11.612 job_nvme1n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:11.612 job_nvme2n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:11.612 job_nvme2n2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:11.612 job_nvme2n3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:11.612 job_nvme3n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:11.612 fio-3.35 00:13:11.612 Starting 6 threads 00:13:23.855 00:13:23.855 job_nvme0n1: (groupid=0, jobs=6): err= 0: pid=81711: Fri Dec 6 05:01:00 2024 00:13:23.855 read: IOPS=12.0k, BW=47.1MiB/s (49.4MB/s)(471MiB/10002msec) 00:13:23.855 slat (usec): min=2, max=1902, avg= 7.40, stdev=13.12 00:13:23.855 clat (usec): min=96, max=8879, avg=1683.59, stdev=830.75 00:13:23.855 lat (usec): min=100, max=8889, avg=1690.99, stdev=831.26 
00:13:23.855 clat percentiles (usec): 00:13:23.855 | 50.000th=[ 1582], 99.000th=[ 4293], 99.900th=[ 5800], 99.990th=[ 7111], 00:13:23.855 | 99.999th=[ 7635] 00:13:23.855 write: IOPS=12.2k, BW=47.6MiB/s (49.9MB/s)(476MiB/10002msec); 0 zone resets 00:13:23.855 slat (usec): min=12, max=4675, avg=44.43, stdev=156.06 00:13:23.855 clat (usec): min=101, max=12496, avg=1923.13, stdev=909.86 00:13:23.855 lat (usec): min=120, max=12530, avg=1967.56, stdev=923.04 00:13:23.855 clat percentiles (usec): 00:13:23.855 | 50.000th=[ 1778], 99.000th=[ 4686], 99.900th=[ 6259], 99.990th=[ 9503], 00:13:23.855 | 99.999th=[12518] 00:13:23.855 bw ( KiB/s): min=44784, max=52176, per=100.00%, avg=48928.79, stdev=393.13, samples=114 00:13:23.855 iops : min=11196, max=13043, avg=12231.58, stdev=98.24, samples=114 00:13:23.855 lat (usec) : 100=0.01%, 250=0.43%, 500=2.71%, 750=4.90%, 1000=8.14% 00:13:23.855 lat (msec) : 2=49.26%, 4=32.31%, 10=2.25%, 20=0.01% 00:13:23.855 cpu : usr=46.43%, sys=30.51%, ctx=4892, majf=0, minf=14747 00:13:23.855 IO depths : 1=11.3%, 2=23.7%, 4=51.3%, 8=13.7%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:23.855 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:23.855 complete : 0=0.0%, 4=89.2%, 8=10.8%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:23.855 issued rwts: total=120519,121971,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:23.855 latency : target=0, window=0, percentile=100.00%, depth=8 00:13:23.855 00:13:23.855 Run status group 0 (all jobs): 00:13:23.855 READ: bw=47.1MiB/s (49.4MB/s), 47.1MiB/s-47.1MiB/s (49.4MB/s-49.4MB/s), io=471MiB (494MB), run=10002-10002msec 00:13:23.855 WRITE: bw=47.6MiB/s (49.9MB/s), 47.6MiB/s-47.6MiB/s (49.9MB/s-49.9MB/s), io=476MiB (500MB), run=10002-10002msec 00:13:23.855 ----------------------------------------------------- 00:13:23.855 Suppressions used: 00:13:23.855 count bytes template 00:13:23.855 6 48 /usr/src/fio/parse.c 00:13:23.855 1375 132000 /usr/src/fio/iolog.c 00:13:23.855 1 8 libtcmalloc_minimal.so 00:13:23.855 1 904 libcrypto.so 00:13:23.855 ----------------------------------------------------- 00:13:23.855 00:13:23.855 00:13:23.855 real 0m11.166s 00:13:23.855 user 0m28.580s 00:13:23.855 sys 0m18.644s 00:13:23.855 05:01:00 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:23.855 ************************************ 00:13:23.855 END TEST bdev_fio_rw_verify 00:13:23.855 ************************************ 00:13:23.855 05:01:00 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 00:13:23.855 05:01:00 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@349 -- # rm -f 00:13:23.855 05:01:00 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:13:23.855 05:01:00 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@353 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:13:23.855 05:01:00 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1280 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:13:23.855 05:01:00 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1281 -- # local workload=trim 00:13:23.855 05:01:00 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1282 -- # local bdev_type= 00:13:23.855 05:01:00 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1283 -- # local env_context= 00:13:23.855 05:01:00 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1284 -- # local fio_dir=/usr/src/fio 00:13:23.855 05:01:00 blockdev_xnvme.bdev_fio -- 
common/autotest_common.sh@1286 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:13:23.855 05:01:00 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1291 -- # '[' -z trim ']' 00:13:23.855 05:01:00 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -n '' ']' 00:13:23.855 05:01:00 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1299 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:13:23.855 05:01:00 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1301 -- # cat 00:13:23.855 05:01:00 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1313 -- # '[' trim == verify ']' 00:13:23.855 05:01:00 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1328 -- # '[' trim == trim ']' 00:13:23.856 05:01:00 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1329 -- # echo rw=trimwrite 00:13:23.856 05:01:00 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:13:23.856 05:01:00 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "090de698-7e08-4aee-b780-3ba72b5085c5"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "090de698-7e08-4aee-b780-3ba72b5085c5",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "72cc078e-f333-4eb5-b7a1-a527298d5d9d"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "72cc078e-f333-4eb5-b7a1-a527298d5d9d",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "0636f4d6-2185-4169-88b9-6d45b932ff7f"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "0636f4d6-2185-4169-88b9-6d45b932ff7f",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' 
"zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n2",' ' "aliases": [' ' "603359e2-d078-455e-a2e6-1830862c2534"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "603359e2-d078-455e-a2e6-1830862c2534",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n3",' ' "aliases": [' ' "411839e2-85b7-4b59-be79-bad870f11b9c"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "411839e2-85b7-4b59-be79-bad870f11b9c",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "9aba69a0-ff7d-42fa-a2b7-1b4ffcf81767"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "9aba69a0-ff7d-42fa-a2b7-1b4ffcf81767",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' 00:13:23.856 05:01:00 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # [[ -n '' ]] 00:13:23.856 05:01:00 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@360 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:13:23.856 /home/vagrant/spdk_repo/spdk 00:13:23.856 05:01:00 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@361 -- # popd 00:13:23.856 05:01:00 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@362 -- # trap - SIGINT SIGTERM EXIT 00:13:23.856 05:01:00 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@363 -- # return 0 00:13:23.856 00:13:23.856 real 0m11.360s 00:13:23.856 user 
0m28.654s 00:13:23.856 sys 0m18.738s 00:13:23.856 05:01:00 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:23.856 ************************************ 00:13:23.856 END TEST bdev_fio 00:13:23.856 ************************************ 00:13:23.856 05:01:00 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:13:23.856 05:01:00 blockdev_xnvme -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:13:23.856 05:01:00 blockdev_xnvme -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:13:23.856 05:01:00 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']' 00:13:23.856 05:01:00 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:23.856 05:01:00 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:23.856 ************************************ 00:13:23.856 START TEST bdev_verify 00:13:23.856 ************************************ 00:13:23.856 05:01:00 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:13:23.856 [2024-12-06 05:01:01.004224] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:13:23.856 [2024-12-06 05:01:01.004364] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81880 ] 00:13:23.856 [2024-12-06 05:01:01.141558] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:13:23.856 [2024-12-06 05:01:01.214803] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:13:23.856 [2024-12-06 05:01:01.214814] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:13:23.856 Running I/O for 5 seconds... 
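For reference: bdev_verify drives the same six xNVMe bdevs through the bdevperf example app, reusing the bdev.json configuration from the fio run above. A minimal annotated restatement of the invocation; the flag notes reflect bdevperf's standard options, and the -C behavior is inferred from the duplicated per-core job lines in the results that follow:

  # -q 128     queue depth per job
  # -o 4096    I/O size in bytes (4 KiB)
  # -w verify  write a pattern, read it back, and compare
  # -t 5       run time in seconds
  # -C         let every selected core submit I/O to every bdev
  #            (hence each device appears twice below, once per core mask)
  # -m 0x3     core mask: two reactors, on cores 0 and 1
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
      --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
      -q 128 -o 4096 -w verify -t 5 -C -m 0x3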
00:13:25.744 25361.00 IOPS, 99.07 MiB/s [2024-12-06T05:01:04.917Z] 25591.00 IOPS, 99.96 MiB/s [2024-12-06T05:01:05.876Z] 25292.67 IOPS, 98.80 MiB/s [2024-12-06T05:01:06.814Z] 25213.00 IOPS, 98.49 MiB/s [2024-12-06T05:01:06.814Z] 25137.80 IOPS, 98.19 MiB/s 00:13:28.582 Latency(us) 00:13:28.582 [2024-12-06T05:01:06.814Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:28.582 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:28.582 Verification LBA range: start 0x0 length 0xa0000 00:13:28.582 nvme0n1 : 5.03 2061.36 8.05 0.00 0.00 61979.63 7007.31 69770.63 00:13:28.582 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:28.582 Verification LBA range: start 0xa0000 length 0xa0000 00:13:28.582 nvme0n1 : 5.04 1932.00 7.55 0.00 0.00 66121.81 10939.47 75416.81 00:13:28.582 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:28.582 Verification LBA range: start 0x0 length 0xbd0bd 00:13:28.582 nvme1n1 : 5.06 2555.39 9.98 0.00 0.00 49623.79 4032.98 63721.16 00:13:28.582 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:28.582 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:13:28.582 nvme1n1 : 5.06 2301.54 8.99 0.00 0.00 55250.25 4562.31 79449.80 00:13:28.582 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:28.582 Verification LBA range: start 0x0 length 0x80000 00:13:28.582 nvme2n1 : 5.06 2101.26 8.21 0.00 0.00 60377.45 7007.31 55655.19 00:13:28.582 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:28.582 Verification LBA range: start 0x80000 length 0x80000 00:13:28.582 nvme2n1 : 5.06 1946.46 7.60 0.00 0.00 65422.24 12502.25 65334.35 00:13:28.582 Job: nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:28.582 Verification LBA range: start 0x0 length 0x80000 00:13:28.582 nvme2n2 : 5.06 2072.53 8.10 0.00 0.00 61057.03 5671.38 60494.77 00:13:28.582 Job: nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:28.582 Verification LBA range: start 0x80000 length 0x80000 00:13:28.582 nvme2n2 : 5.05 1900.38 7.42 0.00 0.00 66785.48 11191.53 62107.96 00:13:28.582 Job: nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:28.582 Verification LBA range: start 0x0 length 0x80000 00:13:28.582 nvme2n3 : 5.07 2071.71 8.09 0.00 0.00 60972.35 6301.54 66544.25 00:13:28.582 Job: nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:28.582 Verification LBA range: start 0x80000 length 0x80000 00:13:28.582 nvme2n3 : 5.06 1897.95 7.41 0.00 0.00 66695.98 8469.27 64527.75 00:13:28.582 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:28.582 Verification LBA range: start 0x0 length 0x20000 00:13:28.582 nvme3n1 : 5.08 2091.78 8.17 0.00 0.00 60324.11 3377.62 70980.53 00:13:28.582 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:28.582 Verification LBA range: start 0x20000 length 0x20000 00:13:28.582 nvme3n1 : 5.07 1919.70 7.50 0.00 0.00 65816.64 5494.94 70577.23 00:13:28.582 [2024-12-06T05:01:06.814Z] =================================================================================================================== 00:13:28.582 [2024-12-06T05:01:06.814Z] Total : 24852.04 97.08 0.00 0.00 61263.52 3377.62 79449.80 00:13:28.843 ************************************ 00:13:28.843 END TEST bdev_verify 00:13:28.843 ************************************ 00:13:28.843 00:13:28.843 real 
0m6.002s 00:13:28.843 user 0m9.477s 00:13:28.843 sys 0m1.477s 00:13:28.843 05:01:06 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:28.843 05:01:06 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:13:28.843 05:01:06 blockdev_xnvme -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:13:28.843 05:01:06 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']' 00:13:28.843 05:01:06 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:28.843 05:01:06 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:28.843 ************************************ 00:13:28.843 START TEST bdev_verify_big_io 00:13:28.843 ************************************ 00:13:28.843 05:01:07 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:13:29.102 [2024-12-06 05:01:07.080883] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:13:29.102 [2024-12-06 05:01:07.081022] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81969 ] 00:13:29.102 [2024-12-06 05:01:07.219772] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:13:29.102 [2024-12-06 05:01:07.290597] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:13:29.102 [2024-12-06 05:01:07.290763] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:13:29.671 Running I/O for 5 seconds... 
00:13:35.801 1952.00 IOPS, 122.00 MiB/s [2024-12-06T05:01:14.033Z] 3376.50 IOPS, 211.03 MiB/s 00:13:35.801 Latency(us) 00:13:35.801 [2024-12-06T05:01:14.033Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:35.801 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:35.801 Verification LBA range: start 0x0 length 0xa000 00:13:35.801 nvme0n1 : 5.86 148.05 9.25 0.00 0.00 851105.00 19156.68 955010.76 00:13:35.801 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:35.801 Verification LBA range: start 0xa000 length 0xa000 00:13:35.801 nvme0n1 : 5.88 76.24 4.76 0.00 0.00 1601404.57 138734.67 2606921.26 00:13:35.801 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:35.801 Verification LBA range: start 0x0 length 0xbd0b 00:13:35.801 nvme1n1 : 5.85 186.06 11.63 0.00 0.00 659839.91 10435.35 845313.58 00:13:35.801 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:35.801 Verification LBA range: start 0xbd0b length 0xbd0b 00:13:35.801 nvme1n1 : 6.20 82.58 5.16 0.00 0.00 1390721.97 84289.38 1329271.73 00:13:35.801 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:35.801 Verification LBA range: start 0x0 length 0x8000 00:13:35.801 nvme2n1 : 5.84 142.42 8.90 0.00 0.00 840025.49 20971.52 1477685.56 00:13:35.801 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:35.801 Verification LBA range: start 0x8000 length 0x8000 00:13:35.801 nvme2n1 : 5.94 86.18 5.39 0.00 0.00 1283099.57 139541.27 1142141.24 00:13:35.801 Job: nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:35.801 Verification LBA range: start 0x0 length 0x8000 00:13:35.801 nvme2n2 : 5.85 98.45 6.15 0.00 0.00 1184085.33 26012.75 2051982.57 00:13:35.801 Job: nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:35.801 Verification LBA range: start 0x8000 length 0x8000 00:13:35.801 nvme2n2 : 6.01 106.43 6.65 0.00 0.00 998828.03 38111.70 1677721.60 00:13:35.801 Job: nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:35.801 Verification LBA range: start 0x0 length 0x8000 00:13:35.801 nvme2n3 : 5.85 114.80 7.17 0.00 0.00 990981.33 10284.11 2710165.66 00:13:35.801 Job: nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:35.801 Verification LBA range: start 0x8000 length 0x8000 00:13:35.801 nvme2n3 : 6.16 139.07 8.69 0.00 0.00 734921.18 5167.26 1277649.53 00:13:35.801 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:35.801 Verification LBA range: start 0x0 length 0x2000 00:13:35.801 nvme3n1 : 5.86 196.64 12.29 0.00 0.00 562052.67 6074.68 851766.35 00:13:35.801 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:35.801 Verification LBA range: start 0x2000 length 0x2000 00:13:35.801 nvme3n1 : 6.40 260.09 16.26 0.00 0.00 377239.46 3062.55 2039077.02 00:13:35.801 [2024-12-06T05:01:14.033Z] =================================================================================================================== 00:13:35.801 [2024-12-06T05:01:14.033Z] Total : 1637.01 102.31 0.00 0.00 828258.19 3062.55 2710165.66 00:13:36.061 00:13:36.061 real 0m7.265s 00:13:36.061 user 0m13.280s 00:13:36.061 sys 0m0.519s 00:13:36.061 05:01:14 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:36.061 ************************************ 00:13:36.061 END TEST bdev_verify_big_io 
00:13:36.061 ************************************ 00:13:36.061 05:01:14 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:13:36.321 05:01:14 blockdev_xnvme -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:13:36.322 05:01:14 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:13:36.322 05:01:14 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:36.322 05:01:14 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:36.322 ************************************ 00:13:36.322 START TEST bdev_write_zeroes 00:13:36.322 ************************************ 00:13:36.322 05:01:14 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:13:36.322 [2024-12-06 05:01:14.395872] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:13:36.322 [2024-12-06 05:01:14.395976] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82083 ] 00:13:36.322 [2024-12-06 05:01:14.528851] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:36.583 [2024-12-06 05:01:14.571675] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:13:36.583 Running I/O for 1 seconds... 00:13:37.963 102400.00 IOPS, 400.00 MiB/s 00:13:37.963 Latency(us) 00:13:37.963 [2024-12-06T05:01:16.195Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:37.963 Job: nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:37.963 nvme0n1 : 1.03 16720.58 65.31 0.00 0.00 7644.46 5091.64 19660.80 00:13:37.963 Job: nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:37.963 nvme1n1 : 1.03 17626.09 68.85 0.00 0.00 7231.91 4688.34 20669.05 00:13:37.963 Job: nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:37.963 nvme2n1 : 1.02 16761.10 65.47 0.00 0.00 7620.89 5016.02 16938.54 00:13:37.963 Job: nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:37.963 nvme2n2 : 1.03 16600.18 64.84 0.00 0.00 7655.09 4763.96 18955.03 00:13:37.963 Job: nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:37.963 nvme2n3 : 1.04 16539.84 64.61 0.00 0.00 7676.88 4637.93 20265.75 00:13:37.963 Job: nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:37.963 nvme3n1 : 1.04 16502.62 64.46 0.00 0.00 7692.83 4436.28 20769.87 00:13:37.963 [2024-12-06T05:01:16.195Z] =================================================================================================================== 00:13:37.963 [2024-12-06T05:01:16.195Z] Total : 100750.41 393.56 0.00 0.00 7583.62 4436.28 20769.87 00:13:37.963 00:13:37.963 real 0m1.734s 00:13:37.963 user 0m1.154s 00:13:37.963 sys 0m0.412s 00:13:37.963 05:01:16 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:37.963 05:01:16 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:13:37.963 ************************************ 00:13:37.963 END 
TEST bdev_write_zeroes 00:13:37.963 ************************************ 00:13:37.963 05:01:16 blockdev_xnvme -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:13:37.963 05:01:16 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:13:37.963 05:01:16 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:37.963 05:01:16 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:37.963 ************************************ 00:13:37.963 START TEST bdev_json_nonenclosed 00:13:37.963 ************************************ 00:13:37.963 05:01:16 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:13:38.222 [2024-12-06 05:01:16.219396] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:13:38.222 [2024-12-06 05:01:16.219551] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82115 ] 00:13:38.222 [2024-12-06 05:01:16.358784] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:38.222 [2024-12-06 05:01:16.431046] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:13:38.222 [2024-12-06 05:01:16.431187] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:13:38.222 [2024-12-06 05:01:16.431207] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:13:38.222 [2024-12-06 05:01:16.431226] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:13:38.484 00:13:38.484 real 0m0.423s 00:13:38.484 user 0m0.188s 00:13:38.484 sys 0m0.129s 00:13:38.484 05:01:16 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:38.484 ************************************ 00:13:38.484 05:01:16 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:13:38.484 END TEST bdev_json_nonenclosed 00:13:38.484 ************************************ 00:13:38.484 05:01:16 blockdev_xnvme -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:13:38.484 05:01:16 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:13:38.484 05:01:16 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:38.484 05:01:16 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:38.484 ************************************ 00:13:38.484 START TEST bdev_json_nonarray 00:13:38.484 ************************************ 00:13:38.484 05:01:16 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:13:38.484 [2024-12-06 05:01:16.704907] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
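bdev_json_nonenclosed above and bdev_json_nonarray below are negative tests: bdevperf is pointed at deliberately malformed configs and must fail cleanly through spdk_app_stop with a non-zero exit code (the *ERROR* lines are expected output). The "not enclosed in {}" check fires whenever the top-level JSON value is not an object; a minimal way to trip it by hand, assuming the repo-relative bdevperf path used throughout this log (the config body here is illustrative, not the repo's actual nonenclosed.json):

    # A top-level array is valid JSON but not an object, so json_config
    # rejects it with "Invalid JSON configuration: not enclosed in {}."
    echo '[]' > /tmp/nonenclosed.json
    ./build/examples/bdevperf --json /tmp/nonenclosed.json \
        -q 128 -o 4096 -w write_zeroes -t 1 || echo "failed as expected: $?"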
00:13:38.484 [2024-12-06 05:01:16.705042] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82146 ] 00:13:38.744 [2024-12-06 05:01:16.842752] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:38.744 [2024-12-06 05:01:16.910776] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:13:38.744 [2024-12-06 05:01:16.910933] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 00:13:38.744 [2024-12-06 05:01:16.910953] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:13:38.744 [2024-12-06 05:01:16.910971] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:13:39.005 00:13:39.005 real 0m0.404s 00:13:39.005 user 0m0.189s 00:13:39.005 sys 0m0.109s 00:13:39.005 05:01:17 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:39.005 ************************************ 00:13:39.005 05:01:17 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:13:39.005 END TEST bdev_json_nonarray 00:13:39.005 ************************************ 00:13:39.005 05:01:17 blockdev_xnvme -- bdev/blockdev.sh@786 -- # [[ xnvme == bdev ]] 00:13:39.005 05:01:17 blockdev_xnvme -- bdev/blockdev.sh@793 -- # [[ xnvme == gpt ]] 00:13:39.005 05:01:17 blockdev_xnvme -- bdev/blockdev.sh@797 -- # [[ xnvme == crypto_sw ]] 00:13:39.005 05:01:17 blockdev_xnvme -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:13:39.005 05:01:17 blockdev_xnvme -- bdev/blockdev.sh@810 -- # cleanup 00:13:39.005 05:01:17 blockdev_xnvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:13:39.005 05:01:17 blockdev_xnvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:13:39.005 05:01:17 blockdev_xnvme -- bdev/blockdev.sh@26 -- # [[ xnvme == rbd ]] 00:13:39.005 05:01:17 blockdev_xnvme -- bdev/blockdev.sh@30 -- # [[ xnvme == daos ]] 00:13:39.005 05:01:17 blockdev_xnvme -- bdev/blockdev.sh@34 -- # [[ xnvme = \g\p\t ]] 00:13:39.005 05:01:17 blockdev_xnvme -- bdev/blockdev.sh@40 -- # [[ xnvme == xnvme ]] 00:13:39.005 05:01:17 blockdev_xnvme -- bdev/blockdev.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:13:39.576 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:13:41.486 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:13:42.056 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:13:42.056 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:13:42.056 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:13:42.316 00:13:42.316 real 0m50.787s 00:13:42.317 user 1m19.020s 00:13:42.317 sys 0m31.264s 00:13:42.317 05:01:20 blockdev_xnvme -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:42.317 ************************************ 00:13:42.317 END TEST blockdev_xnvme 00:13:42.317 ************************************ 00:13:42.317 05:01:20 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:42.317 05:01:20 -- spdk/autotest.sh@247 -- # run_test ublk /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:13:42.317 05:01:20 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:13:42.317 05:01:20 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:42.317 05:01:20 -- 
common/autotest_common.sh@10 -- # set +x 00:13:42.317 ************************************ 00:13:42.317 START TEST ublk 00:13:42.317 ************************************ 00:13:42.317 05:01:20 ublk -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:13:42.317 * Looking for test storage... 00:13:42.317 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:13:42.317 05:01:20 ublk -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:13:42.317 05:01:20 ublk -- common/autotest_common.sh@1681 -- # lcov --version 00:13:42.317 05:01:20 ublk -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:13:42.577 05:01:20 ublk -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:13:42.577 05:01:20 ublk -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:42.577 05:01:20 ublk -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:42.577 05:01:20 ublk -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:42.577 05:01:20 ublk -- scripts/common.sh@336 -- # IFS=.-: 00:13:42.577 05:01:20 ublk -- scripts/common.sh@336 -- # read -ra ver1 00:13:42.577 05:01:20 ublk -- scripts/common.sh@337 -- # IFS=.-: 00:13:42.577 05:01:20 ublk -- scripts/common.sh@337 -- # read -ra ver2 00:13:42.577 05:01:20 ublk -- scripts/common.sh@338 -- # local 'op=<' 00:13:42.577 05:01:20 ublk -- scripts/common.sh@340 -- # ver1_l=2 00:13:42.577 05:01:20 ublk -- scripts/common.sh@341 -- # ver2_l=1 00:13:42.577 05:01:20 ublk -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:42.577 05:01:20 ublk -- scripts/common.sh@344 -- # case "$op" in 00:13:42.577 05:01:20 ublk -- scripts/common.sh@345 -- # : 1 00:13:42.577 05:01:20 ublk -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:42.577 05:01:20 ublk -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:42.577 05:01:20 ublk -- scripts/common.sh@365 -- # decimal 1 00:13:42.577 05:01:20 ublk -- scripts/common.sh@353 -- # local d=1 00:13:42.577 05:01:20 ublk -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:42.577 05:01:20 ublk -- scripts/common.sh@355 -- # echo 1 00:13:42.577 05:01:20 ublk -- scripts/common.sh@365 -- # ver1[v]=1 00:13:42.577 05:01:20 ublk -- scripts/common.sh@366 -- # decimal 2 00:13:42.577 05:01:20 ublk -- scripts/common.sh@353 -- # local d=2 00:13:42.577 05:01:20 ublk -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:42.577 05:01:20 ublk -- scripts/common.sh@355 -- # echo 2 00:13:42.577 05:01:20 ublk -- scripts/common.sh@366 -- # ver2[v]=2 00:13:42.577 05:01:20 ublk -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:42.577 05:01:20 ublk -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:42.577 05:01:20 ublk -- scripts/common.sh@368 -- # return 0 00:13:42.577 05:01:20 ublk -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:42.577 05:01:20 ublk -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:13:42.577 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:42.577 --rc genhtml_branch_coverage=1 00:13:42.577 --rc genhtml_function_coverage=1 00:13:42.577 --rc genhtml_legend=1 00:13:42.577 --rc geninfo_all_blocks=1 00:13:42.577 --rc geninfo_unexecuted_blocks=1 00:13:42.577 00:13:42.577 ' 00:13:42.577 05:01:20 ublk -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:13:42.577 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:42.577 --rc genhtml_branch_coverage=1 00:13:42.577 --rc genhtml_function_coverage=1 00:13:42.577 --rc genhtml_legend=1 00:13:42.577 --rc geninfo_all_blocks=1 00:13:42.577 --rc geninfo_unexecuted_blocks=1 00:13:42.577 00:13:42.577 ' 00:13:42.577 05:01:20 ublk -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:13:42.577 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:42.577 --rc genhtml_branch_coverage=1 00:13:42.577 --rc genhtml_function_coverage=1 00:13:42.577 --rc genhtml_legend=1 00:13:42.577 --rc geninfo_all_blocks=1 00:13:42.577 --rc geninfo_unexecuted_blocks=1 00:13:42.577 00:13:42.577 ' 00:13:42.577 05:01:20 ublk -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:13:42.577 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:42.577 --rc genhtml_branch_coverage=1 00:13:42.577 --rc genhtml_function_coverage=1 00:13:42.577 --rc genhtml_legend=1 00:13:42.577 --rc geninfo_all_blocks=1 00:13:42.577 --rc geninfo_unexecuted_blocks=1 00:13:42.577 00:13:42.578 ' 00:13:42.578 05:01:20 ublk -- ublk/ublk.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:13:42.578 05:01:20 ublk -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:13:42.578 05:01:20 ublk -- lvol/common.sh@7 -- # MALLOC_BS=512 00:13:42.578 05:01:20 ublk -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:13:42.578 05:01:20 ublk -- lvol/common.sh@9 -- # AIO_BS=4096 00:13:42.578 05:01:20 ublk -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:13:42.578 05:01:20 ublk -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:13:42.578 05:01:20 ublk -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:13:42.578 05:01:20 ublk -- lvol/common.sh@14 -- # LVS_DEFAULT_CAPACITY=130023424 00:13:42.578 05:01:20 ublk -- ublk/ublk.sh@11 -- # [[ -z '' ]] 00:13:42.578 05:01:20 ublk -- ublk/ublk.sh@12 -- # NUM_DEVS=4 00:13:42.578 05:01:20 ublk -- ublk/ublk.sh@13 -- # NUM_QUEUE=4 00:13:42.578 05:01:20 ublk 
-- ublk/ublk.sh@14 -- # QUEUE_DEPTH=512 00:13:42.578 05:01:20 ublk -- ublk/ublk.sh@15 -- # MALLOC_SIZE_MB=128 00:13:42.578 05:01:20 ublk -- ublk/ublk.sh@17 -- # STOP_DISKS=1 00:13:42.578 05:01:20 ublk -- ublk/ublk.sh@27 -- # MALLOC_BS=4096 00:13:42.578 05:01:20 ublk -- ublk/ublk.sh@28 -- # FILE_SIZE=134217728 00:13:42.578 05:01:20 ublk -- ublk/ublk.sh@29 -- # MAX_DEV_ID=3 00:13:42.578 05:01:20 ublk -- ublk/ublk.sh@133 -- # modprobe ublk_drv 00:13:42.578 05:01:20 ublk -- ublk/ublk.sh@136 -- # run_test test_save_ublk_config test_save_config 00:13:42.578 05:01:20 ublk -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:13:42.578 05:01:20 ublk -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:42.578 05:01:20 ublk -- common/autotest_common.sh@10 -- # set +x 00:13:42.578 ************************************ 00:13:42.578 START TEST test_save_ublk_config 00:13:42.578 ************************************ 00:13:42.578 05:01:20 ublk.test_save_ublk_config -- common/autotest_common.sh@1125 -- # test_save_config 00:13:42.578 05:01:20 ublk.test_save_ublk_config -- ublk/ublk.sh@100 -- # local tgtpid blkpath config 00:13:42.578 05:01:20 ublk.test_save_ublk_config -- ublk/ublk.sh@103 -- # tgtpid=82430 00:13:42.578 05:01:20 ublk.test_save_ublk_config -- ublk/ublk.sh@104 -- # trap 'killprocess $tgtpid' EXIT 00:13:42.578 05:01:20 ublk.test_save_ublk_config -- ublk/ublk.sh@106 -- # waitforlisten 82430 00:13:42.578 05:01:20 ublk.test_save_ublk_config -- common/autotest_common.sh@831 -- # '[' -z 82430 ']' 00:13:42.578 05:01:20 ublk.test_save_ublk_config -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:42.578 05:01:20 ublk.test_save_ublk_config -- ublk/ublk.sh@102 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk 00:13:42.578 05:01:20 ublk.test_save_ublk_config -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:42.578 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:42.578 05:01:20 ublk.test_save_ublk_config -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:42.578 05:01:20 ublk.test_save_ublk_config -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:42.578 05:01:20 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:13:42.578 [2024-12-06 05:01:20.676000] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:13:42.578 [2024-12-06 05:01:20.676145] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82430 ] 00:13:42.838 [2024-12-06 05:01:20.811918] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:42.838 [2024-12-06 05:01:20.895884] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:13:43.408 05:01:21 ublk.test_save_ublk_config -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:43.408 05:01:21 ublk.test_save_ublk_config -- common/autotest_common.sh@864 -- # return 0 00:13:43.408 05:01:21 ublk.test_save_ublk_config -- ublk/ublk.sh@107 -- # blkpath=/dev/ublkb0 00:13:43.408 05:01:21 ublk.test_save_ublk_config -- ublk/ublk.sh@108 -- # rpc_cmd 00:13:43.408 05:01:21 ublk.test_save_ublk_config -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:43.408 05:01:21 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:13:43.408 [2024-12-06 05:01:21.535693] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:13:43.408 [2024-12-06 05:01:21.536116] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:13:43.408 malloc0 00:13:43.408 [2024-12-06 05:01:21.575815] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:13:43.408 [2024-12-06 05:01:21.575913] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:13:43.408 [2024-12-06 05:01:21.575922] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:13:43.408 [2024-12-06 05:01:21.575937] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:13:43.408 [2024-12-06 05:01:21.584837] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:13:43.408 [2024-12-06 05:01:21.584877] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:13:43.408 [2024-12-06 05:01:21.591703] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:13:43.408 [2024-12-06 05:01:21.591841] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:13:43.408 [2024-12-06 05:01:21.608697] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:13:43.408 0 00:13:43.409 05:01:21 ublk.test_save_ublk_config -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:43.409 05:01:21 ublk.test_save_ublk_config -- ublk/ublk.sh@115 -- # rpc_cmd save_config 00:13:43.409 05:01:21 ublk.test_save_ublk_config -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:43.409 05:01:21 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:13:43.669 05:01:21 ublk.test_save_ublk_config -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:43.669 05:01:21 ublk.test_save_ublk_config -- ublk/ublk.sh@115 -- # config='{ 00:13:43.669 "subsystems": [ 00:13:43.669 { 00:13:43.669 "subsystem": "fsdev", 00:13:43.669 "config": [ 00:13:43.669 { 00:13:43.669 "method": "fsdev_set_opts", 00:13:43.669 "params": { 00:13:43.669 "fsdev_io_pool_size": 65535, 00:13:43.669 "fsdev_io_cache_size": 256 00:13:43.669 } 00:13:43.669 } 00:13:43.669 ] 00:13:43.669 }, 00:13:43.669 { 00:13:43.669 "subsystem": "keyring", 00:13:43.669 "config": [] 00:13:43.669 }, 00:13:43.669 { 00:13:43.669 "subsystem": "iobuf", 00:13:43.669 "config": [ 00:13:43.669 { 
00:13:43.669 "method": "iobuf_set_options", 00:13:43.669 "params": { 00:13:43.669 "small_pool_count": 8192, 00:13:43.669 "large_pool_count": 1024, 00:13:43.669 "small_bufsize": 8192, 00:13:43.669 "large_bufsize": 135168 00:13:43.669 } 00:13:43.669 } 00:13:43.669 ] 00:13:43.669 }, 00:13:43.669 { 00:13:43.669 "subsystem": "sock", 00:13:43.669 "config": [ 00:13:43.669 { 00:13:43.669 "method": "sock_set_default_impl", 00:13:43.669 "params": { 00:13:43.669 "impl_name": "posix" 00:13:43.669 } 00:13:43.669 }, 00:13:43.669 { 00:13:43.669 "method": "sock_impl_set_options", 00:13:43.669 "params": { 00:13:43.669 "impl_name": "ssl", 00:13:43.669 "recv_buf_size": 4096, 00:13:43.669 "send_buf_size": 4096, 00:13:43.669 "enable_recv_pipe": true, 00:13:43.669 "enable_quickack": false, 00:13:43.669 "enable_placement_id": 0, 00:13:43.669 "enable_zerocopy_send_server": true, 00:13:43.669 "enable_zerocopy_send_client": false, 00:13:43.669 "zerocopy_threshold": 0, 00:13:43.669 "tls_version": 0, 00:13:43.669 "enable_ktls": false 00:13:43.669 } 00:13:43.669 }, 00:13:43.669 { 00:13:43.669 "method": "sock_impl_set_options", 00:13:43.669 "params": { 00:13:43.669 "impl_name": "posix", 00:13:43.669 "recv_buf_size": 2097152, 00:13:43.669 "send_buf_size": 2097152, 00:13:43.669 "enable_recv_pipe": true, 00:13:43.669 "enable_quickack": false, 00:13:43.669 "enable_placement_id": 0, 00:13:43.669 "enable_zerocopy_send_server": true, 00:13:43.669 "enable_zerocopy_send_client": false, 00:13:43.669 "zerocopy_threshold": 0, 00:13:43.669 "tls_version": 0, 00:13:43.669 "enable_ktls": false 00:13:43.669 } 00:13:43.669 } 00:13:43.669 ] 00:13:43.669 }, 00:13:43.669 { 00:13:43.669 "subsystem": "vmd", 00:13:43.669 "config": [] 00:13:43.669 }, 00:13:43.669 { 00:13:43.669 "subsystem": "accel", 00:13:43.669 "config": [ 00:13:43.669 { 00:13:43.669 "method": "accel_set_options", 00:13:43.669 "params": { 00:13:43.669 "small_cache_size": 128, 00:13:43.669 "large_cache_size": 16, 00:13:43.669 "task_count": 2048, 00:13:43.669 "sequence_count": 2048, 00:13:43.669 "buf_count": 2048 00:13:43.669 } 00:13:43.669 } 00:13:43.669 ] 00:13:43.669 }, 00:13:43.669 { 00:13:43.669 "subsystem": "bdev", 00:13:43.669 "config": [ 00:13:43.669 { 00:13:43.669 "method": "bdev_set_options", 00:13:43.669 "params": { 00:13:43.669 "bdev_io_pool_size": 65535, 00:13:43.669 "bdev_io_cache_size": 256, 00:13:43.669 "bdev_auto_examine": true, 00:13:43.669 "iobuf_small_cache_size": 128, 00:13:43.669 "iobuf_large_cache_size": 16 00:13:43.669 } 00:13:43.669 }, 00:13:43.669 { 00:13:43.669 "method": "bdev_raid_set_options", 00:13:43.669 "params": { 00:13:43.669 "process_window_size_kb": 1024, 00:13:43.669 "process_max_bandwidth_mb_sec": 0 00:13:43.669 } 00:13:43.669 }, 00:13:43.669 { 00:13:43.669 "method": "bdev_iscsi_set_options", 00:13:43.670 "params": { 00:13:43.670 "timeout_sec": 30 00:13:43.670 } 00:13:43.670 }, 00:13:43.670 { 00:13:43.670 "method": "bdev_nvme_set_options", 00:13:43.670 "params": { 00:13:43.670 "action_on_timeout": "none", 00:13:43.670 "timeout_us": 0, 00:13:43.670 "timeout_admin_us": 0, 00:13:43.670 "keep_alive_timeout_ms": 10000, 00:13:43.670 "arbitration_burst": 0, 00:13:43.670 "low_priority_weight": 0, 00:13:43.670 "medium_priority_weight": 0, 00:13:43.670 "high_priority_weight": 0, 00:13:43.670 "nvme_adminq_poll_period_us": 10000, 00:13:43.670 "nvme_ioq_poll_period_us": 0, 00:13:43.670 "io_queue_requests": 0, 00:13:43.670 "delay_cmd_submit": true, 00:13:43.670 "transport_retry_count": 4, 00:13:43.670 "bdev_retry_count": 3, 00:13:43.670 
"transport_ack_timeout": 0, 00:13:43.670 "ctrlr_loss_timeout_sec": 0, 00:13:43.670 "reconnect_delay_sec": 0, 00:13:43.670 "fast_io_fail_timeout_sec": 0, 00:13:43.670 "disable_auto_failback": false, 00:13:43.670 "generate_uuids": false, 00:13:43.670 "transport_tos": 0, 00:13:43.670 "nvme_error_stat": false, 00:13:43.670 "rdma_srq_size": 0, 00:13:43.670 "io_path_stat": false, 00:13:43.670 "allow_accel_sequence": false, 00:13:43.670 "rdma_max_cq_size": 0, 00:13:43.670 "rdma_cm_event_timeout_ms": 0, 00:13:43.670 "dhchap_digests": [ 00:13:43.670 "sha256", 00:13:43.670 "sha384", 00:13:43.670 "sha512" 00:13:43.670 ], 00:13:43.670 "dhchap_dhgroups": [ 00:13:43.670 "null", 00:13:43.670 "ffdhe2048", 00:13:43.670 "ffdhe3072", 00:13:43.670 "ffdhe4096", 00:13:43.670 "ffdhe6144", 00:13:43.670 "ffdhe8192" 00:13:43.670 ] 00:13:43.670 } 00:13:43.670 }, 00:13:43.670 { 00:13:43.670 "method": "bdev_nvme_set_hotplug", 00:13:43.670 "params": { 00:13:43.670 "period_us": 100000, 00:13:43.670 "enable": false 00:13:43.670 } 00:13:43.670 }, 00:13:43.670 { 00:13:43.670 "method": "bdev_malloc_create", 00:13:43.670 "params": { 00:13:43.670 "name": "malloc0", 00:13:43.670 "num_blocks": 8192, 00:13:43.670 "block_size": 4096, 00:13:43.670 "physical_block_size": 4096, 00:13:43.670 "uuid": "5e7f68c6-dc56-40cf-b961-b298d9460749", 00:13:43.670 "optimal_io_boundary": 0, 00:13:43.670 "md_size": 0, 00:13:43.670 "dif_type": 0, 00:13:43.670 "dif_is_head_of_md": false, 00:13:43.670 "dif_pi_format": 0 00:13:43.670 } 00:13:43.670 }, 00:13:43.670 { 00:13:43.670 "method": "bdev_wait_for_examine" 00:13:43.670 } 00:13:43.670 ] 00:13:43.670 }, 00:13:43.670 { 00:13:43.670 "subsystem": "scsi", 00:13:43.670 "config": null 00:13:43.670 }, 00:13:43.670 { 00:13:43.670 "subsystem": "scheduler", 00:13:43.670 "config": [ 00:13:43.670 { 00:13:43.670 "method": "framework_set_scheduler", 00:13:43.670 "params": { 00:13:43.670 "name": "static" 00:13:43.670 } 00:13:43.670 } 00:13:43.670 ] 00:13:43.670 }, 00:13:43.670 { 00:13:43.670 "subsystem": "vhost_scsi", 00:13:43.670 "config": [] 00:13:43.670 }, 00:13:43.670 { 00:13:43.670 "subsystem": "vhost_blk", 00:13:43.670 "config": [] 00:13:43.670 }, 00:13:43.670 { 00:13:43.670 "subsystem": "ublk", 00:13:43.670 "config": [ 00:13:43.670 { 00:13:43.670 "method": "ublk_create_target", 00:13:43.670 "params": { 00:13:43.670 "cpumask": "1" 00:13:43.670 } 00:13:43.670 }, 00:13:43.670 { 00:13:43.670 "method": "ublk_start_disk", 00:13:43.670 "params": { 00:13:43.670 "bdev_name": "malloc0", 00:13:43.670 "ublk_id": 0, 00:13:43.670 "num_queues": 1, 00:13:43.670 "queue_depth": 128 00:13:43.670 } 00:13:43.670 } 00:13:43.670 ] 00:13:43.670 }, 00:13:43.670 { 00:13:43.670 "subsystem": "nbd", 00:13:43.670 "config": [] 00:13:43.670 }, 00:13:43.670 { 00:13:43.670 "subsystem": "nvmf", 00:13:43.670 "config": [ 00:13:43.670 { 00:13:43.670 "method": "nvmf_set_config", 00:13:43.670 "params": { 00:13:43.670 "discovery_filter": "match_any", 00:13:43.670 "admin_cmd_passthru": { 00:13:43.670 "identify_ctrlr": false 00:13:43.670 }, 00:13:43.670 "dhchap_digests": [ 00:13:43.670 "sha256", 00:13:43.670 "sha384", 00:13:43.670 "sha512" 00:13:43.670 ], 00:13:43.670 "dhchap_dhgroups": [ 00:13:43.670 "null", 00:13:43.670 "ffdhe2048", 00:13:43.670 "ffdhe3072", 00:13:43.670 "ffdhe4096", 00:13:43.670 "ffdhe6144", 00:13:43.670 "ffdhe8192" 00:13:43.670 ] 00:13:43.670 } 00:13:43.670 }, 00:13:43.670 { 00:13:43.670 "method": "nvmf_set_max_subsystems", 00:13:43.670 "params": { 00:13:43.670 "max_subsystems": 1024 00:13:43.670 } 00:13:43.670 }, 00:13:43.670 
{ 00:13:43.670 "method": "nvmf_set_crdt", 00:13:43.670 "params": { 00:13:43.670 "crdt1": 0, 00:13:43.670 "crdt2": 0, 00:13:43.670 "crdt3": 0 00:13:43.670 } 00:13:43.670 } 00:13:43.670 ] 00:13:43.670 }, 00:13:43.670 { 00:13:43.670 "subsystem": "iscsi", 00:13:43.670 "config": [ 00:13:43.670 { 00:13:43.670 "method": "iscsi_set_options", 00:13:43.670 "params": { 00:13:43.670 "node_base": "iqn.2016-06.io.spdk", 00:13:43.670 "max_sessions": 128, 00:13:43.670 "max_connections_per_session": 2, 00:13:43.670 "max_queue_depth": 64, 00:13:43.670 "default_time2wait": 2, 00:13:43.670 "default_time2retain": 20, 00:13:43.670 "first_burst_length": 8192, 00:13:43.670 "immediate_data": true, 00:13:43.670 "allow_duplicated_isid": false, 00:13:43.670 "error_recovery_level": 0, 00:13:43.670 "nop_timeout": 60, 00:13:43.670 "nop_in_interval": 30, 00:13:43.670 "disable_chap": false, 00:13:43.670 "require_chap": false, 00:13:43.670 "mutual_chap": false, 00:13:43.670 "chap_group": 0, 00:13:43.670 "max_large_datain_per_connection": 64, 00:13:43.670 "max_r2t_per_connection": 4, 00:13:43.670 "pdu_pool_size": 36864, 00:13:43.670 "immediate_data_pool_size": 16384, 00:13:43.670 "data_out_pool_size": 2048 00:13:43.670 } 00:13:43.670 } 00:13:43.670 ] 00:13:43.670 } 00:13:43.670 ] 00:13:43.670 }' 00:13:43.670 05:01:21 ublk.test_save_ublk_config -- ublk/ublk.sh@116 -- # killprocess 82430 00:13:43.670 05:01:21 ublk.test_save_ublk_config -- common/autotest_common.sh@950 -- # '[' -z 82430 ']' 00:13:43.670 05:01:21 ublk.test_save_ublk_config -- common/autotest_common.sh@954 -- # kill -0 82430 00:13:43.670 05:01:21 ublk.test_save_ublk_config -- common/autotest_common.sh@955 -- # uname 00:13:43.670 05:01:21 ublk.test_save_ublk_config -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:43.670 05:01:21 ublk.test_save_ublk_config -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 82430 00:13:43.931 05:01:21 ublk.test_save_ublk_config -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:43.931 05:01:21 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:43.931 05:01:21 ublk.test_save_ublk_config -- common/autotest_common.sh@968 -- # echo 'killing process with pid 82430' 00:13:43.931 killing process with pid 82430 00:13:43.931 05:01:21 ublk.test_save_ublk_config -- common/autotest_common.sh@969 -- # kill 82430 00:13:43.931 05:01:21 ublk.test_save_ublk_config -- common/autotest_common.sh@974 -- # wait 82430 00:13:44.191 [2024-12-06 05:01:22.328571] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:13:44.191 [2024-12-06 05:01:22.367728] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:13:44.191 [2024-12-06 05:01:22.367888] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:13:44.191 [2024-12-06 05:01:22.375711] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:13:44.191 [2024-12-06 05:01:22.375797] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:13:44.191 [2024-12-06 05:01:22.375807] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:13:44.191 [2024-12-06 05:01:22.375848] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:13:44.191 [2024-12-06 05:01:22.376010] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:13:45.133 05:01:23 ublk.test_save_ublk_config -- ublk/ublk.sh@119 -- # tgtpid=82475 00:13:45.133 05:01:23 ublk.test_save_ublk_config -- ublk/ublk.sh@121 -- # waitforlisten 82475 
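The round trip test_save_ublk_config is exercising: the first spdk_tgt (pid 82430) built a ublk target with malloc0 behind /dev/ublkb0, rpc_cmd save_config dumped the full subsystem tree (the JSON above), the target was killed, and a second spdk_tgt (pid 82475) is started below from that same JSON, after which the test asserts /dev/ublkb0 exists again. A hand-run sketch of the same flow, assuming repo-relative paths and SPDK's stock rpc.py (32 MiB of 4096-byte blocks matches the 8192-block malloc0 shown in the saved config):

    ./build/bin/spdk_tgt -L ublk & tgtpid=$!
    sleep 1                          # the test polls the RPC socket via waitforlisten instead
    ./scripts/rpc.py ublk_create_target
    ./scripts/rpc.py bdev_malloc_create -b malloc0 32 4096
    ./scripts/rpc.py ublk_start_disk malloc0 0 -q 1 -d 128
    ./scripts/rpc.py save_config > /tmp/ublk_config.json
    kill "$tgtpid"; wait "$tgtpid" 2>/dev/null
    ./build/bin/spdk_tgt -L ublk -c /tmp/ublk_config.json & tgtpid=$!
    sleep 1
    ./scripts/rpc.py ublk_get_disks | jq -r '.[0].ublk_device'   # expect /dev/ublkb0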
00:13:45.133 05:01:23 ublk.test_save_ublk_config -- common/autotest_common.sh@831 -- # '[' -z 82475 ']' 00:13:45.133 05:01:23 ublk.test_save_ublk_config -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:45.133 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:45.133 05:01:23 ublk.test_save_ublk_config -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:45.133 05:01:23 ublk.test_save_ublk_config -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:45.133 05:01:23 ublk.test_save_ublk_config -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:45.133 05:01:23 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:13:45.133 05:01:23 ublk.test_save_ublk_config -- ublk/ublk.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk -c /dev/fd/63 00:13:45.133 05:01:23 ublk.test_save_ublk_config -- ublk/ublk.sh@118 -- # echo '{ 00:13:45.133 "subsystems": [ 00:13:45.133 { 00:13:45.133 "subsystem": "fsdev", 00:13:45.133 "config": [ 00:13:45.133 { 00:13:45.133 "method": "fsdev_set_opts", 00:13:45.133 "params": { 00:13:45.133 "fsdev_io_pool_size": 65535, 00:13:45.133 "fsdev_io_cache_size": 256 00:13:45.133 } 00:13:45.133 } 00:13:45.133 ] 00:13:45.133 }, 00:13:45.133 { 00:13:45.133 "subsystem": "keyring", 00:13:45.133 "config": [] 00:13:45.133 }, 00:13:45.133 { 00:13:45.133 "subsystem": "iobuf", 00:13:45.133 "config": [ 00:13:45.133 { 00:13:45.133 "method": "iobuf_set_options", 00:13:45.133 "params": { 00:13:45.133 "small_pool_count": 8192, 00:13:45.133 "large_pool_count": 1024, 00:13:45.133 "small_bufsize": 8192, 00:13:45.133 "large_bufsize": 135168 00:13:45.133 } 00:13:45.133 } 00:13:45.133 ] 00:13:45.133 }, 00:13:45.133 { 00:13:45.133 "subsystem": "sock", 00:13:45.133 "config": [ 00:13:45.133 { 00:13:45.133 "method": "sock_set_default_impl", 00:13:45.133 "params": { 00:13:45.133 "impl_name": "posix" 00:13:45.133 } 00:13:45.133 }, 00:13:45.133 { 00:13:45.133 "method": "sock_impl_set_options", 00:13:45.133 "params": { 00:13:45.133 "impl_name": "ssl", 00:13:45.133 "recv_buf_size": 4096, 00:13:45.133 "send_buf_size": 4096, 00:13:45.133 "enable_recv_pipe": true, 00:13:45.133 "enable_quickack": false, 00:13:45.133 "enable_placement_id": 0, 00:13:45.133 "enable_zerocopy_send_server": true, 00:13:45.133 "enable_zerocopy_send_client": false, 00:13:45.133 "zerocopy_threshold": 0, 00:13:45.133 "tls_version": 0, 00:13:45.133 "enable_ktls": false 00:13:45.133 } 00:13:45.133 }, 00:13:45.133 { 00:13:45.133 "method": "sock_impl_set_options", 00:13:45.133 "params": { 00:13:45.133 "impl_name": "posix", 00:13:45.133 "recv_buf_size": 2097152, 00:13:45.133 "send_buf_size": 2097152, 00:13:45.133 "enable_recv_pipe": true, 00:13:45.133 "enable_quickack": false, 00:13:45.133 "enable_placement_id": 0, 00:13:45.133 "enable_zerocopy_send_server": true, 00:13:45.133 "enable_zerocopy_send_client": false, 00:13:45.133 "zerocopy_threshold": 0, 00:13:45.133 "tls_version": 0, 00:13:45.133 "enable_ktls": false 00:13:45.133 } 00:13:45.133 } 00:13:45.133 ] 00:13:45.133 }, 00:13:45.133 { 00:13:45.133 "subsystem": "vmd", 00:13:45.133 "config": [] 00:13:45.133 }, 00:13:45.133 { 00:13:45.133 "subsystem": "accel", 00:13:45.133 "config": [ 00:13:45.133 { 00:13:45.133 "method": "accel_set_options", 00:13:45.133 "params": { 00:13:45.133 "small_cache_size": 128, 00:13:45.133 "large_cache_size": 16, 00:13:45.133 "task_count": 2048, 00:13:45.133 
"sequence_count": 2048, 00:13:45.133 "buf_count": 2048 00:13:45.133 } 00:13:45.133 } 00:13:45.133 ] 00:13:45.133 }, 00:13:45.133 { 00:13:45.133 "subsystem": "bdev", 00:13:45.133 "config": [ 00:13:45.133 { 00:13:45.133 "method": "bdev_set_options", 00:13:45.133 "params": { 00:13:45.133 "bdev_io_pool_size": 65535, 00:13:45.133 "bdev_io_cache_size": 256, 00:13:45.133 "bdev_auto_examine": true, 00:13:45.133 "iobuf_small_cache_size": 128, 00:13:45.133 "iobuf_large_cache_size": 16 00:13:45.133 } 00:13:45.133 }, 00:13:45.133 { 00:13:45.133 "method": "bdev_raid_set_options", 00:13:45.133 "params": { 00:13:45.133 "process_window_size_kb": 1024, 00:13:45.133 "process_max_bandwidth_mb_sec": 0 00:13:45.133 } 00:13:45.133 }, 00:13:45.133 { 00:13:45.133 "method": "bdev_iscsi_set_options", 00:13:45.133 "params": { 00:13:45.133 "timeout_sec": 30 00:13:45.133 } 00:13:45.133 }, 00:13:45.133 { 00:13:45.133 "method": "bdev_nvme_set_options", 00:13:45.133 "params": { 00:13:45.133 "action_on_timeout": "none", 00:13:45.133 "timeout_us": 0, 00:13:45.133 "timeout_admin_us": 0, 00:13:45.133 "keep_alive_timeout_ms": 10000, 00:13:45.133 "arbitration_burst": 0, 00:13:45.133 "low_priority_weight": 0, 00:13:45.133 "medium_priority_weight": 0, 00:13:45.133 "high_priority_weight": 0, 00:13:45.133 "nvme_adminq_poll_period_us": 10000, 00:13:45.133 "nvme_ioq_poll_period_us": 0, 00:13:45.133 "io_queue_requests": 0, 00:13:45.133 "delay_cmd_submit": true, 00:13:45.133 "transport_retry_count": 4, 00:13:45.133 "bdev_retry_count": 3, 00:13:45.133 "transport_ack_timeout": 0, 00:13:45.133 "ctrlr_loss_timeout_sec": 0, 00:13:45.133 "reconnect_delay_sec": 0, 00:13:45.133 "fast_io_fail_timeout_sec": 0, 00:13:45.133 "disable_auto_failback": false, 00:13:45.133 "generate_uuids": false, 00:13:45.133 "transport_tos": 0, 00:13:45.133 "nvme_error_stat": false, 00:13:45.133 "rdma_srq_size": 0, 00:13:45.133 "io_path_stat": false, 00:13:45.133 "allow_accel_sequence": false, 00:13:45.133 "rdma_max_cq_size": 0, 00:13:45.133 "rdma_cm_event_timeout_ms": 0, 00:13:45.133 "dhchap_digests": [ 00:13:45.133 "sha256", 00:13:45.133 "sha384", 00:13:45.133 "sha512" 00:13:45.133 ], 00:13:45.133 "dhchap_dhgroups": [ 00:13:45.133 "null", 00:13:45.133 "ffdhe2048", 00:13:45.134 "ffdhe3072", 00:13:45.134 "ffdhe4096", 00:13:45.134 "ffdhe6144", 00:13:45.134 "ffdhe8192" 00:13:45.134 ] 00:13:45.134 } 00:13:45.134 }, 00:13:45.134 { 00:13:45.134 "method": "bdev_nvme_set_hotplug", 00:13:45.134 "params": { 00:13:45.134 "period_us": 100000, 00:13:45.134 "enable": false 00:13:45.134 } 00:13:45.134 }, 00:13:45.134 { 00:13:45.134 "method": "bdev_malloc_create", 00:13:45.134 "params": { 00:13:45.134 "name": "malloc0", 00:13:45.134 "num_blocks": 8192, 00:13:45.134 "block_size": 4096, 00:13:45.134 "physical_block_size": 4096, 00:13:45.134 "uuid": "5e7f68c6-dc56-40cf-b961-b298d9460749", 00:13:45.134 "optimal_io_boundary": 0, 00:13:45.134 "md_size": 0, 00:13:45.134 "dif_type": 0, 00:13:45.134 "dif_is_head_of_md": false, 00:13:45.134 "dif_pi_format": 0 00:13:45.134 } 00:13:45.134 }, 00:13:45.134 { 00:13:45.134 "method": "bdev_wait_for_examine" 00:13:45.134 } 00:13:45.134 ] 00:13:45.134 }, 00:13:45.134 { 00:13:45.134 "subsystem": "scsi", 00:13:45.134 "config": null 00:13:45.134 }, 00:13:45.134 { 00:13:45.134 "subsystem": "scheduler", 00:13:45.134 "config": [ 00:13:45.134 { 00:13:45.134 "method": "framework_set_scheduler", 00:13:45.134 "params": { 00:13:45.134 "name": "static" 00:13:45.134 } 00:13:45.134 } 00:13:45.134 ] 00:13:45.134 }, 00:13:45.134 { 00:13:45.134 "subsystem": 
"vhost_scsi", 00:13:45.134 "config": [] 00:13:45.134 }, 00:13:45.134 { 00:13:45.134 "subsystem": "vhost_blk", 00:13:45.134 "config": [] 00:13:45.134 }, 00:13:45.134 { 00:13:45.134 "subsystem": "ublk", 00:13:45.134 "config": [ 00:13:45.134 { 00:13:45.134 "method": "ublk_create_target", 00:13:45.134 "params": { 00:13:45.134 "cpumask": "1" 00:13:45.134 } 00:13:45.134 }, 00:13:45.134 { 00:13:45.134 "method": "ublk_start_disk", 00:13:45.134 "params": { 00:13:45.134 "bdev_name": "malloc0", 00:13:45.134 "ublk_id": 0, 00:13:45.134 "num_queues": 1, 00:13:45.134 "queue_depth": 128 00:13:45.134 } 00:13:45.134 } 00:13:45.134 ] 00:13:45.134 }, 00:13:45.134 { 00:13:45.134 "subsystem": "nbd", 00:13:45.134 "config": [] 00:13:45.134 }, 00:13:45.134 { 00:13:45.134 "subsystem": "nvmf", 00:13:45.134 "config": [ 00:13:45.134 { 00:13:45.134 "method": "nvmf_set_config", 00:13:45.134 "params": { 00:13:45.134 "discovery_filter": "match_any", 00:13:45.134 "admin_cmd_passthru": { 00:13:45.134 "identify_ctrlr": false 00:13:45.134 }, 00:13:45.134 "dhchap_digests": [ 00:13:45.134 "sha256", 00:13:45.134 "sha384", 00:13:45.134 "sha512" 00:13:45.134 ], 00:13:45.134 "dhchap_dhgroups": [ 00:13:45.134 "null", 00:13:45.134 "ffdhe2048", 00:13:45.134 "ffdhe3072", 00:13:45.134 "ffdhe4096", 00:13:45.134 "ffdhe6144", 00:13:45.134 "ffdhe8192" 00:13:45.134 ] 00:13:45.134 } 00:13:45.134 }, 00:13:45.134 { 00:13:45.134 "method": "nvmf_set_max_subsystems", 00:13:45.134 "params": { 00:13:45.134 "max_subsystems": 1024 00:13:45.134 } 00:13:45.134 }, 00:13:45.134 { 00:13:45.134 "method": "nvmf_set_crdt", 00:13:45.134 "params": { 00:13:45.134 "crdt1": 0, 00:13:45.134 "crdt2": 0, 00:13:45.134 "crdt3": 0 00:13:45.134 } 00:13:45.134 } 00:13:45.134 ] 00:13:45.134 }, 00:13:45.134 { 00:13:45.134 "subsystem": "iscsi", 00:13:45.134 "config": [ 00:13:45.134 { 00:13:45.134 "method": "iscsi_set_options", 00:13:45.134 "params": { 00:13:45.134 "node_base": "iqn.2016-06.io.spdk", 00:13:45.134 "max_sessions": 128, 00:13:45.134 "max_connections_per_session": 2, 00:13:45.134 "max_queue_depth": 64, 00:13:45.134 "default_time2wait": 2, 00:13:45.134 "default_time2retain": 20, 00:13:45.134 "first_burst_length": 8192, 00:13:45.134 "immediate_data": true, 00:13:45.134 "allow_duplicated_isid": false, 00:13:45.134 "error_recovery_level": 0, 00:13:45.134 "nop_timeout": 60, 00:13:45.134 "nop_in_interval": 30, 00:13:45.134 "disable_chap": false, 00:13:45.134 "require_chap": false, 00:13:45.134 "mutual_chap": false, 00:13:45.134 "chap_group": 0, 00:13:45.134 "max_large_datain_per_connection": 64, 00:13:45.134 "max_r2t_per_connection": 4, 00:13:45.134 "pdu_pool_size": 36864, 00:13:45.134 "immediate_data_pool_size": 16384, 00:13:45.134 "data_out_pool_size": 2048 00:13:45.134 } 00:13:45.134 } 00:13:45.134 ] 00:13:45.134 } 00:13:45.134 ] 00:13:45.134 }' 00:13:45.134 [2024-12-06 05:01:23.140000] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:13:45.134 [2024-12-06 05:01:23.140140] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82475 ] 00:13:45.134 [2024-12-06 05:01:23.278971] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:45.134 [2024-12-06 05:01:23.359450] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:13:45.704 [2024-12-06 05:01:23.827697] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:13:45.704 [2024-12-06 05:01:23.828138] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:13:45.705 [2024-12-06 05:01:23.835836] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:13:45.705 [2024-12-06 05:01:23.835925] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:13:45.705 [2024-12-06 05:01:23.835934] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:13:45.705 [2024-12-06 05:01:23.835943] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:13:45.705 [2024-12-06 05:01:23.844835] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:13:45.705 [2024-12-06 05:01:23.844879] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:13:45.705 [2024-12-06 05:01:23.851710] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:13:45.705 [2024-12-06 05:01:23.851843] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:13:45.705 [2024-12-06 05:01:23.868699] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:13:45.964 05:01:23 ublk.test_save_ublk_config -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:45.965 05:01:23 ublk.test_save_ublk_config -- common/autotest_common.sh@864 -- # return 0 00:13:45.965 05:01:23 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # rpc_cmd ublk_get_disks 00:13:45.965 05:01:23 ublk.test_save_ublk_config -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:45.965 05:01:23 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:13:45.965 05:01:23 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # jq -r '.[0].ublk_device' 00:13:45.965 05:01:24 ublk.test_save_ublk_config -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:45.965 05:01:24 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # [[ /dev/ublkb0 == \/\d\e\v\/\u\b\l\k\b\0 ]] 00:13:45.965 05:01:24 ublk.test_save_ublk_config -- ublk/ublk.sh@123 -- # [[ -b /dev/ublkb0 ]] 00:13:45.965 05:01:24 ublk.test_save_ublk_config -- ublk/ublk.sh@125 -- # killprocess 82475 00:13:45.965 05:01:24 ublk.test_save_ublk_config -- common/autotest_common.sh@950 -- # '[' -z 82475 ']' 00:13:45.965 05:01:24 ublk.test_save_ublk_config -- common/autotest_common.sh@954 -- # kill -0 82475 00:13:45.965 05:01:24 ublk.test_save_ublk_config -- common/autotest_common.sh@955 -- # uname 00:13:45.965 05:01:24 ublk.test_save_ublk_config -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:45.965 05:01:24 ublk.test_save_ublk_config -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 82475 00:13:45.965 05:01:24 ublk.test_save_ublk_config -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:45.965 05:01:24 ublk.test_save_ublk_config -- 
common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:45.965 killing process with pid 82475 00:13:45.965 05:01:24 ublk.test_save_ublk_config -- common/autotest_common.sh@968 -- # echo 'killing process with pid 82475' 00:13:45.965 05:01:24 ublk.test_save_ublk_config -- common/autotest_common.sh@969 -- # kill 82475 00:13:45.965 05:01:24 ublk.test_save_ublk_config -- common/autotest_common.sh@974 -- # wait 82475 00:13:46.540 [2024-12-06 05:01:24.461445] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:13:46.540 [2024-12-06 05:01:24.506713] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:13:46.540 [2024-12-06 05:01:24.506884] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:13:46.540 [2024-12-06 05:01:24.515700] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:13:46.540 [2024-12-06 05:01:24.515788] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:13:46.540 [2024-12-06 05:01:24.515799] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:13:46.540 [2024-12-06 05:01:24.515849] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:13:46.540 [2024-12-06 05:01:24.516013] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:13:47.162 05:01:25 ublk.test_save_ublk_config -- ublk/ublk.sh@126 -- # trap - EXIT 00:13:47.162 00:13:47.162 real 0m4.610s 00:13:47.162 user 0m2.869s 00:13:47.162 sys 0m2.402s 00:13:47.162 05:01:25 ublk.test_save_ublk_config -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:47.162 ************************************ 00:13:47.162 END TEST test_save_ublk_config 00:13:47.162 ************************************ 00:13:47.162 05:01:25 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:13:47.162 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:47.162 05:01:25 ublk -- ublk/ublk.sh@139 -- # spdk_pid=82531 00:13:47.162 05:01:25 ublk -- ublk/ublk.sh@140 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:13:47.162 05:01:25 ublk -- ublk/ublk.sh@141 -- # waitforlisten 82531 00:13:47.162 05:01:25 ublk -- common/autotest_common.sh@831 -- # '[' -z 82531 ']' 00:13:47.162 05:01:25 ublk -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:47.162 05:01:25 ublk -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:47.162 05:01:25 ublk -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:47.162 05:01:25 ublk -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:47.162 05:01:25 ublk -- common/autotest_common.sh@10 -- # set +x 00:13:47.162 05:01:25 ublk -- ublk/ublk.sh@138 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:13:47.162 [2024-12-06 05:01:25.339141] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
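test_create_ublk, which runs next against the fresh spdk_tgt (pid 82531, core mask 0x3), walks the full device lifecycle: ublk_create_target, bdev_malloc_create 128 4096 (Malloc0), ublk_start_disk Malloc0 0 -q 4 -d 512, fio over /dev/ublkb0, then ublk_stop_disk. Once the START_DEV control command completes, the kernel driver (ublk_drv, loaded earlier by ublk.sh@133) exposes a real block device; a quick host-side sanity check, not part of the test script itself:

    # The device node should exist and report the 128 MiB FILE_SIZE from ublk.sh.
    test -b /dev/ublkb0 && echo "ublk device present"
    blockdev --getsize64 /dev/ublkb0    # expect 134217728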
00:13:47.162 [2024-12-06 05:01:25.339283] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82531 ] 00:13:47.427 [2024-12-06 05:01:25.475141] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:13:47.427 [2024-12-06 05:01:25.545338] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:13:47.427 [2024-12-06 05:01:25.545434] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:13:48.002 05:01:26 ublk -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:48.002 05:01:26 ublk -- common/autotest_common.sh@864 -- # return 0 00:13:48.002 05:01:26 ublk -- ublk/ublk.sh@143 -- # run_test test_create_ublk test_create_ublk 00:13:48.002 05:01:26 ublk -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:13:48.002 05:01:26 ublk -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:48.002 05:01:26 ublk -- common/autotest_common.sh@10 -- # set +x 00:13:48.002 ************************************ 00:13:48.002 START TEST test_create_ublk 00:13:48.002 ************************************ 00:13:48.002 05:01:26 ublk.test_create_ublk -- common/autotest_common.sh@1125 -- # test_create_ublk 00:13:48.002 05:01:26 ublk.test_create_ublk -- ublk/ublk.sh@33 -- # rpc_cmd ublk_create_target 00:13:48.002 05:01:26 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:48.002 05:01:26 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:48.002 [2024-12-06 05:01:26.216697] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:13:48.002 [2024-12-06 05:01:26.218945] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:13:48.002 05:01:26 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:48.002 05:01:26 ublk.test_create_ublk -- ublk/ublk.sh@33 -- # ublk_target= 00:13:48.002 05:01:26 ublk.test_create_ublk -- ublk/ublk.sh@35 -- # rpc_cmd bdev_malloc_create 128 4096 00:13:48.002 05:01:26 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:48.002 05:01:26 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:48.264 05:01:26 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:48.264 05:01:26 ublk.test_create_ublk -- ublk/ublk.sh@35 -- # malloc_name=Malloc0 00:13:48.264 05:01:26 ublk.test_create_ublk -- ublk/ublk.sh@37 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:13:48.264 05:01:26 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:48.264 05:01:26 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:48.264 [2024-12-06 05:01:26.350888] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 num_queues 4 queue_depth 512 00:13:48.264 [2024-12-06 05:01:26.351408] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:13:48.264 [2024-12-06 05:01:26.351439] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:13:48.264 [2024-12-06 05:01:26.351461] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:13:48.264 [2024-12-06 05:01:26.356809] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:13:48.264 [2024-12-06 05:01:26.356854] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:13:48.264 
[2024-12-06 05:01:26.366715] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:13:48.264 [2024-12-06 05:01:26.367506] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:13:48.264 [2024-12-06 05:01:26.390723] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:13:48.264 05:01:26 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:48.264 05:01:26 ublk.test_create_ublk -- ublk/ublk.sh@37 -- # ublk_id=0 00:13:48.264 05:01:26 ublk.test_create_ublk -- ublk/ublk.sh@38 -- # ublk_path=/dev/ublkb0 00:13:48.264 05:01:26 ublk.test_create_ublk -- ublk/ublk.sh@39 -- # rpc_cmd ublk_get_disks -n 0 00:13:48.264 05:01:26 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:48.264 05:01:26 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:48.264 05:01:26 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:48.264 05:01:26 ublk.test_create_ublk -- ublk/ublk.sh@39 -- # ublk_dev='[ 00:13:48.264 { 00:13:48.264 "ublk_device": "/dev/ublkb0", 00:13:48.264 "id": 0, 00:13:48.264 "queue_depth": 512, 00:13:48.264 "num_queues": 4, 00:13:48.264 "bdev_name": "Malloc0" 00:13:48.264 } 00:13:48.264 ]' 00:13:48.264 05:01:26 ublk.test_create_ublk -- ublk/ublk.sh@41 -- # jq -r '.[0].ublk_device' 00:13:48.264 05:01:26 ublk.test_create_ublk -- ublk/ublk.sh@41 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:13:48.264 05:01:26 ublk.test_create_ublk -- ublk/ublk.sh@42 -- # jq -r '.[0].id' 00:13:48.264 05:01:26 ublk.test_create_ublk -- ublk/ublk.sh@42 -- # [[ 0 = \0 ]] 00:13:48.264 05:01:26 ublk.test_create_ublk -- ublk/ublk.sh@43 -- # jq -r '.[0].queue_depth' 00:13:48.526 05:01:26 ublk.test_create_ublk -- ublk/ublk.sh@43 -- # [[ 512 = \5\1\2 ]] 00:13:48.526 05:01:26 ublk.test_create_ublk -- ublk/ublk.sh@44 -- # jq -r '.[0].num_queues' 00:13:48.526 05:01:26 ublk.test_create_ublk -- ublk/ublk.sh@44 -- # [[ 4 = \4 ]] 00:13:48.526 05:01:26 ublk.test_create_ublk -- ublk/ublk.sh@45 -- # jq -r '.[0].bdev_name' 00:13:48.526 05:01:26 ublk.test_create_ublk -- ublk/ublk.sh@45 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:13:48.526 05:01:26 ublk.test_create_ublk -- ublk/ublk.sh@48 -- # run_fio_test /dev/ublkb0 0 134217728 write 0xcc '--time_based --runtime=10' 00:13:48.526 05:01:26 ublk.test_create_ublk -- lvol/common.sh@40 -- # local file=/dev/ublkb0 00:13:48.526 05:01:26 ublk.test_create_ublk -- lvol/common.sh@41 -- # local offset=0 00:13:48.526 05:01:26 ublk.test_create_ublk -- lvol/common.sh@42 -- # local size=134217728 00:13:48.526 05:01:26 ublk.test_create_ublk -- lvol/common.sh@43 -- # local rw=write 00:13:48.526 05:01:26 ublk.test_create_ublk -- lvol/common.sh@44 -- # local pattern=0xcc 00:13:48.526 05:01:26 ublk.test_create_ublk -- lvol/common.sh@45 -- # local 'extra_params=--time_based --runtime=10' 00:13:48.526 05:01:26 ublk.test_create_ublk -- lvol/common.sh@47 -- # local pattern_template= fio_template= 00:13:48.526 05:01:26 ublk.test_create_ublk -- lvol/common.sh@48 -- # [[ -n 0xcc ]] 00:13:48.526 05:01:26 ublk.test_create_ublk -- lvol/common.sh@49 -- # pattern_template='--do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 00:13:48.526 05:01:26 ublk.test_create_ublk -- lvol/common.sh@52 -- # fio_template='fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 
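Note: the fio command line assembled by run_fio_test above can be replayed standalone against the ublk device. A minimal sketch, using only values visible in this run (/dev/ublkb0, the 128 MiB size, the 0xcc pattern); this is an annotation, not part of the captured output:

# Write the full 128 MiB device with pattern 0xcc and ask fio to verify it.
# With --time_based --runtime=10 the write phase consumes the whole budget,
# so the verify read phase never starts (fio warns about exactly this below).
fio --name=fio_test --filename=/dev/ublkb0 \
    --offset=0 --size=134217728 \
    --rw=write --direct=1 \
    --time_based --runtime=10 \
    --do_verify=1 --verify=pattern --verify_pattern=0xcc \
    --verify_state_save=0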
00:13:48.526 05:01:26 ublk.test_create_ublk -- lvol/common.sh@53 -- # fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0 00:13:48.526 fio: verification read phase will never start because write phase uses all of runtime 00:13:48.526 fio_test: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=psync, iodepth=1 00:13:48.526 fio-3.35 00:13:48.526 Starting 1 process 00:14:00.738 00:14:00.738 fio_test: (groupid=0, jobs=1): err= 0: pid=82570: Fri Dec 6 05:01:36 2024 00:14:00.738 write: IOPS=16.3k, BW=63.6MiB/s (66.7MB/s)(636MiB/10001msec); 0 zone resets 00:14:00.738 clat (usec): min=32, max=4110, avg=60.63, stdev=81.22 00:14:00.738 lat (usec): min=32, max=4111, avg=61.06, stdev=81.24 00:14:00.738 clat percentiles (usec): 00:14:00.738 | 1.00th=[ 47], 5.00th=[ 50], 10.00th=[ 51], 20.00th=[ 52], 00:14:00.738 | 30.00th=[ 53], 40.00th=[ 55], 50.00th=[ 56], 60.00th=[ 58], 00:14:00.738 | 70.00th=[ 60], 80.00th=[ 62], 90.00th=[ 67], 95.00th=[ 73], 00:14:00.738 | 99.00th=[ 106], 99.50th=[ 122], 99.90th=[ 1303], 99.95th=[ 2442], 00:14:00.738 | 99.99th=[ 3458] 00:14:00.738 bw ( KiB/s): min=52112, max=68696, per=99.93%, avg=65120.42, stdev=4111.51, samples=19 00:14:00.738 iops : min=13028, max=17174, avg=16280.11, stdev=1027.88, samples=19 00:14:00.738 lat (usec) : 50=7.75%, 100=90.96%, 250=1.09%, 500=0.06%, 750=0.01% 00:14:00.738 lat (usec) : 1000=0.01% 00:14:00.738 lat (msec) : 2=0.05%, 4=0.06%, 10=0.01% 00:14:00.738 cpu : usr=2.71%, sys=12.98%, ctx=162930, majf=0, minf=795 00:14:00.738 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:00.738 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:00.738 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:00.738 issued rwts: total=0,162929,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:00.738 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:00.738 00:14:00.738 Run status group 0 (all jobs): 00:14:00.738 WRITE: bw=63.6MiB/s (66.7MB/s), 63.6MiB/s-63.6MiB/s (66.7MB/s-66.7MB/s), io=636MiB (667MB), run=10001-10001msec 00:14:00.738 00:14:00.738 Disk stats (read/write): 00:14:00.738 ublkb0: ios=0/161273, merge=0/0, ticks=0/8394, in_queue=8395, util=99.05% 00:14:00.738 05:01:36 ublk.test_create_ublk -- ublk/ublk.sh@51 -- # rpc_cmd ublk_stop_disk 0 00:14:00.738 05:01:36 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:00.738 05:01:36 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:00.738 [2024-12-06 05:01:36.813746] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:14:00.738 [2024-12-06 05:01:36.849283] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:14:00.738 [2024-12-06 05:01:36.850179] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:14:00.738 [2024-12-06 05:01:36.858695] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:14:00.738 [2024-12-06 05:01:36.858951] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:14:00.738 [2024-12-06 05:01:36.858963] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:14:00.738 05:01:36 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:00.738 05:01:36 ublk.test_create_ublk -- ublk/ublk.sh@53 -- # NOT rpc_cmd 
ublk_stop_disk 0 00:14:00.738 05:01:36 ublk.test_create_ublk -- common/autotest_common.sh@650 -- # local es=0 00:14:00.738 05:01:36 ublk.test_create_ublk -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd ublk_stop_disk 0 00:14:00.738 05:01:36 ublk.test_create_ublk -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:14:00.738 05:01:36 ublk.test_create_ublk -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:00.738 05:01:36 ublk.test_create_ublk -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:14:00.738 05:01:36 ublk.test_create_ublk -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:00.738 05:01:36 ublk.test_create_ublk -- common/autotest_common.sh@653 -- # rpc_cmd ublk_stop_disk 0 00:14:00.738 05:01:36 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:00.738 05:01:36 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:00.738 [2024-12-06 05:01:36.874751] ublk.c:1087:ublk_stop_disk: *ERROR*: no ublk dev with ublk_id=0 00:14:00.738 request: 00:14:00.738 { 00:14:00.738 "ublk_id": 0, 00:14:00.738 "method": "ublk_stop_disk", 00:14:00.738 "req_id": 1 00:14:00.738 } 00:14:00.738 Got JSON-RPC error response 00:14:00.738 response: 00:14:00.738 { 00:14:00.738 "code": -19, 00:14:00.738 "message": "No such device" 00:14:00.738 } 00:14:00.738 05:01:36 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:14:00.738 05:01:36 ublk.test_create_ublk -- common/autotest_common.sh@653 -- # es=1 00:14:00.738 05:01:36 ublk.test_create_ublk -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:00.738 05:01:36 ublk.test_create_ublk -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:00.738 05:01:36 ublk.test_create_ublk -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:00.738 05:01:36 ublk.test_create_ublk -- ublk/ublk.sh@54 -- # rpc_cmd ublk_destroy_target 00:14:00.738 05:01:36 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:00.738 05:01:36 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:00.738 [2024-12-06 05:01:36.890761] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:14:00.738 [2024-12-06 05:01:36.892508] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:14:00.738 [2024-12-06 05:01:36.892537] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:14:00.738 05:01:36 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:00.738 05:01:36 ublk.test_create_ublk -- ublk/ublk.sh@56 -- # rpc_cmd bdev_malloc_delete Malloc0 00:14:00.738 05:01:36 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:00.738 05:01:36 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:00.738 05:01:36 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:00.738 05:01:36 ublk.test_create_ublk -- ublk/ublk.sh@57 -- # check_leftover_devices 00:14:00.738 05:01:36 ublk.test_create_ublk -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 00:14:00.738 05:01:36 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:00.738 05:01:36 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:00.738 05:01:36 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:00.738 05:01:36 ublk.test_create_ublk -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:14:00.738 05:01:36 ublk.test_create_ublk -- lvol/common.sh@26 -- # jq length 00:14:00.738 05:01:37 
ublk.test_create_ublk -- lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:14:00.738 05:01:37 ublk.test_create_ublk -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:14:00.738 05:01:37 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:00.738 05:01:37 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:00.738 05:01:37 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:00.738 05:01:37 ublk.test_create_ublk -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:14:00.738 05:01:37 ublk.test_create_ublk -- lvol/common.sh@28 -- # jq length 00:14:00.738 05:01:37 ublk.test_create_ublk -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:14:00.738 00:14:00.738 real 0m10.858s 00:14:00.738 user 0m0.562s 00:14:00.738 sys 0m1.376s 00:14:00.738 05:01:37 ublk.test_create_ublk -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:00.738 05:01:37 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:00.738 ************************************ 00:14:00.738 END TEST test_create_ublk 00:14:00.738 ************************************ 00:14:00.739 05:01:37 ublk -- ublk/ublk.sh@144 -- # run_test test_create_multi_ublk test_create_multi_ublk 00:14:00.739 05:01:37 ublk -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:14:00.739 05:01:37 ublk -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:00.739 05:01:37 ublk -- common/autotest_common.sh@10 -- # set +x 00:14:00.739 ************************************ 00:14:00.739 START TEST test_create_multi_ublk 00:14:00.739 ************************************ 00:14:00.739 05:01:37 ublk.test_create_multi_ublk -- common/autotest_common.sh@1125 -- # test_create_multi_ublk 00:14:00.739 05:01:37 ublk.test_create_multi_ublk -- ublk/ublk.sh@62 -- # rpc_cmd ublk_create_target 00:14:00.739 05:01:37 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:00.739 05:01:37 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:00.739 [2024-12-06 05:01:37.110685] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:14:00.739 [2024-12-06 05:01:37.111807] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:14:00.739 05:01:37 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:00.739 05:01:37 ublk.test_create_multi_ublk -- ublk/ublk.sh@62 -- # ublk_target= 00:14:00.739 05:01:37 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # seq 0 3 00:14:00.739 05:01:37 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:00.739 05:01:37 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc0 128 4096 00:14:00.739 05:01:37 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:00.739 05:01:37 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:00.739 05:01:37 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:00.739 05:01:37 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc0 00:14:00.739 05:01:37 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:14:00.739 05:01:37 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:00.739 05:01:37 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:00.739 [2024-12-06 05:01:37.206810] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 
num_queues 4 queue_depth 512 00:14:00.739 [2024-12-06 05:01:37.207128] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:14:00.739 [2024-12-06 05:01:37.207142] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:14:00.739 [2024-12-06 05:01:37.207148] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:14:00.739 [2024-12-06 05:01:37.230703] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:14:00.739 [2024-12-06 05:01:37.230721] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:14:00.739 [2024-12-06 05:01:37.242692] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:14:00.739 [2024-12-06 05:01:37.243213] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:14:00.739 [2024-12-06 05:01:37.278694] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:14:00.739 05:01:37 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:00.739 05:01:37 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=0 00:14:00.739 05:01:37 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:00.739 05:01:37 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc1 128 4096 00:14:00.739 05:01:37 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:00.739 05:01:37 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:00.739 05:01:37 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:00.739 05:01:37 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc1 00:14:00.739 05:01:37 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc1 1 -q 4 -d 512 00:14:00.739 05:01:37 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:00.739 05:01:37 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:00.739 [2024-12-06 05:01:37.386793] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk1: bdev Malloc1 num_queues 4 queue_depth 512 00:14:00.739 [2024-12-06 05:01:37.387111] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc1 via ublk 1 00:14:00.739 [2024-12-06 05:01:37.387124] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:14:00.739 [2024-12-06 05:01:37.387131] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:14:00.739 [2024-12-06 05:01:37.398701] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:14:00.739 [2024-12-06 05:01:37.398721] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:14:00.739 [2024-12-06 05:01:37.410695] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:14:00.739 [2024-12-06 05:01:37.411204] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:14:00.739 [2024-12-06 05:01:37.435703] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:14:00.739 05:01:37 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:00.739 05:01:37 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=1 00:14:00.739 05:01:37 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:00.739 
05:01:37 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc2 128 4096 00:14:00.739 05:01:37 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:00.739 05:01:37 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:00.739 05:01:37 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:00.739 05:01:37 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc2 00:14:00.739 05:01:37 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc2 2 -q 4 -d 512 00:14:00.739 05:01:37 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:00.739 05:01:37 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:00.739 [2024-12-06 05:01:37.542776] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk2: bdev Malloc2 num_queues 4 queue_depth 512 00:14:00.739 [2024-12-06 05:01:37.543094] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc2 via ublk 2 00:14:00.739 [2024-12-06 05:01:37.543108] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk2: add to tailq 00:14:00.739 [2024-12-06 05:01:37.543112] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV 00:14:00.739 [2024-12-06 05:01:37.554922] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV completed 00:14:00.739 [2024-12-06 05:01:37.554934] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS 00:14:00.739 [2024-12-06 05:01:37.566700] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:14:00.739 [2024-12-06 05:01:37.567212] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV 00:14:00.739 [2024-12-06 05:01:37.579722] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV completed 00:14:00.739 05:01:37 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:00.739 05:01:37 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=2 00:14:00.739 05:01:37 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:00.739 05:01:37 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc3 128 4096 00:14:00.739 05:01:37 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:00.739 05:01:37 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:00.739 05:01:37 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:00.739 05:01:37 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc3 00:14:00.739 05:01:37 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc3 3 -q 4 -d 512 00:14:00.739 05:01:37 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:00.739 05:01:37 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:00.739 [2024-12-06 05:01:37.686791] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk3: bdev Malloc3 num_queues 4 queue_depth 512 00:14:00.739 [2024-12-06 05:01:37.687121] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc3 via ublk 3 00:14:00.739 [2024-12-06 05:01:37.687132] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk3: add to tailq 00:14:00.739 [2024-12-06 05:01:37.687139] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV 00:14:00.739 
[2024-12-06 05:01:37.698713] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV completed 00:14:00.739 [2024-12-06 05:01:37.698735] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS 00:14:00.739 [2024-12-06 05:01:37.710689] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:14:00.739 [2024-12-06 05:01:37.711208] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV 00:14:00.739 [2024-12-06 05:01:37.750687] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV completed 00:14:00.739 05:01:37 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:00.739 05:01:37 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=3 00:14:00.739 05:01:37 ublk.test_create_multi_ublk -- ublk/ublk.sh@71 -- # rpc_cmd ublk_get_disks 00:14:00.739 05:01:37 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:00.739 05:01:37 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:00.739 05:01:37 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:00.739 05:01:37 ublk.test_create_multi_ublk -- ublk/ublk.sh@71 -- # ublk_dev='[ 00:14:00.739 { 00:14:00.739 "ublk_device": "/dev/ublkb0", 00:14:00.739 "id": 0, 00:14:00.739 "queue_depth": 512, 00:14:00.739 "num_queues": 4, 00:14:00.739 "bdev_name": "Malloc0" 00:14:00.739 }, 00:14:00.739 { 00:14:00.739 "ublk_device": "/dev/ublkb1", 00:14:00.739 "id": 1, 00:14:00.739 "queue_depth": 512, 00:14:00.739 "num_queues": 4, 00:14:00.739 "bdev_name": "Malloc1" 00:14:00.739 }, 00:14:00.739 { 00:14:00.739 "ublk_device": "/dev/ublkb2", 00:14:00.739 "id": 2, 00:14:00.739 "queue_depth": 512, 00:14:00.739 "num_queues": 4, 00:14:00.739 "bdev_name": "Malloc2" 00:14:00.739 }, 00:14:00.739 { 00:14:00.739 "ublk_device": "/dev/ublkb3", 00:14:00.739 "id": 3, 00:14:00.739 "queue_depth": 512, 00:14:00.739 "num_queues": 4, 00:14:00.739 "bdev_name": "Malloc3" 00:14:00.739 } 00:14:00.739 ]' 00:14:00.739 05:01:37 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # seq 0 3 00:14:00.739 05:01:37 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:00.740 05:01:37 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[0].ublk_device' 00:14:00.740 05:01:37 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:14:00.740 05:01:37 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[0].id' 00:14:00.740 05:01:37 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 0 = \0 ]] 00:14:00.740 05:01:37 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[0].queue_depth' 00:14:00.740 05:01:37 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:14:00.740 05:01:37 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[0].num_queues' 00:14:00.740 05:01:37 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:14:00.740 05:01:37 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[0].bdev_name' 00:14:00.740 05:01:37 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:14:00.740 05:01:37 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:00.740 05:01:37 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[1].ublk_device' 00:14:00.740 05:01:37 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb1 = 
\/\d\e\v\/\u\b\l\k\b\1 ]] 00:14:00.740 05:01:37 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[1].id' 00:14:00.740 05:01:38 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 1 = \1 ]] 00:14:00.740 05:01:38 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[1].queue_depth' 00:14:00.740 05:01:38 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:14:00.740 05:01:38 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[1].num_queues' 00:14:00.740 05:01:38 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:14:00.740 05:01:38 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[1].bdev_name' 00:14:00.740 05:01:38 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc1 = \M\a\l\l\o\c\1 ]] 00:14:00.740 05:01:38 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:00.740 05:01:38 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[2].ublk_device' 00:14:00.740 05:01:38 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb2 = \/\d\e\v\/\u\b\l\k\b\2 ]] 00:14:00.740 05:01:38 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[2].id' 00:14:00.740 05:01:38 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 2 = \2 ]] 00:14:00.740 05:01:38 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[2].queue_depth' 00:14:00.740 05:01:38 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:14:00.740 05:01:38 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[2].num_queues' 00:14:00.740 05:01:38 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:14:00.740 05:01:38 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[2].bdev_name' 00:14:00.740 05:01:38 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc2 = \M\a\l\l\o\c\2 ]] 00:14:00.740 05:01:38 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:00.740 05:01:38 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[3].ublk_device' 00:14:00.740 05:01:38 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb3 = \/\d\e\v\/\u\b\l\k\b\3 ]] 00:14:00.740 05:01:38 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[3].id' 00:14:00.740 05:01:38 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 3 = \3 ]] 00:14:00.740 05:01:38 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[3].queue_depth' 00:14:00.740 05:01:38 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:14:00.740 05:01:38 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[3].num_queues' 00:14:00.740 05:01:38 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:14:00.740 05:01:38 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[3].bdev_name' 00:14:00.740 05:01:38 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc3 = \M\a\l\l\o\c\3 ]] 00:14:00.740 05:01:38 ublk.test_create_multi_ublk -- ublk/ublk.sh@84 -- # [[ 1 = \1 ]] 00:14:00.740 05:01:38 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # seq 0 3 00:14:00.740 05:01:38 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:00.740 05:01:38 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 0 00:14:00.740 05:01:38 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:00.740 05:01:38 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:00.740 [2024-12-06 05:01:38.407755] ublk.c: 
469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:14:00.740 [2024-12-06 05:01:38.447722] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:14:00.740 [2024-12-06 05:01:38.448550] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:14:00.740 [2024-12-06 05:01:38.455688] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:14:00.740 [2024-12-06 05:01:38.455929] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:14:00.740 [2024-12-06 05:01:38.455942] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:14:00.740 05:01:38 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:00.740 05:01:38 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:00.740 05:01:38 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 1 00:14:00.740 05:01:38 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:00.740 05:01:38 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:00.740 [2024-12-06 05:01:38.471742] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:14:00.740 [2024-12-06 05:01:38.515734] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed 00:14:00.740 [2024-12-06 05:01:38.516486] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:14:00.740 [2024-12-06 05:01:38.523692] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:14:00.740 [2024-12-06 05:01:38.523923] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:14:00.740 [2024-12-06 05:01:38.523949] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:14:00.740 05:01:38 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:00.740 05:01:38 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:00.740 05:01:38 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 2 00:14:00.740 05:01:38 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:00.740 05:01:38 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:00.740 [2024-12-06 05:01:38.539755] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV 00:14:00.740 [2024-12-06 05:01:38.573723] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV completed 00:14:00.740 [2024-12-06 05:01:38.574435] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV 00:14:00.740 [2024-12-06 05:01:38.583719] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV completed 00:14:00.740 [2024-12-06 05:01:38.583942] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk2: remove from tailq 00:14:00.740 [2024-12-06 05:01:38.583954] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 2 stopped 00:14:00.740 05:01:38 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:00.740 05:01:38 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:00.740 05:01:38 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 3 00:14:00.740 05:01:38 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:00.740 05:01:38 ublk.test_create_multi_ublk -- 
common/autotest_common.sh@10 -- # set +x 00:14:00.740 [2024-12-06 05:01:38.598768] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV 00:14:00.740 [2024-12-06 05:01:38.642718] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV completed 00:14:00.740 [2024-12-06 05:01:38.643340] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV 00:14:00.740 [2024-12-06 05:01:38.650703] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV completed 00:14:00.740 [2024-12-06 05:01:38.650930] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk3: remove from tailq 00:14:00.740 [2024-12-06 05:01:38.650940] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 3 stopped 00:14:00.740 05:01:38 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:00.740 05:01:38 ublk.test_create_multi_ublk -- ublk/ublk.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 120 ublk_destroy_target 00:14:00.740 [2024-12-06 05:01:38.849763] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:14:00.740 [2024-12-06 05:01:38.851031] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:14:00.740 [2024-12-06 05:01:38.851060] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:14:00.740 05:01:38 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # seq 0 3 00:14:00.740 05:01:38 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:00.740 05:01:38 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc0 00:14:00.740 05:01:38 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:00.740 05:01:38 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:00.740 05:01:38 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:00.740 05:01:38 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:00.740 05:01:38 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc1 00:14:00.740 05:01:38 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:00.740 05:01:38 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:00.999 05:01:39 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:00.999 05:01:39 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:00.999 05:01:39 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc2 00:14:00.999 05:01:39 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:00.999 05:01:39 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:00.999 05:01:39 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:00.999 05:01:39 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:00.999 05:01:39 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc3 00:14:00.999 05:01:39 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:00.999 05:01:39 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:00.999 05:01:39 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:00.999 05:01:39 ublk.test_create_multi_ublk -- ublk/ublk.sh@96 -- # check_leftover_devices 00:14:00.999 05:01:39 
ublk.test_create_multi_ublk -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 00:14:00.999 05:01:39 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:00.999 05:01:39 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:00.999 05:01:39 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:00.999 05:01:39 ublk.test_create_multi_ublk -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:14:00.999 05:01:39 ublk.test_create_multi_ublk -- lvol/common.sh@26 -- # jq length 00:14:00.999 05:01:39 ublk.test_create_multi_ublk -- lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:14:00.999 05:01:39 ublk.test_create_multi_ublk -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:14:00.999 05:01:39 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:00.999 05:01:39 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:00.999 05:01:39 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:00.999 05:01:39 ublk.test_create_multi_ublk -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:14:01.257 05:01:39 ublk.test_create_multi_ublk -- lvol/common.sh@28 -- # jq length 00:14:01.257 05:01:39 ublk.test_create_multi_ublk -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:14:01.257 00:14:01.257 real 0m2.157s 00:14:01.257 user 0m0.787s 00:14:01.257 sys 0m0.148s 00:14:01.257 05:01:39 ublk.test_create_multi_ublk -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:01.257 05:01:39 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:01.257 ************************************ 00:14:01.257 END TEST test_create_multi_ublk 00:14:01.257 ************************************ 00:14:01.257 05:01:39 ublk -- ublk/ublk.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:14:01.257 05:01:39 ublk -- ublk/ublk.sh@147 -- # cleanup 00:14:01.257 05:01:39 ublk -- ublk/ublk.sh@130 -- # killprocess 82531 00:14:01.257 05:01:39 ublk -- common/autotest_common.sh@950 -- # '[' -z 82531 ']' 00:14:01.257 05:01:39 ublk -- common/autotest_common.sh@954 -- # kill -0 82531 00:14:01.257 05:01:39 ublk -- common/autotest_common.sh@955 -- # uname 00:14:01.257 05:01:39 ublk -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:01.257 05:01:39 ublk -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 82531 00:14:01.257 05:01:39 ublk -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:01.257 05:01:39 ublk -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:01.257 killing process with pid 82531 00:14:01.257 05:01:39 ublk -- common/autotest_common.sh@968 -- # echo 'killing process with pid 82531' 00:14:01.257 05:01:39 ublk -- common/autotest_common.sh@969 -- # kill 82531 00:14:01.257 05:01:39 ublk -- common/autotest_common.sh@974 -- # wait 82531 00:14:01.517 [2024-12-06 05:01:39.531262] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:14:01.517 [2024-12-06 05:01:39.531338] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:14:01.779 00:14:01.779 real 0m19.512s 00:14:01.779 user 0m28.944s 00:14:01.779 sys 0m8.489s 00:14:01.779 05:01:39 ublk -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:01.779 05:01:39 ublk -- common/autotest_common.sh@10 -- # set +x 00:14:01.779 ************************************ 00:14:01.779 END TEST ublk 00:14:01.779 ************************************ 00:14:01.779 05:01:39 -- spdk/autotest.sh@248 -- # run_test ublk_recovery /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:14:01.779 
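Note: before the recovery suite starts, the control-plane sequence that TEST ublk exercised above condenses to a short RPC script. A sketch assuming the scripts/rpc.py entry point this run already invokes; every command below appears verbatim in the xtrace:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc ublk_create_target                       # bring up the SPDK ublk target
$rpc bdev_malloc_create -b Malloc0 128 4096   # 128 MiB malloc bdev, 4096 B blocks
$rpc ublk_start_disk Malloc0 0 -q 4 -d 512    # expose as /dev/ublkb0: 4 queues, depth 512
$rpc ublk_get_disks -n 0                      # JSON record checked with jq above
$rpc ublk_stop_disk 0                         # repeating this returns -19 (No such device)
$rpc bdev_malloc_delete Malloc0
$rpc ublk_destroy_target                      # tear the target down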
05:01:39 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:14:01.779 05:01:39 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:01.779 05:01:39 -- common/autotest_common.sh@10 -- # set +x 00:14:01.779 ************************************ 00:14:01.779 START TEST ublk_recovery 00:14:01.779 ************************************ 00:14:01.779 05:01:39 ublk_recovery -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:14:02.039 * Looking for test storage... 00:14:02.039 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:14:02.039 05:01:40 ublk_recovery -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:14:02.039 05:01:40 ublk_recovery -- common/autotest_common.sh@1681 -- # lcov --version 00:14:02.039 05:01:40 ublk_recovery -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:14:02.039 05:01:40 ublk_recovery -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:14:02.039 05:01:40 ublk_recovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:02.039 05:01:40 ublk_recovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:02.039 05:01:40 ublk_recovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:02.039 05:01:40 ublk_recovery -- scripts/common.sh@336 -- # IFS=.-: 00:14:02.039 05:01:40 ublk_recovery -- scripts/common.sh@336 -- # read -ra ver1 00:14:02.039 05:01:40 ublk_recovery -- scripts/common.sh@337 -- # IFS=.-: 00:14:02.039 05:01:40 ublk_recovery -- scripts/common.sh@337 -- # read -ra ver2 00:14:02.039 05:01:40 ublk_recovery -- scripts/common.sh@338 -- # local 'op=<' 00:14:02.039 05:01:40 ublk_recovery -- scripts/common.sh@340 -- # ver1_l=2 00:14:02.039 05:01:40 ublk_recovery -- scripts/common.sh@341 -- # ver2_l=1 00:14:02.039 05:01:40 ublk_recovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:02.039 05:01:40 ublk_recovery -- scripts/common.sh@344 -- # case "$op" in 00:14:02.039 05:01:40 ublk_recovery -- scripts/common.sh@345 -- # : 1 00:14:02.039 05:01:40 ublk_recovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:02.039 05:01:40 ublk_recovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:02.039 05:01:40 ublk_recovery -- scripts/common.sh@365 -- # decimal 1 00:14:02.039 05:01:40 ublk_recovery -- scripts/common.sh@353 -- # local d=1 00:14:02.039 05:01:40 ublk_recovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:02.039 05:01:40 ublk_recovery -- scripts/common.sh@355 -- # echo 1 00:14:02.039 05:01:40 ublk_recovery -- scripts/common.sh@365 -- # ver1[v]=1 00:14:02.039 05:01:40 ublk_recovery -- scripts/common.sh@366 -- # decimal 2 00:14:02.039 05:01:40 ublk_recovery -- scripts/common.sh@353 -- # local d=2 00:14:02.039 05:01:40 ublk_recovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:02.039 05:01:40 ublk_recovery -- scripts/common.sh@355 -- # echo 2 00:14:02.039 05:01:40 ublk_recovery -- scripts/common.sh@366 -- # ver2[v]=2 00:14:02.039 05:01:40 ublk_recovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:02.039 05:01:40 ublk_recovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:02.039 05:01:40 ublk_recovery -- scripts/common.sh@368 -- # return 0 00:14:02.039 05:01:40 ublk_recovery -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:02.039 05:01:40 ublk_recovery -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:14:02.039 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:02.039 --rc genhtml_branch_coverage=1 00:14:02.039 --rc genhtml_function_coverage=1 00:14:02.039 --rc genhtml_legend=1 00:14:02.039 --rc geninfo_all_blocks=1 00:14:02.039 --rc geninfo_unexecuted_blocks=1 00:14:02.039 00:14:02.039 ' 00:14:02.039 05:01:40 ublk_recovery -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:14:02.039 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:02.039 --rc genhtml_branch_coverage=1 00:14:02.039 --rc genhtml_function_coverage=1 00:14:02.039 --rc genhtml_legend=1 00:14:02.039 --rc geninfo_all_blocks=1 00:14:02.039 --rc geninfo_unexecuted_blocks=1 00:14:02.039 00:14:02.039 ' 00:14:02.039 05:01:40 ublk_recovery -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:14:02.039 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:02.039 --rc genhtml_branch_coverage=1 00:14:02.039 --rc genhtml_function_coverage=1 00:14:02.039 --rc genhtml_legend=1 00:14:02.039 --rc geninfo_all_blocks=1 00:14:02.039 --rc geninfo_unexecuted_blocks=1 00:14:02.039 00:14:02.039 ' 00:14:02.039 05:01:40 ublk_recovery -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:14:02.039 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:02.039 --rc genhtml_branch_coverage=1 00:14:02.039 --rc genhtml_function_coverage=1 00:14:02.039 --rc genhtml_legend=1 00:14:02.039 --rc geninfo_all_blocks=1 00:14:02.039 --rc geninfo_unexecuted_blocks=1 00:14:02.039 00:14:02.039 ' 00:14:02.039 05:01:40 ublk_recovery -- ublk/ublk_recovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:14:02.039 05:01:40 ublk_recovery -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:14:02.039 05:01:40 ublk_recovery -- lvol/common.sh@7 -- # MALLOC_BS=512 00:14:02.039 05:01:40 ublk_recovery -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:14:02.039 05:01:40 ublk_recovery -- lvol/common.sh@9 -- # AIO_BS=4096 00:14:02.039 05:01:40 ublk_recovery -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:14:02.039 05:01:40 ublk_recovery -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:14:02.039 05:01:40 ublk_recovery -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:14:02.039 05:01:40 ublk_recovery -- lvol/common.sh@14 
-- # LVS_DEFAULT_CAPACITY=130023424 00:14:02.039 05:01:40 ublk_recovery -- ublk/ublk_recovery.sh@11 -- # modprobe ublk_drv 00:14:02.039 05:01:40 ublk_recovery -- ublk/ublk_recovery.sh@19 -- # spdk_pid=82905 00:14:02.039 05:01:40 ublk_recovery -- ublk/ublk_recovery.sh@20 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:02.039 05:01:40 ublk_recovery -- ublk/ublk_recovery.sh@21 -- # waitforlisten 82905 00:14:02.039 05:01:40 ublk_recovery -- common/autotest_common.sh@831 -- # '[' -z 82905 ']' 00:14:02.039 05:01:40 ublk_recovery -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:02.039 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:02.039 05:01:40 ublk_recovery -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:02.039 05:01:40 ublk_recovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:02.039 05:01:40 ublk_recovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:02.039 05:01:40 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:14:02.039 05:01:40 ublk_recovery -- ublk/ublk_recovery.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:14:02.039 [2024-12-06 05:01:40.179443] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:14:02.039 [2024-12-06 05:01:40.179552] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82905 ] 00:14:02.297 [2024-12-06 05:01:40.309495] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:14:02.297 [2024-12-06 05:01:40.356148] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:14:02.297 [2024-12-06 05:01:40.356224] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:14:02.862 05:01:41 ublk_recovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:02.862 05:01:41 ublk_recovery -- common/autotest_common.sh@864 -- # return 0 00:14:02.862 05:01:41 ublk_recovery -- ublk/ublk_recovery.sh@23 -- # rpc_cmd ublk_create_target 00:14:02.862 05:01:41 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:02.862 05:01:41 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:14:02.862 [2024-12-06 05:01:41.022687] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:14:02.862 [2024-12-06 05:01:41.023892] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:14:02.862 05:01:41 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:02.862 05:01:41 ublk_recovery -- ublk/ublk_recovery.sh@24 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:14:02.862 05:01:41 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:02.862 05:01:41 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:14:02.862 malloc0 00:14:02.862 05:01:41 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:02.862 05:01:41 ublk_recovery -- ublk/ublk_recovery.sh@25 -- # rpc_cmd ublk_start_disk malloc0 1 -q 2 -d 128 00:14:02.862 05:01:41 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:02.862 05:01:41 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:14:02.862 [2024-12-06 05:01:41.062792] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk1: bdev malloc0 
num_queues 2 queue_depth 128 00:14:02.862 [2024-12-06 05:01:41.062879] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 1 00:14:02.862 [2024-12-06 05:01:41.062885] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:14:02.862 [2024-12-06 05:01:41.062894] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:14:02.862 [2024-12-06 05:01:41.070821] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:14:02.862 [2024-12-06 05:01:41.070839] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:14:02.862 [2024-12-06 05:01:41.078693] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:14:02.862 [2024-12-06 05:01:41.078833] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:14:02.862 [2024-12-06 05:01:41.089709] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:14:03.120 1 00:14:03.120 05:01:41 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:03.120 05:01:41 ublk_recovery -- ublk/ublk_recovery.sh@27 -- # sleep 1 00:14:04.055 05:01:42 ublk_recovery -- ublk/ublk_recovery.sh@31 -- # fio_proc=82929 00:14:04.055 05:01:42 ublk_recovery -- ublk/ublk_recovery.sh@33 -- # sleep 5 00:14:04.055 05:01:42 ublk_recovery -- ublk/ublk_recovery.sh@30 -- # taskset -c 2-3 fio --name=fio_test --filename=/dev/ublkb1 --numjobs=1 --iodepth=128 --ioengine=libaio --rw=randrw --direct=1 --time_based --runtime=60 00:14:04.055 fio_test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:14:04.055 fio-3.35 00:14:04.055 Starting 1 process 00:14:09.321 05:01:47 ublk_recovery -- ublk/ublk_recovery.sh@36 -- # kill -9 82905 00:14:09.321 05:01:47 ublk_recovery -- ublk/ublk_recovery.sh@38 -- # sleep 5 00:14:14.603 /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh: line 38: 82905 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x3 -L ublk 00:14:14.603 05:01:52 ublk_recovery -- ublk/ublk_recovery.sh@42 -- # spdk_pid=83044 00:14:14.603 05:01:52 ublk_recovery -- ublk/ublk_recovery.sh@43 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:14.603 05:01:52 ublk_recovery -- ublk/ublk_recovery.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:14:14.603 05:01:52 ublk_recovery -- ublk/ublk_recovery.sh@44 -- # waitforlisten 83044 00:14:14.603 05:01:52 ublk_recovery -- common/autotest_common.sh@831 -- # '[' -z 83044 ']' 00:14:14.603 05:01:52 ublk_recovery -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:14.603 05:01:52 ublk_recovery -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:14.603 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:14.603 05:01:52 ublk_recovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:14.603 05:01:52 ublk_recovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:14.603 05:01:52 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:14:14.603 [2024-12-06 05:01:52.178031] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
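Note: at this point ublk_recovery.sh has SIGKILLed the first target (pid 82905) while fio was mid-I/O and is restarting spdk_tgt; the recover call follows just below in the xtrace. The scenario condenses to the following sketch (commands taken from this run; the taskset CPU pinning is omitted):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc ublk_start_disk malloc0 1 -q 2 -d 128     # /dev/ublkb1 backed by malloc0
fio --name=fio_test --filename=/dev/ublkb1 --numjobs=1 --iodepth=128 \
    --ioengine=libaio --rw=randrw --direct=1 --time_based --runtime=60 &
kill -9 82905                                  # SIGKILL the target mid-I/O
"$SPDK_BIN_DIR/spdk_tgt" -m 0x3 -L ublk &      # start a fresh target process
$rpc ublk_create_target
$rpc bdev_malloc_create -b malloc0 64 4096     # recreate the backing bdev
$rpc ublk_recover_disk malloc0 1               # re-attach the surviving /dev/ublkb1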
00:14:14.603 [2024-12-06 05:01:52.178142] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83044 ] 00:14:14.603 [2024-12-06 05:01:52.311472] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:14:14.603 [2024-12-06 05:01:52.353809] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:14:14.603 [2024-12-06 05:01:52.353827] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:14:14.862 05:01:53 ublk_recovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:14.862 05:01:53 ublk_recovery -- common/autotest_common.sh@864 -- # return 0 00:14:14.862 05:01:53 ublk_recovery -- ublk/ublk_recovery.sh@47 -- # rpc_cmd ublk_create_target 00:14:14.862 05:01:53 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:14.862 05:01:53 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:14:14.862 [2024-12-06 05:01:53.029684] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:14:14.862 [2024-12-06 05:01:53.030902] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:14:14.862 05:01:53 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:14.862 05:01:53 ublk_recovery -- ublk/ublk_recovery.sh@48 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:14:14.862 05:01:53 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:14.862 05:01:53 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:14:14.862 malloc0 00:14:14.862 05:01:53 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:14.862 05:01:53 ublk_recovery -- ublk/ublk_recovery.sh@49 -- # rpc_cmd ublk_recover_disk malloc0 1 00:14:14.862 05:01:53 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:14.862 05:01:53 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:14:14.862 [2024-12-06 05:01:53.069789] ublk.c:2106:ublk_start_disk_recovery: *NOTICE*: Recovering ublk 1 with bdev malloc0 00:14:14.862 [2024-12-06 05:01:53.069823] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:14:14.862 [2024-12-06 05:01:53.069836] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:14:14.862 [2024-12-06 05:01:53.077731] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:14:14.862 [2024-12-06 05:01:53.077747] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: Ublk 1 device state 1 00:14:14.862 1 00:14:14.862 05:01:53 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:14.862 05:01:53 ublk_recovery -- ublk/ublk_recovery.sh@52 -- # wait 82929 00:14:16.234 [2024-12-06 05:01:54.077774] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:14:16.234 [2024-12-06 05:01:54.085687] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:14:16.234 [2024-12-06 05:01:54.085719] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: Ublk 1 device state 1 00:14:17.166 [2024-12-06 05:01:55.085737] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:14:17.166 [2024-12-06 05:01:55.089702] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:14:17.166 [2024-12-06 05:01:55.089715] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: 
Ublk 1 device state 1 00:14:18.098 [2024-12-06 05:01:56.089737] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:14:18.098 [2024-12-06 05:01:56.093702] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:14:18.099 [2024-12-06 05:01:56.093717] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: Ublk 1 device state 1 00:14:18.099 [2024-12-06 05:01:56.093722] ublk.c:2035:ublk_ctrl_start_recovery: *DEBUG*: Recovering ublk 1, num queues 2, queue depth 128, flags 0xda 00:14:18.099 [2024-12-06 05:01:56.093785] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY 00:14:40.046 [2024-12-06 05:02:17.468693] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY completed 00:14:40.046 [2024-12-06 05:02:17.475223] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY 00:14:40.046 [2024-12-06 05:02:17.482897] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY completed 00:14:40.046 [2024-12-06 05:02:17.482916] ublk.c: 413:ublk_ctrl_process_cqe: *NOTICE*: Ublk 1 recover done successfully 00:15:06.681 00:15:06.681 fio_test: (groupid=0, jobs=1): err= 0: pid=82937: Fri Dec 6 05:02:42 2024 00:15:06.681 read: IOPS=14.5k, BW=56.8MiB/s (59.6MB/s)(3410MiB/60002msec) 00:15:06.681 slat (nsec): min=1125, max=177890, avg=5028.43, stdev=1567.46 00:15:06.681 clat (usec): min=598, max=30389k, avg=4391.57, stdev=262191.89 00:15:06.681 lat (usec): min=603, max=30389k, avg=4396.60, stdev=262191.89 00:15:06.681 clat percentiles (usec): 00:15:06.681 | 1.00th=[ 1713], 5.00th=[ 1827], 10.00th=[ 1844], 20.00th=[ 1876], 00:15:06.681 | 30.00th=[ 1893], 40.00th=[ 1909], 50.00th=[ 1926], 60.00th=[ 1942], 00:15:06.681 | 70.00th=[ 1958], 80.00th=[ 2057], 90.00th=[ 2147], 95.00th=[ 3687], 00:15:06.681 | 99.00th=[ 5669], 99.50th=[ 6063], 99.90th=[12125], 99.95th=[12911], 00:15:06.681 | 99.99th=[13304] 00:15:06.681 bw ( KiB/s): min=38960, max=129096, per=100.00%, avg=116358.78, stdev=21577.82, samples=59 00:15:06.681 iops : min= 9740, max=32274, avg=29089.69, stdev=5394.45, samples=59 00:15:06.681 write: IOPS=14.5k, BW=56.7MiB/s (59.5MB/s)(3405MiB/60002msec); 0 zone resets 00:15:06.681 slat (nsec): min=1123, max=156074, avg=5098.00, stdev=1588.39 00:15:06.681 clat (usec): min=641, max=30389k, avg=4401.95, stdev=258320.79 00:15:06.681 lat (usec): min=645, max=30389k, avg=4407.05, stdev=258320.79 00:15:06.681 clat percentiles (usec): 00:15:06.681 | 1.00th=[ 1745], 5.00th=[ 1909], 10.00th=[ 1926], 20.00th=[ 1958], 00:15:06.681 | 30.00th=[ 1975], 40.00th=[ 1991], 50.00th=[ 2008], 60.00th=[ 2024], 00:15:06.681 | 70.00th=[ 2057], 80.00th=[ 2147], 90.00th=[ 2245], 95.00th=[ 3556], 00:15:06.681 | 99.00th=[ 5735], 99.50th=[ 6128], 99.90th=[12125], 99.95th=[13042], 00:15:06.681 | 99.99th=[13435] 00:15:06.681 bw ( KiB/s): min=39488, max=128136, per=100.00%, avg=116196.34, stdev=21632.65, samples=59 00:15:06.681 iops : min= 9872, max=32034, avg=29049.08, stdev=5408.16, samples=59 00:15:06.681 lat (usec) : 750=0.01%, 1000=0.01% 00:15:06.681 lat (msec) : 2=60.27%, 4=35.26%, 10=4.36%, 20=0.10%, >=2000=0.01% 00:15:06.681 cpu : usr=3.19%, sys=14.95%, ctx=58462, majf=0, minf=13 00:15:06.681 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0% 00:15:06.681 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:06.681 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 
00:15:06.681 issued rwts: total=872935,871637,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:06.681 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:06.681 00:15:06.681 Run status group 0 (all jobs): 00:15:06.681 READ: bw=56.8MiB/s (59.6MB/s), 56.8MiB/s-56.8MiB/s (59.6MB/s-59.6MB/s), io=3410MiB (3576MB), run=60002-60002msec 00:15:06.681 WRITE: bw=56.7MiB/s (59.5MB/s), 56.7MiB/s-56.7MiB/s (59.5MB/s-59.5MB/s), io=3405MiB (3570MB), run=60002-60002msec 00:15:06.681 00:15:06.681 Disk stats (read/write): 00:15:06.681 ublkb1: ios=869572/868278, merge=0/0, ticks=3783009/3714931, in_queue=7497941, util=99.87% 00:15:06.681 05:02:42 ublk_recovery -- ublk/ublk_recovery.sh@55 -- # rpc_cmd ublk_stop_disk 1 00:15:06.681 05:02:42 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:06.681 05:02:42 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:15:06.681 [2024-12-06 05:02:42.347803] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:15:06.681 [2024-12-06 05:02:42.387703] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed 00:15:06.681 [2024-12-06 05:02:42.387864] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:15:06.681 [2024-12-06 05:02:42.395762] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:15:06.681 [2024-12-06 05:02:42.395860] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:15:06.681 [2024-12-06 05:02:42.395873] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:15:06.682 05:02:42 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:06.682 05:02:42 ublk_recovery -- ublk/ublk_recovery.sh@56 -- # rpc_cmd ublk_destroy_target 00:15:06.682 05:02:42 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:06.682 05:02:42 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:15:06.682 [2024-12-06 05:02:42.411767] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:15:06.682 [2024-12-06 05:02:42.413068] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:15:06.682 [2024-12-06 05:02:42.413098] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:15:06.682 05:02:42 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:06.682 05:02:42 ublk_recovery -- ublk/ublk_recovery.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:15:06.682 05:02:42 ublk_recovery -- ublk/ublk_recovery.sh@59 -- # cleanup 00:15:06.682 05:02:42 ublk_recovery -- ublk/ublk_recovery.sh@14 -- # killprocess 83044 00:15:06.682 05:02:42 ublk_recovery -- common/autotest_common.sh@950 -- # '[' -z 83044 ']' 00:15:06.682 05:02:42 ublk_recovery -- common/autotest_common.sh@954 -- # kill -0 83044 00:15:06.682 05:02:42 ublk_recovery -- common/autotest_common.sh@955 -- # uname 00:15:06.682 05:02:42 ublk_recovery -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:06.682 05:02:42 ublk_recovery -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 83044 00:15:06.682 05:02:42 ublk_recovery -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:06.682 05:02:42 ublk_recovery -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:06.682 killing process with pid 83044 00:15:06.682 05:02:42 ublk_recovery -- common/autotest_common.sh@968 -- # echo 'killing process with pid 83044' 00:15:06.682 05:02:42 ublk_recovery -- common/autotest_common.sh@969 -- # kill 83044 00:15:06.682 05:02:42 ublk_recovery -- common/autotest_common.sh@974 -- # 
wait 83044 00:15:06.682 [2024-12-06 05:02:42.675982] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:15:06.682 [2024-12-06 05:02:42.676027] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:15:06.682 00:15:06.682 real 1m3.109s 00:15:06.682 user 1m44.994s 00:15:06.682 sys 0m21.532s 00:15:06.682 ************************************ 00:15:06.682 END TEST ublk_recovery 00:15:06.682 05:02:43 ublk_recovery -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:06.682 05:02:43 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:15:06.682 ************************************ 00:15:06.682 05:02:43 -- spdk/autotest.sh@252 -- # '[' 0 -eq 1 ']' 00:15:06.682 05:02:43 -- spdk/autotest.sh@256 -- # timing_exit lib 00:15:06.682 05:02:43 -- common/autotest_common.sh@730 -- # xtrace_disable 00:15:06.682 05:02:43 -- common/autotest_common.sh@10 -- # set +x 00:15:06.682 05:02:43 -- spdk/autotest.sh@258 -- # '[' 0 -eq 1 ']' 00:15:06.682 05:02:43 -- spdk/autotest.sh@263 -- # '[' 0 -eq 1 ']' 00:15:06.682 05:02:43 -- spdk/autotest.sh@272 -- # '[' 0 -eq 1 ']' 00:15:06.682 05:02:43 -- spdk/autotest.sh@307 -- # '[' 0 -eq 1 ']' 00:15:06.682 05:02:43 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:15:06.682 05:02:43 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:15:06.682 05:02:43 -- spdk/autotest.sh@320 -- # '[' 0 -eq 1 ']' 00:15:06.682 05:02:43 -- spdk/autotest.sh@329 -- # '[' 0 -eq 1 ']' 00:15:06.682 05:02:43 -- spdk/autotest.sh@334 -- # '[' 0 -eq 1 ']' 00:15:06.682 05:02:43 -- spdk/autotest.sh@338 -- # '[' 1 -eq 1 ']' 00:15:06.682 05:02:43 -- spdk/autotest.sh@339 -- # run_test ftl /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:15:06.682 05:02:43 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:15:06.682 05:02:43 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:06.682 05:02:43 -- common/autotest_common.sh@10 -- # set +x 00:15:06.682 ************************************ 00:15:06.682 START TEST ftl 00:15:06.682 ************************************ 00:15:06.682 05:02:43 ftl -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:15:06.682 * Looking for test storage... 
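For reference, the ublk_recovery step that just completed reduces to a short RPC sequence. The recap below is condensed from the xtrace above and is illustrative rather than the verbatim ublk_recovery.sh; rpc.py stands in for the test's rpc_cmd wrapper around scripts/rpc.py, and the fio verify job (pid 82929, waited on at ublk_recovery.sh@52) keeps running across the target restart:

  rpc.py ublk_create_target                      # start the UBLK target inside spdk_tgt
  rpc.py bdev_malloc_create -b malloc0 64 4096   # 64 MiB ram bdev, 4 KiB blocks
  rpc.py ublk_recover_disk malloc0 1             # re-attach ublk device 1 after the restart
  # ... the 60s fio verify workload completes against /dev/ublkb1 ...
  rpc.py ublk_stop_disk 1
  rpc.py ublk_destroy_target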
00:15:06.682 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:15:06.682 05:02:43 ftl -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:15:06.682 05:02:43 ftl -- common/autotest_common.sh@1681 -- # lcov --version 00:15:06.682 05:02:43 ftl -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:15:06.682 05:02:43 ftl -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:15:06.682 05:02:43 ftl -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:06.682 05:02:43 ftl -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:06.682 05:02:43 ftl -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:06.682 05:02:43 ftl -- scripts/common.sh@336 -- # IFS=.-: 00:15:06.682 05:02:43 ftl -- scripts/common.sh@336 -- # read -ra ver1 00:15:06.682 05:02:43 ftl -- scripts/common.sh@337 -- # IFS=.-: 00:15:06.682 05:02:43 ftl -- scripts/common.sh@337 -- # read -ra ver2 00:15:06.682 05:02:43 ftl -- scripts/common.sh@338 -- # local 'op=<' 00:15:06.682 05:02:43 ftl -- scripts/common.sh@340 -- # ver1_l=2 00:15:06.682 05:02:43 ftl -- scripts/common.sh@341 -- # ver2_l=1 00:15:06.682 05:02:43 ftl -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:06.682 05:02:43 ftl -- scripts/common.sh@344 -- # case "$op" in 00:15:06.682 05:02:43 ftl -- scripts/common.sh@345 -- # : 1 00:15:06.682 05:02:43 ftl -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:06.682 05:02:43 ftl -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:15:06.682 05:02:43 ftl -- scripts/common.sh@365 -- # decimal 1 00:15:06.682 05:02:43 ftl -- scripts/common.sh@353 -- # local d=1 00:15:06.682 05:02:43 ftl -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:06.682 05:02:43 ftl -- scripts/common.sh@355 -- # echo 1 00:15:06.682 05:02:43 ftl -- scripts/common.sh@365 -- # ver1[v]=1 00:15:06.682 05:02:43 ftl -- scripts/common.sh@366 -- # decimal 2 00:15:06.682 05:02:43 ftl -- scripts/common.sh@353 -- # local d=2 00:15:06.682 05:02:43 ftl -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:06.682 05:02:43 ftl -- scripts/common.sh@355 -- # echo 2 00:15:06.682 05:02:43 ftl -- scripts/common.sh@366 -- # ver2[v]=2 00:15:06.682 05:02:43 ftl -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:06.682 05:02:43 ftl -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:06.682 05:02:43 ftl -- scripts/common.sh@368 -- # return 0 00:15:06.682 05:02:43 ftl -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:06.682 05:02:43 ftl -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:15:06.682 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:06.682 --rc genhtml_branch_coverage=1 00:15:06.682 --rc genhtml_function_coverage=1 00:15:06.682 --rc genhtml_legend=1 00:15:06.682 --rc geninfo_all_blocks=1 00:15:06.682 --rc geninfo_unexecuted_blocks=1 00:15:06.682 00:15:06.682 ' 00:15:06.682 05:02:43 ftl -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:15:06.682 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:06.682 --rc genhtml_branch_coverage=1 00:15:06.682 --rc genhtml_function_coverage=1 00:15:06.682 --rc genhtml_legend=1 00:15:06.682 --rc geninfo_all_blocks=1 00:15:06.682 --rc geninfo_unexecuted_blocks=1 00:15:06.682 00:15:06.682 ' 00:15:06.682 05:02:43 ftl -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:15:06.682 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:06.682 --rc genhtml_branch_coverage=1 00:15:06.682 --rc genhtml_function_coverage=1 00:15:06.682 --rc 
genhtml_legend=1 00:15:06.682 --rc geninfo_all_blocks=1 00:15:06.682 --rc geninfo_unexecuted_blocks=1 00:15:06.682 00:15:06.682 ' 00:15:06.682 05:02:43 ftl -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:15:06.682 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:06.682 --rc genhtml_branch_coverage=1 00:15:06.682 --rc genhtml_function_coverage=1 00:15:06.682 --rc genhtml_legend=1 00:15:06.682 --rc geninfo_all_blocks=1 00:15:06.682 --rc geninfo_unexecuted_blocks=1 00:15:06.682 00:15:06.682 ' 00:15:06.682 05:02:43 ftl -- ftl/ftl.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:15:06.682 05:02:43 ftl -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:15:06.682 05:02:43 ftl -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:15:06.682 05:02:43 ftl -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:15:06.682 05:02:43 ftl -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:15:06.682 05:02:43 ftl -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:15:06.682 05:02:43 ftl -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:06.682 05:02:43 ftl -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:15:06.682 05:02:43 ftl -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:15:06.682 05:02:43 ftl -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:06.682 05:02:43 ftl -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:06.682 05:02:43 ftl -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:15:06.682 05:02:43 ftl -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:15:06.682 05:02:43 ftl -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:15:06.682 05:02:43 ftl -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:15:06.682 05:02:43 ftl -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:15:06.682 05:02:43 ftl -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:15:06.682 05:02:43 ftl -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:06.682 05:02:43 ftl -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:06.682 05:02:43 ftl -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:15:06.682 05:02:43 ftl -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:15:06.682 05:02:43 ftl -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:15:06.682 05:02:43 ftl -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:15:06.682 05:02:43 ftl -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:15:06.682 05:02:43 ftl -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:15:06.682 05:02:43 ftl -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:15:06.682 05:02:43 ftl -- ftl/common.sh@23 -- # spdk_ini_pid= 00:15:06.682 05:02:43 ftl -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:06.682 05:02:43 ftl -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:06.682 05:02:43 ftl -- ftl/ftl.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:06.682 05:02:43 ftl -- ftl/ftl.sh@31 -- # trap at_ftl_exit SIGINT SIGTERM EXIT 00:15:06.682 05:02:43 ftl -- ftl/ftl.sh@34 -- # 
PCI_ALLOWED= 00:15:06.682 05:02:43 ftl -- ftl/ftl.sh@34 -- # PCI_BLOCKED= 00:15:06.682 05:02:43 ftl -- ftl/ftl.sh@34 -- # DRIVER_OVERRIDE= 00:15:06.683 05:02:43 ftl -- ftl/ftl.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:15:06.683 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:15:06.683 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:15:06.683 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:15:06.683 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:15:06.683 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:15:06.683 05:02:43 ftl -- ftl/ftl.sh@37 -- # spdk_tgt_pid=83841 00:15:06.683 05:02:43 ftl -- ftl/ftl.sh@38 -- # waitforlisten 83841 00:15:06.683 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:06.683 05:02:43 ftl -- common/autotest_common.sh@831 -- # '[' -z 83841 ']' 00:15:06.683 05:02:43 ftl -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:06.683 05:02:43 ftl -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:06.683 05:02:43 ftl -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:06.683 05:02:43 ftl -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:06.683 05:02:43 ftl -- common/autotest_common.sh@10 -- # set +x 00:15:06.683 05:02:43 ftl -- ftl/ftl.sh@36 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:15:06.683 [2024-12-06 05:02:43.915567] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:15:06.683 [2024-12-06 05:02:43.915703] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83841 ] 00:15:06.683 [2024-12-06 05:02:44.051040] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:06.683 [2024-12-06 05:02:44.084991] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:15:06.683 05:02:44 ftl -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:06.683 05:02:44 ftl -- common/autotest_common.sh@864 -- # return 0 00:15:06.683 05:02:44 ftl -- ftl/ftl.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_set_options -d 00:15:06.944 05:02:44 ftl -- ftl/ftl.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:15:07.205 05:02:45 ftl -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:15:07.205 05:02:45 ftl -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config -j /dev/fd/62 00:15:07.777 05:02:45 ftl -- ftl/ftl.sh@46 -- # cache_size=1310720 00:15:07.777 05:02:45 ftl -- ftl/ftl.sh@47 -- # jq -r '.[] | select(.md_size==64 and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:15:07.777 05:02:45 ftl -- ftl/ftl.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:15:08.039 05:02:46 ftl -- ftl/ftl.sh@47 -- # cache_disks=0000:00:10.0 00:15:08.039 05:02:46 ftl -- ftl/ftl.sh@48 -- # for disk in $cache_disks 00:15:08.039 05:02:46 ftl -- ftl/ftl.sh@49 -- # nv_cache=0000:00:10.0 00:15:08.039 05:02:46 ftl -- ftl/ftl.sh@50 -- # break 00:15:08.039 05:02:46 ftl -- ftl/ftl.sh@53 -- # '[' -z 0000:00:10.0 ']' 00:15:08.039 05:02:46 ftl -- 
ftl/ftl.sh@59 -- # base_size=1310720 00:15:08.039 05:02:46 ftl -- ftl/ftl.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:15:08.039 05:02:46 ftl -- ftl/ftl.sh@60 -- # jq -r '.[] | select(.driver_specific.nvme[0].pci_address!="0000:00:10.0" and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:15:08.039 05:02:46 ftl -- ftl/ftl.sh@60 -- # base_disks=0000:00:11.0 00:15:08.039 05:02:46 ftl -- ftl/ftl.sh@61 -- # for disk in $base_disks 00:15:08.039 05:02:46 ftl -- ftl/ftl.sh@62 -- # device=0000:00:11.0 00:15:08.039 05:02:46 ftl -- ftl/ftl.sh@63 -- # break 00:15:08.039 05:02:46 ftl -- ftl/ftl.sh@66 -- # killprocess 83841 00:15:08.039 05:02:46 ftl -- common/autotest_common.sh@950 -- # '[' -z 83841 ']' 00:15:08.039 05:02:46 ftl -- common/autotest_common.sh@954 -- # kill -0 83841 00:15:08.039 05:02:46 ftl -- common/autotest_common.sh@955 -- # uname 00:15:08.039 05:02:46 ftl -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:08.039 05:02:46 ftl -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 83841 00:15:08.039 05:02:46 ftl -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:08.301 killing process with pid 83841 00:15:08.301 05:02:46 ftl -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:08.301 05:02:46 ftl -- common/autotest_common.sh@968 -- # echo 'killing process with pid 83841' 00:15:08.301 05:02:46 ftl -- common/autotest_common.sh@969 -- # kill 83841 00:15:08.301 05:02:46 ftl -- common/autotest_common.sh@974 -- # wait 83841 00:15:08.874 05:02:46 ftl -- ftl/ftl.sh@68 -- # '[' -z 0000:00:11.0 ']' 00:15:08.874 05:02:46 ftl -- ftl/ftl.sh@73 -- # run_test ftl_fio_basic /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic 00:15:08.874 05:02:46 ftl -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:15:08.874 05:02:46 ftl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:08.874 05:02:46 ftl -- common/autotest_common.sh@10 -- # set +x 00:15:08.874 ************************************ 00:15:08.874 START TEST ftl_fio_basic 00:15:08.874 ************************************ 00:15:08.874 05:02:46 ftl.ftl_fio_basic -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic 00:15:08.874 * Looking for test storage... 
00:15:08.874 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:15:08.874 05:02:46 ftl.ftl_fio_basic -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:15:08.874 05:02:46 ftl.ftl_fio_basic -- common/autotest_common.sh@1681 -- # lcov --version 00:15:08.874 05:02:46 ftl.ftl_fio_basic -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:15:08.874 05:02:46 ftl.ftl_fio_basic -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:15:08.874 05:02:46 ftl.ftl_fio_basic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:08.874 05:02:46 ftl.ftl_fio_basic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:08.874 05:02:46 ftl.ftl_fio_basic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:08.874 05:02:46 ftl.ftl_fio_basic -- scripts/common.sh@336 -- # IFS=.-: 00:15:08.874 05:02:46 ftl.ftl_fio_basic -- scripts/common.sh@336 -- # read -ra ver1 00:15:08.874 05:02:46 ftl.ftl_fio_basic -- scripts/common.sh@337 -- # IFS=.-: 00:15:08.874 05:02:46 ftl.ftl_fio_basic -- scripts/common.sh@337 -- # read -ra ver2 00:15:08.874 05:02:46 ftl.ftl_fio_basic -- scripts/common.sh@338 -- # local 'op=<' 00:15:08.874 05:02:46 ftl.ftl_fio_basic -- scripts/common.sh@340 -- # ver1_l=2 00:15:08.874 05:02:46 ftl.ftl_fio_basic -- scripts/common.sh@341 -- # ver2_l=1 00:15:08.874 05:02:46 ftl.ftl_fio_basic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:08.874 05:02:46 ftl.ftl_fio_basic -- scripts/common.sh@344 -- # case "$op" in 00:15:08.874 05:02:46 ftl.ftl_fio_basic -- scripts/common.sh@345 -- # : 1 00:15:08.874 05:02:46 ftl.ftl_fio_basic -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:08.874 05:02:46 ftl.ftl_fio_basic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:15:08.874 05:02:46 ftl.ftl_fio_basic -- scripts/common.sh@365 -- # decimal 1 00:15:08.874 05:02:46 ftl.ftl_fio_basic -- scripts/common.sh@353 -- # local d=1 00:15:08.874 05:02:46 ftl.ftl_fio_basic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:08.874 05:02:46 ftl.ftl_fio_basic -- scripts/common.sh@355 -- # echo 1 00:15:08.874 05:02:46 ftl.ftl_fio_basic -- scripts/common.sh@365 -- # ver1[v]=1 00:15:08.874 05:02:46 ftl.ftl_fio_basic -- scripts/common.sh@366 -- # decimal 2 00:15:08.874 05:02:46 ftl.ftl_fio_basic -- scripts/common.sh@353 -- # local d=2 00:15:08.874 05:02:46 ftl.ftl_fio_basic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:08.874 05:02:46 ftl.ftl_fio_basic -- scripts/common.sh@355 -- # echo 2 00:15:08.874 05:02:46 ftl.ftl_fio_basic -- scripts/common.sh@366 -- # ver2[v]=2 00:15:08.874 05:02:46 ftl.ftl_fio_basic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:08.874 05:02:46 ftl.ftl_fio_basic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:08.874 05:02:46 ftl.ftl_fio_basic -- scripts/common.sh@368 -- # return 0 00:15:08.874 05:02:46 ftl.ftl_fio_basic -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:08.874 05:02:46 ftl.ftl_fio_basic -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:15:08.874 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:08.875 --rc genhtml_branch_coverage=1 00:15:08.875 --rc genhtml_function_coverage=1 00:15:08.875 --rc genhtml_legend=1 00:15:08.875 --rc geninfo_all_blocks=1 00:15:08.875 --rc geninfo_unexecuted_blocks=1 00:15:08.875 00:15:08.875 ' 00:15:08.875 05:02:46 ftl.ftl_fio_basic -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:15:08.875 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:08.875 --rc 
genhtml_branch_coverage=1 00:15:08.875 --rc genhtml_function_coverage=1 00:15:08.875 --rc genhtml_legend=1 00:15:08.875 --rc geninfo_all_blocks=1 00:15:08.875 --rc geninfo_unexecuted_blocks=1 00:15:08.875 00:15:08.875 ' 00:15:08.875 05:02:46 ftl.ftl_fio_basic -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:15:08.875 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:08.875 --rc genhtml_branch_coverage=1 00:15:08.875 --rc genhtml_function_coverage=1 00:15:08.875 --rc genhtml_legend=1 00:15:08.875 --rc geninfo_all_blocks=1 00:15:08.875 --rc geninfo_unexecuted_blocks=1 00:15:08.875 00:15:08.875 ' 00:15:08.875 05:02:46 ftl.ftl_fio_basic -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:15:08.875 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:08.875 --rc genhtml_branch_coverage=1 00:15:08.875 --rc genhtml_function_coverage=1 00:15:08.875 --rc genhtml_legend=1 00:15:08.875 --rc geninfo_all_blocks=1 00:15:08.875 --rc geninfo_unexecuted_blocks=1 00:15:08.875 00:15:08.875 ' 00:15:08.875 05:02:46 ftl.ftl_fio_basic -- ftl/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:15:08.875 05:02:47 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 00:15:08.875 05:02:47 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:15:08.875 05:02:47 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:15:08.875 05:02:47 ftl.ftl_fio_basic -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:15:08.875 05:02:47 ftl.ftl_fio_basic -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:15:08.875 05:02:47 ftl.ftl_fio_basic -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:08.875 05:02:47 ftl.ftl_fio_basic -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:15:08.875 05:02:47 ftl.ftl_fio_basic -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:15:08.875 05:02:47 ftl.ftl_fio_basic -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:08.875 05:02:47 ftl.ftl_fio_basic -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:08.875 05:02:47 ftl.ftl_fio_basic -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:15:08.875 05:02:47 ftl.ftl_fio_basic -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:15:08.875 05:02:47 ftl.ftl_fio_basic -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:15:08.875 05:02:47 ftl.ftl_fio_basic -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:15:08.875 05:02:47 ftl.ftl_fio_basic -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:15:08.875 05:02:47 ftl.ftl_fio_basic -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:15:08.875 05:02:47 ftl.ftl_fio_basic -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:08.875 05:02:47 ftl.ftl_fio_basic -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:08.875 05:02:47 ftl.ftl_fio_basic -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:15:08.875 05:02:47 ftl.ftl_fio_basic -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:15:08.875 05:02:47 ftl.ftl_fio_basic -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:15:08.875 05:02:47 ftl.ftl_fio_basic -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:15:08.875 
05:02:47 ftl.ftl_fio_basic -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:15:08.875 05:02:47 ftl.ftl_fio_basic -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:15:08.875 05:02:47 ftl.ftl_fio_basic -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:15:08.875 05:02:47 ftl.ftl_fio_basic -- ftl/common.sh@23 -- # spdk_ini_pid= 00:15:08.875 05:02:47 ftl.ftl_fio_basic -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:08.875 05:02:47 ftl.ftl_fio_basic -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:08.875 05:02:47 ftl.ftl_fio_basic -- ftl/fio.sh@11 -- # declare -A suite 00:15:08.875 05:02:47 ftl.ftl_fio_basic -- ftl/fio.sh@12 -- # suite['basic']='randw-verify randw-verify-j2 randw-verify-depth128' 00:15:08.875 05:02:47 ftl.ftl_fio_basic -- ftl/fio.sh@13 -- # suite['extended']='drive-prep randw-verify-qd128-ext randw-verify-qd2048-ext randw randr randrw unmap' 00:15:08.875 05:02:47 ftl.ftl_fio_basic -- ftl/fio.sh@14 -- # suite['nightly']='drive-prep randw-verify-qd256-nght randw-verify-qd256-nght randw-verify-qd256-nght' 00:15:08.875 05:02:47 ftl.ftl_fio_basic -- ftl/fio.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:08.875 05:02:47 ftl.ftl_fio_basic -- ftl/fio.sh@23 -- # device=0000:00:11.0 00:15:08.875 05:02:47 ftl.ftl_fio_basic -- ftl/fio.sh@24 -- # cache_device=0000:00:10.0 00:15:08.875 05:02:47 ftl.ftl_fio_basic -- ftl/fio.sh@25 -- # tests='randw-verify randw-verify-j2 randw-verify-depth128' 00:15:08.875 05:02:47 ftl.ftl_fio_basic -- ftl/fio.sh@26 -- # uuid= 00:15:08.875 05:02:47 ftl.ftl_fio_basic -- ftl/fio.sh@27 -- # timeout=240 00:15:08.875 05:02:47 ftl.ftl_fio_basic -- ftl/fio.sh@29 -- # [[ y != y ]] 00:15:08.875 05:02:47 ftl.ftl_fio_basic -- ftl/fio.sh@34 -- # '[' -z 'randw-verify randw-verify-j2 randw-verify-depth128' ']' 00:15:08.875 05:02:47 ftl.ftl_fio_basic -- ftl/fio.sh@39 -- # export FTL_BDEV_NAME=ftl0 00:15:08.875 05:02:47 ftl.ftl_fio_basic -- ftl/fio.sh@39 -- # FTL_BDEV_NAME=ftl0 00:15:08.875 05:02:47 ftl.ftl_fio_basic -- ftl/fio.sh@40 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:15:08.875 05:02:47 ftl.ftl_fio_basic -- ftl/fio.sh@40 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:15:08.875 05:02:47 ftl.ftl_fio_basic -- ftl/fio.sh@42 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:15:08.875 05:02:47 ftl.ftl_fio_basic -- ftl/fio.sh@45 -- # svcpid=83959 00:15:08.875 05:02:47 ftl.ftl_fio_basic -- ftl/fio.sh@46 -- # waitforlisten 83959 00:15:08.875 05:02:47 ftl.ftl_fio_basic -- common/autotest_common.sh@831 -- # '[' -z 83959 ']' 00:15:08.875 05:02:47 ftl.ftl_fio_basic -- ftl/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 7 00:15:08.875 05:02:47 ftl.ftl_fio_basic -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:08.875 05:02:47 ftl.ftl_fio_basic -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:08.875 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:08.875 05:02:47 ftl.ftl_fio_basic -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
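The fio.sh xtrace above shows how the script's third positional argument selects the fio job list: an associative array keyed by suite name. A minimal sketch of that selection pattern, with values copied from the trace (fio.sh@11..14 and @23..25) but not the verbatim script:

  declare -A suite
  suite['basic']='randw-verify randw-verify-j2 randw-verify-depth128'
  suite['extended']='drive-prep randw-verify-qd128-ext randw-verify-qd2048-ext randw randr randrw unmap'
  suite['nightly']='drive-prep randw-verify-qd256-nght randw-verify-qd256-nght randw-verify-qd256-nght'

  device=$1           # base bdev PCI address,  0000:00:11.0 in this run
  cache_device=$2     # NV cache PCI address,   0000:00:10.0 in this run
  tests=${suite[$3]}  # suite name, 'basic' in this run

  # fio.sh@34 then guards against an unknown suite name; the message text here is illustrative:
  [ -z "$tests" ] && { echo "unknown test suite: $3" >&2; exit 1; }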
00:15:08.875 05:02:47 ftl.ftl_fio_basic -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:08.875 05:02:47 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:15:08.875 [2024-12-06 05:02:47.090553] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:15:08.875 [2024-12-06 05:02:47.090689] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83959 ] 00:15:09.137 [2024-12-06 05:02:47.224790] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:09.137 [2024-12-06 05:02:47.288151] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:15:09.137 [2024-12-06 05:02:47.288450] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:15:09.137 [2024-12-06 05:02:47.288503] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:15:09.709 05:02:47 ftl.ftl_fio_basic -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:09.709 05:02:47 ftl.ftl_fio_basic -- common/autotest_common.sh@864 -- # return 0 00:15:09.709 05:02:47 ftl.ftl_fio_basic -- ftl/fio.sh@48 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:15:09.709 05:02:47 ftl.ftl_fio_basic -- ftl/common.sh@54 -- # local name=nvme0 00:15:09.709 05:02:47 ftl.ftl_fio_basic -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:15:09.709 05:02:47 ftl.ftl_fio_basic -- ftl/common.sh@56 -- # local size=103424 00:15:09.709 05:02:47 ftl.ftl_fio_basic -- ftl/common.sh@59 -- # local base_bdev 00:15:09.709 05:02:47 ftl.ftl_fio_basic -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:15:09.970 05:02:48 ftl.ftl_fio_basic -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:15:09.970 05:02:48 ftl.ftl_fio_basic -- ftl/common.sh@62 -- # local base_size 00:15:09.970 05:02:48 ftl.ftl_fio_basic -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:15:09.970 05:02:48 ftl.ftl_fio_basic -- common/autotest_common.sh@1378 -- # local bdev_name=nvme0n1 00:15:09.970 05:02:48 ftl.ftl_fio_basic -- common/autotest_common.sh@1379 -- # local bdev_info 00:15:09.970 05:02:48 ftl.ftl_fio_basic -- common/autotest_common.sh@1380 -- # local bs 00:15:09.970 05:02:48 ftl.ftl_fio_basic -- common/autotest_common.sh@1381 -- # local nb 00:15:09.970 05:02:48 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:15:10.231 05:02:48 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:15:10.231 { 00:15:10.231 "name": "nvme0n1", 00:15:10.231 "aliases": [ 00:15:10.231 "e2ec83d5-b9ee-4bd3-b9bf-3c575b44bb01" 00:15:10.231 ], 00:15:10.231 "product_name": "NVMe disk", 00:15:10.231 "block_size": 4096, 00:15:10.231 "num_blocks": 1310720, 00:15:10.231 "uuid": "e2ec83d5-b9ee-4bd3-b9bf-3c575b44bb01", 00:15:10.231 "numa_id": -1, 00:15:10.231 "assigned_rate_limits": { 00:15:10.231 "rw_ios_per_sec": 0, 00:15:10.231 "rw_mbytes_per_sec": 0, 00:15:10.232 "r_mbytes_per_sec": 0, 00:15:10.232 "w_mbytes_per_sec": 0 00:15:10.232 }, 00:15:10.232 "claimed": false, 00:15:10.232 "zoned": false, 00:15:10.232 "supported_io_types": { 00:15:10.232 "read": true, 00:15:10.232 "write": true, 00:15:10.232 "unmap": true, 00:15:10.232 "flush": true, 00:15:10.232 "reset": true, 00:15:10.232 "nvme_admin": true, 00:15:10.232 "nvme_io": true, 00:15:10.232 "nvme_io_md": 
false, 00:15:10.232 "write_zeroes": true, 00:15:10.232 "zcopy": false, 00:15:10.232 "get_zone_info": false, 00:15:10.232 "zone_management": false, 00:15:10.232 "zone_append": false, 00:15:10.232 "compare": true, 00:15:10.232 "compare_and_write": false, 00:15:10.232 "abort": true, 00:15:10.232 "seek_hole": false, 00:15:10.232 "seek_data": false, 00:15:10.232 "copy": true, 00:15:10.232 "nvme_iov_md": false 00:15:10.232 }, 00:15:10.232 "driver_specific": { 00:15:10.232 "nvme": [ 00:15:10.232 { 00:15:10.232 "pci_address": "0000:00:11.0", 00:15:10.232 "trid": { 00:15:10.232 "trtype": "PCIe", 00:15:10.232 "traddr": "0000:00:11.0" 00:15:10.232 }, 00:15:10.232 "ctrlr_data": { 00:15:10.232 "cntlid": 0, 00:15:10.232 "vendor_id": "0x1b36", 00:15:10.232 "model_number": "QEMU NVMe Ctrl", 00:15:10.232 "serial_number": "12341", 00:15:10.232 "firmware_revision": "8.0.0", 00:15:10.232 "subnqn": "nqn.2019-08.org.qemu:12341", 00:15:10.232 "oacs": { 00:15:10.232 "security": 0, 00:15:10.232 "format": 1, 00:15:10.232 "firmware": 0, 00:15:10.232 "ns_manage": 1 00:15:10.232 }, 00:15:10.232 "multi_ctrlr": false, 00:15:10.232 "ana_reporting": false 00:15:10.232 }, 00:15:10.232 "vs": { 00:15:10.232 "nvme_version": "1.4" 00:15:10.232 }, 00:15:10.232 "ns_data": { 00:15:10.232 "id": 1, 00:15:10.232 "can_share": false 00:15:10.232 } 00:15:10.232 } 00:15:10.232 ], 00:15:10.232 "mp_policy": "active_passive" 00:15:10.232 } 00:15:10.232 } 00:15:10.232 ]' 00:15:10.232 05:02:48 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:15:10.232 05:02:48 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # bs=4096 00:15:10.232 05:02:48 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:15:10.232 05:02:48 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # nb=1310720 00:15:10.232 05:02:48 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bdev_size=5120 00:15:10.232 05:02:48 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # echo 5120 00:15:10.232 05:02:48 ftl.ftl_fio_basic -- ftl/common.sh@63 -- # base_size=5120 00:15:10.232 05:02:48 ftl.ftl_fio_basic -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:15:10.232 05:02:48 ftl.ftl_fio_basic -- ftl/common.sh@67 -- # clear_lvols 00:15:10.232 05:02:48 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:15:10.232 05:02:48 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:15:10.490 05:02:48 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # stores= 00:15:10.490 05:02:48 ftl.ftl_fio_basic -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:15:10.748 05:02:48 ftl.ftl_fio_basic -- ftl/common.sh@68 -- # lvs=3240a4f5-db62-48df-9298-e5dfae3a481d 00:15:10.748 05:02:48 ftl.ftl_fio_basic -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 3240a4f5-db62-48df-9298-e5dfae3a481d 00:15:11.005 05:02:49 ftl.ftl_fio_basic -- ftl/fio.sh@48 -- # split_bdev=4f80daee-3ab9-48a0-be2f-6201b7fb58cf 00:15:11.005 05:02:49 ftl.ftl_fio_basic -- ftl/fio.sh@49 -- # create_nv_cache_bdev nvc0 0000:00:10.0 4f80daee-3ab9-48a0-be2f-6201b7fb58cf 00:15:11.005 05:02:49 ftl.ftl_fio_basic -- ftl/common.sh@35 -- # local name=nvc0 00:15:11.005 05:02:49 ftl.ftl_fio_basic -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:15:11.005 05:02:49 ftl.ftl_fio_basic -- ftl/common.sh@37 -- # local base_bdev=4f80daee-3ab9-48a0-be2f-6201b7fb58cf 00:15:11.005 05:02:49 
ftl.ftl_fio_basic -- ftl/common.sh@38 -- # local cache_size= 00:15:11.005 05:02:49 ftl.ftl_fio_basic -- ftl/common.sh@41 -- # get_bdev_size 4f80daee-3ab9-48a0-be2f-6201b7fb58cf 00:15:11.005 05:02:49 ftl.ftl_fio_basic -- common/autotest_common.sh@1378 -- # local bdev_name=4f80daee-3ab9-48a0-be2f-6201b7fb58cf 00:15:11.005 05:02:49 ftl.ftl_fio_basic -- common/autotest_common.sh@1379 -- # local bdev_info 00:15:11.005 05:02:49 ftl.ftl_fio_basic -- common/autotest_common.sh@1380 -- # local bs 00:15:11.005 05:02:49 ftl.ftl_fio_basic -- common/autotest_common.sh@1381 -- # local nb 00:15:11.005 05:02:49 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 4f80daee-3ab9-48a0-be2f-6201b7fb58cf 00:15:11.263 05:02:49 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:15:11.263 { 00:15:11.263 "name": "4f80daee-3ab9-48a0-be2f-6201b7fb58cf", 00:15:11.263 "aliases": [ 00:15:11.263 "lvs/nvme0n1p0" 00:15:11.263 ], 00:15:11.263 "product_name": "Logical Volume", 00:15:11.263 "block_size": 4096, 00:15:11.263 "num_blocks": 26476544, 00:15:11.263 "uuid": "4f80daee-3ab9-48a0-be2f-6201b7fb58cf", 00:15:11.263 "assigned_rate_limits": { 00:15:11.263 "rw_ios_per_sec": 0, 00:15:11.263 "rw_mbytes_per_sec": 0, 00:15:11.263 "r_mbytes_per_sec": 0, 00:15:11.263 "w_mbytes_per_sec": 0 00:15:11.263 }, 00:15:11.263 "claimed": false, 00:15:11.263 "zoned": false, 00:15:11.263 "supported_io_types": { 00:15:11.263 "read": true, 00:15:11.263 "write": true, 00:15:11.263 "unmap": true, 00:15:11.263 "flush": false, 00:15:11.263 "reset": true, 00:15:11.263 "nvme_admin": false, 00:15:11.263 "nvme_io": false, 00:15:11.263 "nvme_io_md": false, 00:15:11.263 "write_zeroes": true, 00:15:11.263 "zcopy": false, 00:15:11.263 "get_zone_info": false, 00:15:11.263 "zone_management": false, 00:15:11.263 "zone_append": false, 00:15:11.263 "compare": false, 00:15:11.263 "compare_and_write": false, 00:15:11.263 "abort": false, 00:15:11.263 "seek_hole": true, 00:15:11.263 "seek_data": true, 00:15:11.263 "copy": false, 00:15:11.263 "nvme_iov_md": false 00:15:11.263 }, 00:15:11.263 "driver_specific": { 00:15:11.263 "lvol": { 00:15:11.263 "lvol_store_uuid": "3240a4f5-db62-48df-9298-e5dfae3a481d", 00:15:11.263 "base_bdev": "nvme0n1", 00:15:11.263 "thin_provision": true, 00:15:11.263 "num_allocated_clusters": 0, 00:15:11.263 "snapshot": false, 00:15:11.263 "clone": false, 00:15:11.263 "esnap_clone": false 00:15:11.263 } 00:15:11.263 } 00:15:11.263 } 00:15:11.263 ]' 00:15:11.263 05:02:49 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:15:11.263 05:02:49 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # bs=4096 00:15:11.263 05:02:49 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:15:11.263 05:02:49 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # nb=26476544 00:15:11.263 05:02:49 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:15:11.263 05:02:49 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # echo 103424 00:15:11.263 05:02:49 ftl.ftl_fio_basic -- ftl/common.sh@41 -- # local base_size=5171 00:15:11.263 05:02:49 ftl.ftl_fio_basic -- ftl/common.sh@44 -- # local nvc_bdev 00:15:11.263 05:02:49 ftl.ftl_fio_basic -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:15:11.521 05:02:49 ftl.ftl_fio_basic -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:15:11.521 05:02:49 ftl.ftl_fio_basic -- 
ftl/common.sh@47 -- # [[ -z '' ]] 00:15:11.521 05:02:49 ftl.ftl_fio_basic -- ftl/common.sh@48 -- # get_bdev_size 4f80daee-3ab9-48a0-be2f-6201b7fb58cf 00:15:11.521 05:02:49 ftl.ftl_fio_basic -- common/autotest_common.sh@1378 -- # local bdev_name=4f80daee-3ab9-48a0-be2f-6201b7fb58cf 00:15:11.521 05:02:49 ftl.ftl_fio_basic -- common/autotest_common.sh@1379 -- # local bdev_info 00:15:11.521 05:02:49 ftl.ftl_fio_basic -- common/autotest_common.sh@1380 -- # local bs 00:15:11.521 05:02:49 ftl.ftl_fio_basic -- common/autotest_common.sh@1381 -- # local nb 00:15:11.521 05:02:49 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 4f80daee-3ab9-48a0-be2f-6201b7fb58cf 00:15:11.779 05:02:49 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:15:11.779 { 00:15:11.779 "name": "4f80daee-3ab9-48a0-be2f-6201b7fb58cf", 00:15:11.779 "aliases": [ 00:15:11.779 "lvs/nvme0n1p0" 00:15:11.779 ], 00:15:11.779 "product_name": "Logical Volume", 00:15:11.779 "block_size": 4096, 00:15:11.779 "num_blocks": 26476544, 00:15:11.779 "uuid": "4f80daee-3ab9-48a0-be2f-6201b7fb58cf", 00:15:11.779 "assigned_rate_limits": { 00:15:11.779 "rw_ios_per_sec": 0, 00:15:11.779 "rw_mbytes_per_sec": 0, 00:15:11.779 "r_mbytes_per_sec": 0, 00:15:11.779 "w_mbytes_per_sec": 0 00:15:11.779 }, 00:15:11.779 "claimed": false, 00:15:11.779 "zoned": false, 00:15:11.779 "supported_io_types": { 00:15:11.779 "read": true, 00:15:11.779 "write": true, 00:15:11.779 "unmap": true, 00:15:11.779 "flush": false, 00:15:11.779 "reset": true, 00:15:11.779 "nvme_admin": false, 00:15:11.779 "nvme_io": false, 00:15:11.779 "nvme_io_md": false, 00:15:11.779 "write_zeroes": true, 00:15:11.779 "zcopy": false, 00:15:11.779 "get_zone_info": false, 00:15:11.779 "zone_management": false, 00:15:11.779 "zone_append": false, 00:15:11.779 "compare": false, 00:15:11.779 "compare_and_write": false, 00:15:11.779 "abort": false, 00:15:11.779 "seek_hole": true, 00:15:11.779 "seek_data": true, 00:15:11.779 "copy": false, 00:15:11.779 "nvme_iov_md": false 00:15:11.779 }, 00:15:11.779 "driver_specific": { 00:15:11.779 "lvol": { 00:15:11.779 "lvol_store_uuid": "3240a4f5-db62-48df-9298-e5dfae3a481d", 00:15:11.779 "base_bdev": "nvme0n1", 00:15:11.779 "thin_provision": true, 00:15:11.779 "num_allocated_clusters": 0, 00:15:11.779 "snapshot": false, 00:15:11.779 "clone": false, 00:15:11.779 "esnap_clone": false 00:15:11.779 } 00:15:11.779 } 00:15:11.779 } 00:15:11.779 ]' 00:15:11.779 05:02:49 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:15:11.779 05:02:49 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # bs=4096 00:15:11.779 05:02:49 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:15:11.779 05:02:49 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # nb=26476544 00:15:11.779 05:02:49 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:15:11.779 05:02:49 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # echo 103424 00:15:11.779 05:02:49 ftl.ftl_fio_basic -- ftl/common.sh@48 -- # cache_size=5171 00:15:11.779 05:02:49 ftl.ftl_fio_basic -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:15:12.036 05:02:50 ftl.ftl_fio_basic -- ftl/fio.sh@49 -- # nv_cache=nvc0n1p0 00:15:12.036 05:02:50 ftl.ftl_fio_basic -- ftl/fio.sh@51 -- # l2p_percentage=60 00:15:12.036 05:02:50 ftl.ftl_fio_basic -- ftl/fio.sh@52 -- # '[' -eq 1 ']' 00:15:12.036 
/home/vagrant/spdk_repo/spdk/test/ftl/fio.sh: line 52: [: -eq: unary operator expected 00:15:12.036 05:02:50 ftl.ftl_fio_basic -- ftl/fio.sh@56 -- # get_bdev_size 4f80daee-3ab9-48a0-be2f-6201b7fb58cf 00:15:12.036 05:02:50 ftl.ftl_fio_basic -- common/autotest_common.sh@1378 -- # local bdev_name=4f80daee-3ab9-48a0-be2f-6201b7fb58cf 00:15:12.036 05:02:50 ftl.ftl_fio_basic -- common/autotest_common.sh@1379 -- # local bdev_info 00:15:12.036 05:02:50 ftl.ftl_fio_basic -- common/autotest_common.sh@1380 -- # local bs 00:15:12.036 05:02:50 ftl.ftl_fio_basic -- common/autotest_common.sh@1381 -- # local nb 00:15:12.036 05:02:50 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 4f80daee-3ab9-48a0-be2f-6201b7fb58cf 00:15:12.036 05:02:50 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:15:12.036 { 00:15:12.036 "name": "4f80daee-3ab9-48a0-be2f-6201b7fb58cf", 00:15:12.036 "aliases": [ 00:15:12.036 "lvs/nvme0n1p0" 00:15:12.036 ], 00:15:12.036 "product_name": "Logical Volume", 00:15:12.036 "block_size": 4096, 00:15:12.036 "num_blocks": 26476544, 00:15:12.036 "uuid": "4f80daee-3ab9-48a0-be2f-6201b7fb58cf", 00:15:12.036 "assigned_rate_limits": { 00:15:12.036 "rw_ios_per_sec": 0, 00:15:12.036 "rw_mbytes_per_sec": 0, 00:15:12.036 "r_mbytes_per_sec": 0, 00:15:12.036 "w_mbytes_per_sec": 0 00:15:12.036 }, 00:15:12.036 "claimed": false, 00:15:12.036 "zoned": false, 00:15:12.036 "supported_io_types": { 00:15:12.036 "read": true, 00:15:12.036 "write": true, 00:15:12.036 "unmap": true, 00:15:12.036 "flush": false, 00:15:12.036 "reset": true, 00:15:12.036 "nvme_admin": false, 00:15:12.036 "nvme_io": false, 00:15:12.036 "nvme_io_md": false, 00:15:12.036 "write_zeroes": true, 00:15:12.036 "zcopy": false, 00:15:12.036 "get_zone_info": false, 00:15:12.036 "zone_management": false, 00:15:12.036 "zone_append": false, 00:15:12.036 "compare": false, 00:15:12.036 "compare_and_write": false, 00:15:12.036 "abort": false, 00:15:12.036 "seek_hole": true, 00:15:12.036 "seek_data": true, 00:15:12.036 "copy": false, 00:15:12.036 "nvme_iov_md": false 00:15:12.036 }, 00:15:12.036 "driver_specific": { 00:15:12.036 "lvol": { 00:15:12.036 "lvol_store_uuid": "3240a4f5-db62-48df-9298-e5dfae3a481d", 00:15:12.036 "base_bdev": "nvme0n1", 00:15:12.036 "thin_provision": true, 00:15:12.036 "num_allocated_clusters": 0, 00:15:12.036 "snapshot": false, 00:15:12.036 "clone": false, 00:15:12.036 "esnap_clone": false 00:15:12.036 } 00:15:12.036 } 00:15:12.036 } 00:15:12.036 ]' 00:15:12.036 05:02:50 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:15:12.296 05:02:50 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # bs=4096 00:15:12.296 05:02:50 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:15:12.296 05:02:50 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # nb=26476544 00:15:12.296 05:02:50 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:15:12.296 05:02:50 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # echo 103424 00:15:12.296 05:02:50 ftl.ftl_fio_basic -- ftl/fio.sh@56 -- # l2p_dram_size_mb=60 00:15:12.296 05:02:50 ftl.ftl_fio_basic -- ftl/fio.sh@58 -- # '[' -z '' ']' 00:15:12.296 05:02:50 ftl.ftl_fio_basic -- ftl/fio.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 4f80daee-3ab9-48a0-be2f-6201b7fb58cf -c nvc0n1p0 --l2p_dram_limit 60 00:15:12.296 [2024-12-06 05:02:50.493367] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:12.296 [2024-12-06 05:02:50.493504] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:15:12.296 [2024-12-06 05:02:50.493521] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:15:12.296 [2024-12-06 05:02:50.493530] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:12.296 [2024-12-06 05:02:50.493600] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:12.296 [2024-12-06 05:02:50.493610] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:15:12.296 [2024-12-06 05:02:50.493628] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.046 ms 00:15:12.296 [2024-12-06 05:02:50.493647] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:12.296 [2024-12-06 05:02:50.493702] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:15:12.296 [2024-12-06 05:02:50.493929] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:15:12.296 [2024-12-06 05:02:50.493947] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:12.296 [2024-12-06 05:02:50.493955] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:15:12.296 [2024-12-06 05:02:50.493963] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.251 ms 00:15:12.296 [2024-12-06 05:02:50.493970] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:12.296 [2024-12-06 05:02:50.494032] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 1e391b3a-2210-4ea3-8819-0ca8368d3d28 00:15:12.296 [2024-12-06 05:02:50.495345] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:12.296 [2024-12-06 05:02:50.495375] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:15:12.296 [2024-12-06 05:02:50.495386] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:15:12.296 [2024-12-06 05:02:50.495394] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:12.296 [2024-12-06 05:02:50.502212] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:12.296 [2024-12-06 05:02:50.502253] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:15:12.296 [2024-12-06 05:02:50.502274] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.757 ms 00:15:12.296 [2024-12-06 05:02:50.502281] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:12.296 [2024-12-06 05:02:50.502371] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:12.296 [2024-12-06 05:02:50.502383] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:15:12.296 [2024-12-06 05:02:50.502392] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:15:12.296 [2024-12-06 05:02:50.502407] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:12.296 [2024-12-06 05:02:50.502463] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:12.296 [2024-12-06 05:02:50.502472] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:15:12.296 [2024-12-06 05:02:50.502490] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:15:12.296 [2024-12-06 05:02:50.502496] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:15:12.296 [2024-12-06 05:02:50.502524] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:15:12.296 [2024-12-06 05:02:50.504152] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:12.296 [2024-12-06 05:02:50.504279] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:15:12.296 [2024-12-06 05:02:50.504291] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.635 ms 00:15:12.296 [2024-12-06 05:02:50.504310] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:12.296 [2024-12-06 05:02:50.504347] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:12.296 [2024-12-06 05:02:50.504357] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:15:12.296 [2024-12-06 05:02:50.504373] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:15:12.296 [2024-12-06 05:02:50.504383] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:12.296 [2024-12-06 05:02:50.504407] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:15:12.297 [2024-12-06 05:02:50.504522] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:15:12.297 [2024-12-06 05:02:50.504533] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:15:12.297 [2024-12-06 05:02:50.504546] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:15:12.297 [2024-12-06 05:02:50.504554] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:15:12.297 [2024-12-06 05:02:50.504566] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:15:12.297 [2024-12-06 05:02:50.504572] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:15:12.297 [2024-12-06 05:02:50.504582] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:15:12.297 [2024-12-06 05:02:50.504588] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:15:12.297 [2024-12-06 05:02:50.504595] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:15:12.297 [2024-12-06 05:02:50.504602] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:12.297 [2024-12-06 05:02:50.504609] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:15:12.297 [2024-12-06 05:02:50.504616] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.196 ms 00:15:12.297 [2024-12-06 05:02:50.504623] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:12.297 [2024-12-06 05:02:50.504714] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:12.297 [2024-12-06 05:02:50.504726] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:15:12.297 [2024-12-06 05:02:50.504742] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.072 ms 00:15:12.297 [2024-12-06 05:02:50.504749] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:12.297 [2024-12-06 05:02:50.504848] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:15:12.297 [2024-12-06 05:02:50.504859] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:15:12.297 
[2024-12-06 05:02:50.504868] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:15:12.297 [2024-12-06 05:02:50.504876] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:12.297 [2024-12-06 05:02:50.504883] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:15:12.297 [2024-12-06 05:02:50.504890] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:15:12.297 [2024-12-06 05:02:50.504897] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:15:12.297 [2024-12-06 05:02:50.504906] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:15:12.297 [2024-12-06 05:02:50.504913] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:15:12.297 [2024-12-06 05:02:50.504920] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:15:12.297 [2024-12-06 05:02:50.504926] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:15:12.297 [2024-12-06 05:02:50.504934] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:15:12.297 [2024-12-06 05:02:50.504940] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:15:12.297 [2024-12-06 05:02:50.504951] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:15:12.297 [2024-12-06 05:02:50.504957] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:15:12.297 [2024-12-06 05:02:50.504965] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:12.297 [2024-12-06 05:02:50.504971] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:15:12.297 [2024-12-06 05:02:50.504979] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:15:12.297 [2024-12-06 05:02:50.504984] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:12.297 [2024-12-06 05:02:50.504993] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:15:12.297 [2024-12-06 05:02:50.504999] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:15:12.297 [2024-12-06 05:02:50.505011] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:15:12.297 [2024-12-06 05:02:50.505016] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:15:12.297 [2024-12-06 05:02:50.505024] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:15:12.297 [2024-12-06 05:02:50.505031] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:15:12.297 [2024-12-06 05:02:50.505039] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:15:12.297 [2024-12-06 05:02:50.505044] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:15:12.297 [2024-12-06 05:02:50.505052] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:15:12.297 [2024-12-06 05:02:50.505057] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:15:12.297 [2024-12-06 05:02:50.505066] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:15:12.297 [2024-12-06 05:02:50.505073] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:15:12.297 [2024-12-06 05:02:50.505080] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:15:12.297 [2024-12-06 05:02:50.505086] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:15:12.297 [2024-12-06 05:02:50.505094] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 
0.25 MiB 00:15:12.297 [2024-12-06 05:02:50.505100] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:15:12.297 [2024-12-06 05:02:50.505108] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:15:12.297 [2024-12-06 05:02:50.505114] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:15:12.297 [2024-12-06 05:02:50.505121] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:15:12.297 [2024-12-06 05:02:50.505127] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:15:12.297 [2024-12-06 05:02:50.505134] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:12.297 [2024-12-06 05:02:50.505140] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:15:12.297 [2024-12-06 05:02:50.505147] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:15:12.297 [2024-12-06 05:02:50.505154] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:12.297 [2024-12-06 05:02:50.505161] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:15:12.297 [2024-12-06 05:02:50.505177] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:15:12.297 [2024-12-06 05:02:50.505188] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:15:12.297 [2024-12-06 05:02:50.505196] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:12.297 [2024-12-06 05:02:50.505204] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:15:12.297 [2024-12-06 05:02:50.505210] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:15:12.297 [2024-12-06 05:02:50.505217] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:15:12.297 [2024-12-06 05:02:50.505223] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:15:12.297 [2024-12-06 05:02:50.505230] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:15:12.297 [2024-12-06 05:02:50.505236] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:15:12.297 [2024-12-06 05:02:50.505248] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:15:12.297 [2024-12-06 05:02:50.505256] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:15:12.297 [2024-12-06 05:02:50.505269] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:15:12.297 [2024-12-06 05:02:50.505275] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:15:12.297 [2024-12-06 05:02:50.505282] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:15:12.297 [2024-12-06 05:02:50.505287] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:15:12.297 [2024-12-06 05:02:50.505293] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:15:12.297 [2024-12-06 05:02:50.505298] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:15:12.297 [2024-12-06 
05:02:50.505308] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:15:12.297 [2024-12-06 05:02:50.505314] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:15:12.297 [2024-12-06 05:02:50.505321] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:15:12.297 [2024-12-06 05:02:50.505326] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:15:12.297 [2024-12-06 05:02:50.505332] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:15:12.297 [2024-12-06 05:02:50.505337] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:15:12.297 [2024-12-06 05:02:50.505344] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:15:12.297 [2024-12-06 05:02:50.505349] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:15:12.297 [2024-12-06 05:02:50.505355] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:15:12.297 [2024-12-06 05:02:50.505371] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:15:12.297 [2024-12-06 05:02:50.505378] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:15:12.297 [2024-12-06 05:02:50.505384] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:15:12.297 [2024-12-06 05:02:50.505391] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:15:12.297 [2024-12-06 05:02:50.505396] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:15:12.297 [2024-12-06 05:02:50.505404] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:12.297 [2024-12-06 05:02:50.505411] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:15:12.297 [2024-12-06 05:02:50.505420] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.605 ms 00:15:12.297 [2024-12-06 05:02:50.505425] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:12.297 [2024-12-06 05:02:50.505487] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
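The layout numbers above are internally consistent: with 20971520 L2P entries of 4 bytes each (one entry per 4 KiB logical block, per the "L2P entries" and "L2P address size" lines in the dump), the l2p region comes to exactly the 80.00 MiB shown, and the addressable user capacity to 80 GiB, matching the 20971520-block FTL disk that bdev_get_bdevs reports below. A quick shell check, using only values taken from the log:

# One 4-byte L2P entry per 4 KiB logical block (figures from the layout dump above)
echo $(( 20971520 * 4 / 1024 / 1024 ))            # L2P table size in MiB -> 80
echo $(( 20971520 * 4096 / 1024 / 1024 / 1024 ))  # exposed capacity in GiB -> 80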
00:15:12.297 [2024-12-06 05:02:50.505495] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:15:14.822 [2024-12-06 05:02:52.802037] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:14.822 [2024-12-06 05:02:52.802100] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:15:14.822 [2024-12-06 05:02:52.802116] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2296.537 ms 00:15:14.822 [2024-12-06 05:02:52.802126] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:14.822 [2024-12-06 05:02:52.823693] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:14.822 [2024-12-06 05:02:52.823765] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:15:14.822 [2024-12-06 05:02:52.823814] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.446 ms 00:15:14.822 [2024-12-06 05:02:52.823832] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:14.822 [2024-12-06 05:02:52.824031] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:14.822 [2024-12-06 05:02:52.824059] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:15:14.822 [2024-12-06 05:02:52.824079] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.124 ms 00:15:14.822 [2024-12-06 05:02:52.824093] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:14.822 [2024-12-06 05:02:52.836943] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:14.822 [2024-12-06 05:02:52.837151] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:15:14.822 [2024-12-06 05:02:52.837172] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.736 ms 00:15:14.822 [2024-12-06 05:02:52.837180] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:14.822 [2024-12-06 05:02:52.837233] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:14.822 [2024-12-06 05:02:52.837242] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:15:14.822 [2024-12-06 05:02:52.837253] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:15:14.822 [2024-12-06 05:02:52.837260] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:14.822 [2024-12-06 05:02:52.837740] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:14.822 [2024-12-06 05:02:52.837763] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:15:14.822 [2024-12-06 05:02:52.837776] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.407 ms 00:15:14.822 [2024-12-06 05:02:52.837785] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:14.822 [2024-12-06 05:02:52.837918] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:14.822 [2024-12-06 05:02:52.837931] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:15:14.822 [2024-12-06 05:02:52.837956] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.099 ms 00:15:14.822 [2024-12-06 05:02:52.837965] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:14.822 [2024-12-06 05:02:52.844862] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:14.822 [2024-12-06 05:02:52.844901] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:15:14.822 [2024-12-06 
05:02:52.844913] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.848 ms 00:15:14.822 [2024-12-06 05:02:52.844920] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:14.822 [2024-12-06 05:02:52.853911] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:15:14.822 [2024-12-06 05:02:52.871049] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:14.822 [2024-12-06 05:02:52.871220] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:15:14.822 [2024-12-06 05:02:52.871236] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.042 ms 00:15:14.822 [2024-12-06 05:02:52.871247] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:14.822 [2024-12-06 05:02:52.910001] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:14.822 [2024-12-06 05:02:52.910141] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:15:14.822 [2024-12-06 05:02:52.910160] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.709 ms 00:15:14.822 [2024-12-06 05:02:52.910174] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:14.822 [2024-12-06 05:02:52.910606] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:14.822 [2024-12-06 05:02:52.910646] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:15:14.822 [2024-12-06 05:02:52.910658] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.153 ms 00:15:14.822 [2024-12-06 05:02:52.910697] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:14.822 [2024-12-06 05:02:52.914057] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:14.822 [2024-12-06 05:02:52.914095] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:15:14.822 [2024-12-06 05:02:52.914106] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.328 ms 00:15:14.822 [2024-12-06 05:02:52.914118] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:14.822 [2024-12-06 05:02:52.916571] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:14.822 [2024-12-06 05:02:52.916604] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:15:14.822 [2024-12-06 05:02:52.916614] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.398 ms 00:15:14.822 [2024-12-06 05:02:52.916623] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:14.822 [2024-12-06 05:02:52.916995] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:14.822 [2024-12-06 05:02:52.917017] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:15:14.822 [2024-12-06 05:02:52.917026] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.298 ms 00:15:14.822 [2024-12-06 05:02:52.917038] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:14.822 [2024-12-06 05:02:52.942481] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:14.822 [2024-12-06 05:02:52.942530] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:15:14.822 [2024-12-06 05:02:52.942541] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.414 ms 00:15:14.822 [2024-12-06 05:02:52.942553] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:14.822 [2024-12-06 05:02:52.946462] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:14.822 [2024-12-06 05:02:52.946498] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:15:14.822 [2024-12-06 05:02:52.946508] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.847 ms 00:15:14.822 [2024-12-06 05:02:52.946520] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:14.822 [2024-12-06 05:02:52.949225] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:14.822 [2024-12-06 05:02:52.949256] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:15:14.822 [2024-12-06 05:02:52.949265] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.664 ms 00:15:14.822 [2024-12-06 05:02:52.949274] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:14.822 [2024-12-06 05:02:52.952136] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:14.822 [2024-12-06 05:02:52.952272] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:15:14.822 [2024-12-06 05:02:52.952288] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.812 ms 00:15:14.823 [2024-12-06 05:02:52.952299] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:14.823 [2024-12-06 05:02:52.952345] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:14.823 [2024-12-06 05:02:52.952357] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:15:14.823 [2024-12-06 05:02:52.952366] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:15:14.823 [2024-12-06 05:02:52.952376] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:14.823 [2024-12-06 05:02:52.952459] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:14.823 [2024-12-06 05:02:52.952471] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:15:14.823 [2024-12-06 05:02:52.952480] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:15:14.823 [2024-12-06 05:02:52.952489] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:14.823 [2024-12-06 05:02:52.953548] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2459.744 ms, result 0 00:15:14.823 { 00:15:14.823 "name": "ftl0", 00:15:14.823 "uuid": "1e391b3a-2210-4ea3-8819-0ca8368d3d28" 00:15:14.823 } 00:15:14.823 05:02:52 ftl.ftl_fio_basic -- ftl/fio.sh@65 -- # waitforbdev ftl0 00:15:14.823 05:02:52 ftl.ftl_fio_basic -- common/autotest_common.sh@899 -- # local bdev_name=ftl0 00:15:14.823 05:02:52 ftl.ftl_fio_basic -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:15:14.823 05:02:52 ftl.ftl_fio_basic -- common/autotest_common.sh@901 -- # local i 00:15:14.823 05:02:52 ftl.ftl_fio_basic -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:15:14.823 05:02:52 ftl.ftl_fio_basic -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:15:14.823 05:02:52 ftl.ftl_fio_basic -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:15:15.080 05:02:53 ftl.ftl_fio_basic -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:15:15.338 [ 00:15:15.338 { 00:15:15.338 "name": "ftl0", 00:15:15.338 "aliases": [ 00:15:15.338 "1e391b3a-2210-4ea3-8819-0ca8368d3d28" 00:15:15.338 ], 00:15:15.338 "product_name": "FTL disk", 00:15:15.338 
"block_size": 4096, 00:15:15.338 "num_blocks": 20971520, 00:15:15.338 "uuid": "1e391b3a-2210-4ea3-8819-0ca8368d3d28", 00:15:15.338 "assigned_rate_limits": { 00:15:15.338 "rw_ios_per_sec": 0, 00:15:15.338 "rw_mbytes_per_sec": 0, 00:15:15.338 "r_mbytes_per_sec": 0, 00:15:15.338 "w_mbytes_per_sec": 0 00:15:15.338 }, 00:15:15.338 "claimed": false, 00:15:15.338 "zoned": false, 00:15:15.338 "supported_io_types": { 00:15:15.338 "read": true, 00:15:15.338 "write": true, 00:15:15.338 "unmap": true, 00:15:15.338 "flush": true, 00:15:15.338 "reset": false, 00:15:15.338 "nvme_admin": false, 00:15:15.338 "nvme_io": false, 00:15:15.338 "nvme_io_md": false, 00:15:15.338 "write_zeroes": true, 00:15:15.338 "zcopy": false, 00:15:15.338 "get_zone_info": false, 00:15:15.338 "zone_management": false, 00:15:15.338 "zone_append": false, 00:15:15.338 "compare": false, 00:15:15.338 "compare_and_write": false, 00:15:15.338 "abort": false, 00:15:15.338 "seek_hole": false, 00:15:15.338 "seek_data": false, 00:15:15.338 "copy": false, 00:15:15.338 "nvme_iov_md": false 00:15:15.338 }, 00:15:15.338 "driver_specific": { 00:15:15.338 "ftl": { 00:15:15.338 "base_bdev": "4f80daee-3ab9-48a0-be2f-6201b7fb58cf", 00:15:15.338 "cache": "nvc0n1p0" 00:15:15.338 } 00:15:15.338 } 00:15:15.338 } 00:15:15.338 ] 00:15:15.338 05:02:53 ftl.ftl_fio_basic -- common/autotest_common.sh@907 -- # return 0 00:15:15.338 05:02:53 ftl.ftl_fio_basic -- ftl/fio.sh@68 -- # echo '{"subsystems": [' 00:15:15.338 05:02:53 ftl.ftl_fio_basic -- ftl/fio.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:15:15.338 05:02:53 ftl.ftl_fio_basic -- ftl/fio.sh@70 -- # echo ']}' 00:15:15.338 05:02:53 ftl.ftl_fio_basic -- ftl/fio.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:15:15.597 [2024-12-06 05:02:53.747875] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:15.597 [2024-12-06 05:02:53.747922] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:15:15.597 [2024-12-06 05:02:53.747938] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:15:15.597 [2024-12-06 05:02:53.747947] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:15.597 [2024-12-06 05:02:53.748004] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:15:15.597 [2024-12-06 05:02:53.748571] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:15.597 [2024-12-06 05:02:53.748601] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:15:15.597 [2024-12-06 05:02:53.748611] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.552 ms 00:15:15.597 [2024-12-06 05:02:53.748621] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:15.597 [2024-12-06 05:02:53.749142] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:15.597 [2024-12-06 05:02:53.749167] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:15:15.597 [2024-12-06 05:02:53.749177] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.488 ms 00:15:15.597 [2024-12-06 05:02:53.749187] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:15.597 [2024-12-06 05:02:53.752449] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:15.597 [2024-12-06 05:02:53.752473] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:15:15.597 [2024-12-06 
05:02:53.752483] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.228 ms 00:15:15.597 [2024-12-06 05:02:53.752493] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:15.597 [2024-12-06 05:02:53.758691] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:15.597 [2024-12-06 05:02:53.758723] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:15:15.597 [2024-12-06 05:02:53.758732] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.172 ms 00:15:15.597 [2024-12-06 05:02:53.758741] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:15.597 [2024-12-06 05:02:53.760995] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:15.597 [2024-12-06 05:02:53.761033] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:15:15.597 [2024-12-06 05:02:53.761043] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.158 ms 00:15:15.597 [2024-12-06 05:02:53.761052] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:15.597 [2024-12-06 05:02:53.765971] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:15.597 [2024-12-06 05:02:53.766115] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:15:15.597 [2024-12-06 05:02:53.766131] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.875 ms 00:15:15.597 [2024-12-06 05:02:53.766142] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:15.597 [2024-12-06 05:02:53.766319] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:15.597 [2024-12-06 05:02:53.766349] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:15:15.597 [2024-12-06 05:02:53.766358] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.130 ms 00:15:15.597 [2024-12-06 05:02:53.766367] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:15.597 [2024-12-06 05:02:53.768383] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:15.597 [2024-12-06 05:02:53.768418] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:15:15.597 [2024-12-06 05:02:53.768428] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.987 ms 00:15:15.597 [2024-12-06 05:02:53.768436] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:15.597 [2024-12-06 05:02:53.769727] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:15.597 [2024-12-06 05:02:53.769761] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:15:15.597 [2024-12-06 05:02:53.769770] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.252 ms 00:15:15.597 [2024-12-06 05:02:53.769779] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:15.597 [2024-12-06 05:02:53.771109] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:15.597 [2024-12-06 05:02:53.771141] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:15:15.597 [2024-12-06 05:02:53.771150] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.295 ms 00:15:15.597 [2024-12-06 05:02:53.771160] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:15.597 [2024-12-06 05:02:53.772546] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:15.597 [2024-12-06 05:02:53.772580] mngt/ftl_mngt.c: 428:trace_step: 
*NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:15:15.597 [2024-12-06 05:02:53.772588] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.291 ms 00:15:15.597 [2024-12-06 05:02:53.772598] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:15.597 [2024-12-06 05:02:53.772638] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:15:15.597 [2024-12-06 05:02:53.772654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:15:15.597 [2024-12-06 05:02:53.772681] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:15:15.597 [2024-12-06 05:02:53.772693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:15:15.597 [2024-12-06 05:02:53.772701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:15:15.597 [2024-12-06 05:02:53.772713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:15:15.597 [2024-12-06 05:02:53.772721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:15:15.597 [2024-12-06 05:02:53.772731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:15:15.597 [2024-12-06 05:02:53.772739] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:15:15.597 [2024-12-06 05:02:53.772749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:15:15.597 [2024-12-06 05:02:53.772757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:15:15.597 [2024-12-06 05:02:53.772766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:15:15.597 [2024-12-06 05:02:53.772774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:15:15.597 [2024-12-06 05:02:53.772783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:15:15.597 [2024-12-06 05:02:53.772791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:15:15.597 [2024-12-06 05:02:53.772801] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:15:15.597 [2024-12-06 05:02:53.772808] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:15:15.597 [2024-12-06 05:02:53.772817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:15:15.597 [2024-12-06 05:02:53.772824] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:15:15.597 [2024-12-06 05:02:53.772834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:15:15.597 [2024-12-06 05:02:53.772842] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:15:15.597 [2024-12-06 05:02:53.772852] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:15:15.597 [2024-12-06 05:02:53.772859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:15:15.597 [2024-12-06 
05:02:53.772868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:15:15.597 [2024-12-06 05:02:53.772890] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:15:15.597 [2024-12-06 05:02:53.772899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:15:15.598 [2024-12-06 05:02:53.772907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:15:15.598 [2024-12-06 05:02:53.772917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:15:15.598 [2024-12-06 05:02:53.772924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:15:15.598 [2024-12-06 05:02:53.772932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:15:15.598 [2024-12-06 05:02:53.772940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:15:15.598 [2024-12-06 05:02:53.772949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:15:15.598 [2024-12-06 05:02:53.772957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:15:15.598 [2024-12-06 05:02:53.772966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:15:15.598 [2024-12-06 05:02:53.772974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:15:15.598 [2024-12-06 05:02:53.772983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:15:15.598 [2024-12-06 05:02:53.772994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:15:15.598 [2024-12-06 05:02:53.773005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:15:15.598 [2024-12-06 05:02:53.773015] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:15:15.598 [2024-12-06 05:02:53.773025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:15:15.598 [2024-12-06 05:02:53.773032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:15:15.598 [2024-12-06 05:02:53.773041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:15:15.598 [2024-12-06 05:02:53.773049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:15:15.598 [2024-12-06 05:02:53.773058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:15:15.598 [2024-12-06 05:02:53.773065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:15:15.598 [2024-12-06 05:02:53.773074] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:15:15.598 [2024-12-06 05:02:53.773082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:15:15.598 [2024-12-06 05:02:53.773092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 
00:15:15.598 [2024-12-06 05:02:53.773100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:15:15.598 [2024-12-06 05:02:53.773109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:15:15.598 [2024-12-06 05:02:53.773117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:15:15.598 [2024-12-06 05:02:53.773125] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:15:15.598 [2024-12-06 05:02:53.773133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:15:15.598 [2024-12-06 05:02:53.773145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:15:15.598 [2024-12-06 05:02:53.773153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:15:15.598 [2024-12-06 05:02:53.773162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:15:15.598 [2024-12-06 05:02:53.773170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:15:15.598 [2024-12-06 05:02:53.773179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:15:15.598 [2024-12-06 05:02:53.773187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:15:15.598 [2024-12-06 05:02:53.773196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:15:15.598 [2024-12-06 05:02:53.773203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:15:15.598 [2024-12-06 05:02:53.773211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:15:15.598 [2024-12-06 05:02:53.773221] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:15:15.598 [2024-12-06 05:02:53.773229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:15:15.598 [2024-12-06 05:02:53.773236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:15:15.598 [2024-12-06 05:02:53.773244] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:15:15.598 [2024-12-06 05:02:53.773252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:15:15.598 [2024-12-06 05:02:53.773261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:15:15.598 [2024-12-06 05:02:53.773271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:15:15.598 [2024-12-06 05:02:53.773282] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:15:15.598 [2024-12-06 05:02:53.773291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:15:15.598 [2024-12-06 05:02:53.773299] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:15:15.598 [2024-12-06 05:02:53.773307] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 
wr_cnt: 0 state: free 00:15:15.598 [2024-12-06 05:02:53.773317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:15:15.598 [2024-12-06 05:02:53.773324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:15:15.598 [2024-12-06 05:02:53.773333] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:15:15.598 [2024-12-06 05:02:53.773340] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:15:15.598 [2024-12-06 05:02:53.773349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:15:15.598 [2024-12-06 05:02:53.773357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:15:15.598 [2024-12-06 05:02:53.773367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:15:15.598 [2024-12-06 05:02:53.773375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:15:15.598 [2024-12-06 05:02:53.773386] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:15:15.598 [2024-12-06 05:02:53.773394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:15:15.598 [2024-12-06 05:02:53.773403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:15:15.598 [2024-12-06 05:02:53.773411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:15:15.598 [2024-12-06 05:02:53.773422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:15:15.598 [2024-12-06 05:02:53.773429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:15:15.598 [2024-12-06 05:02:53.773437] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:15:15.598 [2024-12-06 05:02:53.773444] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:15:15.598 [2024-12-06 05:02:53.773454] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:15:15.598 [2024-12-06 05:02:53.773461] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:15:15.598 [2024-12-06 05:02:53.773469] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:15:15.598 [2024-12-06 05:02:53.773476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:15:15.598 [2024-12-06 05:02:53.773485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:15:15.598 [2024-12-06 05:02:53.773491] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:15:15.598 [2024-12-06 05:02:53.773501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:15:15.598 [2024-12-06 05:02:53.773507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:15:15.598 [2024-12-06 05:02:53.773516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 97: 0 / 261120 wr_cnt: 0 state: free 00:15:15.598 [2024-12-06 05:02:53.773523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:15:15.598 [2024-12-06 05:02:53.773532] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:15:15.598 [2024-12-06 05:02:53.773541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:15:15.598 [2024-12-06 05:02:53.773561] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:15:15.598 [2024-12-06 05:02:53.773570] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 1e391b3a-2210-4ea3-8819-0ca8368d3d28 00:15:15.598 [2024-12-06 05:02:53.773579] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:15:15.598 [2024-12-06 05:02:53.773587] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:15:15.598 [2024-12-06 05:02:53.773599] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:15:15.598 [2024-12-06 05:02:53.773607] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:15:15.598 [2024-12-06 05:02:53.773626] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:15:15.598 [2024-12-06 05:02:53.773634] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:15:15.598 [2024-12-06 05:02:53.773643] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:15:15.598 [2024-12-06 05:02:53.773650] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:15:15.598 [2024-12-06 05:02:53.773658] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:15:15.598 [2024-12-06 05:02:53.773690] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:15.598 [2024-12-06 05:02:53.773701] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:15:15.598 [2024-12-06 05:02:53.773709] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.054 ms 00:15:15.598 [2024-12-06 05:02:53.773719] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:15.598 [2024-12-06 05:02:53.775592] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:15.598 [2024-12-06 05:02:53.775618] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:15:15.598 [2024-12-06 05:02:53.775627] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.832 ms 00:15:15.598 [2024-12-06 05:02:53.775636] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:15.598 [2024-12-06 05:02:53.775782] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:15.598 [2024-12-06 05:02:53.775795] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:15:15.598 [2024-12-06 05:02:53.775804] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.081 ms 00:15:15.598 [2024-12-06 05:02:53.775814] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:15.598 [2024-12-06 05:02:53.782259] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:15.598 [2024-12-06 05:02:53.782293] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:15:15.598 [2024-12-06 05:02:53.782303] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:15.598 [2024-12-06 05:02:53.782314] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:15.598 
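The rollback steps above and below belong to the 'FTL shutdown' management process kicked off by the bdev_ftl_unload RPC in fio.sh@73. For orientation, the create/unload RPC pair that brackets a trace like this looks roughly as follows; this is a sketch only, the bdev names are placeholders (the real base/cache pair is shown in driver_specific above), and flag spellings should be checked against the rpc.py built-in help:

# Sketch: create an FTL bdev over a base bdev plus an NV-cache bdev, then tear it down.
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_create -b ftl0 -d base_bdev -c nvc0n1p0
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0   # confirm registration
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0  # starts an 'FTL shutdown' sequence like the one traced here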
[2024-12-06 05:02:53.782376] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:15.598 [2024-12-06 05:02:53.782387] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:15:15.598 [2024-12-06 05:02:53.782395] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:15.598 [2024-12-06 05:02:53.782406] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:15.598 [2024-12-06 05:02:53.782475] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:15.598 [2024-12-06 05:02:53.782493] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:15:15.598 [2024-12-06 05:02:53.782501] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:15.598 [2024-12-06 05:02:53.782521] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:15.598 [2024-12-06 05:02:53.782546] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:15.598 [2024-12-06 05:02:53.782557] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:15:15.598 [2024-12-06 05:02:53.782565] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:15.598 [2024-12-06 05:02:53.782575] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:15.598 [2024-12-06 05:02:53.794638] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:15.598 [2024-12-06 05:02:53.794694] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:15:15.598 [2024-12-06 05:02:53.794705] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:15.598 [2024-12-06 05:02:53.794715] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:15.598 [2024-12-06 05:02:53.804533] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:15.598 [2024-12-06 05:02:53.804588] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:15:15.598 [2024-12-06 05:02:53.804598] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:15.598 [2024-12-06 05:02:53.804608] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:15.598 [2024-12-06 05:02:53.804714] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:15.598 [2024-12-06 05:02:53.804731] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:15:15.598 [2024-12-06 05:02:53.804742] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:15.599 [2024-12-06 05:02:53.804752] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:15.599 [2024-12-06 05:02:53.804849] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:15.599 [2024-12-06 05:02:53.804862] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:15:15.599 [2024-12-06 05:02:53.804871] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:15.599 [2024-12-06 05:02:53.804881] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:15.599 [2024-12-06 05:02:53.804973] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:15.599 [2024-12-06 05:02:53.804985] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:15:15.599 [2024-12-06 05:02:53.804993] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:15.599 [2024-12-06 05:02:53.805016] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:15.599 [2024-12-06 05:02:53.805080] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:15.599 [2024-12-06 05:02:53.805093] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:15:15.599 [2024-12-06 05:02:53.805101] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:15.599 [2024-12-06 05:02:53.805110] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:15.599 [2024-12-06 05:02:53.805172] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:15.599 [2024-12-06 05:02:53.805186] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:15:15.599 [2024-12-06 05:02:53.805195] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:15.599 [2024-12-06 05:02:53.805207] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:15.599 [2024-12-06 05:02:53.805261] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:15.599 [2024-12-06 05:02:53.805275] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:15:15.599 [2024-12-06 05:02:53.805284] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:15.599 [2024-12-06 05:02:53.805294] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:15.599 [2024-12-06 05:02:53.805471] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 57.555 ms, result 0 00:15:15.599 true 00:15:15.856 05:02:53 ftl.ftl_fio_basic -- ftl/fio.sh@75 -- # killprocess 83959 00:15:15.856 05:02:53 ftl.ftl_fio_basic -- common/autotest_common.sh@950 -- # '[' -z 83959 ']' 00:15:15.856 05:02:53 ftl.ftl_fio_basic -- common/autotest_common.sh@954 -- # kill -0 83959 00:15:15.856 05:02:53 ftl.ftl_fio_basic -- common/autotest_common.sh@955 -- # uname 00:15:15.856 05:02:53 ftl.ftl_fio_basic -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:15.856 05:02:53 ftl.ftl_fio_basic -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 83959 00:15:15.856 killing process with pid 83959 00:15:15.856 05:02:53 ftl.ftl_fio_basic -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:15.856 05:02:53 ftl.ftl_fio_basic -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:15.856 05:02:53 ftl.ftl_fio_basic -- common/autotest_common.sh@968 -- # echo 'killing process with pid 83959' 00:15:15.856 05:02:53 ftl.ftl_fio_basic -- common/autotest_common.sh@969 -- # kill 83959 00:15:15.856 05:02:53 ftl.ftl_fio_basic -- common/autotest_common.sh@974 -- # wait 83959 00:15:19.136 05:02:56 ftl.ftl_fio_basic -- ftl/fio.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:15:19.136 05:02:56 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:15:19.136 05:02:56 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify 00:15:19.136 05:02:56 ftl.ftl_fio_basic -- common/autotest_common.sh@724 -- # xtrace_disable 00:15:19.136 05:02:56 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:15:19.136 05:02:56 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:15:19.136 05:02:56 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:15:19.136 05:02:56 ftl.ftl_fio_basic -- 
common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:15:19.136 05:02:56 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:15:19.136 05:02:56 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # local sanitizers 00:15:19.136 05:02:56 ftl.ftl_fio_basic -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:19.136 05:02:56 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # shift 00:15:19.136 05:02:56 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local asan_lib= 00:15:19.136 05:02:56 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:15:19.136 05:02:56 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:19.136 05:02:56 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # grep libasan 00:15:19.136 05:02:56 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:15:19.136 05:02:56 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:15:19.136 05:02:56 ftl.ftl_fio_basic -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:15:19.136 05:02:56 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # break 00:15:19.136 05:02:56 ftl.ftl_fio_basic -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:15:19.136 05:02:56 ftl.ftl_fio_basic -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:15:19.136 test: (g=0): rw=randwrite, bs=(R) 68.0KiB-68.0KiB, (W) 68.0KiB-68.0KiB, (T) 68.0KiB-68.0KiB, ioengine=spdk_bdev, iodepth=1 00:15:19.136 fio-3.35 00:15:19.136 Starting 1 thread 00:15:24.421 00:15:24.421 test: (groupid=0, jobs=1): err= 0: pid=84121: Fri Dec 6 05:03:02 2024 00:15:24.421 read: IOPS=856, BW=56.9MiB/s (59.6MB/s)(255MiB/4477msec) 00:15:24.421 slat (nsec): min=2863, max=25038, avg=3780.64, stdev=1576.11 00:15:24.421 clat (usec): min=274, max=1300, avg=530.73, stdev=119.81 00:15:24.421 lat (usec): min=277, max=1303, avg=534.51, stdev=119.91 00:15:24.421 clat percentiles (usec): 00:15:24.421 | 1.00th=[ 314], 5.00th=[ 343], 10.00th=[ 420], 20.00th=[ 469], 00:15:24.421 | 30.00th=[ 510], 40.00th=[ 519], 50.00th=[ 523], 60.00th=[ 529], 00:15:24.421 | 70.00th=[ 529], 80.00th=[ 537], 90.00th=[ 611], 95.00th=[ 816], 00:15:24.421 | 99.00th=[ 1004], 99.50th=[ 1074], 99.90th=[ 1205], 99.95th=[ 1287], 00:15:24.421 | 99.99th=[ 1303] 00:15:24.421 write: IOPS=862, BW=57.3MiB/s (60.1MB/s)(256MiB/4469msec); 0 zone resets 00:15:24.421 slat (usec): min=13, max=153, avg=19.23, stdev= 4.66 00:15:24.421 clat (usec): min=323, max=1992, avg=599.58, stdev=141.69 00:15:24.421 lat (usec): min=350, max=2009, avg=618.81, stdev=141.57 00:15:24.421 clat percentiles (usec): 00:15:24.421 | 1.00th=[ 388], 5.00th=[ 453], 10.00th=[ 478], 20.00th=[ 537], 00:15:24.421 | 30.00th=[ 545], 40.00th=[ 553], 50.00th=[ 586], 60.00th=[ 603], 00:15:24.421 | 70.00th=[ 611], 80.00th=[ 619], 90.00th=[ 685], 95.00th=[ 922], 00:15:24.421 | 99.00th=[ 1205], 99.50th=[ 1270], 99.90th=[ 1745], 99.95th=[ 1926], 00:15:24.421 | 99.99th=[ 1991] 00:15:24.421 bw ( KiB/s): min=55624, max=62016, per=100.00%, avg=58786.00, stdev=2005.05, samples=8 00:15:24.421 iops : min= 818, max= 912, avg=864.50, stdev=29.49, samples=8 00:15:24.421 lat (usec) : 500=19.81%, 750=72.26%, 1000=6.13% 00:15:24.421 
lat (msec) : 2=1.81% 00:15:24.421 cpu : usr=99.17%, sys=0.00%, ctx=9, majf=0, minf=1326 00:15:24.421 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:24.421 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:24.421 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:24.421 issued rwts: total=3833,3856,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:24.421 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:24.421 00:15:24.421 Run status group 0 (all jobs): 00:15:24.421 READ: bw=56.9MiB/s (59.6MB/s), 56.9MiB/s-56.9MiB/s (59.6MB/s-59.6MB/s), io=255MiB (267MB), run=4477-4477msec 00:15:24.421 WRITE: bw=57.3MiB/s (60.1MB/s), 57.3MiB/s-57.3MiB/s (60.1MB/s-60.1MB/s), io=256MiB (269MB), run=4469-4469msec 00:15:24.994 ----------------------------------------------------- 00:15:24.994 Suppressions used: 00:15:24.994 count bytes template 00:15:24.994 1 5 /usr/src/fio/parse.c 00:15:24.994 1 8 libtcmalloc_minimal.so 00:15:24.994 1 904 libcrypto.so 00:15:24.994 ----------------------------------------------------- 00:15:24.994 00:15:24.994 05:03:03 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify 00:15:24.994 05:03:03 ftl.ftl_fio_basic -- common/autotest_common.sh@730 -- # xtrace_disable 00:15:24.994 05:03:03 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:15:24.994 05:03:03 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:15:24.994 05:03:03 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify-j2 00:15:24.994 05:03:03 ftl.ftl_fio_basic -- common/autotest_common.sh@724 -- # xtrace_disable 00:15:24.994 05:03:03 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:15:24.994 05:03:03 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:15:24.994 05:03:03 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:15:24.994 05:03:03 ftl.ftl_fio_basic -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:15:24.994 05:03:03 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:15:24.994 05:03:03 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # local sanitizers 00:15:24.994 05:03:03 ftl.ftl_fio_basic -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:24.994 05:03:03 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # shift 00:15:24.994 05:03:03 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local asan_lib= 00:15:24.994 05:03:03 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:15:24.994 05:03:03 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:24.994 05:03:03 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # grep libasan 00:15:24.994 05:03:03 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:15:24.994 05:03:03 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:15:24.994 05:03:03 ftl.ftl_fio_basic -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:15:24.994 05:03:03 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # break 00:15:24.994 05:03:03 ftl.ftl_fio_basic -- common/autotest_common.sh@1352 -- # 
LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:15:24.994 05:03:03 ftl.ftl_fio_basic -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:15:25.255 first_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:15:25.256 second_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:15:25.256 fio-3.35 00:15:25.256 Starting 2 threads 00:15:51.809 00:15:51.809 first_half: (groupid=0, jobs=1): err= 0: pid=84224: Fri Dec 6 05:03:26 2024 00:15:51.809 read: IOPS=2959, BW=11.6MiB/s (12.1MB/s)(256MiB/22123msec) 00:15:51.809 slat (nsec): min=3033, max=44222, avg=5197.39, stdev=1607.25 00:15:51.809 clat (usec): min=512, max=333227, avg=36413.87, stdev=22824.41 00:15:51.809 lat (usec): min=517, max=333235, avg=36419.07, stdev=22824.60 00:15:51.809 clat percentiles (msec): 00:15:51.809 | 1.00th=[ 8], 5.00th=[ 27], 10.00th=[ 29], 20.00th=[ 30], 00:15:51.809 | 30.00th=[ 31], 40.00th=[ 31], 50.00th=[ 31], 60.00th=[ 32], 00:15:51.809 | 70.00th=[ 35], 80.00th=[ 36], 90.00th=[ 42], 95.00th=[ 72], 00:15:51.809 | 99.00th=[ 148], 99.50th=[ 159], 99.90th=[ 262], 99.95th=[ 309], 00:15:51.809 | 99.99th=[ 330] 00:15:51.809 write: IOPS=2966, BW=11.6MiB/s (12.1MB/s)(256MiB/22092msec); 0 zone resets 00:15:51.809 slat (usec): min=3, max=979, avg= 6.47, stdev= 5.98 00:15:51.809 clat (usec): min=334, max=41974, avg=6800.61, stdev=6817.66 00:15:51.809 lat (usec): min=341, max=41980, avg=6807.08, stdev=6818.34 00:15:51.809 clat percentiles (usec): 00:15:51.809 | 1.00th=[ 709], 5.00th=[ 898], 10.00th=[ 1270], 20.00th=[ 2638], 00:15:51.809 | 30.00th=[ 3359], 40.00th=[ 4015], 50.00th=[ 4752], 60.00th=[ 5407], 00:15:51.809 | 70.00th=[ 5932], 80.00th=[ 7701], 90.00th=[17695], 95.00th=[22152], 00:15:51.809 | 99.00th=[32637], 99.50th=[34341], 99.90th=[40109], 99.95th=[40633], 00:15:51.809 | 99.99th=[41157] 00:15:51.809 bw ( KiB/s): min= 6680, max=48216, per=100.00%, avg=27418.53, stdev=12060.97, samples=19 00:15:51.809 iops : min= 1670, max=12054, avg=6854.63, stdev=3015.24, samples=19 00:15:51.809 lat (usec) : 500=0.04%, 750=0.82%, 1000=2.42% 00:15:51.809 lat (msec) : 2=3.95%, 4=12.71%, 10=22.57%, 20=5.30%, 50=48.96% 00:15:51.809 lat (msec) : 100=1.45%, 250=1.72%, 500=0.05% 00:15:51.809 cpu : usr=99.23%, sys=0.10%, ctx=29, majf=0, minf=5587 00:15:51.809 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:15:51.809 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:51.809 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:51.809 issued rwts: total=65476,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:51.809 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:51.809 second_half: (groupid=0, jobs=1): err= 0: pid=84225: Fri Dec 6 05:03:26 2024 00:15:51.809 read: IOPS=2982, BW=11.7MiB/s (12.2MB/s)(256MiB/21957msec) 00:15:51.809 slat (nsec): min=3043, max=45607, avg=4664.32, stdev=1768.25 00:15:51.809 clat (msec): min=13, max=237, avg=36.65, stdev=19.97 00:15:51.809 lat (msec): min=13, max=237, avg=36.65, stdev=19.97 00:15:51.809 clat percentiles (msec): 00:15:51.809 | 1.00th=[ 27], 5.00th=[ 27], 10.00th=[ 30], 20.00th=[ 30], 00:15:51.809 | 30.00th=[ 31], 40.00th=[ 31], 50.00th=[ 31], 60.00th=[ 32], 00:15:51.809 | 70.00th=[ 35], 80.00th=[ 36], 90.00th=[ 43], 95.00th=[ 67], 00:15:51.809 | 99.00th=[ 146], 99.50th=[ 
157], 99.90th=[ 171], 99.95th=[ 180], 00:15:51.809 | 99.99th=[ 222] 00:15:51.809 write: IOPS=3001, BW=11.7MiB/s (12.3MB/s)(256MiB/21832msec); 0 zone resets 00:15:51.809 slat (usec): min=3, max=1310, avg= 6.01, stdev= 7.72 00:15:51.809 clat (usec): min=321, max=33319, avg=6245.53, stdev=4964.71 00:15:51.809 lat (usec): min=331, max=33324, avg=6251.54, stdev=4965.79 00:15:51.809 clat percentiles (usec): 00:15:51.809 | 1.00th=[ 848], 5.00th=[ 1729], 10.00th=[ 2474], 20.00th=[ 3261], 00:15:51.809 | 30.00th=[ 3916], 40.00th=[ 4359], 50.00th=[ 4817], 60.00th=[ 5276], 00:15:51.809 | 70.00th=[ 5604], 80.00th=[ 6587], 90.00th=[14353], 95.00th=[18744], 00:15:51.810 | 99.00th=[22938], 99.50th=[24773], 99.90th=[30802], 99.95th=[31851], 00:15:51.810 | 99.99th=[32900] 00:15:51.810 bw ( KiB/s): min= 576, max=47632, per=100.00%, avg=27484.21, stdev=15108.63, samples=19 00:15:51.810 iops : min= 144, max=11908, avg=6871.05, stdev=3777.16, samples=19 00:15:51.810 lat (usec) : 500=0.06%, 750=0.22%, 1000=0.56% 00:15:51.810 lat (msec) : 2=2.19%, 4=12.64%, 10=26.99%, 20=5.60%, 50=48.48% 00:15:51.810 lat (msec) : 100=1.73%, 250=1.54% 00:15:51.810 cpu : usr=99.32%, sys=0.12%, ctx=62, majf=0, minf=5551 00:15:51.810 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:15:51.810 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:51.810 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:51.810 issued rwts: total=65489,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:51.810 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:51.810 00:15:51.810 Run status group 0 (all jobs): 00:15:51.810 READ: bw=23.1MiB/s (24.2MB/s), 11.6MiB/s-11.7MiB/s (12.1MB/s-12.2MB/s), io=512MiB (536MB), run=21957-22123msec 00:15:51.810 WRITE: bw=23.2MiB/s (24.3MB/s), 11.6MiB/s-11.7MiB/s (12.1MB/s-12.3MB/s), io=512MiB (537MB), run=21832-22092msec 00:15:51.810 ----------------------------------------------------- 00:15:51.810 Suppressions used: 00:15:51.810 count bytes template 00:15:51.810 2 10 /usr/src/fio/parse.c 00:15:51.810 3 288 /usr/src/fio/iolog.c 00:15:51.810 1 8 libtcmalloc_minimal.so 00:15:51.810 1 904 libcrypto.so 00:15:51.810 ----------------------------------------------------- 00:15:51.810 00:15:51.810 05:03:28 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify-j2 00:15:51.810 05:03:28 ftl.ftl_fio_basic -- common/autotest_common.sh@730 -- # xtrace_disable 00:15:51.810 05:03:28 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:15:51.810 05:03:28 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:15:51.810 05:03:28 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify-depth128 00:15:51.810 05:03:28 ftl.ftl_fio_basic -- common/autotest_common.sh@724 -- # xtrace_disable 00:15:51.810 05:03:28 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:15:51.810 05:03:28 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:15:51.810 05:03:28 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:15:51.810 05:03:28 ftl.ftl_fio_basic -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:15:51.810 05:03:28 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:15:51.810 05:03:28 ftl.ftl_fio_basic -- 
common/autotest_common.sh@1339 -- # local sanitizers 00:15:51.810 05:03:28 ftl.ftl_fio_basic -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:51.810 05:03:28 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # shift 00:15:51.810 05:03:28 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local asan_lib= 00:15:51.810 05:03:28 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:15:51.810 05:03:28 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:51.810 05:03:28 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # grep libasan 00:15:51.810 05:03:28 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:15:51.810 05:03:28 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:15:51.810 05:03:28 ftl.ftl_fio_basic -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:15:51.810 05:03:28 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # break 00:15:51.810 05:03:28 ftl.ftl_fio_basic -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:15:51.810 05:03:28 ftl.ftl_fio_basic -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:15:51.810 test: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:15:51.810 fio-3.35 00:15:51.810 Starting 1 thread 00:16:09.942 00:16:09.942 test: (groupid=0, jobs=1): err= 0: pid=84516: Fri Dec 6 05:03:45 2024 00:16:09.942 read: IOPS=6528, BW=25.5MiB/s (26.7MB/s)(255MiB/9987msec) 00:16:09.942 slat (nsec): min=2909, max=47633, avg=4656.54, stdev=1336.68 00:16:09.942 clat (usec): min=555, max=43022, avg=19598.20, stdev=2985.47 00:16:09.942 lat (usec): min=563, max=43027, avg=19602.85, stdev=2985.43 00:16:09.942 clat percentiles (usec): 00:16:09.942 | 1.00th=[14746], 5.00th=[15270], 10.00th=[15664], 20.00th=[16581], 00:16:09.942 | 30.00th=[17957], 40.00th=[19006], 50.00th=[19792], 60.00th=[20317], 00:16:09.942 | 70.00th=[21103], 80.00th=[21890], 90.00th=[22938], 95.00th=[24249], 00:16:09.942 | 99.00th=[28181], 99.50th=[29230], 99.90th=[32113], 99.95th=[36963], 00:16:09.942 | 99.99th=[41681] 00:16:09.942 write: IOPS=10.6k, BW=41.6MiB/s (43.6MB/s)(256MiB/6156msec); 0 zone resets 00:16:09.942 slat (usec): min=3, max=625, avg= 6.87, stdev= 5.89 00:16:09.942 clat (usec): min=483, max=83675, avg=11961.43, stdev=15903.50 00:16:09.942 lat (usec): min=488, max=83681, avg=11968.29, stdev=15903.63 00:16:09.942 clat percentiles (usec): 00:16:09.942 | 1.00th=[ 668], 5.00th=[ 1020], 10.00th=[ 1319], 20.00th=[ 1729], 00:16:09.942 | 30.00th=[ 2180], 40.00th=[ 2933], 50.00th=[ 4883], 60.00th=[ 6521], 00:16:09.942 | 70.00th=[ 9372], 80.00th=[19006], 90.00th=[44827], 95.00th=[51119], 00:16:09.942 | 99.00th=[55837], 99.50th=[57934], 99.90th=[63701], 99.95th=[68682], 00:16:09.942 | 99.99th=[81265] 00:16:09.942 bw ( KiB/s): min= 8632, max=97904, per=94.70%, avg=40325.77, stdev=21173.94, samples=13 00:16:09.942 iops : min= 2158, max=24476, avg=10081.38, stdev=5293.53, samples=13 00:16:09.942 lat (usec) : 500=0.01%, 750=0.77%, 1000=1.58% 00:16:09.942 lat (msec) : 2=10.87%, 4=8.24%, 10=14.43%, 20=31.25%, 50=29.76% 00:16:09.942 lat (msec) : 100=3.10% 00:16:09.942 cpu : usr=99.00%, sys=0.21%, ctx=46, majf=0, minf=5577 
00:16:09.942 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:16:09.942 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:09.942 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:09.942 issued rwts: total=65202,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:09.942 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:09.942 00:16:09.942 Run status group 0 (all jobs): 00:16:09.942 READ: bw=25.5MiB/s (26.7MB/s), 25.5MiB/s-25.5MiB/s (26.7MB/s-26.7MB/s), io=255MiB (267MB), run=9987-9987msec 00:16:09.942 WRITE: bw=41.6MiB/s (43.6MB/s), 41.6MiB/s-41.6MiB/s (43.6MB/s-43.6MB/s), io=256MiB (268MB), run=6156-6156msec 00:16:09.942 ----------------------------------------------------- 00:16:09.942 Suppressions used: 00:16:09.942 count bytes template 00:16:09.942 1 5 /usr/src/fio/parse.c 00:16:09.942 2 192 /usr/src/fio/iolog.c 00:16:09.942 1 8 libtcmalloc_minimal.so 00:16:09.942 1 904 libcrypto.so 00:16:09.942 ----------------------------------------------------- 00:16:09.942 00:16:09.942 05:03:46 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify-depth128 00:16:09.942 05:03:46 ftl.ftl_fio_basic -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:09.942 05:03:46 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:16:09.942 05:03:46 ftl.ftl_fio_basic -- ftl/fio.sh@84 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:16:09.942 Remove shared memory files 00:16:09.942 05:03:46 ftl.ftl_fio_basic -- ftl/fio.sh@85 -- # remove_shm 00:16:09.942 05:03:46 ftl.ftl_fio_basic -- ftl/common.sh@204 -- # echo Remove shared memory files 00:16:09.942 05:03:46 ftl.ftl_fio_basic -- ftl/common.sh@205 -- # rm -f rm -f 00:16:09.942 05:03:46 ftl.ftl_fio_basic -- ftl/common.sh@206 -- # rm -f rm -f 00:16:09.942 05:03:46 ftl.ftl_fio_basic -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid69482 /dev/shm/spdk_tgt_trace.pid82905 00:16:09.942 05:03:46 ftl.ftl_fio_basic -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:16:09.942 05:03:46 ftl.ftl_fio_basic -- ftl/common.sh@209 -- # rm -f rm -f 00:16:09.942 ************************************ 00:16:09.942 END TEST ftl_fio_basic 00:16:09.942 ************************************ 00:16:09.942 00:16:09.942 real 0m59.564s 00:16:09.942 user 2m8.776s 00:16:09.942 sys 0m2.977s 00:16:09.942 05:03:46 ftl.ftl_fio_basic -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:09.942 05:03:46 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:16:09.942 05:03:46 ftl -- ftl/ftl.sh@74 -- # run_test ftl_bdevperf /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0 00:16:09.942 05:03:46 ftl -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:16:09.942 05:03:46 ftl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:09.942 05:03:46 ftl -- common/autotest_common.sh@10 -- # set +x 00:16:09.942 ************************************ 00:16:09.943 START TEST ftl_bdevperf 00:16:09.943 ************************************ 00:16:09.943 05:03:46 ftl.ftl_bdevperf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0 00:16:09.943 * Looking for test storage... 
00:16:09.943 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:16:09.943 05:03:46 ftl.ftl_bdevperf -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:16:09.943 05:03:46 ftl.ftl_bdevperf -- common/autotest_common.sh@1681 -- # lcov --version 00:16:09.943 05:03:46 ftl.ftl_bdevperf -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:16:09.943 05:03:46 ftl.ftl_bdevperf -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:16:09.943 05:03:46 ftl.ftl_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:09.943 05:03:46 ftl.ftl_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:09.943 05:03:46 ftl.ftl_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:09.943 05:03:46 ftl.ftl_bdevperf -- scripts/common.sh@336 -- # IFS=.-: 00:16:09.943 05:03:46 ftl.ftl_bdevperf -- scripts/common.sh@336 -- # read -ra ver1 00:16:09.943 05:03:46 ftl.ftl_bdevperf -- scripts/common.sh@337 -- # IFS=.-: 00:16:09.943 05:03:46 ftl.ftl_bdevperf -- scripts/common.sh@337 -- # read -ra ver2 00:16:09.943 05:03:46 ftl.ftl_bdevperf -- scripts/common.sh@338 -- # local 'op=<' 00:16:09.943 05:03:46 ftl.ftl_bdevperf -- scripts/common.sh@340 -- # ver1_l=2 00:16:09.943 05:03:46 ftl.ftl_bdevperf -- scripts/common.sh@341 -- # ver2_l=1 00:16:09.943 05:03:46 ftl.ftl_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:09.943 05:03:46 ftl.ftl_bdevperf -- scripts/common.sh@344 -- # case "$op" in 00:16:09.943 05:03:46 ftl.ftl_bdevperf -- scripts/common.sh@345 -- # : 1 00:16:09.943 05:03:46 ftl.ftl_bdevperf -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:09.943 05:03:46 ftl.ftl_bdevperf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:16:09.943 05:03:46 ftl.ftl_bdevperf -- scripts/common.sh@365 -- # decimal 1 00:16:09.943 05:03:46 ftl.ftl_bdevperf -- scripts/common.sh@353 -- # local d=1 00:16:09.943 05:03:46 ftl.ftl_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:09.943 05:03:46 ftl.ftl_bdevperf -- scripts/common.sh@355 -- # echo 1 00:16:09.943 05:03:46 ftl.ftl_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1 00:16:09.943 05:03:46 ftl.ftl_bdevperf -- scripts/common.sh@366 -- # decimal 2 00:16:09.943 05:03:46 ftl.ftl_bdevperf -- scripts/common.sh@353 -- # local d=2 00:16:09.943 05:03:46 ftl.ftl_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:09.943 05:03:46 ftl.ftl_bdevperf -- scripts/common.sh@355 -- # echo 2 00:16:09.943 05:03:46 ftl.ftl_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2 00:16:09.943 05:03:46 ftl.ftl_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:09.943 05:03:46 ftl.ftl_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:09.943 05:03:46 ftl.ftl_bdevperf -- scripts/common.sh@368 -- # return 0 00:16:09.943 05:03:46 ftl.ftl_bdevperf -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:09.943 05:03:46 ftl.ftl_bdevperf -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:16:09.943 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:09.943 --rc genhtml_branch_coverage=1 00:16:09.943 --rc genhtml_function_coverage=1 00:16:09.943 --rc genhtml_legend=1 00:16:09.943 --rc geninfo_all_blocks=1 00:16:09.943 --rc geninfo_unexecuted_blocks=1 00:16:09.943 00:16:09.943 ' 00:16:09.943 05:03:46 ftl.ftl_bdevperf -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:16:09.943 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:09.943 --rc genhtml_branch_coverage=1 00:16:09.943 
--rc genhtml_function_coverage=1 00:16:09.943 --rc genhtml_legend=1 00:16:09.943 --rc geninfo_all_blocks=1 00:16:09.943 --rc geninfo_unexecuted_blocks=1 00:16:09.943 00:16:09.943 ' 00:16:09.943 05:03:46 ftl.ftl_bdevperf -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:16:09.943 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:09.943 --rc genhtml_branch_coverage=1 00:16:09.943 --rc genhtml_function_coverage=1 00:16:09.943 --rc genhtml_legend=1 00:16:09.943 --rc geninfo_all_blocks=1 00:16:09.943 --rc geninfo_unexecuted_blocks=1 00:16:09.943 00:16:09.943 ' 00:16:09.943 05:03:46 ftl.ftl_bdevperf -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:16:09.943 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:09.943 --rc genhtml_branch_coverage=1 00:16:09.943 --rc genhtml_function_coverage=1 00:16:09.943 --rc genhtml_legend=1 00:16:09.943 --rc geninfo_all_blocks=1 00:16:09.943 --rc geninfo_unexecuted_blocks=1 00:16:09.943 00:16:09.943 ' 00:16:09.943 05:03:46 ftl.ftl_bdevperf -- ftl/bdevperf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:16:09.943 05:03:46 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 00:16:09.943 05:03:46 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:16:09.943 05:03:46 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:16:09.943 05:03:46 ftl.ftl_bdevperf -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:16:09.943 05:03:46 ftl.ftl_bdevperf -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:16:09.943 05:03:46 ftl.ftl_bdevperf -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:09.943 05:03:46 ftl.ftl_bdevperf -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:16:09.943 05:03:46 ftl.ftl_bdevperf -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:16:09.943 05:03:46 ftl.ftl_bdevperf -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:09.943 05:03:46 ftl.ftl_bdevperf -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:09.943 05:03:46 ftl.ftl_bdevperf -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:16:09.943 05:03:46 ftl.ftl_bdevperf -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:16:09.943 05:03:46 ftl.ftl_bdevperf -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:16:09.943 05:03:46 ftl.ftl_bdevperf -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:16:09.943 05:03:46 ftl.ftl_bdevperf -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:16:09.943 05:03:46 ftl.ftl_bdevperf -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:16:09.943 05:03:46 ftl.ftl_bdevperf -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:09.943 05:03:46 ftl.ftl_bdevperf -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:09.943 05:03:46 ftl.ftl_bdevperf -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:16:09.943 05:03:46 ftl.ftl_bdevperf -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:16:09.943 05:03:46 ftl.ftl_bdevperf -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:16:09.943 05:03:46 ftl.ftl_bdevperf -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:16:09.943 05:03:46 ftl.ftl_bdevperf -- ftl/common.sh@22 -- # export 
spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:16:09.943 05:03:46 ftl.ftl_bdevperf -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:16:09.943 05:03:46 ftl.ftl_bdevperf -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:16:09.943 05:03:46 ftl.ftl_bdevperf -- ftl/common.sh@23 -- # spdk_ini_pid= 00:16:09.943 05:03:46 ftl.ftl_bdevperf -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:16:09.943 05:03:46 ftl.ftl_bdevperf -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:16:09.943 05:03:46 ftl.ftl_bdevperf -- ftl/bdevperf.sh@11 -- # device=0000:00:11.0 00:16:09.943 05:03:46 ftl.ftl_bdevperf -- ftl/bdevperf.sh@12 -- # cache_device=0000:00:10.0 00:16:09.943 05:03:46 ftl.ftl_bdevperf -- ftl/bdevperf.sh@13 -- # use_append= 00:16:09.944 05:03:46 ftl.ftl_bdevperf -- ftl/bdevperf.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:09.944 05:03:46 ftl.ftl_bdevperf -- ftl/bdevperf.sh@15 -- # timeout=240 00:16:09.944 05:03:46 ftl.ftl_bdevperf -- ftl/bdevperf.sh@18 -- # bdevperf_pid=84776 00:16:09.944 05:03:46 ftl.ftl_bdevperf -- ftl/bdevperf.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0 00:16:09.944 05:03:46 ftl.ftl_bdevperf -- ftl/bdevperf.sh@20 -- # trap 'killprocess $bdevperf_pid; exit 1' SIGINT SIGTERM EXIT 00:16:09.944 05:03:46 ftl.ftl_bdevperf -- ftl/bdevperf.sh@21 -- # waitforlisten 84776 00:16:09.944 05:03:46 ftl.ftl_bdevperf -- common/autotest_common.sh@831 -- # '[' -z 84776 ']' 00:16:09.944 05:03:46 ftl.ftl_bdevperf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:09.944 05:03:46 ftl.ftl_bdevperf -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:09.944 05:03:46 ftl.ftl_bdevperf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:09.944 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:09.944 05:03:46 ftl.ftl_bdevperf -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:09.944 05:03:46 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:16:09.944 [2024-12-06 05:03:46.721837] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:16:09.944 [2024-12-06 05:03:46.722183] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84776 ] 00:16:09.944 [2024-12-06 05:03:46.859610] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:09.944 [2024-12-06 05:03:46.931373] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:16:09.944 05:03:47 ftl.ftl_bdevperf -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:09.944 05:03:47 ftl.ftl_bdevperf -- common/autotest_common.sh@864 -- # return 0 00:16:09.944 05:03:47 ftl.ftl_bdevperf -- ftl/bdevperf.sh@22 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:16:09.944 05:03:47 ftl.ftl_bdevperf -- ftl/common.sh@54 -- # local name=nvme0 00:16:09.944 05:03:47 ftl.ftl_bdevperf -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:16:09.944 05:03:47 ftl.ftl_bdevperf -- ftl/common.sh@56 -- # local size=103424 00:16:09.944 05:03:47 ftl.ftl_bdevperf -- ftl/common.sh@59 -- # local base_bdev 00:16:09.944 05:03:47 ftl.ftl_bdevperf -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:16:09.944 05:03:47 ftl.ftl_bdevperf -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:16:09.944 05:03:47 ftl.ftl_bdevperf -- ftl/common.sh@62 -- # local base_size 00:16:09.944 05:03:47 ftl.ftl_bdevperf -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:16:09.944 05:03:47 ftl.ftl_bdevperf -- common/autotest_common.sh@1378 -- # local bdev_name=nvme0n1 00:16:09.944 05:03:47 ftl.ftl_bdevperf -- common/autotest_common.sh@1379 -- # local bdev_info 00:16:09.944 05:03:47 ftl.ftl_bdevperf -- common/autotest_common.sh@1380 -- # local bs 00:16:09.944 05:03:47 ftl.ftl_bdevperf -- common/autotest_common.sh@1381 -- # local nb 00:16:09.944 05:03:47 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:16:09.944 05:03:48 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:16:09.944 { 00:16:09.944 "name": "nvme0n1", 00:16:09.944 "aliases": [ 00:16:09.944 "3acfe5f3-9043-4702-8b81-a8602dcc1a42" 00:16:09.944 ], 00:16:09.944 "product_name": "NVMe disk", 00:16:09.944 "block_size": 4096, 00:16:09.944 "num_blocks": 1310720, 00:16:09.944 "uuid": "3acfe5f3-9043-4702-8b81-a8602dcc1a42", 00:16:09.944 "numa_id": -1, 00:16:09.944 "assigned_rate_limits": { 00:16:09.944 "rw_ios_per_sec": 0, 00:16:09.944 "rw_mbytes_per_sec": 0, 00:16:09.944 "r_mbytes_per_sec": 0, 00:16:09.944 "w_mbytes_per_sec": 0 00:16:09.944 }, 00:16:09.944 "claimed": true, 00:16:09.944 "claim_type": "read_many_write_one", 00:16:09.944 "zoned": false, 00:16:09.944 "supported_io_types": { 00:16:09.944 "read": true, 00:16:09.944 "write": true, 00:16:09.944 "unmap": true, 00:16:09.944 "flush": true, 00:16:09.944 "reset": true, 00:16:09.944 "nvme_admin": true, 00:16:09.944 "nvme_io": true, 00:16:09.944 "nvme_io_md": false, 00:16:09.944 "write_zeroes": true, 00:16:09.944 "zcopy": false, 00:16:09.944 "get_zone_info": false, 00:16:09.944 "zone_management": false, 00:16:09.944 "zone_append": false, 00:16:09.944 "compare": true, 00:16:09.944 "compare_and_write": false, 00:16:09.944 "abort": true, 00:16:09.944 "seek_hole": false, 00:16:09.944 "seek_data": false, 00:16:09.944 "copy": true, 00:16:09.944 "nvme_iov_md": false 00:16:09.944 }, 00:16:09.944 "driver_specific": { 00:16:09.944 
"nvme": [ 00:16:09.944 { 00:16:09.944 "pci_address": "0000:00:11.0", 00:16:09.944 "trid": { 00:16:09.944 "trtype": "PCIe", 00:16:09.944 "traddr": "0000:00:11.0" 00:16:09.944 }, 00:16:09.944 "ctrlr_data": { 00:16:09.944 "cntlid": 0, 00:16:09.944 "vendor_id": "0x1b36", 00:16:09.944 "model_number": "QEMU NVMe Ctrl", 00:16:09.944 "serial_number": "12341", 00:16:09.944 "firmware_revision": "8.0.0", 00:16:09.944 "subnqn": "nqn.2019-08.org.qemu:12341", 00:16:09.944 "oacs": { 00:16:09.944 "security": 0, 00:16:09.944 "format": 1, 00:16:09.944 "firmware": 0, 00:16:09.944 "ns_manage": 1 00:16:09.944 }, 00:16:09.944 "multi_ctrlr": false, 00:16:09.944 "ana_reporting": false 00:16:09.944 }, 00:16:09.944 "vs": { 00:16:09.944 "nvme_version": "1.4" 00:16:09.944 }, 00:16:09.944 "ns_data": { 00:16:09.944 "id": 1, 00:16:09.944 "can_share": false 00:16:09.944 } 00:16:09.944 } 00:16:09.944 ], 00:16:09.944 "mp_policy": "active_passive" 00:16:09.944 } 00:16:09.944 } 00:16:09.944 ]' 00:16:09.944 05:03:48 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:16:10.206 05:03:48 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # bs=4096 00:16:10.206 05:03:48 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:16:10.206 05:03:48 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # nb=1310720 00:16:10.206 05:03:48 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bdev_size=5120 00:16:10.206 05:03:48 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # echo 5120 00:16:10.206 05:03:48 ftl.ftl_bdevperf -- ftl/common.sh@63 -- # base_size=5120 00:16:10.206 05:03:48 ftl.ftl_bdevperf -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:16:10.206 05:03:48 ftl.ftl_bdevperf -- ftl/common.sh@67 -- # clear_lvols 00:16:10.206 05:03:48 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:16:10.206 05:03:48 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:16:10.206 05:03:48 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # stores=3240a4f5-db62-48df-9298-e5dfae3a481d 00:16:10.206 05:03:48 ftl.ftl_bdevperf -- ftl/common.sh@29 -- # for lvs in $stores 00:16:10.206 05:03:48 ftl.ftl_bdevperf -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 3240a4f5-db62-48df-9298-e5dfae3a481d 00:16:10.466 05:03:48 ftl.ftl_bdevperf -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:16:10.726 05:03:48 ftl.ftl_bdevperf -- ftl/common.sh@68 -- # lvs=686346a5-423d-4310-a6bd-0b9cb652f1f8 00:16:10.726 05:03:48 ftl.ftl_bdevperf -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 686346a5-423d-4310-a6bd-0b9cb652f1f8 00:16:10.987 05:03:49 ftl.ftl_bdevperf -- ftl/bdevperf.sh@22 -- # split_bdev=e3e2bdb7-abb5-4be0-9ec6-1ab586c156aa 00:16:10.987 05:03:49 ftl.ftl_bdevperf -- ftl/bdevperf.sh@23 -- # create_nv_cache_bdev nvc0 0000:00:10.0 e3e2bdb7-abb5-4be0-9ec6-1ab586c156aa 00:16:10.987 05:03:49 ftl.ftl_bdevperf -- ftl/common.sh@35 -- # local name=nvc0 00:16:10.987 05:03:49 ftl.ftl_bdevperf -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:16:10.987 05:03:49 ftl.ftl_bdevperf -- ftl/common.sh@37 -- # local base_bdev=e3e2bdb7-abb5-4be0-9ec6-1ab586c156aa 00:16:10.987 05:03:49 ftl.ftl_bdevperf -- ftl/common.sh@38 -- # local cache_size= 00:16:10.987 05:03:49 ftl.ftl_bdevperf -- ftl/common.sh@41 -- # get_bdev_size e3e2bdb7-abb5-4be0-9ec6-1ab586c156aa 00:16:10.987 05:03:49 
ftl.ftl_bdevperf -- common/autotest_common.sh@1378 -- # local bdev_name=e3e2bdb7-abb5-4be0-9ec6-1ab586c156aa 00:16:10.987 05:03:49 ftl.ftl_bdevperf -- common/autotest_common.sh@1379 -- # local bdev_info 00:16:10.987 05:03:49 ftl.ftl_bdevperf -- common/autotest_common.sh@1380 -- # local bs 00:16:10.987 05:03:49 ftl.ftl_bdevperf -- common/autotest_common.sh@1381 -- # local nb 00:16:10.987 05:03:49 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b e3e2bdb7-abb5-4be0-9ec6-1ab586c156aa 00:16:11.248 05:03:49 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:16:11.248 { 00:16:11.248 "name": "e3e2bdb7-abb5-4be0-9ec6-1ab586c156aa", 00:16:11.248 "aliases": [ 00:16:11.248 "lvs/nvme0n1p0" 00:16:11.248 ], 00:16:11.248 "product_name": "Logical Volume", 00:16:11.248 "block_size": 4096, 00:16:11.248 "num_blocks": 26476544, 00:16:11.248 "uuid": "e3e2bdb7-abb5-4be0-9ec6-1ab586c156aa", 00:16:11.248 "assigned_rate_limits": { 00:16:11.248 "rw_ios_per_sec": 0, 00:16:11.248 "rw_mbytes_per_sec": 0, 00:16:11.248 "r_mbytes_per_sec": 0, 00:16:11.248 "w_mbytes_per_sec": 0 00:16:11.248 }, 00:16:11.248 "claimed": false, 00:16:11.248 "zoned": false, 00:16:11.248 "supported_io_types": { 00:16:11.248 "read": true, 00:16:11.248 "write": true, 00:16:11.248 "unmap": true, 00:16:11.248 "flush": false, 00:16:11.248 "reset": true, 00:16:11.248 "nvme_admin": false, 00:16:11.248 "nvme_io": false, 00:16:11.248 "nvme_io_md": false, 00:16:11.248 "write_zeroes": true, 00:16:11.248 "zcopy": false, 00:16:11.248 "get_zone_info": false, 00:16:11.248 "zone_management": false, 00:16:11.248 "zone_append": false, 00:16:11.248 "compare": false, 00:16:11.248 "compare_and_write": false, 00:16:11.248 "abort": false, 00:16:11.248 "seek_hole": true, 00:16:11.248 "seek_data": true, 00:16:11.248 "copy": false, 00:16:11.248 "nvme_iov_md": false 00:16:11.248 }, 00:16:11.248 "driver_specific": { 00:16:11.248 "lvol": { 00:16:11.248 "lvol_store_uuid": "686346a5-423d-4310-a6bd-0b9cb652f1f8", 00:16:11.248 "base_bdev": "nvme0n1", 00:16:11.248 "thin_provision": true, 00:16:11.248 "num_allocated_clusters": 0, 00:16:11.248 "snapshot": false, 00:16:11.248 "clone": false, 00:16:11.248 "esnap_clone": false 00:16:11.248 } 00:16:11.248 } 00:16:11.248 } 00:16:11.248 ]' 00:16:11.248 05:03:49 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:16:11.248 05:03:49 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # bs=4096 00:16:11.248 05:03:49 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:16:11.248 05:03:49 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # nb=26476544 00:16:11.248 05:03:49 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:16:11.248 05:03:49 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # echo 103424 00:16:11.248 05:03:49 ftl.ftl_bdevperf -- ftl/common.sh@41 -- # local base_size=5171 00:16:11.249 05:03:49 ftl.ftl_bdevperf -- ftl/common.sh@44 -- # local nvc_bdev 00:16:11.249 05:03:49 ftl.ftl_bdevperf -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:16:11.510 05:03:49 ftl.ftl_bdevperf -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:16:11.510 05:03:49 ftl.ftl_bdevperf -- ftl/common.sh@47 -- # [[ -z '' ]] 00:16:11.510 05:03:49 ftl.ftl_bdevperf -- ftl/common.sh@48 -- # get_bdev_size e3e2bdb7-abb5-4be0-9ec6-1ab586c156aa 00:16:11.510 05:03:49 ftl.ftl_bdevperf -- 
common/autotest_common.sh@1378 -- # local bdev_name=e3e2bdb7-abb5-4be0-9ec6-1ab586c156aa 00:16:11.510 05:03:49 ftl.ftl_bdevperf -- common/autotest_common.sh@1379 -- # local bdev_info 00:16:11.510 05:03:49 ftl.ftl_bdevperf -- common/autotest_common.sh@1380 -- # local bs 00:16:11.510 05:03:49 ftl.ftl_bdevperf -- common/autotest_common.sh@1381 -- # local nb 00:16:11.510 05:03:49 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b e3e2bdb7-abb5-4be0-9ec6-1ab586c156aa 00:16:11.771 05:03:49 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:16:11.771 { 00:16:11.771 "name": "e3e2bdb7-abb5-4be0-9ec6-1ab586c156aa", 00:16:11.771 "aliases": [ 00:16:11.771 "lvs/nvme0n1p0" 00:16:11.771 ], 00:16:11.771 "product_name": "Logical Volume", 00:16:11.771 "block_size": 4096, 00:16:11.771 "num_blocks": 26476544, 00:16:11.771 "uuid": "e3e2bdb7-abb5-4be0-9ec6-1ab586c156aa", 00:16:11.771 "assigned_rate_limits": { 00:16:11.771 "rw_ios_per_sec": 0, 00:16:11.771 "rw_mbytes_per_sec": 0, 00:16:11.771 "r_mbytes_per_sec": 0, 00:16:11.771 "w_mbytes_per_sec": 0 00:16:11.771 }, 00:16:11.771 "claimed": false, 00:16:11.771 "zoned": false, 00:16:11.771 "supported_io_types": { 00:16:11.771 "read": true, 00:16:11.771 "write": true, 00:16:11.771 "unmap": true, 00:16:11.771 "flush": false, 00:16:11.771 "reset": true, 00:16:11.771 "nvme_admin": false, 00:16:11.771 "nvme_io": false, 00:16:11.771 "nvme_io_md": false, 00:16:11.771 "write_zeroes": true, 00:16:11.771 "zcopy": false, 00:16:11.771 "get_zone_info": false, 00:16:11.771 "zone_management": false, 00:16:11.771 "zone_append": false, 00:16:11.771 "compare": false, 00:16:11.771 "compare_and_write": false, 00:16:11.771 "abort": false, 00:16:11.771 "seek_hole": true, 00:16:11.771 "seek_data": true, 00:16:11.771 "copy": false, 00:16:11.771 "nvme_iov_md": false 00:16:11.771 }, 00:16:11.771 "driver_specific": { 00:16:11.771 "lvol": { 00:16:11.771 "lvol_store_uuid": "686346a5-423d-4310-a6bd-0b9cb652f1f8", 00:16:11.771 "base_bdev": "nvme0n1", 00:16:11.771 "thin_provision": true, 00:16:11.771 "num_allocated_clusters": 0, 00:16:11.771 "snapshot": false, 00:16:11.771 "clone": false, 00:16:11.771 "esnap_clone": false 00:16:11.771 } 00:16:11.771 } 00:16:11.771 } 00:16:11.771 ]' 00:16:11.771 05:03:49 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:16:11.771 05:03:49 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # bs=4096 00:16:11.771 05:03:49 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:16:11.771 05:03:49 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # nb=26476544 00:16:11.771 05:03:49 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:16:11.771 05:03:49 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # echo 103424 00:16:11.771 05:03:49 ftl.ftl_bdevperf -- ftl/common.sh@48 -- # cache_size=5171 00:16:11.771 05:03:49 ftl.ftl_bdevperf -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:16:12.033 05:03:50 ftl.ftl_bdevperf -- ftl/bdevperf.sh@23 -- # nv_cache=nvc0n1p0 00:16:12.033 05:03:50 ftl.ftl_bdevperf -- ftl/bdevperf.sh@25 -- # get_bdev_size e3e2bdb7-abb5-4be0-9ec6-1ab586c156aa 00:16:12.033 05:03:50 ftl.ftl_bdevperf -- common/autotest_common.sh@1378 -- # local bdev_name=e3e2bdb7-abb5-4be0-9ec6-1ab586c156aa 00:16:12.033 05:03:50 ftl.ftl_bdevperf -- common/autotest_common.sh@1379 -- # local bdev_info 00:16:12.033 05:03:50 ftl.ftl_bdevperf -- 
common/autotest_common.sh@1380 -- # local bs 00:16:12.033 05:03:50 ftl.ftl_bdevperf -- common/autotest_common.sh@1381 -- # local nb 00:16:12.033 05:03:50 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b e3e2bdb7-abb5-4be0-9ec6-1ab586c156aa 00:16:12.294 05:03:50 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:16:12.294 { 00:16:12.294 "name": "e3e2bdb7-abb5-4be0-9ec6-1ab586c156aa", 00:16:12.294 "aliases": [ 00:16:12.294 "lvs/nvme0n1p0" 00:16:12.294 ], 00:16:12.294 "product_name": "Logical Volume", 00:16:12.294 "block_size": 4096, 00:16:12.294 "num_blocks": 26476544, 00:16:12.294 "uuid": "e3e2bdb7-abb5-4be0-9ec6-1ab586c156aa", 00:16:12.294 "assigned_rate_limits": { 00:16:12.294 "rw_ios_per_sec": 0, 00:16:12.294 "rw_mbytes_per_sec": 0, 00:16:12.294 "r_mbytes_per_sec": 0, 00:16:12.294 "w_mbytes_per_sec": 0 00:16:12.294 }, 00:16:12.294 "claimed": false, 00:16:12.294 "zoned": false, 00:16:12.294 "supported_io_types": { 00:16:12.294 "read": true, 00:16:12.294 "write": true, 00:16:12.294 "unmap": true, 00:16:12.294 "flush": false, 00:16:12.294 "reset": true, 00:16:12.294 "nvme_admin": false, 00:16:12.294 "nvme_io": false, 00:16:12.294 "nvme_io_md": false, 00:16:12.294 "write_zeroes": true, 00:16:12.294 "zcopy": false, 00:16:12.294 "get_zone_info": false, 00:16:12.294 "zone_management": false, 00:16:12.294 "zone_append": false, 00:16:12.294 "compare": false, 00:16:12.294 "compare_and_write": false, 00:16:12.294 "abort": false, 00:16:12.294 "seek_hole": true, 00:16:12.294 "seek_data": true, 00:16:12.294 "copy": false, 00:16:12.294 "nvme_iov_md": false 00:16:12.294 }, 00:16:12.294 "driver_specific": { 00:16:12.294 "lvol": { 00:16:12.294 "lvol_store_uuid": "686346a5-423d-4310-a6bd-0b9cb652f1f8", 00:16:12.294 "base_bdev": "nvme0n1", 00:16:12.294 "thin_provision": true, 00:16:12.294 "num_allocated_clusters": 0, 00:16:12.294 "snapshot": false, 00:16:12.294 "clone": false, 00:16:12.294 "esnap_clone": false 00:16:12.294 } 00:16:12.294 } 00:16:12.294 } 00:16:12.294 ]' 00:16:12.294 05:03:50 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:16:12.294 05:03:50 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # bs=4096 00:16:12.294 05:03:50 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:16:12.294 05:03:50 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # nb=26476544 00:16:12.294 05:03:50 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:16:12.294 05:03:50 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # echo 103424 00:16:12.294 05:03:50 ftl.ftl_bdevperf -- ftl/bdevperf.sh@25 -- # l2p_dram_size_mb=20 00:16:12.294 05:03:50 ftl.ftl_bdevperf -- ftl/bdevperf.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d e3e2bdb7-abb5-4be0-9ec6-1ab586c156aa -c nvc0n1p0 --l2p_dram_limit 20 00:16:12.555 [2024-12-06 05:03:50.585793] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:12.555 [2024-12-06 05:03:50.585832] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:16:12.555 [2024-12-06 05:03:50.585846] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:16:12.555 [2024-12-06 05:03:50.585855] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:12.555 [2024-12-06 05:03:50.585896] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:12.555 [2024-12-06 05:03:50.585904] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:16:12.555 [2024-12-06 05:03:50.585914] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:16:12.555 [2024-12-06 05:03:50.585920] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:12.555 [2024-12-06 05:03:50.585938] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:16:12.555 [2024-12-06 05:03:50.586135] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:16:12.555 [2024-12-06 05:03:50.586151] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:12.555 [2024-12-06 05:03:50.586157] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:12.555 [2024-12-06 05:03:50.586166] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.216 ms 00:16:12.556 [2024-12-06 05:03:50.586173] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:12.556 [2024-12-06 05:03:50.586200] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 860beb4f-efe9-45ec-905a-975d5871f450 00:16:12.556 [2024-12-06 05:03:50.587481] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:12.556 [2024-12-06 05:03:50.587506] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:16:12.556 [2024-12-06 05:03:50.587517] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:16:12.556 [2024-12-06 05:03:50.587525] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:12.556 [2024-12-06 05:03:50.594370] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:12.556 [2024-12-06 05:03:50.594400] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:12.556 [2024-12-06 05:03:50.594409] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.811 ms 00:16:12.556 [2024-12-06 05:03:50.594420] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:12.556 [2024-12-06 05:03:50.594514] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:12.556 [2024-12-06 05:03:50.594525] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:12.556 [2024-12-06 05:03:50.594533] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.048 ms 00:16:12.556 [2024-12-06 05:03:50.594542] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:12.556 [2024-12-06 05:03:50.594578] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:12.556 [2024-12-06 05:03:50.594595] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:16:12.556 [2024-12-06 05:03:50.594604] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:16:12.556 [2024-12-06 05:03:50.594613] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:12.556 [2024-12-06 05:03:50.594630] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:16:12.556 [2024-12-06 05:03:50.596294] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:12.556 [2024-12-06 05:03:50.596320] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:12.556 [2024-12-06 05:03:50.596330] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.666 ms 00:16:12.556 [2024-12-06 05:03:50.596337] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:12.556 [2024-12-06 05:03:50.596367] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:12.556 [2024-12-06 05:03:50.596373] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:16:12.556 [2024-12-06 05:03:50.596384] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:16:12.556 [2024-12-06 05:03:50.596390] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:12.556 [2024-12-06 05:03:50.596410] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:16:12.556 [2024-12-06 05:03:50.596527] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:16:12.556 [2024-12-06 05:03:50.596539] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:16:12.556 [2024-12-06 05:03:50.596551] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:16:12.556 [2024-12-06 05:03:50.596561] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:16:12.556 [2024-12-06 05:03:50.596569] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:16:12.556 [2024-12-06 05:03:50.596582] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:16:12.556 [2024-12-06 05:03:50.596589] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:16:12.556 [2024-12-06 05:03:50.596596] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:16:12.556 [2024-12-06 05:03:50.596602] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:16:12.556 [2024-12-06 05:03:50.596609] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:12.556 [2024-12-06 05:03:50.596616] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:16:12.556 [2024-12-06 05:03:50.596626] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.201 ms 00:16:12.556 [2024-12-06 05:03:50.596632] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:12.556 [2024-12-06 05:03:50.596713] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:12.556 [2024-12-06 05:03:50.596721] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:16:12.556 [2024-12-06 05:03:50.596730] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:16:12.556 [2024-12-06 05:03:50.596736] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:12.556 [2024-12-06 05:03:50.596834] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:16:12.556 [2024-12-06 05:03:50.596846] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:16:12.556 [2024-12-06 05:03:50.596854] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:12.556 [2024-12-06 05:03:50.596863] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:12.556 [2024-12-06 05:03:50.596873] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:16:12.556 [2024-12-06 05:03:50.596880] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:16:12.556 [2024-12-06 05:03:50.596887] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:16:12.556 
[2024-12-06 05:03:50.596893] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:16:12.556 [2024-12-06 05:03:50.596900] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:16:12.556 [2024-12-06 05:03:50.596905] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:12.556 [2024-12-06 05:03:50.596912] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:16:12.556 [2024-12-06 05:03:50.596918] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:16:12.556 [2024-12-06 05:03:50.596930] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:12.556 [2024-12-06 05:03:50.596936] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:16:12.556 [2024-12-06 05:03:50.596944] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:16:12.556 [2024-12-06 05:03:50.596950] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:12.556 [2024-12-06 05:03:50.596958] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:16:12.556 [2024-12-06 05:03:50.596964] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:16:12.556 [2024-12-06 05:03:50.596971] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:12.556 [2024-12-06 05:03:50.596977] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:16:12.556 [2024-12-06 05:03:50.596985] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:16:12.556 [2024-12-06 05:03:50.596991] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:12.556 [2024-12-06 05:03:50.596999] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:16:12.556 [2024-12-06 05:03:50.597005] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:16:12.556 [2024-12-06 05:03:50.597013] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:12.556 [2024-12-06 05:03:50.597019] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:16:12.556 [2024-12-06 05:03:50.597026] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:16:12.556 [2024-12-06 05:03:50.597032] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:12.556 [2024-12-06 05:03:50.597041] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:16:12.556 [2024-12-06 05:03:50.597047] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:16:12.556 [2024-12-06 05:03:50.597054] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:12.556 [2024-12-06 05:03:50.597061] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:16:12.556 [2024-12-06 05:03:50.597069] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:16:12.556 [2024-12-06 05:03:50.597076] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:12.556 [2024-12-06 05:03:50.597084] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:16:12.556 [2024-12-06 05:03:50.597089] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:16:12.556 [2024-12-06 05:03:50.597096] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:12.556 [2024-12-06 05:03:50.597103] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:16:12.556 [2024-12-06 05:03:50.597111] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 
offset: 113.62 MiB 00:16:12.556 [2024-12-06 05:03:50.597117] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:12.556 [2024-12-06 05:03:50.597125] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:16:12.556 [2024-12-06 05:03:50.597131] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:16:12.556 [2024-12-06 05:03:50.597139] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:12.556 [2024-12-06 05:03:50.597144] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:16:12.556 [2024-12-06 05:03:50.597156] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:16:12.556 [2024-12-06 05:03:50.597163] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:12.556 [2024-12-06 05:03:50.597173] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:12.556 [2024-12-06 05:03:50.597182] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:16:12.556 [2024-12-06 05:03:50.597190] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:16:12.556 [2024-12-06 05:03:50.597195] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:16:12.556 [2024-12-06 05:03:50.597202] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:16:12.556 [2024-12-06 05:03:50.597208] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:16:12.556 [2024-12-06 05:03:50.597214] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:16:12.556 [2024-12-06 05:03:50.597223] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:16:12.556 [2024-12-06 05:03:50.597234] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:12.556 [2024-12-06 05:03:50.597242] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:16:12.557 [2024-12-06 05:03:50.597249] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:16:12.557 [2024-12-06 05:03:50.597255] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:16:12.557 [2024-12-06 05:03:50.597262] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:16:12.557 [2024-12-06 05:03:50.597268] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:16:12.557 [2024-12-06 05:03:50.597277] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:16:12.557 [2024-12-06 05:03:50.597282] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:16:12.557 [2024-12-06 05:03:50.597295] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:16:12.557 [2024-12-06 05:03:50.597301] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:16:12.557 [2024-12-06 05:03:50.597309] upgrade/ftl_sb_v5.c: 
416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:16:12.557 [2024-12-06 05:03:50.597315] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:16:12.557 [2024-12-06 05:03:50.597322] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:16:12.557 [2024-12-06 05:03:50.597327] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:16:12.557 [2024-12-06 05:03:50.597334] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:16:12.557 [2024-12-06 05:03:50.597340] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:16:12.557 [2024-12-06 05:03:50.597347] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:12.557 [2024-12-06 05:03:50.597354] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:16:12.557 [2024-12-06 05:03:50.597362] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:16:12.557 [2024-12-06 05:03:50.597367] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:16:12.557 [2024-12-06 05:03:50.597375] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:16:12.557 [2024-12-06 05:03:50.597381] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:12.557 [2024-12-06 05:03:50.597390] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:16:12.557 [2024-12-06 05:03:50.597400] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.601 ms 00:16:12.557 [2024-12-06 05:03:50.597406] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:12.557 [2024-12-06 05:03:50.597432] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
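(Annotation: the trace above is the FTL startup sequence for ftl0 — the thin-provisioned lvol is opened as the base device, nvc0n1p0 as the NV write-buffer cache, the superblock and layout regions are created, and the NV cache region is scrubbed before I/O is allowed. A minimal sketch of the same RPC sequence, reconstructed from the rpc.py calls traced earlier in this test; the UUIDs are placeholders for the values printed by the preceding steps, and sizes match the layout dump above, 103424 MiB base and 5171 MiB cache:

  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $RPC bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0   # base NVMe (data device)
  $RPC bdev_lvol_create_lvstore nvme0n1 lvs                           # lvstore on the base device
  $RPC bdev_lvol_create nvme0n1p0 103424 -t -u <lvstore-uuid>         # 103424 MiB thin-provisioned lvol
  $RPC bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0    # NVMe used for the NV cache
  $RPC bdev_split_create nvc0n1 -s 5171 1                             # one 5171 MiB split for the cache
  $RPC -t 240 bdev_ftl_create -b ftl0 -d <lvol-uuid> -c nvc0n1p0 --l2p_dram_limit 20

The --l2p_dram_limit 20 cap is what produces the "l2p maximum resident size is: 19 (of 20) MiB" notice further below.)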
00:16:12.557 [2024-12-06 05:03:50.597444] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:16:16.772 [2024-12-06 05:03:54.362845] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:16.772 [2024-12-06 05:03:54.363162] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:16:16.772 [2024-12-06 05:03:54.363201] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3765.396 ms 00:16:16.772 [2024-12-06 05:03:54.363214] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:16.772 [2024-12-06 05:03:54.406371] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:16.772 [2024-12-06 05:03:54.406781] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:16:16.772 [2024-12-06 05:03:54.406830] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 43.009 ms 00:16:16.772 [2024-12-06 05:03:54.406860] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:16.772 [2024-12-06 05:03:54.407133] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:16.772 [2024-12-06 05:03:54.407169] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:16:16.772 [2024-12-06 05:03:54.407195] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.156 ms 00:16:16.772 [2024-12-06 05:03:54.407219] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:16.772 [2024-12-06 05:03:54.423958] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:16.772 [2024-12-06 05:03:54.424017] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:16.772 [2024-12-06 05:03:54.424030] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.652 ms 00:16:16.772 [2024-12-06 05:03:54.424042] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:16.772 [2024-12-06 05:03:54.424085] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:16.772 [2024-12-06 05:03:54.424098] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:16.772 [2024-12-06 05:03:54.424108] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:16:16.772 [2024-12-06 05:03:54.424119] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:16.772 [2024-12-06 05:03:54.424915] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:16.772 [2024-12-06 05:03:54.424965] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:16.772 [2024-12-06 05:03:54.424984] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.733 ms 00:16:16.772 [2024-12-06 05:03:54.425004] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:16.772 [2024-12-06 05:03:54.425138] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:16.772 [2024-12-06 05:03:54.425150] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:16.772 [2024-12-06 05:03:54.425161] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.109 ms 00:16:16.772 [2024-12-06 05:03:54.425178] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:16.772 [2024-12-06 05:03:54.435140] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:16.772 [2024-12-06 05:03:54.435197] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:16:16.772 [2024-12-06 
05:03:54.435210] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.941 ms 00:16:16.772 [2024-12-06 05:03:54.435221] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:16.772 [2024-12-06 05:03:54.447114] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 19 (of 20) MiB 00:16:16.772 [2024-12-06 05:03:54.456603] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:16.772 [2024-12-06 05:03:54.456656] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:16:16.772 [2024-12-06 05:03:54.456691] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.281 ms 00:16:16.772 [2024-12-06 05:03:54.456701] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:16.772 [2024-12-06 05:03:54.557754] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:16.772 [2024-12-06 05:03:54.557997] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:16:16.772 [2024-12-06 05:03:54.558031] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 101.012 ms 00:16:16.772 [2024-12-06 05:03:54.558041] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:16.772 [2024-12-06 05:03:54.558298] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:16.772 [2024-12-06 05:03:54.558314] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:16:16.772 [2024-12-06 05:03:54.558332] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.179 ms 00:16:16.772 [2024-12-06 05:03:54.558341] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:16.772 [2024-12-06 05:03:54.564965] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:16.772 [2024-12-06 05:03:54.565174] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:16:16.772 [2024-12-06 05:03:54.565200] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.536 ms 00:16:16.772 [2024-12-06 05:03:54.565210] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:16.772 [2024-12-06 05:03:54.570754] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:16.772 [2024-12-06 05:03:54.570803] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:16:16.772 [2024-12-06 05:03:54.570818] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.441 ms 00:16:16.772 [2024-12-06 05:03:54.570826] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:16.772 [2024-12-06 05:03:54.571212] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:16.772 [2024-12-06 05:03:54.571226] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:16:16.772 [2024-12-06 05:03:54.571246] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.336 ms 00:16:16.772 [2024-12-06 05:03:54.571253] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:16.772 [2024-12-06 05:03:54.624311] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:16.772 [2024-12-06 05:03:54.624514] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:16:16.772 [2024-12-06 05:03:54.624548] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 53.010 ms 00:16:16.773 [2024-12-06 05:03:54.624558] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:16.773 [2024-12-06 
05:03:54.633013] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:16.773 [2024-12-06 05:03:54.633075] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:16:16.773 [2024-12-06 05:03:54.633095] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.302 ms 00:16:16.773 [2024-12-06 05:03:54.633109] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:16.773 [2024-12-06 05:03:54.639454] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:16.773 [2024-12-06 05:03:54.639676] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:16:16.773 [2024-12-06 05:03:54.639704] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.284 ms 00:16:16.773 [2024-12-06 05:03:54.639713] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:16.773 [2024-12-06 05:03:54.646612] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:16.773 [2024-12-06 05:03:54.646822] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:16:16.773 [2024-12-06 05:03:54.646854] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.847 ms 00:16:16.773 [2024-12-06 05:03:54.646865] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:16.773 [2024-12-06 05:03:54.646923] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:16.773 [2024-12-06 05:03:54.646933] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:16:16.773 [2024-12-06 05:03:54.646954] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:16:16.773 [2024-12-06 05:03:54.646963] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:16.773 [2024-12-06 05:03:54.647073] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:16.773 [2024-12-06 05:03:54.647085] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:16:16.773 [2024-12-06 05:03:54.647097] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.043 ms 00:16:16.773 [2024-12-06 05:03:54.647108] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:16.773 [2024-12-06 05:03:54.648521] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 4062.112 ms, result 0 00:16:16.773 { 00:16:16.773 "name": "ftl0", 00:16:16.773 "uuid": "860beb4f-efe9-45ec-905a-975d5871f450" 00:16:16.773 } 00:16:16.773 05:03:54 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_stats -b ftl0 00:16:16.773 05:03:54 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # grep -qw ftl0 00:16:16.773 05:03:54 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # jq -r .name 00:16:16.773 05:03:54 ftl.ftl_bdevperf -- ftl/bdevperf.sh@30 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 1 -w randwrite -t 4 -o 69632 00:16:16.773 [2024-12-06 05:03:54.988356] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:16:16.773 I/O size of 69632 is greater than zero copy threshold (65536). 00:16:16.773 Zero copy mechanism will not be used. 00:16:16.773 Running I/O for 4 seconds... 
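The zero-copy warning above is pure arithmetic: the 69632-byte I/O size of this first pass is 65536 + 4096, one 4 KiB block past bdevperf's stated 65536-byte zero-copy threshold. A quick check in shell:

    # one 4 KiB block over the 65536-byte zero-copy limit
    echo $(( 69632 - 65536 ))   # prints 4096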
00:16:19.105 682.00 IOPS, 45.29 MiB/s [2024-12-06T05:03:58.280Z] 740.50 IOPS, 49.17 MiB/s [2024-12-06T05:03:59.226Z] 840.00 IOPS, 55.78 MiB/s [2024-12-06T05:03:59.226Z] 797.75 IOPS, 52.98 MiB/s 00:16:20.994 Latency(us) 00:16:20.994 [2024-12-06T05:03:59.226Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:20.994 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 1, IO size: 69632) 00:16:20.994 ftl0 : 4.00 797.64 52.97 0.00 0.00 1332.73 188.26 3037.34 00:16:20.994 [2024-12-06T05:03:59.226Z] =================================================================================================================== 00:16:20.994 [2024-12-06T05:03:59.226Z] Total : 797.64 52.97 0.00 0.00 1332.73 188.26 3037.34 00:16:20.994 { 00:16:20.994 "results": [ 00:16:20.994 { 00:16:20.994 "job": "ftl0", 00:16:20.994 "core_mask": "0x1", 00:16:20.994 "workload": "randwrite", 00:16:20.994 "status": "finished", 00:16:20.994 "queue_depth": 1, 00:16:20.994 "io_size": 69632, 00:16:20.994 "runtime": 4.001798, 00:16:20.994 "iops": 797.6414601636565, 00:16:20.994 "mibps": 52.968378213992814, 00:16:20.994 "io_failed": 0, 00:16:20.994 "io_timeout": 0, 00:16:20.994 "avg_latency_us": 1332.7259070753807, 00:16:20.994 "min_latency_us": 188.25846153846155, 00:16:20.994 "max_latency_us": 3037.3415384615387 00:16:20.994 } 00:16:20.994 ], 00:16:20.994 "core_count": 1 00:16:20.994 } 00:16:20.994 [2024-12-06 05:03:58.997029] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:16:20.994 05:03:59 ftl.ftl_bdevperf -- ftl/bdevperf.sh@31 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w randwrite -t 4 -o 4096 00:16:20.994 [2024-12-06 05:03:59.105999] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:16:20.994 Running I/O for 4 seconds... 
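The mibps figure in the results JSON above is internally consistent with iops * io_size: 797.6414601636565 * 69632 / 1048576 is roughly 52.9684, matching the reported 52.968378213992814. The same check in shell:

    echo '797.6414601636565 * 69632 / 1048576' | bc -l   # ~52.96837821...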
00:16:23.324 5790.00 IOPS, 22.62 MiB/s [2024-12-06T05:04:02.128Z] 5301.50 IOPS, 20.71 MiB/s [2024-12-06T05:04:03.515Z] 5187.00 IOPS, 20.26 MiB/s [2024-12-06T05:04:03.515Z] 5085.00 IOPS, 19.86 MiB/s 00:16:25.283 Latency(us) 00:16:25.283 [2024-12-06T05:04:03.515Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:25.283 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 128, IO size: 4096) 00:16:25.283 ftl0 : 4.03 5071.00 19.81 0.00 0.00 25141.90 368.64 47790.87 00:16:25.283 [2024-12-06T05:04:03.515Z] =================================================================================================================== 00:16:25.283 [2024-12-06T05:04:03.515Z] Total : 5071.00 19.81 0.00 0.00 25141.90 0.00 47790.87 00:16:25.283 [2024-12-06 05:04:03.148697] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:16:25.283 { 00:16:25.283 "results": [ 00:16:25.283 { 00:16:25.283 "job": "ftl0", 00:16:25.283 "core_mask": "0x1", 00:16:25.283 "workload": "randwrite", 00:16:25.283 "status": "finished", 00:16:25.283 "queue_depth": 128, 00:16:25.283 "io_size": 4096, 00:16:25.283 "runtime": 4.034509, 00:16:25.283 "iops": 5071.001204855411, 00:16:25.283 "mibps": 19.80859845646645, 00:16:25.283 "io_failed": 0, 00:16:25.283 "io_timeout": 0, 00:16:25.283 "avg_latency_us": 25141.90223238221, 00:16:25.283 "min_latency_us": 368.64, 00:16:25.283 "max_latency_us": 47790.86769230769 00:16:25.283 } 00:16:25.283 ], 00:16:25.283 "core_count": 1 00:16:25.283 } 00:16:25.283 05:04:03 ftl.ftl_bdevperf -- ftl/bdevperf.sh@32 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w verify -t 4 -o 4096 00:16:25.283 [2024-12-06 05:04:03.259396] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:16:25.283 Running I/O for 4 seconds... 
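Each pass finishes by printing a results object like the ones above; when post-processing a saved copy of that blob, the headline fields can be pulled out with jq (the results.json filename is assumed for illustration):

    jq '.results[] | {job, iops, mibps, avg_latency_us}' results.json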
00:16:27.169 4784.00 IOPS, 18.69 MiB/s [2024-12-06T05:04:06.343Z] 5104.50 IOPS, 19.94 MiB/s [2024-12-06T05:04:07.287Z] 5004.67 IOPS, 19.55 MiB/s [2024-12-06T05:04:07.287Z] 5090.75 IOPS, 19.89 MiB/s 00:16:29.055 Latency(us) 00:16:29.055 [2024-12-06T05:04:07.287Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:29.055 Job: ftl0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:16:29.055 Verification LBA range: start 0x0 length 0x1400000 00:16:29.055 ftl0 : 4.01 5106.08 19.95 0.00 0.00 24998.07 352.89 44161.18 00:16:29.055 [2024-12-06T05:04:07.287Z] =================================================================================================================== 00:16:29.055 [2024-12-06T05:04:07.287Z] Total : 5106.08 19.95 0.00 0.00 24998.07 0.00 44161.18 00:16:29.055 [2024-12-06 05:04:07.280531] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:16:29.055 { 00:16:29.055 "results": [ 00:16:29.055 { 00:16:29.055 "job": "ftl0", 00:16:29.055 "core_mask": "0x1", 00:16:29.055 "workload": "verify", 00:16:29.055 "status": "finished", 00:16:29.055 "verify_range": { 00:16:29.055 "start": 0, 00:16:29.055 "length": 20971520 00:16:29.055 }, 00:16:29.055 "queue_depth": 128, 00:16:29.055 "io_size": 4096, 00:16:29.055 "runtime": 4.011103, 00:16:29.055 "iops": 5106.076807302131, 00:16:29.055 "mibps": 19.94561252852395, 00:16:29.055 "io_failed": 0, 00:16:29.055 "io_timeout": 0, 00:16:29.055 "avg_latency_us": 24998.06662204745, 00:16:29.055 "min_latency_us": 352.88615384615383, 00:16:29.055 "max_latency_us": 44161.18153846154 00:16:29.055 } 00:16:29.055 ], 00:16:29.055 "core_count": 1 00:16:29.055 } 00:16:29.316 05:04:07 ftl.ftl_bdevperf -- ftl/bdevperf.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_delete -b ftl0 00:16:29.316 [2024-12-06 05:04:07.453464] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:29.316 [2024-12-06 05:04:07.453503] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:16:29.316 [2024-12-06 05:04:07.453518] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:16:29.316 [2024-12-06 05:04:07.453526] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:29.316 [2024-12-06 05:04:07.453549] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:16:29.316 [2024-12-06 05:04:07.454122] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:29.316 [2024-12-06 05:04:07.454154] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:16:29.316 [2024-12-06 05:04:07.454164] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.559 ms 00:16:29.316 [2024-12-06 05:04:07.454177] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:29.316 [2024-12-06 05:04:07.456766] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:29.316 [2024-12-06 05:04:07.456801] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:16:29.316 [2024-12-06 05:04:07.456811] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.568 ms 00:16:29.316 [2024-12-06 05:04:07.456824] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:29.579 [2024-12-06 05:04:07.661008] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:29.579 [2024-12-06 05:04:07.661049] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist 
L2P 00:16:29.579 [2024-12-06 05:04:07.661062] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 204.168 ms 00:16:29.579 [2024-12-06 05:04:07.661073] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:29.579 [2024-12-06 05:04:07.667286] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:29.579 [2024-12-06 05:04:07.667318] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:16:29.579 [2024-12-06 05:04:07.667329] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.186 ms 00:16:29.579 [2024-12-06 05:04:07.667345] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:29.579 [2024-12-06 05:04:07.669506] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:29.579 [2024-12-06 05:04:07.669543] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:16:29.579 [2024-12-06 05:04:07.669554] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.112 ms 00:16:29.579 [2024-12-06 05:04:07.669565] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:29.579 [2024-12-06 05:04:07.674852] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:29.579 [2024-12-06 05:04:07.674889] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:16:29.579 [2024-12-06 05:04:07.674904] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.257 ms 00:16:29.579 [2024-12-06 05:04:07.674921] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:29.579 [2024-12-06 05:04:07.675031] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:29.579 [2024-12-06 05:04:07.675052] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:16:29.579 [2024-12-06 05:04:07.675060] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.078 ms 00:16:29.579 [2024-12-06 05:04:07.675070] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:29.579 [2024-12-06 05:04:07.677653] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:29.579 [2024-12-06 05:04:07.677704] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:16:29.579 [2024-12-06 05:04:07.677712] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.568 ms 00:16:29.579 [2024-12-06 05:04:07.677722] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:29.579 [2024-12-06 05:04:07.679993] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:29.579 [2024-12-06 05:04:07.680133] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:16:29.579 [2024-12-06 05:04:07.680148] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.243 ms 00:16:29.579 [2024-12-06 05:04:07.680157] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:29.579 [2024-12-06 05:04:07.681805] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:29.579 [2024-12-06 05:04:07.681840] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:16:29.579 [2024-12-06 05:04:07.681849] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.580 ms 00:16:29.579 [2024-12-06 05:04:07.681861] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:29.579 [2024-12-06 05:04:07.683014] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:29.579 [2024-12-06 05:04:07.683049] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:16:29.579 [2024-12-06 05:04:07.683057] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.091 ms 00:16:29.579 [2024-12-06 05:04:07.683069] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:29.579 [2024-12-06 05:04:07.683096] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:16:29.579 [2024-12-06 05:04:07.683113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:16:29.579 [2024-12-06 05:04:07.683124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:16:29.579 [2024-12-06 05:04:07.683133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:16:29.579 [2024-12-06 05:04:07.683141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:16:29.579 [2024-12-06 05:04:07.683151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:16:29.579 [2024-12-06 05:04:07.683159] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:16:29.579 [2024-12-06 05:04:07.683168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:16:29.579 [2024-12-06 05:04:07.683176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:16:29.579 [2024-12-06 05:04:07.683186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:16:29.579 [2024-12-06 05:04:07.683193] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:16:29.579 [2024-12-06 05:04:07.683205] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:16:29.579 [2024-12-06 05:04:07.683212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:16:29.579 [2024-12-06 05:04:07.683221] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:16:29.579 [2024-12-06 05:04:07.683228] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:16:29.579 [2024-12-06 05:04:07.683238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:16:29.579 [2024-12-06 05:04:07.683245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:16:29.579 [2024-12-06 05:04:07.683254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:16:29.579 [2024-12-06 05:04:07.683262] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:16:29.579 [2024-12-06 05:04:07.683270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:16:29.579 [2024-12-06 05:04:07.683279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:16:29.579 [2024-12-06 05:04:07.683288] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:16:29.579 [2024-12-06 05:04:07.683295] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: 
free 00:16:29.580 [2024-12-06 05:04:07.683304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:16:29.580 [2024-12-06 05:04:07.683311] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:16:29.580 [2024-12-06 05:04:07.683320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:16:29.580 [2024-12-06 05:04:07.683327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:16:29.580 [2024-12-06 05:04:07.683340] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:16:29.580 [2024-12-06 05:04:07.683348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:16:29.580 [2024-12-06 05:04:07.683358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:16:29.580 [2024-12-06 05:04:07.683365] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:16:29.580 [2024-12-06 05:04:07.683374] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:16:29.580 [2024-12-06 05:04:07.683384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:16:29.580 [2024-12-06 05:04:07.683394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:16:29.580 [2024-12-06 05:04:07.683401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:16:29.580 [2024-12-06 05:04:07.683410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:16:29.580 [2024-12-06 05:04:07.683417] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:16:29.580 [2024-12-06 05:04:07.683427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:16:29.580 [2024-12-06 05:04:07.683435] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:16:29.580 [2024-12-06 05:04:07.683444] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:16:29.580 [2024-12-06 05:04:07.683452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:16:29.580 [2024-12-06 05:04:07.683460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:16:29.580 [2024-12-06 05:04:07.683468] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:16:29.580 [2024-12-06 05:04:07.683479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:16:29.580 [2024-12-06 05:04:07.683487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:16:29.580 [2024-12-06 05:04:07.683496] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:16:29.580 [2024-12-06 05:04:07.683504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:16:29.580 [2024-12-06 05:04:07.683513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 
261120 wr_cnt: 0 state: free 00:16:29.580 [2024-12-06 05:04:07.683520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:16:29.580 [2024-12-06 05:04:07.683530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:16:29.580 [2024-12-06 05:04:07.683538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:16:29.580 [2024-12-06 05:04:07.683548] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:16:29.580 [2024-12-06 05:04:07.683556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:16:29.580 [2024-12-06 05:04:07.683567] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:16:29.580 [2024-12-06 05:04:07.683575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:16:29.580 [2024-12-06 05:04:07.683584] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:16:29.580 [2024-12-06 05:04:07.683592] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:16:29.580 [2024-12-06 05:04:07.683601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:16:29.580 [2024-12-06 05:04:07.683608] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:16:29.580 [2024-12-06 05:04:07.683619] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:16:29.580 [2024-12-06 05:04:07.683627] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:16:29.580 [2024-12-06 05:04:07.683636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:16:29.580 [2024-12-06 05:04:07.683643] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:16:29.580 [2024-12-06 05:04:07.683652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:16:29.580 [2024-12-06 05:04:07.683662] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:16:29.580 [2024-12-06 05:04:07.683685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:16:29.580 [2024-12-06 05:04:07.683694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:16:29.580 [2024-12-06 05:04:07.683703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:16:29.580 [2024-12-06 05:04:07.683711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:16:29.580 [2024-12-06 05:04:07.683721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:16:29.580 [2024-12-06 05:04:07.683728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:16:29.580 [2024-12-06 05:04:07.683737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:16:29.580 [2024-12-06 05:04:07.683744] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:16:29.580 [2024-12-06 05:04:07.683754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:16:29.580 [2024-12-06 05:04:07.683762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:16:29.580 [2024-12-06 05:04:07.683773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:16:29.580 [2024-12-06 05:04:07.683780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:16:29.580 [2024-12-06 05:04:07.683789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:16:29.580 [2024-12-06 05:04:07.683797] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:16:29.580 [2024-12-06 05:04:07.683808] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:16:29.580 [2024-12-06 05:04:07.683815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:16:29.580 [2024-12-06 05:04:07.683825] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:16:29.580 [2024-12-06 05:04:07.683832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:16:29.580 [2024-12-06 05:04:07.683842] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:16:29.580 [2024-12-06 05:04:07.683851] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:16:29.580 [2024-12-06 05:04:07.683859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:16:29.580 [2024-12-06 05:04:07.683866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:16:29.580 [2024-12-06 05:04:07.683876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:16:29.580 [2024-12-06 05:04:07.683883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:16:29.580 [2024-12-06 05:04:07.683893] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:16:29.580 [2024-12-06 05:04:07.683900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:16:29.580 [2024-12-06 05:04:07.683911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:16:29.580 [2024-12-06 05:04:07.683918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:16:29.580 [2024-12-06 05:04:07.683927] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:16:29.580 [2024-12-06 05:04:07.683936] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:16:29.580 [2024-12-06 05:04:07.683946] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:16:29.580 [2024-12-06 05:04:07.683958] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:16:29.580 [2024-12-06 05:04:07.683967] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:16:29.580 [2024-12-06 05:04:07.683975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:16:29.580 [2024-12-06 05:04:07.683985] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:16:29.580 [2024-12-06 05:04:07.683992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:16:29.580 [2024-12-06 05:04:07.684009] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:16:29.580 [2024-12-06 05:04:07.684016] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 860beb4f-efe9-45ec-905a-975d5871f450 00:16:29.580 [2024-12-06 05:04:07.684026] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:16:29.580 [2024-12-06 05:04:07.684037] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:16:29.580 [2024-12-06 05:04:07.684047] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:16:29.580 [2024-12-06 05:04:07.684055] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:16:29.580 [2024-12-06 05:04:07.684067] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:16:29.580 [2024-12-06 05:04:07.684074] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:16:29.580 [2024-12-06 05:04:07.684090] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:16:29.580 [2024-12-06 05:04:07.684096] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:16:29.580 [2024-12-06 05:04:07.684105] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:16:29.580 [2024-12-06 05:04:07.684112] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:29.580 [2024-12-06 05:04:07.684121] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:16:29.581 [2024-12-06 05:04:07.684129] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.017 ms 00:16:29.581 [2024-12-06 05:04:07.684139] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:29.581 [2024-12-06 05:04:07.686037] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:29.581 [2024-12-06 05:04:07.686063] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:16:29.581 [2024-12-06 05:04:07.686072] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.876 ms 00:16:29.581 [2024-12-06 05:04:07.686081] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:29.581 [2024-12-06 05:04:07.686188] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:29.581 [2024-12-06 05:04:07.686200] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:16:29.581 [2024-12-06 05:04:07.686208] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.079 ms 00:16:29.581 [2024-12-06 05:04:07.686221] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:29.581 [2024-12-06 05:04:07.692258] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:29.581 [2024-12-06 05:04:07.692294] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:16:29.581 [2024-12-06 05:04:07.692304] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:29.581 [2024-12-06 05:04:07.692314] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:16:29.581 [2024-12-06 05:04:07.692370] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:29.581 [2024-12-06 05:04:07.692380] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:29.581 [2024-12-06 05:04:07.692388] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:29.581 [2024-12-06 05:04:07.692397] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:29.581 [2024-12-06 05:04:07.692464] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:29.581 [2024-12-06 05:04:07.692477] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:29.581 [2024-12-06 05:04:07.692485] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:29.581 [2024-12-06 05:04:07.692495] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:29.581 [2024-12-06 05:04:07.692509] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:29.581 [2024-12-06 05:04:07.692520] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:29.581 [2024-12-06 05:04:07.692528] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:29.581 [2024-12-06 05:04:07.692540] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:29.581 [2024-12-06 05:04:07.704648] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:29.581 [2024-12-06 05:04:07.704698] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:29.581 [2024-12-06 05:04:07.704709] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:29.581 [2024-12-06 05:04:07.704720] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:29.581 [2024-12-06 05:04:07.715300] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:29.581 [2024-12-06 05:04:07.715346] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:16:29.581 [2024-12-06 05:04:07.715363] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:29.581 [2024-12-06 05:04:07.715373] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:29.581 [2024-12-06 05:04:07.715447] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:29.581 [2024-12-06 05:04:07.715466] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:29.581 [2024-12-06 05:04:07.715474] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:29.581 [2024-12-06 05:04:07.715484] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:29.581 [2024-12-06 05:04:07.715526] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:29.581 [2024-12-06 05:04:07.715538] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:29.581 [2024-12-06 05:04:07.715547] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:29.581 [2024-12-06 05:04:07.715559] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:29.581 [2024-12-06 05:04:07.715629] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:29.581 [2024-12-06 05:04:07.715641] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:29.581 [2024-12-06 05:04:07.715657] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 
ms 00:16:29.581 [2024-12-06 05:04:07.715719] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:29.581 [2024-12-06 05:04:07.715751] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:29.581 [2024-12-06 05:04:07.715762] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:16:29.581 [2024-12-06 05:04:07.715771] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:29.581 [2024-12-06 05:04:07.715781] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:29.581 [2024-12-06 05:04:07.715822] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:29.581 [2024-12-06 05:04:07.715834] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:29.581 [2024-12-06 05:04:07.715845] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:29.581 [2024-12-06 05:04:07.715858] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:29.581 [2024-12-06 05:04:07.715903] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:29.581 [2024-12-06 05:04:07.715917] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:16:29.581 [2024-12-06 05:04:07.715927] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:29.581 [2024-12-06 05:04:07.715940] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:29.581 [2024-12-06 05:04:07.716081] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 262.576 ms, result 0 00:16:29.581 true 00:16:29.581 05:04:07 ftl.ftl_bdevperf -- ftl/bdevperf.sh@36 -- # killprocess 84776 00:16:29.581 05:04:07 ftl.ftl_bdevperf -- common/autotest_common.sh@950 -- # '[' -z 84776 ']' 00:16:29.581 05:04:07 ftl.ftl_bdevperf -- common/autotest_common.sh@954 -- # kill -0 84776 00:16:29.581 05:04:07 ftl.ftl_bdevperf -- common/autotest_common.sh@955 -- # uname 00:16:29.581 05:04:07 ftl.ftl_bdevperf -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:29.581 05:04:07 ftl.ftl_bdevperf -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 84776 00:16:29.581 killing process with pid 84776 00:16:29.581 Received shutdown signal, test time was about 4.000000 seconds 00:16:29.581 00:16:29.581 Latency(us) 00:16:29.581 [2024-12-06T05:04:07.813Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:29.581 [2024-12-06T05:04:07.813Z] =================================================================================================================== 00:16:29.581 [2024-12-06T05:04:07.813Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:29.581 05:04:07 ftl.ftl_bdevperf -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:29.581 05:04:07 ftl.ftl_bdevperf -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:29.581 05:04:07 ftl.ftl_bdevperf -- common/autotest_common.sh@968 -- # echo 'killing process with pid 84776' 00:16:29.581 05:04:07 ftl.ftl_bdevperf -- common/autotest_common.sh@969 -- # kill 84776 00:16:29.581 05:04:07 ftl.ftl_bdevperf -- common/autotest_common.sh@974 -- # wait 84776 00:16:32.130 Remove shared memory files 00:16:32.130 05:04:10 ftl.ftl_bdevperf -- ftl/bdevperf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:16:32.130 05:04:10 ftl.ftl_bdevperf -- ftl/bdevperf.sh@39 -- # remove_shm 00:16:32.130 05:04:10 ftl.ftl_bdevperf -- ftl/common.sh@204 -- # echo Remove shared memory files 00:16:32.130 05:04:10 
ftl.ftl_bdevperf -- ftl/common.sh@205 -- # rm -f rm -f 00:16:32.130 05:04:10 ftl.ftl_bdevperf -- ftl/common.sh@206 -- # rm -f rm -f 00:16:32.130 05:04:10 ftl.ftl_bdevperf -- ftl/common.sh@207 -- # rm -f rm -f 00:16:32.131 05:04:10 ftl.ftl_bdevperf -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:16:32.131 05:04:10 ftl.ftl_bdevperf -- ftl/common.sh@209 -- # rm -f rm -f 00:16:32.131 ************************************ 00:16:32.131 END TEST ftl_bdevperf 00:16:32.131 ************************************ 00:16:32.131 00:16:32.131 real 0m23.883s 00:16:32.131 user 0m26.374s 00:16:32.131 sys 0m1.092s 00:16:32.131 05:04:10 ftl.ftl_bdevperf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:32.131 05:04:10 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:16:32.392 05:04:10 ftl -- ftl/ftl.sh@75 -- # run_test ftl_trim /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0 00:16:32.392 05:04:10 ftl -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:16:32.392 05:04:10 ftl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:32.392 05:04:10 ftl -- common/autotest_common.sh@10 -- # set +x 00:16:32.392 ************************************ 00:16:32.392 START TEST ftl_trim 00:16:32.392 ************************************ 00:16:32.392 05:04:10 ftl.ftl_trim -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0 00:16:32.392 * Looking for test storage... 00:16:32.392 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:16:32.392 05:04:10 ftl.ftl_trim -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:16:32.392 05:04:10 ftl.ftl_trim -- common/autotest_common.sh@1681 -- # lcov --version 00:16:32.392 05:04:10 ftl.ftl_trim -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:16:32.392 05:04:10 ftl.ftl_trim -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:16:32.392 05:04:10 ftl.ftl_trim -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:32.392 05:04:10 ftl.ftl_trim -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:32.392 05:04:10 ftl.ftl_trim -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:32.392 05:04:10 ftl.ftl_trim -- scripts/common.sh@336 -- # IFS=.-: 00:16:32.392 05:04:10 ftl.ftl_trim -- scripts/common.sh@336 -- # read -ra ver1 00:16:32.392 05:04:10 ftl.ftl_trim -- scripts/common.sh@337 -- # IFS=.-: 00:16:32.392 05:04:10 ftl.ftl_trim -- scripts/common.sh@337 -- # read -ra ver2 00:16:32.392 05:04:10 ftl.ftl_trim -- scripts/common.sh@338 -- # local 'op=<' 00:16:32.392 05:04:10 ftl.ftl_trim -- scripts/common.sh@340 -- # ver1_l=2 00:16:32.392 05:04:10 ftl.ftl_trim -- scripts/common.sh@341 -- # ver2_l=1 00:16:32.392 05:04:10 ftl.ftl_trim -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:32.392 05:04:10 ftl.ftl_trim -- scripts/common.sh@344 -- # case "$op" in 00:16:32.392 05:04:10 ftl.ftl_trim -- scripts/common.sh@345 -- # : 1 00:16:32.392 05:04:10 ftl.ftl_trim -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:32.392 05:04:10 ftl.ftl_trim -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:32.392 05:04:10 ftl.ftl_trim -- scripts/common.sh@365 -- # decimal 1 00:16:32.392 05:04:10 ftl.ftl_trim -- scripts/common.sh@353 -- # local d=1 00:16:32.392 05:04:10 ftl.ftl_trim -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:32.392 05:04:10 ftl.ftl_trim -- scripts/common.sh@355 -- # echo 1 00:16:32.392 05:04:10 ftl.ftl_trim -- scripts/common.sh@365 -- # ver1[v]=1 00:16:32.392 05:04:10 ftl.ftl_trim -- scripts/common.sh@366 -- # decimal 2 00:16:32.392 05:04:10 ftl.ftl_trim -- scripts/common.sh@353 -- # local d=2 00:16:32.392 05:04:10 ftl.ftl_trim -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:32.392 05:04:10 ftl.ftl_trim -- scripts/common.sh@355 -- # echo 2 00:16:32.392 05:04:10 ftl.ftl_trim -- scripts/common.sh@366 -- # ver2[v]=2 00:16:32.392 05:04:10 ftl.ftl_trim -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:32.392 05:04:10 ftl.ftl_trim -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:32.392 05:04:10 ftl.ftl_trim -- scripts/common.sh@368 -- # return 0 00:16:32.392 05:04:10 ftl.ftl_trim -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:32.392 05:04:10 ftl.ftl_trim -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:16:32.392 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:32.392 --rc genhtml_branch_coverage=1 00:16:32.392 --rc genhtml_function_coverage=1 00:16:32.392 --rc genhtml_legend=1 00:16:32.392 --rc geninfo_all_blocks=1 00:16:32.392 --rc geninfo_unexecuted_blocks=1 00:16:32.392 00:16:32.392 ' 00:16:32.392 05:04:10 ftl.ftl_trim -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:16:32.392 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:32.392 --rc genhtml_branch_coverage=1 00:16:32.392 --rc genhtml_function_coverage=1 00:16:32.392 --rc genhtml_legend=1 00:16:32.392 --rc geninfo_all_blocks=1 00:16:32.392 --rc geninfo_unexecuted_blocks=1 00:16:32.392 00:16:32.392 ' 00:16:32.392 05:04:10 ftl.ftl_trim -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:16:32.392 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:32.392 --rc genhtml_branch_coverage=1 00:16:32.392 --rc genhtml_function_coverage=1 00:16:32.392 --rc genhtml_legend=1 00:16:32.392 --rc geninfo_all_blocks=1 00:16:32.392 --rc geninfo_unexecuted_blocks=1 00:16:32.392 00:16:32.392 ' 00:16:32.392 05:04:10 ftl.ftl_trim -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:16:32.392 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:32.392 --rc genhtml_branch_coverage=1 00:16:32.392 --rc genhtml_function_coverage=1 00:16:32.392 --rc genhtml_legend=1 00:16:32.392 --rc geninfo_all_blocks=1 00:16:32.392 --rc geninfo_unexecuted_blocks=1 00:16:32.392 00:16:32.392 ' 00:16:32.392 05:04:10 ftl.ftl_trim -- ftl/trim.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:16:32.392 05:04:10 ftl.ftl_trim -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 00:16:32.392 05:04:10 ftl.ftl_trim -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:16:32.392 05:04:10 ftl.ftl_trim -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:16:32.392 05:04:10 ftl.ftl_trim -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
00:16:32.654 05:04:10 ftl.ftl_trim -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:16:32.654 05:04:10 ftl.ftl_trim -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:32.654 05:04:10 ftl.ftl_trim -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:16:32.654 05:04:10 ftl.ftl_trim -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:16:32.654 05:04:10 ftl.ftl_trim -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:32.654 05:04:10 ftl.ftl_trim -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:32.654 05:04:10 ftl.ftl_trim -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:16:32.654 05:04:10 ftl.ftl_trim -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:16:32.654 05:04:10 ftl.ftl_trim -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:16:32.654 05:04:10 ftl.ftl_trim -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:16:32.654 05:04:10 ftl.ftl_trim -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:16:32.654 05:04:10 ftl.ftl_trim -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:16:32.654 05:04:10 ftl.ftl_trim -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:32.654 05:04:10 ftl.ftl_trim -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:32.654 05:04:10 ftl.ftl_trim -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:16:32.654 05:04:10 ftl.ftl_trim -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:16:32.654 05:04:10 ftl.ftl_trim -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:16:32.654 05:04:10 ftl.ftl_trim -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:16:32.654 05:04:10 ftl.ftl_trim -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:16:32.654 05:04:10 ftl.ftl_trim -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:16:32.654 05:04:10 ftl.ftl_trim -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:16:32.654 05:04:10 ftl.ftl_trim -- ftl/common.sh@23 -- # spdk_ini_pid= 00:16:32.654 05:04:10 ftl.ftl_trim -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:16:32.654 05:04:10 ftl.ftl_trim -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:16:32.654 05:04:10 ftl.ftl_trim -- ftl/trim.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:32.654 05:04:10 ftl.ftl_trim -- ftl/trim.sh@23 -- # device=0000:00:11.0 00:16:32.654 05:04:10 ftl.ftl_trim -- ftl/trim.sh@24 -- # cache_device=0000:00:10.0 00:16:32.654 05:04:10 ftl.ftl_trim -- ftl/trim.sh@25 -- # timeout=240 00:16:32.654 05:04:10 ftl.ftl_trim -- ftl/trim.sh@26 -- # data_size_in_blocks=65536 00:16:32.654 05:04:10 ftl.ftl_trim -- ftl/trim.sh@27 -- # unmap_size_in_blocks=1024 00:16:32.654 05:04:10 ftl.ftl_trim -- ftl/trim.sh@29 -- # [[ y != y ]] 00:16:32.654 05:04:10 ftl.ftl_trim -- ftl/trim.sh@34 -- # export FTL_BDEV_NAME=ftl0 00:16:32.654 05:04:10 ftl.ftl_trim -- ftl/trim.sh@34 -- # FTL_BDEV_NAME=ftl0 00:16:32.654 05:04:10 ftl.ftl_trim -- ftl/trim.sh@35 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:16:32.654 05:04:10 ftl.ftl_trim -- ftl/trim.sh@35 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:16:32.654 05:04:10 ftl.ftl_trim -- 
ftl/trim.sh@37 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:16:32.654 05:04:10 ftl.ftl_trim -- ftl/trim.sh@40 -- # svcpid=85122 00:16:32.654 05:04:10 ftl.ftl_trim -- ftl/trim.sh@41 -- # waitforlisten 85122 00:16:32.654 05:04:10 ftl.ftl_trim -- common/autotest_common.sh@831 -- # '[' -z 85122 ']' 00:16:32.654 05:04:10 ftl.ftl_trim -- ftl/trim.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:16:32.654 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:32.654 05:04:10 ftl.ftl_trim -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:32.654 05:04:10 ftl.ftl_trim -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:32.654 05:04:10 ftl.ftl_trim -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:32.654 05:04:10 ftl.ftl_trim -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:32.654 05:04:10 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:16:32.654 [2024-12-06 05:04:10.715010] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:16:32.654 [2024-12-06 05:04:10.715356] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85122 ] 00:16:32.654 [2024-12-06 05:04:10.853255] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:32.915 [2024-12-06 05:04:10.926808] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:16:32.915 [2024-12-06 05:04:10.927148] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:16:32.915 [2024-12-06 05:04:10.927193] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:16:33.485 05:04:11 ftl.ftl_trim -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:33.485 05:04:11 ftl.ftl_trim -- common/autotest_common.sh@864 -- # return 0 00:16:33.485 05:04:11 ftl.ftl_trim -- ftl/trim.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:16:33.485 05:04:11 ftl.ftl_trim -- ftl/common.sh@54 -- # local name=nvme0 00:16:33.485 05:04:11 ftl.ftl_trim -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:16:33.485 05:04:11 ftl.ftl_trim -- ftl/common.sh@56 -- # local size=103424 00:16:33.485 05:04:11 ftl.ftl_trim -- ftl/common.sh@59 -- # local base_bdev 00:16:33.485 05:04:11 ftl.ftl_trim -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:16:33.745 05:04:11 ftl.ftl_trim -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:16:33.745 05:04:11 ftl.ftl_trim -- ftl/common.sh@62 -- # local base_size 00:16:33.745 05:04:11 ftl.ftl_trim -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:16:33.745 05:04:11 ftl.ftl_trim -- common/autotest_common.sh@1378 -- # local bdev_name=nvme0n1 00:16:33.745 05:04:11 ftl.ftl_trim -- common/autotest_common.sh@1379 -- # local bdev_info 00:16:33.745 05:04:11 ftl.ftl_trim -- common/autotest_common.sh@1380 -- # local bs 00:16:33.745 05:04:11 ftl.ftl_trim -- common/autotest_common.sh@1381 -- # local nb 00:16:33.745 05:04:11 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:16:34.005 05:04:12 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:16:34.005 { 00:16:34.005 "name": "nvme0n1", 00:16:34.005 "aliases": [ 
00:16:34.005 "9d0412d4-ef4a-4fb9-a062-627ba7af0d04" 00:16:34.005 ], 00:16:34.005 "product_name": "NVMe disk", 00:16:34.005 "block_size": 4096, 00:16:34.005 "num_blocks": 1310720, 00:16:34.005 "uuid": "9d0412d4-ef4a-4fb9-a062-627ba7af0d04", 00:16:34.005 "numa_id": -1, 00:16:34.005 "assigned_rate_limits": { 00:16:34.005 "rw_ios_per_sec": 0, 00:16:34.005 "rw_mbytes_per_sec": 0, 00:16:34.005 "r_mbytes_per_sec": 0, 00:16:34.005 "w_mbytes_per_sec": 0 00:16:34.005 }, 00:16:34.005 "claimed": true, 00:16:34.005 "claim_type": "read_many_write_one", 00:16:34.005 "zoned": false, 00:16:34.005 "supported_io_types": { 00:16:34.005 "read": true, 00:16:34.005 "write": true, 00:16:34.005 "unmap": true, 00:16:34.005 "flush": true, 00:16:34.005 "reset": true, 00:16:34.005 "nvme_admin": true, 00:16:34.006 "nvme_io": true, 00:16:34.006 "nvme_io_md": false, 00:16:34.006 "write_zeroes": true, 00:16:34.006 "zcopy": false, 00:16:34.006 "get_zone_info": false, 00:16:34.006 "zone_management": false, 00:16:34.006 "zone_append": false, 00:16:34.006 "compare": true, 00:16:34.006 "compare_and_write": false, 00:16:34.006 "abort": true, 00:16:34.006 "seek_hole": false, 00:16:34.006 "seek_data": false, 00:16:34.006 "copy": true, 00:16:34.006 "nvme_iov_md": false 00:16:34.006 }, 00:16:34.006 "driver_specific": { 00:16:34.006 "nvme": [ 00:16:34.006 { 00:16:34.006 "pci_address": "0000:00:11.0", 00:16:34.006 "trid": { 00:16:34.006 "trtype": "PCIe", 00:16:34.006 "traddr": "0000:00:11.0" 00:16:34.006 }, 00:16:34.006 "ctrlr_data": { 00:16:34.006 "cntlid": 0, 00:16:34.006 "vendor_id": "0x1b36", 00:16:34.006 "model_number": "QEMU NVMe Ctrl", 00:16:34.006 "serial_number": "12341", 00:16:34.006 "firmware_revision": "8.0.0", 00:16:34.006 "subnqn": "nqn.2019-08.org.qemu:12341", 00:16:34.006 "oacs": { 00:16:34.006 "security": 0, 00:16:34.006 "format": 1, 00:16:34.006 "firmware": 0, 00:16:34.006 "ns_manage": 1 00:16:34.006 }, 00:16:34.006 "multi_ctrlr": false, 00:16:34.006 "ana_reporting": false 00:16:34.006 }, 00:16:34.006 "vs": { 00:16:34.006 "nvme_version": "1.4" 00:16:34.006 }, 00:16:34.006 "ns_data": { 00:16:34.006 "id": 1, 00:16:34.006 "can_share": false 00:16:34.006 } 00:16:34.006 } 00:16:34.006 ], 00:16:34.006 "mp_policy": "active_passive" 00:16:34.006 } 00:16:34.006 } 00:16:34.006 ]' 00:16:34.006 05:04:12 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:16:34.006 05:04:12 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # bs=4096 00:16:34.006 05:04:12 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:16:34.006 05:04:12 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # nb=1310720 00:16:34.006 05:04:12 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bdev_size=5120 00:16:34.006 05:04:12 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # echo 5120 00:16:34.006 05:04:12 ftl.ftl_trim -- ftl/common.sh@63 -- # base_size=5120 00:16:34.006 05:04:12 ftl.ftl_trim -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:16:34.006 05:04:12 ftl.ftl_trim -- ftl/common.sh@67 -- # clear_lvols 00:16:34.006 05:04:12 ftl.ftl_trim -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:16:34.006 05:04:12 ftl.ftl_trim -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:16:34.267 05:04:12 ftl.ftl_trim -- ftl/common.sh@28 -- # stores=686346a5-423d-4310-a6bd-0b9cb652f1f8 00:16:34.267 05:04:12 ftl.ftl_trim -- ftl/common.sh@29 -- # for lvs in $stores 00:16:34.267 05:04:12 ftl.ftl_trim -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_lvol_delete_lvstore -u 686346a5-423d-4310-a6bd-0b9cb652f1f8 00:16:34.526 05:04:12 ftl.ftl_trim -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:16:34.786 05:04:12 ftl.ftl_trim -- ftl/common.sh@68 -- # lvs=40238c0f-69a5-445f-af7f-17ad69dff3c6 00:16:34.786 05:04:12 ftl.ftl_trim -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 40238c0f-69a5-445f-af7f-17ad69dff3c6 00:16:35.046 05:04:13 ftl.ftl_trim -- ftl/trim.sh@43 -- # split_bdev=af6c09ec-0a72-4908-b7ff-7843a0ea717b 00:16:35.046 05:04:13 ftl.ftl_trim -- ftl/trim.sh@44 -- # create_nv_cache_bdev nvc0 0000:00:10.0 af6c09ec-0a72-4908-b7ff-7843a0ea717b 00:16:35.046 05:04:13 ftl.ftl_trim -- ftl/common.sh@35 -- # local name=nvc0 00:16:35.046 05:04:13 ftl.ftl_trim -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:16:35.046 05:04:13 ftl.ftl_trim -- ftl/common.sh@37 -- # local base_bdev=af6c09ec-0a72-4908-b7ff-7843a0ea717b 00:16:35.046 05:04:13 ftl.ftl_trim -- ftl/common.sh@38 -- # local cache_size= 00:16:35.046 05:04:13 ftl.ftl_trim -- ftl/common.sh@41 -- # get_bdev_size af6c09ec-0a72-4908-b7ff-7843a0ea717b 00:16:35.046 05:04:13 ftl.ftl_trim -- common/autotest_common.sh@1378 -- # local bdev_name=af6c09ec-0a72-4908-b7ff-7843a0ea717b 00:16:35.046 05:04:13 ftl.ftl_trim -- common/autotest_common.sh@1379 -- # local bdev_info 00:16:35.046 05:04:13 ftl.ftl_trim -- common/autotest_common.sh@1380 -- # local bs 00:16:35.046 05:04:13 ftl.ftl_trim -- common/autotest_common.sh@1381 -- # local nb 00:16:35.046 05:04:13 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b af6c09ec-0a72-4908-b7ff-7843a0ea717b 00:16:35.305 05:04:13 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:16:35.305 { 00:16:35.305 "name": "af6c09ec-0a72-4908-b7ff-7843a0ea717b", 00:16:35.305 "aliases": [ 00:16:35.305 "lvs/nvme0n1p0" 00:16:35.305 ], 00:16:35.305 "product_name": "Logical Volume", 00:16:35.305 "block_size": 4096, 00:16:35.305 "num_blocks": 26476544, 00:16:35.305 "uuid": "af6c09ec-0a72-4908-b7ff-7843a0ea717b", 00:16:35.305 "assigned_rate_limits": { 00:16:35.305 "rw_ios_per_sec": 0, 00:16:35.305 "rw_mbytes_per_sec": 0, 00:16:35.305 "r_mbytes_per_sec": 0, 00:16:35.305 "w_mbytes_per_sec": 0 00:16:35.305 }, 00:16:35.305 "claimed": false, 00:16:35.305 "zoned": false, 00:16:35.305 "supported_io_types": { 00:16:35.305 "read": true, 00:16:35.305 "write": true, 00:16:35.305 "unmap": true, 00:16:35.305 "flush": false, 00:16:35.305 "reset": true, 00:16:35.305 "nvme_admin": false, 00:16:35.306 "nvme_io": false, 00:16:35.306 "nvme_io_md": false, 00:16:35.306 "write_zeroes": true, 00:16:35.306 "zcopy": false, 00:16:35.306 "get_zone_info": false, 00:16:35.306 "zone_management": false, 00:16:35.306 "zone_append": false, 00:16:35.306 "compare": false, 00:16:35.306 "compare_and_write": false, 00:16:35.306 "abort": false, 00:16:35.306 "seek_hole": true, 00:16:35.306 "seek_data": true, 00:16:35.306 "copy": false, 00:16:35.306 "nvme_iov_md": false 00:16:35.306 }, 00:16:35.306 "driver_specific": { 00:16:35.306 "lvol": { 00:16:35.306 "lvol_store_uuid": "40238c0f-69a5-445f-af7f-17ad69dff3c6", 00:16:35.306 "base_bdev": "nvme0n1", 00:16:35.306 "thin_provision": true, 00:16:35.306 "num_allocated_clusters": 0, 00:16:35.306 "snapshot": false, 00:16:35.306 "clone": false, 00:16:35.306 "esnap_clone": false 00:16:35.306 } 00:16:35.306 } 00:16:35.306 } 00:16:35.306 ]' 00:16:35.306 05:04:13 ftl.ftl_trim -- 
common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:16:35.306 05:04:13 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # bs=4096 00:16:35.306 05:04:13 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:16:35.306 05:04:13 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # nb=26476544 00:16:35.306 05:04:13 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:16:35.306 05:04:13 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # echo 103424 00:16:35.306 05:04:13 ftl.ftl_trim -- ftl/common.sh@41 -- # local base_size=5171 00:16:35.306 05:04:13 ftl.ftl_trim -- ftl/common.sh@44 -- # local nvc_bdev 00:16:35.306 05:04:13 ftl.ftl_trim -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:16:35.564 05:04:13 ftl.ftl_trim -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:16:35.564 05:04:13 ftl.ftl_trim -- ftl/common.sh@47 -- # [[ -z '' ]] 00:16:35.564 05:04:13 ftl.ftl_trim -- ftl/common.sh@48 -- # get_bdev_size af6c09ec-0a72-4908-b7ff-7843a0ea717b 00:16:35.564 05:04:13 ftl.ftl_trim -- common/autotest_common.sh@1378 -- # local bdev_name=af6c09ec-0a72-4908-b7ff-7843a0ea717b 00:16:35.564 05:04:13 ftl.ftl_trim -- common/autotest_common.sh@1379 -- # local bdev_info 00:16:35.564 05:04:13 ftl.ftl_trim -- common/autotest_common.sh@1380 -- # local bs 00:16:35.564 05:04:13 ftl.ftl_trim -- common/autotest_common.sh@1381 -- # local nb 00:16:35.564 05:04:13 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b af6c09ec-0a72-4908-b7ff-7843a0ea717b 00:16:35.821 05:04:13 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:16:35.821 { 00:16:35.821 "name": "af6c09ec-0a72-4908-b7ff-7843a0ea717b", 00:16:35.821 "aliases": [ 00:16:35.821 "lvs/nvme0n1p0" 00:16:35.821 ], 00:16:35.821 "product_name": "Logical Volume", 00:16:35.821 "block_size": 4096, 00:16:35.821 "num_blocks": 26476544, 00:16:35.821 "uuid": "af6c09ec-0a72-4908-b7ff-7843a0ea717b", 00:16:35.821 "assigned_rate_limits": { 00:16:35.821 "rw_ios_per_sec": 0, 00:16:35.821 "rw_mbytes_per_sec": 0, 00:16:35.821 "r_mbytes_per_sec": 0, 00:16:35.821 "w_mbytes_per_sec": 0 00:16:35.821 }, 00:16:35.821 "claimed": false, 00:16:35.821 "zoned": false, 00:16:35.821 "supported_io_types": { 00:16:35.821 "read": true, 00:16:35.821 "write": true, 00:16:35.821 "unmap": true, 00:16:35.821 "flush": false, 00:16:35.821 "reset": true, 00:16:35.821 "nvme_admin": false, 00:16:35.821 "nvme_io": false, 00:16:35.821 "nvme_io_md": false, 00:16:35.821 "write_zeroes": true, 00:16:35.821 "zcopy": false, 00:16:35.821 "get_zone_info": false, 00:16:35.821 "zone_management": false, 00:16:35.821 "zone_append": false, 00:16:35.821 "compare": false, 00:16:35.821 "compare_and_write": false, 00:16:35.821 "abort": false, 00:16:35.821 "seek_hole": true, 00:16:35.821 "seek_data": true, 00:16:35.821 "copy": false, 00:16:35.821 "nvme_iov_md": false 00:16:35.821 }, 00:16:35.821 "driver_specific": { 00:16:35.821 "lvol": { 00:16:35.821 "lvol_store_uuid": "40238c0f-69a5-445f-af7f-17ad69dff3c6", 00:16:35.821 "base_bdev": "nvme0n1", 00:16:35.821 "thin_provision": true, 00:16:35.821 "num_allocated_clusters": 0, 00:16:35.821 "snapshot": false, 00:16:35.821 "clone": false, 00:16:35.821 "esnap_clone": false 00:16:35.821 } 00:16:35.821 } 00:16:35.821 } 00:16:35.821 ]' 00:16:35.821 05:04:13 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:16:35.821 05:04:13 ftl.ftl_trim -- 
common/autotest_common.sh@1383 -- # bs=4096 00:16:35.821 05:04:13 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:16:35.821 05:04:13 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # nb=26476544 00:16:35.821 05:04:13 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:16:35.821 05:04:13 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # echo 103424 00:16:35.821 05:04:13 ftl.ftl_trim -- ftl/common.sh@48 -- # cache_size=5171 00:16:35.821 05:04:13 ftl.ftl_trim -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:16:36.079 05:04:14 ftl.ftl_trim -- ftl/trim.sh@44 -- # nv_cache=nvc0n1p0 00:16:36.079 05:04:14 ftl.ftl_trim -- ftl/trim.sh@46 -- # l2p_percentage=60 00:16:36.079 05:04:14 ftl.ftl_trim -- ftl/trim.sh@47 -- # get_bdev_size af6c09ec-0a72-4908-b7ff-7843a0ea717b 00:16:36.079 05:04:14 ftl.ftl_trim -- common/autotest_common.sh@1378 -- # local bdev_name=af6c09ec-0a72-4908-b7ff-7843a0ea717b 00:16:36.079 05:04:14 ftl.ftl_trim -- common/autotest_common.sh@1379 -- # local bdev_info 00:16:36.079 05:04:14 ftl.ftl_trim -- common/autotest_common.sh@1380 -- # local bs 00:16:36.079 05:04:14 ftl.ftl_trim -- common/autotest_common.sh@1381 -- # local nb 00:16:36.079 05:04:14 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b af6c09ec-0a72-4908-b7ff-7843a0ea717b 00:16:36.079 05:04:14 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:16:36.079 { 00:16:36.079 "name": "af6c09ec-0a72-4908-b7ff-7843a0ea717b", 00:16:36.079 "aliases": [ 00:16:36.079 "lvs/nvme0n1p0" 00:16:36.079 ], 00:16:36.079 "product_name": "Logical Volume", 00:16:36.079 "block_size": 4096, 00:16:36.079 "num_blocks": 26476544, 00:16:36.079 "uuid": "af6c09ec-0a72-4908-b7ff-7843a0ea717b", 00:16:36.079 "assigned_rate_limits": { 00:16:36.079 "rw_ios_per_sec": 0, 00:16:36.079 "rw_mbytes_per_sec": 0, 00:16:36.079 "r_mbytes_per_sec": 0, 00:16:36.079 "w_mbytes_per_sec": 0 00:16:36.079 }, 00:16:36.079 "claimed": false, 00:16:36.079 "zoned": false, 00:16:36.079 "supported_io_types": { 00:16:36.079 "read": true, 00:16:36.079 "write": true, 00:16:36.079 "unmap": true, 00:16:36.079 "flush": false, 00:16:36.079 "reset": true, 00:16:36.079 "nvme_admin": false, 00:16:36.079 "nvme_io": false, 00:16:36.079 "nvme_io_md": false, 00:16:36.079 "write_zeroes": true, 00:16:36.079 "zcopy": false, 00:16:36.079 "get_zone_info": false, 00:16:36.079 "zone_management": false, 00:16:36.079 "zone_append": false, 00:16:36.079 "compare": false, 00:16:36.079 "compare_and_write": false, 00:16:36.079 "abort": false, 00:16:36.079 "seek_hole": true, 00:16:36.079 "seek_data": true, 00:16:36.079 "copy": false, 00:16:36.079 "nvme_iov_md": false 00:16:36.079 }, 00:16:36.079 "driver_specific": { 00:16:36.079 "lvol": { 00:16:36.079 "lvol_store_uuid": "40238c0f-69a5-445f-af7f-17ad69dff3c6", 00:16:36.079 "base_bdev": "nvme0n1", 00:16:36.079 "thin_provision": true, 00:16:36.079 "num_allocated_clusters": 0, 00:16:36.079 "snapshot": false, 00:16:36.079 "clone": false, 00:16:36.079 "esnap_clone": false 00:16:36.079 } 00:16:36.079 } 00:16:36.079 } 00:16:36.079 ]' 00:16:36.079 05:04:14 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:16:36.079 05:04:14 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # bs=4096 00:16:36.079 05:04:14 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:16:36.337 05:04:14 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # 
nb=26476544 00:16:36.337 05:04:14 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:16:36.337 05:04:14 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # echo 103424 00:16:36.337 05:04:14 ftl.ftl_trim -- ftl/trim.sh@47 -- # l2p_dram_size_mb=60 00:16:36.337 05:04:14 ftl.ftl_trim -- ftl/trim.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d af6c09ec-0a72-4908-b7ff-7843a0ea717b -c nvc0n1p0 --core_mask 7 --l2p_dram_limit 60 --overprovisioning 10 00:16:36.337 [2024-12-06 05:04:14.518315] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:36.337 [2024-12-06 05:04:14.518367] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:16:36.337 [2024-12-06 05:04:14.518380] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:16:36.337 [2024-12-06 05:04:14.518388] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:36.337 [2024-12-06 05:04:14.520353] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:36.337 [2024-12-06 05:04:14.520383] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:16:36.337 [2024-12-06 05:04:14.520392] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.935 ms 00:16:36.337 [2024-12-06 05:04:14.520401] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:36.337 [2024-12-06 05:04:14.520468] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:16:36.337 [2024-12-06 05:04:14.520679] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:16:36.337 [2024-12-06 05:04:14.520699] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:36.337 [2024-12-06 05:04:14.520708] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:36.337 [2024-12-06 05:04:14.520715] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.236 ms 00:16:36.337 [2024-12-06 05:04:14.520722] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:36.338 [2024-12-06 05:04:14.520849] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 73ef595a-5226-44c7-8985-04a18b3bbf5e 00:16:36.338 [2024-12-06 05:04:14.522094] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:36.338 [2024-12-06 05:04:14.522120] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:16:36.338 [2024-12-06 05:04:14.522131] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:16:36.338 [2024-12-06 05:04:14.522137] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:36.338 [2024-12-06 05:04:14.528792] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:36.338 [2024-12-06 05:04:14.528815] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:36.338 [2024-12-06 05:04:14.528824] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.585 ms 00:16:36.338 [2024-12-06 05:04:14.528830] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:36.338 [2024-12-06 05:04:14.528947] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:36.338 [2024-12-06 05:04:14.528956] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:36.338 [2024-12-06 05:04:14.528964] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 0.055 ms 00:16:36.338 [2024-12-06 05:04:14.528970] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:36.338 [2024-12-06 05:04:14.529001] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:36.338 [2024-12-06 05:04:14.529009] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:16:36.338 [2024-12-06 05:04:14.529017] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:16:36.338 [2024-12-06 05:04:14.529023] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:36.338 [2024-12-06 05:04:14.529052] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:16:36.338 [2024-12-06 05:04:14.530624] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:36.338 [2024-12-06 05:04:14.530651] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:36.338 [2024-12-06 05:04:14.530661] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.579 ms 00:16:36.338 [2024-12-06 05:04:14.530685] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:36.338 [2024-12-06 05:04:14.530725] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:36.338 [2024-12-06 05:04:14.530735] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:16:36.338 [2024-12-06 05:04:14.530741] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:16:36.338 [2024-12-06 05:04:14.530760] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:36.338 [2024-12-06 05:04:14.530789] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:16:36.338 [2024-12-06 05:04:14.530922] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:16:36.338 [2024-12-06 05:04:14.530940] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:16:36.338 [2024-12-06 05:04:14.530950] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:16:36.338 [2024-12-06 05:04:14.530958] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:16:36.338 [2024-12-06 05:04:14.530975] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:16:36.338 [2024-12-06 05:04:14.530982] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:16:36.338 [2024-12-06 05:04:14.530989] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:16:36.338 [2024-12-06 05:04:14.530994] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:16:36.338 [2024-12-06 05:04:14.531002] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:16:36.338 [2024-12-06 05:04:14.531008] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:36.338 [2024-12-06 05:04:14.531015] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:16:36.338 [2024-12-06 05:04:14.531021] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.220 ms 00:16:36.338 [2024-12-06 05:04:14.531031] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:36.338 [2024-12-06 05:04:14.531109] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:36.338 
[2024-12-06 05:04:14.531119] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:16:36.338 [2024-12-06 05:04:14.531125] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:16:36.338 [2024-12-06 05:04:14.531142] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:36.338 [2024-12-06 05:04:14.531240] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:16:36.338 [2024-12-06 05:04:14.531248] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:16:36.338 [2024-12-06 05:04:14.531255] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:36.338 [2024-12-06 05:04:14.531262] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:36.338 [2024-12-06 05:04:14.531271] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:16:36.338 [2024-12-06 05:04:14.531279] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:16:36.338 [2024-12-06 05:04:14.531284] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:16:36.338 [2024-12-06 05:04:14.531292] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:16:36.338 [2024-12-06 05:04:14.531298] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:16:36.338 [2024-12-06 05:04:14.531305] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:36.338 [2024-12-06 05:04:14.531311] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:16:36.338 [2024-12-06 05:04:14.531318] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:16:36.338 [2024-12-06 05:04:14.531324] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:36.338 [2024-12-06 05:04:14.531333] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:16:36.338 [2024-12-06 05:04:14.531339] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:16:36.338 [2024-12-06 05:04:14.531347] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:36.338 [2024-12-06 05:04:14.531353] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:16:36.338 [2024-12-06 05:04:14.531360] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:16:36.338 [2024-12-06 05:04:14.531366] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:36.338 [2024-12-06 05:04:14.531373] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:16:36.338 [2024-12-06 05:04:14.531379] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:16:36.338 [2024-12-06 05:04:14.531386] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:36.338 [2024-12-06 05:04:14.531392] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:16:36.338 [2024-12-06 05:04:14.531399] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:16:36.338 [2024-12-06 05:04:14.531405] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:36.338 [2024-12-06 05:04:14.531412] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:16:36.338 [2024-12-06 05:04:14.531418] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:16:36.338 [2024-12-06 05:04:14.531436] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:36.338 [2024-12-06 05:04:14.531442] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] 
Region p2l3 00:16:36.338 [2024-12-06 05:04:14.531452] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:16:36.338 [2024-12-06 05:04:14.531458] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:36.338 [2024-12-06 05:04:14.531465] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:16:36.338 [2024-12-06 05:04:14.531471] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:16:36.338 [2024-12-06 05:04:14.531480] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:36.339 [2024-12-06 05:04:14.531487] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:16:36.339 [2024-12-06 05:04:14.531494] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:16:36.339 [2024-12-06 05:04:14.531500] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:36.339 [2024-12-06 05:04:14.531508] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:16:36.339 [2024-12-06 05:04:14.531514] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:16:36.339 [2024-12-06 05:04:14.531521] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:36.339 [2024-12-06 05:04:14.531527] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:16:36.339 [2024-12-06 05:04:14.531534] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:16:36.339 [2024-12-06 05:04:14.531540] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:36.339 [2024-12-06 05:04:14.531547] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:16:36.339 [2024-12-06 05:04:14.531562] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:16:36.339 [2024-12-06 05:04:14.531572] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:36.339 [2024-12-06 05:04:14.531587] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:36.339 [2024-12-06 05:04:14.531596] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:16:36.339 [2024-12-06 05:04:14.531602] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:16:36.339 [2024-12-06 05:04:14.531609] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:16:36.339 [2024-12-06 05:04:14.531615] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:16:36.339 [2024-12-06 05:04:14.531623] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:16:36.339 [2024-12-06 05:04:14.531629] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:16:36.339 [2024-12-06 05:04:14.531639] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:16:36.339 [2024-12-06 05:04:14.531647] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:36.339 [2024-12-06 05:04:14.531656] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:16:36.339 [2024-12-06 05:04:14.531674] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:16:36.339 [2024-12-06 05:04:14.531682] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 
blk_sz:0x80 00:16:36.339 [2024-12-06 05:04:14.531687] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:16:36.339 [2024-12-06 05:04:14.531694] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:16:36.339 [2024-12-06 05:04:14.531699] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:16:36.339 [2024-12-06 05:04:14.531707] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:16:36.339 [2024-12-06 05:04:14.531713] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:16:36.339 [2024-12-06 05:04:14.531719] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:16:36.339 [2024-12-06 05:04:14.531724] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:16:36.339 [2024-12-06 05:04:14.531732] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:16:36.339 [2024-12-06 05:04:14.531737] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:16:36.339 [2024-12-06 05:04:14.531744] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:16:36.339 [2024-12-06 05:04:14.531749] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:16:36.339 [2024-12-06 05:04:14.531756] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:16:36.339 [2024-12-06 05:04:14.531762] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:36.339 [2024-12-06 05:04:14.531770] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:16:36.339 [2024-12-06 05:04:14.531776] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:16:36.339 [2024-12-06 05:04:14.531783] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:16:36.339 [2024-12-06 05:04:14.531789] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:16:36.339 [2024-12-06 05:04:14.531796] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:36.339 [2024-12-06 05:04:14.531801] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:16:36.339 [2024-12-06 05:04:14.531813] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.605 ms 00:16:36.339 [2024-12-06 05:04:14.531818] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:36.339 [2024-12-06 05:04:14.531886] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region 
needs scrubbing, this may take a while. 00:16:36.339 [2024-12-06 05:04:14.531894] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:16:39.652 [2024-12-06 05:04:17.255855] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:39.652 [2024-12-06 05:04:17.255907] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:16:39.652 [2024-12-06 05:04:17.255923] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2723.955 ms 00:16:39.652 [2024-12-06 05:04:17.255932] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:39.652 [2024-12-06 05:04:17.276696] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:39.652 [2024-12-06 05:04:17.276737] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:16:39.652 [2024-12-06 05:04:17.276752] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.654 ms 00:16:39.652 [2024-12-06 05:04:17.276761] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:39.652 [2024-12-06 05:04:17.276913] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:39.652 [2024-12-06 05:04:17.276926] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:16:39.652 [2024-12-06 05:04:17.276937] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:16:39.652 [2024-12-06 05:04:17.276945] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:39.652 [2024-12-06 05:04:17.287962] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:39.652 [2024-12-06 05:04:17.288010] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:39.652 [2024-12-06 05:04:17.288025] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.982 ms 00:16:39.652 [2024-12-06 05:04:17.288035] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:39.652 [2024-12-06 05:04:17.288118] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:39.652 [2024-12-06 05:04:17.288129] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:39.652 [2024-12-06 05:04:17.288141] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:16:39.652 [2024-12-06 05:04:17.288162] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:39.652 [2024-12-06 05:04:17.288582] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:39.652 [2024-12-06 05:04:17.288601] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:39.652 [2024-12-06 05:04:17.288615] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.381 ms 00:16:39.652 [2024-12-06 05:04:17.288626] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:39.652 [2024-12-06 05:04:17.288826] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:39.652 [2024-12-06 05:04:17.288846] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:39.652 [2024-12-06 05:04:17.288859] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.115 ms 00:16:39.652 [2024-12-06 05:04:17.288870] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:39.652 [2024-12-06 05:04:17.296127] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:39.652 [2024-12-06 05:04:17.296174] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize 
reloc 00:16:39.652 [2024-12-06 05:04:17.296185] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.216 ms 00:16:39.652 [2024-12-06 05:04:17.296203] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:39.652 [2024-12-06 05:04:17.305185] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:16:39.652 [2024-12-06 05:04:17.322481] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:39.652 [2024-12-06 05:04:17.322676] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:16:39.652 [2024-12-06 05:04:17.322693] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.179 ms 00:16:39.652 [2024-12-06 05:04:17.322714] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:39.652 [2024-12-06 05:04:17.380805] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:39.652 [2024-12-06 05:04:17.380855] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:16:39.652 [2024-12-06 05:04:17.380867] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 58.022 ms 00:16:39.652 [2024-12-06 05:04:17.380880] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:39.652 [2024-12-06 05:04:17.381078] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:39.652 [2024-12-06 05:04:17.381091] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:16:39.652 [2024-12-06 05:04:17.381103] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.144 ms 00:16:39.652 [2024-12-06 05:04:17.381124] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:39.652 [2024-12-06 05:04:17.384270] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:39.652 [2024-12-06 05:04:17.384305] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:16:39.652 [2024-12-06 05:04:17.384316] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.116 ms 00:16:39.652 [2024-12-06 05:04:17.384326] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:39.652 [2024-12-06 05:04:17.387198] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:39.652 [2024-12-06 05:04:17.387231] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:16:39.652 [2024-12-06 05:04:17.387242] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.816 ms 00:16:39.652 [2024-12-06 05:04:17.387253] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:39.652 [2024-12-06 05:04:17.387577] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:39.652 [2024-12-06 05:04:17.387597] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:16:39.652 [2024-12-06 05:04:17.387609] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.267 ms 00:16:39.652 [2024-12-06 05:04:17.387621] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:39.652 [2024-12-06 05:04:17.418127] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:39.652 [2024-12-06 05:04:17.418264] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:16:39.652 [2024-12-06 05:04:17.418281] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.476 ms 00:16:39.652 [2024-12-06 05:04:17.418291] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
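Condensed, the bring-up traced above amounts to the following RPC sequence (a sketch only: rpc.py abbreviates /home/vagrant/spdk_repo/spdk/scripts/rpc.py, and $LVS_UUID/$LVOL_UUID stand in for the UUIDs the calls return at runtime, 40238c0f-... and af6c09ec-... in this run):
  # Base device: QEMU NVMe namespace at 0000:00:11.0 (4096 B blocks x 1310720 = 5120 MiB)
  rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0
  # get_bdev_size in MiB = block_size * num_blocks / 1048576 = 4096 * 1310720 / 1048576 = 5120
  LVS_UUID=$(rpc.py bdev_lvol_create_lvstore nvme0n1 lvs)
  # 103424 MiB thin-provisioned (-t) lvol; this becomes the FTL base bdev
  LVOL_UUID=$(rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u "$LVS_UUID")
  # NV cache: second NVMe at 0000:00:10.0, split into one 5171 MiB partition (nvc0n1p0)
  rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0
  rpc.py bdev_split_create nvc0n1 -s 5171 1
  # FTL bdev over base + NV cache: cores 0-2 (mask 7), 60 MiB L2P DRAM, 10% overprovisioning
  rpc.py -t 240 bdev_ftl_create -b ftl0 -d "$LVOL_UUID" -c nvc0n1p0 --core_mask 7 --l2p_dram_limit 60 --overprovisioning 10
Thin provisioning is what lets a 103424 MiB lvol sit on a 5120 MiB namespace, which is also why "num_allocated_clusters": 0 in the bdev dumps above.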
00:16:39.652 [2024-12-06 05:04:17.422440] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:39.652 [2024-12-06 05:04:17.422474] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:16:39.652 [2024-12-06 05:04:17.422488] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.080 ms 00:16:39.652 [2024-12-06 05:04:17.422512] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:39.652 [2024-12-06 05:04:17.425427] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:39.652 [2024-12-06 05:04:17.425545] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:16:39.652 [2024-12-06 05:04:17.425560] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.871 ms 00:16:39.652 [2024-12-06 05:04:17.425570] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:39.652 [2024-12-06 05:04:17.429222] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:39.652 [2024-12-06 05:04:17.429257] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:16:39.652 [2024-12-06 05:04:17.429267] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.606 ms 00:16:39.652 [2024-12-06 05:04:17.429279] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:39.652 [2024-12-06 05:04:17.429330] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:39.652 [2024-12-06 05:04:17.429341] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:16:39.652 [2024-12-06 05:04:17.429361] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:16:39.652 [2024-12-06 05:04:17.429373] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:39.652 [2024-12-06 05:04:17.429480] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:39.652 [2024-12-06 05:04:17.429495] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:16:39.652 [2024-12-06 05:04:17.429504] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:16:39.652 [2024-12-06 05:04:17.429513] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:39.652 [2024-12-06 05:04:17.430504] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:16:39.652 [2024-12-06 05:04:17.431494] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2911.866 ms, result 0 00:16:39.652 [2024-12-06 05:04:17.432222] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:16:39.652 { 00:16:39.652 "name": "ftl0", 00:16:39.652 "uuid": "73ef595a-5226-44c7-8985-04a18b3bbf5e" 00:16:39.652 } 00:16:39.652 05:04:17 ftl.ftl_trim -- ftl/trim.sh@51 -- # waitforbdev ftl0 00:16:39.652 05:04:17 ftl.ftl_trim -- common/autotest_common.sh@899 -- # local bdev_name=ftl0 00:16:39.652 05:04:17 ftl.ftl_trim -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:16:39.652 05:04:17 ftl.ftl_trim -- common/autotest_common.sh@901 -- # local i 00:16:39.652 05:04:17 ftl.ftl_trim -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:16:39.652 05:04:17 ftl.ftl_trim -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:16:39.652 05:04:17 ftl.ftl_trim -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:16:39.652 05:04:17 ftl.ftl_trim --
common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:16:39.652 [ 00:16:39.652 { 00:16:39.652 "name": "ftl0", 00:16:39.652 "aliases": [ 00:16:39.652 "73ef595a-5226-44c7-8985-04a18b3bbf5e" 00:16:39.652 ], 00:16:39.652 "product_name": "FTL disk", 00:16:39.652 "block_size": 4096, 00:16:39.652 "num_blocks": 23592960, 00:16:39.652 "uuid": "73ef595a-5226-44c7-8985-04a18b3bbf5e", 00:16:39.652 "assigned_rate_limits": { 00:16:39.652 "rw_ios_per_sec": 0, 00:16:39.652 "rw_mbytes_per_sec": 0, 00:16:39.652 "r_mbytes_per_sec": 0, 00:16:39.652 "w_mbytes_per_sec": 0 00:16:39.652 }, 00:16:39.652 "claimed": false, 00:16:39.652 "zoned": false, 00:16:39.653 "supported_io_types": { 00:16:39.653 "read": true, 00:16:39.653 "write": true, 00:16:39.653 "unmap": true, 00:16:39.653 "flush": true, 00:16:39.653 "reset": false, 00:16:39.653 "nvme_admin": false, 00:16:39.653 "nvme_io": false, 00:16:39.653 "nvme_io_md": false, 00:16:39.653 "write_zeroes": true, 00:16:39.653 "zcopy": false, 00:16:39.653 "get_zone_info": false, 00:16:39.653 "zone_management": false, 00:16:39.653 "zone_append": false, 00:16:39.653 "compare": false, 00:16:39.653 "compare_and_write": false, 00:16:39.653 "abort": false, 00:16:39.653 "seek_hole": false, 00:16:39.653 "seek_data": false, 00:16:39.653 "copy": false, 00:16:39.653 "nvme_iov_md": false 00:16:39.653 }, 00:16:39.653 "driver_specific": { 00:16:39.653 "ftl": { 00:16:39.653 "base_bdev": "af6c09ec-0a72-4908-b7ff-7843a0ea717b", 00:16:39.653 "cache": "nvc0n1p0" 00:16:39.653 } 00:16:39.653 } 00:16:39.653 } 00:16:39.653 ] 00:16:39.923 05:04:17 ftl.ftl_trim -- common/autotest_common.sh@907 -- # return 0 00:16:39.923 05:04:17 ftl.ftl_trim -- ftl/trim.sh@54 -- # echo '{"subsystems": [' 00:16:39.923 05:04:17 ftl.ftl_trim -- ftl/trim.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:16:39.923 05:04:18 ftl.ftl_trim -- ftl/trim.sh@56 -- # echo ']}' 00:16:39.923 05:04:18 ftl.ftl_trim -- ftl/trim.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 00:16:40.198 05:04:18 ftl.ftl_trim -- ftl/trim.sh@59 -- # bdev_info='[ 00:16:40.198 { 00:16:40.198 "name": "ftl0", 00:16:40.198 "aliases": [ 00:16:40.198 "73ef595a-5226-44c7-8985-04a18b3bbf5e" 00:16:40.198 ], 00:16:40.198 "product_name": "FTL disk", 00:16:40.198 "block_size": 4096, 00:16:40.198 "num_blocks": 23592960, 00:16:40.198 "uuid": "73ef595a-5226-44c7-8985-04a18b3bbf5e", 00:16:40.198 "assigned_rate_limits": { 00:16:40.198 "rw_ios_per_sec": 0, 00:16:40.198 "rw_mbytes_per_sec": 0, 00:16:40.198 "r_mbytes_per_sec": 0, 00:16:40.198 "w_mbytes_per_sec": 0 00:16:40.198 }, 00:16:40.198 "claimed": false, 00:16:40.198 "zoned": false, 00:16:40.198 "supported_io_types": { 00:16:40.198 "read": true, 00:16:40.198 "write": true, 00:16:40.198 "unmap": true, 00:16:40.198 "flush": true, 00:16:40.198 "reset": false, 00:16:40.198 "nvme_admin": false, 00:16:40.198 "nvme_io": false, 00:16:40.198 "nvme_io_md": false, 00:16:40.198 "write_zeroes": true, 00:16:40.198 "zcopy": false, 00:16:40.198 "get_zone_info": false, 00:16:40.198 "zone_management": false, 00:16:40.198 "zone_append": false, 00:16:40.198 "compare": false, 00:16:40.198 "compare_and_write": false, 00:16:40.198 "abort": false, 00:16:40.198 "seek_hole": false, 00:16:40.198 "seek_data": false, 00:16:40.198 "copy": false, 00:16:40.198 "nvme_iov_md": false 00:16:40.198 }, 00:16:40.198 "driver_specific": { 00:16:40.198 "ftl": { 00:16:40.198 "base_bdev": "af6c09ec-0a72-4908-b7ff-7843a0ea717b", 
00:16:40.198 "cache": "nvc0n1p0" 00:16:40.198 } 00:16:40.198 } 00:16:40.198 } 00:16:40.198 ]' 00:16:40.198 05:04:18 ftl.ftl_trim -- ftl/trim.sh@60 -- # jq '.[] .num_blocks' 00:16:40.198 05:04:18 ftl.ftl_trim -- ftl/trim.sh@60 -- # nb=23592960 00:16:40.198 05:04:18 ftl.ftl_trim -- ftl/trim.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:16:40.457 [2024-12-06 05:04:18.488186] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:40.457 [2024-12-06 05:04:18.488229] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:16:40.457 [2024-12-06 05:04:18.488244] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:16:40.457 [2024-12-06 05:04:18.488252] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:40.457 [2024-12-06 05:04:18.488301] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:16:40.457 [2024-12-06 05:04:18.488871] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:40.457 [2024-12-06 05:04:18.488891] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:16:40.457 [2024-12-06 05:04:18.488900] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.556 ms 00:16:40.457 [2024-12-06 05:04:18.488912] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:40.457 [2024-12-06 05:04:18.489473] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:40.457 [2024-12-06 05:04:18.489493] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:16:40.457 [2024-12-06 05:04:18.489501] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.532 ms 00:16:40.457 [2024-12-06 05:04:18.489527] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:40.457 [2024-12-06 05:04:18.493208] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:40.457 [2024-12-06 05:04:18.493232] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:16:40.457 [2024-12-06 05:04:18.493242] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.639 ms 00:16:40.457 [2024-12-06 05:04:18.493251] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:40.457 [2024-12-06 05:04:18.500258] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:40.457 [2024-12-06 05:04:18.500290] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:16:40.457 [2024-12-06 05:04:18.500299] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.962 ms 00:16:40.457 [2024-12-06 05:04:18.500311] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:40.457 [2024-12-06 05:04:18.501989] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:40.457 [2024-12-06 05:04:18.502024] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:16:40.457 [2024-12-06 05:04:18.502034] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.576 ms 00:16:40.457 [2024-12-06 05:04:18.502047] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:40.457 [2024-12-06 05:04:18.506536] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:40.457 [2024-12-06 05:04:18.506576] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:16:40.457 [2024-12-06 05:04:18.506586] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 4.446 ms 00:16:40.457 [2024-12-06 05:04:18.506596] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:40.457 [2024-12-06 05:04:18.506809] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:40.457 [2024-12-06 05:04:18.506824] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:16:40.457 [2024-12-06 05:04:18.506833] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.153 ms 00:16:40.457 [2024-12-06 05:04:18.506846] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:40.457 [2024-12-06 05:04:18.508745] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:40.457 [2024-12-06 05:04:18.508776] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:16:40.457 [2024-12-06 05:04:18.508785] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.865 ms 00:16:40.457 [2024-12-06 05:04:18.508797] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:40.457 [2024-12-06 05:04:18.510315] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:40.457 [2024-12-06 05:04:18.510348] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:16:40.457 [2024-12-06 05:04:18.510357] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.471 ms 00:16:40.457 [2024-12-06 05:04:18.510366] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:40.457 [2024-12-06 05:04:18.511530] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:40.457 [2024-12-06 05:04:18.511653] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:16:40.457 [2024-12-06 05:04:18.511683] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.107 ms 00:16:40.457 [2024-12-06 05:04:18.511693] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:40.457 [2024-12-06 05:04:18.512977] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:40.457 [2024-12-06 05:04:18.513006] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:16:40.457 [2024-12-06 05:04:18.513015] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.191 ms 00:16:40.457 [2024-12-06 05:04:18.513027] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:40.457 [2024-12-06 05:04:18.513068] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:16:40.457 [2024-12-06 05:04:18.513085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:16:40.457 [2024-12-06 05:04:18.513095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:16:40.457 [2024-12-06 05:04:18.513107] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:16:40.457 [2024-12-06 05:04:18.513115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:16:40.457 [2024-12-06 05:04:18.513126] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:16:40.457 [2024-12-06 05:04:18.513133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:16:40.457 [2024-12-06 05:04:18.513143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:16:40.457 [2024-12-06 05:04:18.513150] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:16:40.457 [2024-12-06 05:04:18.513160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:16:40.457 [2024-12-06 05:04:18.513168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:16:40.457 [2024-12-06 05:04:18.513177] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:16:40.457 [2024-12-06 05:04:18.513185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:16:40.457 [2024-12-06 05:04:18.513195] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:16:40.457 [2024-12-06 05:04:18.513203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:16:40.457 [2024-12-06 05:04:18.513212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:16:40.457 [2024-12-06 05:04:18.513220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:16:40.457 [2024-12-06 05:04:18.513229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:16:40.457 [2024-12-06 05:04:18.513236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:16:40.457 [2024-12-06 05:04:18.513248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:16:40.457 [2024-12-06 05:04:18.513257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:16:40.457 [2024-12-06 05:04:18.513266] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:16:40.457 [2024-12-06 05:04:18.513273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:16:40.457 [2024-12-06 05:04:18.513282] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:16:40.457 [2024-12-06 05:04:18.513289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:16:40.457 [2024-12-06 05:04:18.513303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:16:40.457 [2024-12-06 05:04:18.513310] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:16:40.457 [2024-12-06 05:04:18.513320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:16:40.457 [2024-12-06 05:04:18.513328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:16:40.457 [2024-12-06 05:04:18.513337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:16:40.457 [2024-12-06 05:04:18.513344] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:16:40.457 [2024-12-06 05:04:18.513354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:16:40.457 [2024-12-06 05:04:18.513361] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:16:40.457 [2024-12-06 
05:04:18.513370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands 33-100: 0 / 261120 wr_cnt: 0 state: free 00:16:40.458 [2024-12-06 05:04:18.514167] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:16:40.458 [2024-12-06 05:04:18.514175] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 73ef595a-5226-44c7-8985-04a18b3bbf5e 00:16:40.458 [2024-12-06 05:04:18.514187] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:16:40.458 [2024-12-06 05:04:18.514194] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:16:40.458 [2024-12-06 05:04:18.514203] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:16:40.458 [2024-12-06 05:04:18.514212] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:16:40.458 [2024-12-06 05:04:18.514221] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:16:40.458 [2024-12-06 05:04:18.514229] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:16:40.458
[2024-12-06 05:04:18.514240] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:16:40.458 [2024-12-06 05:04:18.514247] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:16:40.458 [2024-12-06 05:04:18.514255] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:16:40.458 [2024-12-06 05:04:18.514263] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:40.458 [2024-12-06 05:04:18.514283] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:16:40.458 [2024-12-06 05:04:18.514291] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.196 ms 00:16:40.458 [2024-12-06 05:04:18.514303] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:40.458 [2024-12-06 05:04:18.516330] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:40.458 [2024-12-06 05:04:18.516421] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:16:40.458 [2024-12-06 05:04:18.516469] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.997 ms 00:16:40.458 [2024-12-06 05:04:18.516498] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:40.458 [2024-12-06 05:04:18.516631] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:40.458 [2024-12-06 05:04:18.516693] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:16:40.458 [2024-12-06 05:04:18.516756] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.070 ms 00:16:40.458 [2024-12-06 05:04:18.516781] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:40.458 [2024-12-06 05:04:18.523255] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:40.458 [2024-12-06 05:04:18.523364] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:16:40.458 [2024-12-06 05:04:18.523411] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:40.458 [2024-12-06 05:04:18.523438] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:40.458 [2024-12-06 05:04:18.523545] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:40.458 [2024-12-06 05:04:18.523573] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:40.458 [2024-12-06 05:04:18.523594] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:40.458 [2024-12-06 05:04:18.523704] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:40.458 [2024-12-06 05:04:18.523786] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:40.458 [2024-12-06 05:04:18.523852] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:40.458 [2024-12-06 05:04:18.523876] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:40.458 [2024-12-06 05:04:18.523917] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:40.458 [2024-12-06 05:04:18.523966] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:40.458 [2024-12-06 05:04:18.523997] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:40.458 [2024-12-06 05:04:18.524048] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:40.458 [2024-12-06 05:04:18.524074] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:40.458 [2024-12-06 05:04:18.536063] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl0] Rollback 00:16:40.458 [2024-12-06 05:04:18.536195] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:40.458 [2024-12-06 05:04:18.536243] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:40.458 [2024-12-06 05:04:18.536270] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:40.458 [2024-12-06 05:04:18.546186] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:40.458 [2024-12-06 05:04:18.546334] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:16:40.458 [2024-12-06 05:04:18.546382] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:40.458 [2024-12-06 05:04:18.546413] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:40.458 [2024-12-06 05:04:18.546507] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:40.458 [2024-12-06 05:04:18.546535] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:40.458 [2024-12-06 05:04:18.546555] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:40.458 [2024-12-06 05:04:18.546576] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:40.458 [2024-12-06 05:04:18.546661] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:40.458 [2024-12-06 05:04:18.546773] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:40.458 [2024-12-06 05:04:18.546794] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:40.458 [2024-12-06 05:04:18.546814] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:40.458 [2024-12-06 05:04:18.546914] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:40.458 [2024-12-06 05:04:18.546990] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:40.458 [2024-12-06 05:04:18.547017] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:40.458 [2024-12-06 05:04:18.547038] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:40.458 [2024-12-06 05:04:18.547110] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:40.458 [2024-12-06 05:04:18.547187] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:16:40.459 [2024-12-06 05:04:18.547207] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:40.459 [2024-12-06 05:04:18.547231] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:40.459 [2024-12-06 05:04:18.547332] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:40.459 [2024-12-06 05:04:18.547380] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:40.459 [2024-12-06 05:04:18.547424] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:40.459 [2024-12-06 05:04:18.547448] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:40.459 [2024-12-06 05:04:18.547541] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:40.459 [2024-12-06 05:04:18.547571] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:16:40.459 [2024-12-06 05:04:18.547593] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:40.459 [2024-12-06 05:04:18.547614] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:40.459 
[2024-12-06 05:04:18.547836] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 59.623 ms, result 0 00:16:40.459 true 00:16:40.459 05:04:18 ftl.ftl_trim -- ftl/trim.sh@63 -- # killprocess 85122 00:16:40.459 05:04:18 ftl.ftl_trim -- common/autotest_common.sh@950 -- # '[' -z 85122 ']' 00:16:40.459 05:04:18 ftl.ftl_trim -- common/autotest_common.sh@954 -- # kill -0 85122 00:16:40.459 05:04:18 ftl.ftl_trim -- common/autotest_common.sh@955 -- # uname 00:16:40.459 05:04:18 ftl.ftl_trim -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:40.459 05:04:18 ftl.ftl_trim -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 85122 00:16:40.459 05:04:18 ftl.ftl_trim -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:40.459 05:04:18 ftl.ftl_trim -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:40.459 killing process with pid 85122 00:16:40.459 05:04:18 ftl.ftl_trim -- common/autotest_common.sh@968 -- # echo 'killing process with pid 85122' 00:16:40.459 05:04:18 ftl.ftl_trim -- common/autotest_common.sh@969 -- # kill 85122 00:16:40.459 05:04:18 ftl.ftl_trim -- common/autotest_common.sh@974 -- # wait 85122 00:16:45.721 05:04:23 ftl.ftl_trim -- ftl/trim.sh@66 -- # dd if=/dev/urandom bs=4K count=65536 00:16:46.662 65536+0 records in 00:16:46.662 65536+0 records out 00:16:46.662 268435456 bytes (268 MB, 256 MiB) copied, 1.116 s, 241 MB/s 00:16:46.662 05:04:24 ftl.ftl_trim -- ftl/trim.sh@69 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:16:46.662 [2024-12-06 05:04:24.758080] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
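The xtrace above records the harness tearing down the previous FTL app and staging data for the trim test: killprocess probes pid 85122 with kill -0, resolves the process name via ps (reactor_0 here), then kills it and waits, after which trim.sh@66 generates 65536 4-KiB blocks (256 MiB) of random data and trim.sh@69 replays them onto the ftl0 bdev through spdk_dd. Below is a minimal bash sketch of that sequence, reconstructed from the trace rather than copied from common/autotest_common.sh, so the real helper may differ in detail; the SPDK_REPO variable and the dd output redirection are assumptions, since the log shows neither.

#!/usr/bin/env bash
SPDK_REPO=/home/vagrant/spdk_repo/spdk   # assumed repo root, matching the paths in the log

killprocess() {
	local pid=$1
	[ -z "$pid" ] && return 1                     # mirrors '[' -z 85122 ']' in the trace
	kill -0 "$pid" || return 1                    # signal 0: check the process is still alive
	local name
	if [ "$(uname)" = Linux ]; then
		name=$(ps --no-headers -o comm= "$pid")   # resolves to reactor_0 in the trace
	fi
	[ "$name" = sudo ] && return 1                # refuse to kill a parent sudo
	echo "killing process with pid $pid"
	kill "$pid"
	wait "$pid"                                   # reap the child so its bdevs are released
}

killprocess 85122

# 65536 blocks x 4 KiB = 268435456 bytes = 256 MiB, matching the dd summary above.
dd if=/dev/urandom bs=4K count=65536 > "$SPDK_REPO/test/ftl/random_pattern"

# Replay the pattern onto the ftl0 output bdev described by ftl.json.
"$SPDK_REPO/build/bin/spdk_dd" --if="$SPDK_REPO/test/ftl/random_pattern" \
	--ob=ftl0 --json="$SPDK_REPO/test/ftl/config/ftl.json"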
00:16:46.662 [2024-12-06 05:04:24.758231] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85293 ] 00:16:46.936 [2024-12-06 05:04:24.895868] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:46.936 [2024-12-06 05:04:24.968640] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:16:46.936 [2024-12-06 05:04:25.118694] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:16:46.936 [2024-12-06 05:04:25.118800] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:16:47.202 [2024-12-06 05:04:25.282411] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:47.202 [2024-12-06 05:04:25.282766] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:16:47.202 [2024-12-06 05:04:25.282795] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:16:47.202 [2024-12-06 05:04:25.282817] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:47.202 [2024-12-06 05:04:25.285536] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:47.202 [2024-12-06 05:04:25.285598] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:16:47.202 [2024-12-06 05:04:25.285613] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.684 ms 00:16:47.202 [2024-12-06 05:04:25.285625] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:47.202 [2024-12-06 05:04:25.285814] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:16:47.202 [2024-12-06 05:04:25.286109] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:16:47.202 [2024-12-06 05:04:25.286130] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:47.202 [2024-12-06 05:04:25.286145] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:47.202 [2024-12-06 05:04:25.286161] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.332 ms 00:16:47.202 [2024-12-06 05:04:25.286170] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:47.202 [2024-12-06 05:04:25.288688] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:16:47.202 [2024-12-06 05:04:25.293569] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:47.202 [2024-12-06 05:04:25.293630] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:16:47.202 [2024-12-06 05:04:25.293646] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.884 ms 00:16:47.202 [2024-12-06 05:04:25.293691] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:47.202 [2024-12-06 05:04:25.293782] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:47.202 [2024-12-06 05:04:25.293799] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:16:47.202 [2024-12-06 05:04:25.293810] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:16:47.202 [2024-12-06 05:04:25.293818] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:47.202 [2024-12-06 05:04:25.305418] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:16:47.202 [2024-12-06 05:04:25.305628] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:47.202 [2024-12-06 05:04:25.305649] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.550 ms 00:16:47.202 [2024-12-06 05:04:25.305684] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:47.202 [2024-12-06 05:04:25.305859] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:47.202 [2024-12-06 05:04:25.305873] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:47.202 [2024-12-06 05:04:25.305883] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.101 ms 00:16:47.202 [2024-12-06 05:04:25.305892] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:47.202 [2024-12-06 05:04:25.305923] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:47.202 [2024-12-06 05:04:25.305939] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:16:47.202 [2024-12-06 05:04:25.305953] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:16:47.202 [2024-12-06 05:04:25.305965] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:47.202 [2024-12-06 05:04:25.305997] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:16:47.202 [2024-12-06 05:04:25.308720] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:47.202 [2024-12-06 05:04:25.308760] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:47.202 [2024-12-06 05:04:25.308772] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.731 ms 00:16:47.202 [2024-12-06 05:04:25.308780] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:47.202 [2024-12-06 05:04:25.308828] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:47.202 [2024-12-06 05:04:25.308846] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:16:47.202 [2024-12-06 05:04:25.308859] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:16:47.202 [2024-12-06 05:04:25.308868] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:47.202 [2024-12-06 05:04:25.308889] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:16:47.202 [2024-12-06 05:04:25.308912] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:16:47.202 [2024-12-06 05:04:25.308969] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:16:47.202 [2024-12-06 05:04:25.308991] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:16:47.202 [2024-12-06 05:04:25.309109] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:16:47.202 [2024-12-06 05:04:25.309123] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:16:47.202 [2024-12-06 05:04:25.309135] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:16:47.202 [2024-12-06 05:04:25.309146] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:16:47.202 [2024-12-06 05:04:25.309160] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:16:47.202 [2024-12-06 05:04:25.309169] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:16:47.202 [2024-12-06 05:04:25.309180] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:16:47.202 [2024-12-06 05:04:25.309191] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:16:47.202 [2024-12-06 05:04:25.309200] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:16:47.202 [2024-12-06 05:04:25.309208] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:47.202 [2024-12-06 05:04:25.309218] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:16:47.202 [2024-12-06 05:04:25.309231] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.322 ms 00:16:47.202 [2024-12-06 05:04:25.309239] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:47.202 [2024-12-06 05:04:25.309326] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:47.202 [2024-12-06 05:04:25.309338] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:16:47.202 [2024-12-06 05:04:25.309346] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.070 ms 00:16:47.202 [2024-12-06 05:04:25.309353] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:47.202 [2024-12-06 05:04:25.309456] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:16:47.202 [2024-12-06 05:04:25.309469] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:16:47.202 [2024-12-06 05:04:25.309480] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:47.202 [2024-12-06 05:04:25.309496] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:47.202 [2024-12-06 05:04:25.309505] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:16:47.202 [2024-12-06 05:04:25.309513] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:16:47.202 [2024-12-06 05:04:25.309522] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:16:47.202 [2024-12-06 05:04:25.309534] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:16:47.202 [2024-12-06 05:04:25.309548] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:16:47.202 [2024-12-06 05:04:25.309556] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:47.202 [2024-12-06 05:04:25.309565] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:16:47.202 [2024-12-06 05:04:25.309574] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:16:47.202 [2024-12-06 05:04:25.309584] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:47.202 [2024-12-06 05:04:25.309594] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:16:47.202 [2024-12-06 05:04:25.309603] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:16:47.202 [2024-12-06 05:04:25.309611] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:47.202 [2024-12-06 05:04:25.309619] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:16:47.202 [2024-12-06 05:04:25.309627] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:16:47.202 [2024-12-06 05:04:25.309636] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:47.202 [2024-12-06 05:04:25.309644] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:16:47.202 [2024-12-06 05:04:25.309653] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:16:47.202 [2024-12-06 05:04:25.309715] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:47.202 [2024-12-06 05:04:25.309724] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:16:47.202 [2024-12-06 05:04:25.309733] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:16:47.202 [2024-12-06 05:04:25.309747] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:47.202 [2024-12-06 05:04:25.309756] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:16:47.202 [2024-12-06 05:04:25.309764] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:16:47.202 [2024-12-06 05:04:25.309772] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:47.202 [2024-12-06 05:04:25.309781] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:16:47.202 [2024-12-06 05:04:25.309789] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:16:47.202 [2024-12-06 05:04:25.309796] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:47.202 [2024-12-06 05:04:25.309803] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:16:47.202 [2024-12-06 05:04:25.309810] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:16:47.202 [2024-12-06 05:04:25.309817] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:47.202 [2024-12-06 05:04:25.309824] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:16:47.202 [2024-12-06 05:04:25.309831] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:16:47.202 [2024-12-06 05:04:25.309839] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:47.202 [2024-12-06 05:04:25.309847] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:16:47.202 [2024-12-06 05:04:25.309855] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:16:47.202 [2024-12-06 05:04:25.309862] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:47.202 [2024-12-06 05:04:25.309873] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:16:47.202 [2024-12-06 05:04:25.309883] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:16:47.202 [2024-12-06 05:04:25.309894] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:47.202 [2024-12-06 05:04:25.309901] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:16:47.202 [2024-12-06 05:04:25.309914] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:16:47.202 [2024-12-06 05:04:25.309922] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:47.202 [2024-12-06 05:04:25.309930] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:47.202 [2024-12-06 05:04:25.309938] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:16:47.202 [2024-12-06 05:04:25.309947] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:16:47.202 [2024-12-06 05:04:25.309954] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:16:47.202 
[2024-12-06 05:04:25.309961] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:16:47.202 [2024-12-06 05:04:25.309968] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:16:47.202 [2024-12-06 05:04:25.309975] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:16:47.202 [2024-12-06 05:04:25.309984] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:16:47.202 [2024-12-06 05:04:25.309994] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:47.202 [2024-12-06 05:04:25.310003] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:16:47.202 [2024-12-06 05:04:25.310014] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:16:47.202 [2024-12-06 05:04:25.310022] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:16:47.202 [2024-12-06 05:04:25.310031] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:16:47.202 [2024-12-06 05:04:25.310039] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:16:47.202 [2024-12-06 05:04:25.310059] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:16:47.202 [2024-12-06 05:04:25.310067] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:16:47.202 [2024-12-06 05:04:25.310081] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:16:47.202 [2024-12-06 05:04:25.310087] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:16:47.202 [2024-12-06 05:04:25.310094] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:16:47.202 [2024-12-06 05:04:25.310103] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:16:47.202 [2024-12-06 05:04:25.310110] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:16:47.202 [2024-12-06 05:04:25.310117] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:16:47.202 [2024-12-06 05:04:25.310125] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:16:47.202 [2024-12-06 05:04:25.310132] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:16:47.202 [2024-12-06 05:04:25.310144] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:47.202 [2024-12-06 05:04:25.310153] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:16:47.202 [2024-12-06 05:04:25.310165] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:16:47.202 [2024-12-06 05:04:25.310172] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:16:47.202 [2024-12-06 05:04:25.310180] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:16:47.202 [2024-12-06 05:04:25.310188] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:47.202 [2024-12-06 05:04:25.310199] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:16:47.203 [2024-12-06 05:04:25.310210] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.802 ms 00:16:47.203 [2024-12-06 05:04:25.310220] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:47.203 [2024-12-06 05:04:25.339109] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:47.203 [2024-12-06 05:04:25.339379] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:16:47.203 [2024-12-06 05:04:25.339412] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.801 ms 00:16:47.203 [2024-12-06 05:04:25.339427] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:47.203 [2024-12-06 05:04:25.339656] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:47.203 [2024-12-06 05:04:25.339709] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:16:47.203 [2024-12-06 05:04:25.339727] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.104 ms 00:16:47.203 [2024-12-06 05:04:25.339749] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:47.203 [2024-12-06 05:04:25.356037] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:47.203 [2024-12-06 05:04:25.356088] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:47.203 [2024-12-06 05:04:25.356102] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.249 ms 00:16:47.203 [2024-12-06 05:04:25.356112] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:47.203 [2024-12-06 05:04:25.356197] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:47.203 [2024-12-06 05:04:25.356209] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:47.203 [2024-12-06 05:04:25.356230] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:16:47.203 [2024-12-06 05:04:25.356239] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:47.203 [2024-12-06 05:04:25.356973] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:47.203 [2024-12-06 05:04:25.357016] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:47.203 [2024-12-06 05:04:25.357029] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.709 ms 00:16:47.203 [2024-12-06 05:04:25.357040] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:47.203 [2024-12-06 05:04:25.357214] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:47.203 [2024-12-06 05:04:25.357227] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:47.203 [2024-12-06 05:04:25.357237] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.142 ms 00:16:47.203 [2024-12-06 05:04:25.357249] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:47.203 [2024-12-06 05:04:25.367716] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:47.203 [2024-12-06 05:04:25.367758] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:16:47.203 [2024-12-06 05:04:25.367771] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.442 ms 00:16:47.203 [2024-12-06 05:04:25.367789] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:47.203 [2024-12-06 05:04:25.372637] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:16:47.203 [2024-12-06 05:04:25.372711] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:16:47.203 [2024-12-06 05:04:25.372734] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:47.203 [2024-12-06 05:04:25.372744] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:16:47.203 [2024-12-06 05:04:25.372755] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.815 ms 00:16:47.203 [2024-12-06 05:04:25.372763] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:47.203 [2024-12-06 05:04:25.389994] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:47.203 [2024-12-06 05:04:25.390051] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:16:47.203 [2024-12-06 05:04:25.390064] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.143 ms 00:16:47.203 [2024-12-06 05:04:25.390083] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:47.203 [2024-12-06 05:04:25.393446] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:47.203 [2024-12-06 05:04:25.393497] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:16:47.203 [2024-12-06 05:04:25.393509] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.265 ms 00:16:47.203 [2024-12-06 05:04:25.393516] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:47.203 [2024-12-06 05:04:25.396195] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:47.203 [2024-12-06 05:04:25.396389] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:16:47.203 [2024-12-06 05:04:25.396422] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.622 ms 00:16:47.203 [2024-12-06 05:04:25.396429] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:47.203 [2024-12-06 05:04:25.396821] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:47.203 [2024-12-06 05:04:25.396838] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:16:47.203 [2024-12-06 05:04:25.396851] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.309 ms 00:16:47.203 [2024-12-06 05:04:25.396864] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:47.203 [2024-12-06 05:04:25.428745] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:47.203 [2024-12-06 05:04:25.428800] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:16:47.203 [2024-12-06 05:04:25.428814] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
31.852 ms 00:16:47.203 [2024-12-06 05:04:25.428825] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:47.465 [2024-12-06 05:04:25.437807] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:16:47.465 [2024-12-06 05:04:25.463505] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:47.465 [2024-12-06 05:04:25.463749] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:16:47.465 [2024-12-06 05:04:25.463773] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.583 ms 00:16:47.465 [2024-12-06 05:04:25.463784] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:47.465 [2024-12-06 05:04:25.463898] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:47.465 [2024-12-06 05:04:25.463912] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:16:47.465 [2024-12-06 05:04:25.463924] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:16:47.465 [2024-12-06 05:04:25.463933] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:47.465 [2024-12-06 05:04:25.464022] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:47.465 [2024-12-06 05:04:25.464041] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:16:47.465 [2024-12-06 05:04:25.464052] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.049 ms 00:16:47.465 [2024-12-06 05:04:25.464061] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:47.465 [2024-12-06 05:04:25.464091] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:47.465 [2024-12-06 05:04:25.464102] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:16:47.465 [2024-12-06 05:04:25.464116] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:16:47.465 [2024-12-06 05:04:25.464126] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:47.465 [2024-12-06 05:04:25.464173] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:16:47.465 [2024-12-06 05:04:25.464185] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:47.465 [2024-12-06 05:04:25.464193] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:16:47.465 [2024-12-06 05:04:25.464215] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:16:47.465 [2024-12-06 05:04:25.464223] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:47.465 [2024-12-06 05:04:25.471511] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:47.465 [2024-12-06 05:04:25.471729] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:16:47.465 [2024-12-06 05:04:25.471751] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.261 ms 00:16:47.465 [2024-12-06 05:04:25.471761] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:47.465 [2024-12-06 05:04:25.471968] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:47.465 [2024-12-06 05:04:25.471998] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:16:47.465 [2024-12-06 05:04:25.472011] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.057 ms 00:16:47.465 [2024-12-06 05:04:25.472031] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:47.465 
[2024-12-06 05:04:25.473340] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:16:47.465 [2024-12-06 05:04:25.474896] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 190.526 ms, result 0 00:16:47.465 [2024-12-06 05:04:25.476694] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:16:47.465 [2024-12-06 05:04:25.483787] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:16:48.403  [2024-12-06T05:04:27.579Z] Copying: 12/256 [MB] (12 MBps) [2024-12-06T05:04:28.521Z] Copying: 27/256 [MB] (14 MBps) [2024-12-06T05:04:29.910Z] Copying: 45/256 [MB] (18 MBps) [2024-12-06T05:04:30.856Z] Copying: 63/256 [MB] (18 MBps) [2024-12-06T05:04:31.801Z] Copying: 78/256 [MB] (14 MBps) [2024-12-06T05:04:32.747Z] Copying: 94/256 [MB] (15 MBps) [2024-12-06T05:04:33.691Z] Copying: 105/256 [MB] (10 MBps) [2024-12-06T05:04:34.633Z] Copying: 116/256 [MB] (11 MBps) [2024-12-06T05:04:35.578Z] Copying: 128/256 [MB] (11 MBps) [2024-12-06T05:04:36.522Z] Copying: 139/256 [MB] (11 MBps) [2024-12-06T05:04:37.910Z] Copying: 149/256 [MB] (10 MBps) [2024-12-06T05:04:38.855Z] Copying: 160/256 [MB] (10 MBps) [2024-12-06T05:04:39.800Z] Copying: 171/256 [MB] (11 MBps) [2024-12-06T05:04:40.744Z] Copying: 183/256 [MB] (12 MBps) [2024-12-06T05:04:41.686Z] Copying: 196/256 [MB] (12 MBps) [2024-12-06T05:04:42.627Z] Copying: 209/256 [MB] (13 MBps) [2024-12-06T05:04:43.568Z] Copying: 221/256 [MB] (11 MBps) [2024-12-06T05:04:44.513Z] Copying: 232/256 [MB] (11 MBps) [2024-12-06T05:04:45.902Z] Copying: 242/256 [MB] (10 MBps) [2024-12-06T05:04:45.902Z] Copying: 258384/262144 [kB] (9800 kBps) [2024-12-06T05:04:45.902Z] Copying: 256/256 [MB] (average 12 MBps)[2024-12-06 05:04:45.839405] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:07.670 [2024-12-06 05:04:45.841532] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:07.670 [2024-12-06 05:04:45.841598] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:17:07.670 [2024-12-06 05:04:45.841619] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:07.670 [2024-12-06 05:04:45.841632] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:07.670 [2024-12-06 05:04:45.841713] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:17:07.670 [2024-12-06 05:04:45.842509] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:07.670 [2024-12-06 05:04:45.842564] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:17:07.670 [2024-12-06 05:04:45.842580] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.770 ms 00:17:07.670 [2024-12-06 05:04:45.842590] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:07.670 [2024-12-06 05:04:45.845461] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:07.670 [2024-12-06 05:04:45.845513] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:17:07.670 [2024-12-06 05:04:45.845530] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.823 ms 00:17:07.670 [2024-12-06 05:04:45.845541] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:07.670 [2024-12-06 
05:04:45.854023] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:07.670 [2024-12-06 05:04:45.854084] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:17:07.670 [2024-12-06 05:04:45.854101] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.455 ms 00:17:07.670 [2024-12-06 05:04:45.854112] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:07.670 [2024-12-06 05:04:45.861765] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:07.670 [2024-12-06 05:04:45.861817] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:17:07.670 [2024-12-06 05:04:45.861846] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.561 ms 00:17:07.670 [2024-12-06 05:04:45.861858] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:07.670 [2024-12-06 05:04:45.864528] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:07.670 [2024-12-06 05:04:45.864587] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:17:07.670 [2024-12-06 05:04:45.864602] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.606 ms 00:17:07.670 [2024-12-06 05:04:45.864614] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:07.670 [2024-12-06 05:04:45.869718] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:07.670 [2024-12-06 05:04:45.869783] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:17:07.670 [2024-12-06 05:04:45.869798] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.000 ms 00:17:07.670 [2024-12-06 05:04:45.869816] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:07.670 [2024-12-06 05:04:45.869999] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:07.670 [2024-12-06 05:04:45.870018] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:17:07.670 [2024-12-06 05:04:45.870040] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.118 ms 00:17:07.670 [2024-12-06 05:04:45.870052] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:07.670 [2024-12-06 05:04:45.873390] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:07.670 [2024-12-06 05:04:45.873449] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:17:07.670 [2024-12-06 05:04:45.873465] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.310 ms 00:17:07.670 [2024-12-06 05:04:45.873476] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:07.670 [2024-12-06 05:04:45.876355] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:07.670 [2024-12-06 05:04:45.876413] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:17:07.670 [2024-12-06 05:04:45.876427] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.817 ms 00:17:07.670 [2024-12-06 05:04:45.876439] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:07.670 [2024-12-06 05:04:45.878732] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:07.670 [2024-12-06 05:04:45.878917] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:17:07.670 [2024-12-06 05:04:45.878940] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.205 ms 00:17:07.670 [2024-12-06 05:04:45.878950] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:17:07.670 [2024-12-06 05:04:45.881197] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:07.670 [2024-12-06 05:04:45.881254] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:17:07.670 [2024-12-06 05:04:45.881269] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.127 ms 00:17:07.670 [2024-12-06 05:04:45.881279] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:07.670 [2024-12-06 05:04:45.881339] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:17:07.670 [2024-12-06 05:04:45.881362] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands 1-95: 0 / 261120 wr_cnt: 0 state: free 00:17:07.671 [2024-12-06
05:04:45.883068] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:17:07.671 [2024-12-06 05:04:45.883121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:17:07.671 [2024-12-06 05:04:45.883172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:17:07.671 [2024-12-06 05:04:45.883223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:17:07.671 [2024-12-06 05:04:45.883274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:17:07.671 [2024-12-06 05:04:45.883337] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:17:07.671 [2024-12-06 05:04:45.883381] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 73ef595a-5226-44c7-8985-04a18b3bbf5e 00:17:07.671 [2024-12-06 05:04:45.883433] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:17:07.671 [2024-12-06 05:04:45.883466] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:17:07.671 [2024-12-06 05:04:45.883499] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:17:07.671 [2024-12-06 05:04:45.883789] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:17:07.672 [2024-12-06 05:04:45.883835] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:17:07.672 [2024-12-06 05:04:45.883871] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:17:07.672 [2024-12-06 05:04:45.883908] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:17:07.672 [2024-12-06 05:04:45.883940] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:17:07.672 [2024-12-06 05:04:45.883972] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:17:07.672 [2024-12-06 05:04:45.884006] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:07.672 [2024-12-06 05:04:45.884040] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:17:07.672 [2024-12-06 05:04:45.884075] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.669 ms 00:17:07.672 [2024-12-06 05:04:45.884115] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:07.672 [2024-12-06 05:04:45.887080] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:07.672 [2024-12-06 05:04:45.887288] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:17:07.672 [2024-12-06 05:04:45.887410] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.820 ms 00:17:07.672 [2024-12-06 05:04:45.887460] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:07.672 [2024-12-06 05:04:45.887649] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:07.672 [2024-12-06 05:04:45.887746] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:17:07.672 [2024-12-06 05:04:45.887862] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.105 ms 00:17:07.672 [2024-12-06 05:04:45.887900] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:07.672 [2024-12-06 05:04:45.896052] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:07.672 [2024-12-06 05:04:45.896107] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:07.672 
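Note: the statistics block above contains the write-amplification inputs directly. The printed "WAF: inf" is consistent with WAF = total writes / user writes = 960 / 0: this startup-and-shutdown cycle wrote 960 blocks of FTL metadata and no user data at all, so the ratio is infinite by construction rather than a sign of trouble.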
00:17:07.672 [2024-12-06 05:04:45.896052] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:17:07.672 [2024-12-06 05:04:45.896107] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc
00:17:07.672 [2024-12-06 05:04:45.896123] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:17:07.672 [2024-12-06 05:04:45.896136] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:07.672 [2024-12-06 05:04:45.896229] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:17:07.672 [2024-12-06 05:04:45.896244] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata
00:17:07.672 [2024-12-06 05:04:45.896263] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:17:07.672 [2024-12-06 05:04:45.896276] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:07.672 [2024-12-06 05:04:45.896344] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:17:07.672 [2024-12-06 05:04:45.896360] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map
00:17:07.672 [2024-12-06 05:04:45.896374] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:17:07.672 [2024-12-06 05:04:45.896388] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:07.672 [2024-12-06 05:04:45.896415] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:17:07.672 [2024-12-06 05:04:45.896431] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map
00:17:07.672 [2024-12-06 05:04:45.896445] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:17:07.672 [2024-12-06 05:04:45.896463] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:07.933 [2024-12-06 05:04:45.911407] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:17:07.933 [2024-12-06 05:04:45.911468] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache
00:17:07.933 [2024-12-06 05:04:45.911483] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:17:07.933 [2024-12-06 05:04:45.911495] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:07.933 [2024-12-06 05:04:45.922064] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:17:07.933 [2024-12-06 05:04:45.922119] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata
00:17:07.933 [2024-12-06 05:04:45.922144] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:17:07.933 [2024-12-06 05:04:45.922156] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:07.933 [2024-12-06 05:04:45.922223] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:17:07.933 [2024-12-06 05:04:45.922238] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel
00:17:07.933 [2024-12-06 05:04:45.922250] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:17:07.933 [2024-12-06 05:04:45.922262] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:07.933 [2024-12-06 05:04:45.922308] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:17:07.933 [2024-12-06 05:04:45.922324] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands
00:17:07.933 [2024-12-06 05:04:45.922337] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:17:07.933 [2024-12-06 05:04:45.922350] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:07.933 [2024-12-06 05:04:45.922460] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:17:07.933 [2024-12-06 05:04:45.922476] mngt/ftl_mngt.c:
428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:07.933 [2024-12-06 05:04:45.922491] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:07.933 [2024-12-06 05:04:45.922506] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:07.933 [2024-12-06 05:04:45.922552] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:07.933 [2024-12-06 05:04:45.922567] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:17:07.933 [2024-12-06 05:04:45.922583] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:07.933 [2024-12-06 05:04:45.922600] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:07.933 [2024-12-06 05:04:45.922692] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:07.933 [2024-12-06 05:04:45.922720] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:07.933 [2024-12-06 05:04:45.922737] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:07.933 [2024-12-06 05:04:45.922758] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:07.933 [2024-12-06 05:04:45.922833] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:07.933 [2024-12-06 05:04:45.922852] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:07.933 [2024-12-06 05:04:45.922867] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:07.933 [2024-12-06 05:04:45.922883] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:07.933 [2024-12-06 05:04:45.923101] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 81.539 ms, result 0 00:17:07.933 00:17:07.933 00:17:08.203 05:04:46 ftl.ftl_trim -- ftl/trim.sh@72 -- # svcpid=85522 00:17:08.203 05:04:46 ftl.ftl_trim -- ftl/trim.sh@73 -- # waitforlisten 85522 00:17:08.203 05:04:46 ftl.ftl_trim -- common/autotest_common.sh@831 -- # '[' -z 85522 ']' 00:17:08.203 05:04:46 ftl.ftl_trim -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:08.203 05:04:46 ftl.ftl_trim -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:08.203 05:04:46 ftl.ftl_trim -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:08.203 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:08.203 05:04:46 ftl.ftl_trim -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:08.203 05:04:46 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:17:08.203 05:04:46 ftl.ftl_trim -- ftl/trim.sh@71 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init 00:17:08.203 [2024-12-06 05:04:46.257965] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
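Note: the trim.sh fragment above shows the harness pattern for bringing up the RPC target: launch spdk_tgt with the ftl_init log component enabled, record its pid (svcpid=85522), and block in waitforlisten until /var/tmp/spdk.sock accepts RPCs. A minimal self-contained sketch of the same idea, under stated assumptions (paths relative to an SPDK checkout; spdk_get_version used only as a cheap readiness probe; the real waitforlisten implementation lives in autotest_common.sh and differs in detail):

  #!/usr/bin/env bash
  set -e
  build/bin/spdk_tgt -L ftl_init &   # start the target with FTL init tracing enabled
  svcpid=$!
  # Poll the default RPC socket until the target answers or the process dies.
  until scripts/rpc.py -s /var/tmp/spdk.sock spdk_get_version >/dev/null 2>&1; do
      kill -0 "$svcpid" 2>/dev/null || { echo 'spdk_tgt exited early' >&2; exit 1; }
      sleep 0.5
  done
  echo "spdk_tgt (pid $svcpid) is listening on /var/tmp/spdk.sock"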
00:17:08.203 [2024-12-06 05:04:46.258873] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85522 ] 00:17:08.203 [2024-12-06 05:04:46.396538] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:08.465 [2024-12-06 05:04:46.449759] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:17:09.037 05:04:47 ftl.ftl_trim -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:09.037 05:04:47 ftl.ftl_trim -- common/autotest_common.sh@864 -- # return 0 00:17:09.037 05:04:47 ftl.ftl_trim -- ftl/trim.sh@75 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:17:09.301 [2024-12-06 05:04:47.313459] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:09.301 [2024-12-06 05:04:47.313552] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:09.301 [2024-12-06 05:04:47.490819] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:09.301 [2024-12-06 05:04:47.490884] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:17:09.301 [2024-12-06 05:04:47.490905] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:17:09.301 [2024-12-06 05:04:47.490920] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:09.301 [2024-12-06 05:04:47.493545] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:09.301 [2024-12-06 05:04:47.493612] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:09.301 [2024-12-06 05:04:47.493632] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.596 ms 00:17:09.301 [2024-12-06 05:04:47.493646] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:09.301 [2024-12-06 05:04:47.493838] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:17:09.301 [2024-12-06 05:04:47.494190] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:17:09.301 [2024-12-06 05:04:47.494223] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:09.301 [2024-12-06 05:04:47.494247] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:09.301 [2024-12-06 05:04:47.494269] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.401 ms 00:17:09.301 [2024-12-06 05:04:47.494283] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:09.301 [2024-12-06 05:04:47.496184] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:17:09.301 [2024-12-06 05:04:47.500207] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:09.301 [2024-12-06 05:04:47.500269] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:17:09.301 [2024-12-06 05:04:47.500289] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.019 ms 00:17:09.301 [2024-12-06 05:04:47.500306] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:09.301 [2024-12-06 05:04:47.500412] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:09.301 [2024-12-06 05:04:47.500430] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:17:09.301 [2024-12-06 05:04:47.500452] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:17:09.301 [2024-12-06 05:04:47.500470] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:09.301 [2024-12-06 05:04:47.509041] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:09.301 [2024-12-06 05:04:47.509246] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:09.301 [2024-12-06 05:04:47.509274] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.493 ms 00:17:09.301 [2024-12-06 05:04:47.509286] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:09.301 [2024-12-06 05:04:47.509473] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:09.301 [2024-12-06 05:04:47.509492] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:09.301 [2024-12-06 05:04:47.509510] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.098 ms 00:17:09.301 [2024-12-06 05:04:47.509523] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:09.301 [2024-12-06 05:04:47.509573] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:09.301 [2024-12-06 05:04:47.509588] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:17:09.301 [2024-12-06 05:04:47.509609] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:17:09.301 [2024-12-06 05:04:47.509625] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:09.301 [2024-12-06 05:04:47.509739] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:17:09.301 [2024-12-06 05:04:47.511909] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:09.301 [2024-12-06 05:04:47.511964] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:09.301 [2024-12-06 05:04:47.511980] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.256 ms 00:17:09.301 [2024-12-06 05:04:47.511993] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:09.301 [2024-12-06 05:04:47.512057] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:09.301 [2024-12-06 05:04:47.512073] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:17:09.301 [2024-12-06 05:04:47.512086] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:17:09.301 [2024-12-06 05:04:47.512101] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:09.301 [2024-12-06 05:04:47.512134] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:17:09.301 [2024-12-06 05:04:47.512167] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:17:09.301 [2024-12-06 05:04:47.512227] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:17:09.301 [2024-12-06 05:04:47.512258] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:17:09.301 [2024-12-06 05:04:47.512408] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:17:09.301 [2024-12-06 05:04:47.512433] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:17:09.301 [2024-12-06 05:04:47.512451] upgrade/ftl_sb_v5.c: 
109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:17:09.301 [2024-12-06 05:04:47.512471] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:17:09.301 [2024-12-06 05:04:47.512486] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:17:09.301 [2024-12-06 05:04:47.512509] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:17:09.301 [2024-12-06 05:04:47.512522] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:17:09.301 [2024-12-06 05:04:47.512537] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:17:09.301 [2024-12-06 05:04:47.512549] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:17:09.301 [2024-12-06 05:04:47.512566] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:09.301 [2024-12-06 05:04:47.512584] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:17:09.301 [2024-12-06 05:04:47.512606] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.432 ms 00:17:09.301 [2024-12-06 05:04:47.512621] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:09.301 [2024-12-06 05:04:47.512994] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:09.301 [2024-12-06 05:04:47.513059] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:17:09.301 [2024-12-06 05:04:47.513101] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.332 ms 00:17:09.301 [2024-12-06 05:04:47.513135] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:09.301 [2024-12-06 05:04:47.513304] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:17:09.301 [2024-12-06 05:04:47.513842] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:17:09.301 [2024-12-06 05:04:47.513933] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:09.301 [2024-12-06 05:04:47.513953] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:09.301 [2024-12-06 05:04:47.513973] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:17:09.301 [2024-12-06 05:04:47.513984] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:17:09.301 [2024-12-06 05:04:47.513998] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:17:09.301 [2024-12-06 05:04:47.514010] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:17:09.301 [2024-12-06 05:04:47.514028] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:17:09.301 [2024-12-06 05:04:47.514040] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:09.301 [2024-12-06 05:04:47.514054] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:17:09.301 [2024-12-06 05:04:47.514067] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:17:09.301 [2024-12-06 05:04:47.514081] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:09.301 [2024-12-06 05:04:47.514093] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:17:09.301 [2024-12-06 05:04:47.514108] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:17:09.301 [2024-12-06 05:04:47.514120] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:09.301 
[2024-12-06 05:04:47.514134] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:17:09.301 [2024-12-06 05:04:47.514146] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:17:09.301 [2024-12-06 05:04:47.514161] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:09.301 [2024-12-06 05:04:47.514172] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:17:09.301 [2024-12-06 05:04:47.514190] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:17:09.301 [2024-12-06 05:04:47.514201] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:09.301 [2024-12-06 05:04:47.514215] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:17:09.301 [2024-12-06 05:04:47.514227] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:17:09.301 [2024-12-06 05:04:47.514241] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:09.301 [2024-12-06 05:04:47.514251] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:17:09.301 [2024-12-06 05:04:47.514266] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:17:09.301 [2024-12-06 05:04:47.514277] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:09.302 [2024-12-06 05:04:47.514291] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:17:09.302 [2024-12-06 05:04:47.514303] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:17:09.302 [2024-12-06 05:04:47.514317] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:09.302 [2024-12-06 05:04:47.514331] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:17:09.302 [2024-12-06 05:04:47.514346] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:17:09.302 [2024-12-06 05:04:47.514357] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:09.302 [2024-12-06 05:04:47.514371] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:17:09.302 [2024-12-06 05:04:47.514383] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:17:09.302 [2024-12-06 05:04:47.514402] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:09.302 [2024-12-06 05:04:47.514414] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:17:09.302 [2024-12-06 05:04:47.514427] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:17:09.302 [2024-12-06 05:04:47.514440] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:09.302 [2024-12-06 05:04:47.514454] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:17:09.302 [2024-12-06 05:04:47.514466] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:17:09.302 [2024-12-06 05:04:47.514480] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:09.302 [2024-12-06 05:04:47.514491] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:17:09.302 [2024-12-06 05:04:47.514506] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:17:09.302 [2024-12-06 05:04:47.514519] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:09.302 [2024-12-06 05:04:47.514534] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:09.302 [2024-12-06 05:04:47.514547] ftl_layout.c: 130:dump_region: 
*NOTICE*: [FTL][ftl0] Region vmap 00:17:09.302 [2024-12-06 05:04:47.514562] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:17:09.302 [2024-12-06 05:04:47.514574] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:17:09.302 [2024-12-06 05:04:47.514589] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:17:09.302 [2024-12-06 05:04:47.514600] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:17:09.302 [2024-12-06 05:04:47.514617] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:17:09.302 [2024-12-06 05:04:47.514633] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:17:09.302 [2024-12-06 05:04:47.514653] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:09.302 [2024-12-06 05:04:47.514685] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:17:09.302 [2024-12-06 05:04:47.514700] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:17:09.302 [2024-12-06 05:04:47.514713] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:17:09.302 [2024-12-06 05:04:47.514729] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:17:09.302 [2024-12-06 05:04:47.514741] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:17:09.302 [2024-12-06 05:04:47.514757] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:17:09.302 [2024-12-06 05:04:47.514771] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:17:09.302 [2024-12-06 05:04:47.514789] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:17:09.302 [2024-12-06 05:04:47.514802] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:17:09.302 [2024-12-06 05:04:47.514818] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:17:09.302 [2024-12-06 05:04:47.514831] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:17:09.302 [2024-12-06 05:04:47.514847] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:17:09.302 [2024-12-06 05:04:47.514859] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:17:09.302 [2024-12-06 05:04:47.514884] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:17:09.302 [2024-12-06 05:04:47.514905] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:17:09.302 [2024-12-06 
05:04:47.514922] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:09.302 [2024-12-06 05:04:47.514936] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:17:09.302 [2024-12-06 05:04:47.514951] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:17:09.302 [2024-12-06 05:04:47.514964] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:17:09.302 [2024-12-06 05:04:47.514980] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:17:09.302 [2024-12-06 05:04:47.514996] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:09.302 [2024-12-06 05:04:47.515016] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:17:09.302 [2024-12-06 05:04:47.515030] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.777 ms 00:17:09.302 [2024-12-06 05:04:47.515044] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:09.565 [2024-12-06 05:04:47.530632] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:09.565 [2024-12-06 05:04:47.530725] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:09.565 [2024-12-06 05:04:47.530744] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.440 ms 00:17:09.565 [2024-12-06 05:04:47.530759] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:09.565 [2024-12-06 05:04:47.530937] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:09.565 [2024-12-06 05:04:47.530963] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:17:09.565 [2024-12-06 05:04:47.530982] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.086 ms 00:17:09.565 [2024-12-06 05:04:47.530998] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:09.565 [2024-12-06 05:04:47.543737] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:09.565 [2024-12-06 05:04:47.543935] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:09.565 [2024-12-06 05:04:47.543958] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.705 ms 00:17:09.565 [2024-12-06 05:04:47.543973] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:09.565 [2024-12-06 05:04:47.544076] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:09.565 [2024-12-06 05:04:47.544099] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:09.565 [2024-12-06 05:04:47.544114] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:17:09.565 [2024-12-06 05:04:47.544130] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:09.566 [2024-12-06 05:04:47.544724] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:09.566 [2024-12-06 05:04:47.544767] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:09.566 [2024-12-06 05:04:47.544783] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.559 ms 00:17:09.566 [2024-12-06 05:04:47.544797] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:17:09.566 [2024-12-06 05:04:47.545002] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:09.566 [2024-12-06 05:04:47.545031] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:09.566 [2024-12-06 05:04:47.545052] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.163 ms 00:17:09.566 [2024-12-06 05:04:47.545068] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:09.566 [2024-12-06 05:04:47.570567] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:09.566 [2024-12-06 05:04:47.570799] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:09.566 [2024-12-06 05:04:47.570827] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.461 ms 00:17:09.566 [2024-12-06 05:04:47.570842] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:09.566 [2024-12-06 05:04:47.574989] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:17:09.566 [2024-12-06 05:04:47.575051] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:17:09.566 [2024-12-06 05:04:47.575070] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:09.566 [2024-12-06 05:04:47.575085] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:17:09.566 [2024-12-06 05:04:47.575098] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.037 ms 00:17:09.566 [2024-12-06 05:04:47.575112] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:09.566 [2024-12-06 05:04:47.591171] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:09.566 [2024-12-06 05:04:47.591234] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:17:09.566 [2024-12-06 05:04:47.591254] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.952 ms 00:17:09.566 [2024-12-06 05:04:47.591272] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:09.566 [2024-12-06 05:04:47.594522] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:09.566 [2024-12-06 05:04:47.594733] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:17:09.566 [2024-12-06 05:04:47.594758] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.095 ms 00:17:09.566 [2024-12-06 05:04:47.594771] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:09.566 [2024-12-06 05:04:47.597824] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:09.566 [2024-12-06 05:04:47.598015] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:17:09.566 [2024-12-06 05:04:47.598039] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.933 ms 00:17:09.566 [2024-12-06 05:04:47.598052] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:09.566 [2024-12-06 05:04:47.598535] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:09.566 [2024-12-06 05:04:47.598581] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:17:09.566 [2024-12-06 05:04:47.598598] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.289 ms 00:17:09.566 [2024-12-06 05:04:47.598619] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:09.566 [2024-12-06 
05:04:47.623062] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:09.566 [2024-12-06 05:04:47.623130] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:17:09.566 [2024-12-06 05:04:47.623148] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.403 ms 00:17:09.566 [2024-12-06 05:04:47.623166] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:09.566 [2024-12-06 05:04:47.631430] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:17:09.566 [2024-12-06 05:04:47.650790] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:09.566 [2024-12-06 05:04:47.650842] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:17:09.566 [2024-12-06 05:04:47.650864] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.498 ms 00:17:09.566 [2024-12-06 05:04:47.650877] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:09.566 [2024-12-06 05:04:47.651003] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:09.566 [2024-12-06 05:04:47.651029] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:17:09.566 [2024-12-06 05:04:47.651051] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:17:09.566 [2024-12-06 05:04:47.651064] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:09.566 [2024-12-06 05:04:47.651151] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:09.566 [2024-12-06 05:04:47.651178] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:17:09.566 [2024-12-06 05:04:47.651195] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:17:09.566 [2024-12-06 05:04:47.651207] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:09.566 [2024-12-06 05:04:47.651258] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:09.566 [2024-12-06 05:04:47.651273] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:17:09.566 [2024-12-06 05:04:47.651293] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:17:09.566 [2024-12-06 05:04:47.651306] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:09.566 [2024-12-06 05:04:47.651361] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:17:09.566 [2024-12-06 05:04:47.651378] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:09.566 [2024-12-06 05:04:47.651395] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:17:09.566 [2024-12-06 05:04:47.651408] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:17:09.566 [2024-12-06 05:04:47.651423] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:09.566 [2024-12-06 05:04:47.657513] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:09.566 [2024-12-06 05:04:47.657719] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:17:09.566 [2024-12-06 05:04:47.657745] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.054 ms 00:17:09.566 [2024-12-06 05:04:47.657760] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:09.566 [2024-12-06 05:04:47.657954] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:09.566 [2024-12-06 05:04:47.657984] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:17:09.566 [2024-12-06 05:04:47.657999] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.059 ms 00:17:09.566 [2024-12-06 05:04:47.658015] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:09.566 [2024-12-06 05:04:47.659135] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:09.566 [2024-12-06 05:04:47.660613] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 167.998 ms, result 0 00:17:09.566 [2024-12-06 05:04:47.662893] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:09.566 Some configs were skipped because the RPC state that can call them passed over. 00:17:09.566 05:04:47 ftl.ftl_trim -- ftl/trim.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024 00:17:09.827 [2024-12-06 05:04:47.904434] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:09.827 [2024-12-06 05:04:47.904631] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:17:09.828 [2024-12-06 05:04:47.904758] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.085 ms 00:17:09.828 [2024-12-06 05:04:47.904798] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:09.828 [2024-12-06 05:04:47.904890] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 3.539 ms, result 0 00:17:09.828 true 00:17:09.828 05:04:47 ftl.ftl_trim -- ftl/trim.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024 00:17:10.121 [2024-12-06 05:04:48.124549] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:10.121 [2024-12-06 05:04:48.124798] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:17:10.121 [2024-12-06 05:04:48.124894] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.947 ms 00:17:10.121 [2024-12-06 05:04:48.124951] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:10.121 [2024-12-06 05:04:48.125050] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 3.439 ms, result 0 00:17:10.121 true 00:17:10.121 05:04:48 ftl.ftl_trim -- ftl/trim.sh@81 -- # killprocess 85522 00:17:10.121 05:04:48 ftl.ftl_trim -- common/autotest_common.sh@950 -- # '[' -z 85522 ']' 00:17:10.121 05:04:48 ftl.ftl_trim -- common/autotest_common.sh@954 -- # kill -0 85522 00:17:10.121 05:04:48 ftl.ftl_trim -- common/autotest_common.sh@955 -- # uname 00:17:10.121 05:04:48 ftl.ftl_trim -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:10.121 05:04:48 ftl.ftl_trim -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 85522 00:17:10.121 05:04:48 ftl.ftl_trim -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:10.121 05:04:48 ftl.ftl_trim -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:10.121 05:04:48 ftl.ftl_trim -- common/autotest_common.sh@968 -- # echo 'killing process with pid 85522' 00:17:10.121 killing process with pid 85522 00:17:10.121 05:04:48 ftl.ftl_trim -- common/autotest_common.sh@969 -- # kill 85522 00:17:10.121 05:04:48 ftl.ftl_trim -- common/autotest_common.sh@974 -- # wait 85522 00:17:10.121 [2024-12-06 05:04:48.295072] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:10.121 [2024-12-06 05:04:48.295309] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:17:10.121 [2024-12-06 05:04:48.295393] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:10.121 [2024-12-06 05:04:48.295428] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:10.121 [2024-12-06 05:04:48.295497] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:17:10.121 [2024-12-06 05:04:48.296071] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:10.121 [2024-12-06 05:04:48.296142] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:17:10.121 [2024-12-06 05:04:48.296181] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.513 ms 00:17:10.121 [2024-12-06 05:04:48.296291] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:10.121 [2024-12-06 05:04:48.296653] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:10.121 [2024-12-06 05:04:48.296717] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:17:10.121 [2024-12-06 05:04:48.296840] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.288 ms 00:17:10.121 [2024-12-06 05:04:48.296885] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:10.121 [2024-12-06 05:04:48.301504] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:10.121 [2024-12-06 05:04:48.301625] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:17:10.121 [2024-12-06 05:04:48.301803] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.559 ms 00:17:10.121 [2024-12-06 05:04:48.301895] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:10.121 [2024-12-06 05:04:48.308933] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:10.121 [2024-12-06 05:04:48.309059] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:17:10.121 [2024-12-06 05:04:48.309079] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.953 ms 00:17:10.121 [2024-12-06 05:04:48.309095] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:10.121 [2024-12-06 05:04:48.311539] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:10.121 [2024-12-06 05:04:48.311586] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:17:10.121 [2024-12-06 05:04:48.311600] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.344 ms 00:17:10.121 [2024-12-06 05:04:48.311612] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:10.121 [2024-12-06 05:04:48.316103] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:10.121 [2024-12-06 05:04:48.316152] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:17:10.121 [2024-12-06 05:04:48.316168] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.440 ms 00:17:10.121 [2024-12-06 05:04:48.316181] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:10.121 [2024-12-06 05:04:48.316354] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:10.121 [2024-12-06 05:04:48.316374] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:17:10.121 [2024-12-06 05:04:48.316387] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.117 ms 00:17:10.121 [2024-12-06 05:04:48.316405] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:10.121 [2024-12-06 05:04:48.319398] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:10.121 [2024-12-06 05:04:48.319446] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:17:10.121 [2024-12-06 05:04:48.319460] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.967 ms 00:17:10.121 [2024-12-06 05:04:48.319476] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:10.122 [2024-12-06 05:04:48.322172] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:10.122 [2024-12-06 05:04:48.322318] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:17:10.122 [2024-12-06 05:04:48.322333] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.643 ms 00:17:10.122 [2024-12-06 05:04:48.322342] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:10.122 [2024-12-06 05:04:48.324673] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:10.122 [2024-12-06 05:04:48.324712] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:17:10.122 [2024-12-06 05:04:48.324723] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.251 ms 00:17:10.122 [2024-12-06 05:04:48.324732] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:10.122 [2024-12-06 05:04:48.326309] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:10.122 [2024-12-06 05:04:48.326354] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:17:10.122 [2024-12-06 05:04:48.326363] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.512 ms 00:17:10.122 [2024-12-06 05:04:48.326372] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:10.122 [2024-12-06 05:04:48.326410] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:17:10.122 [2024-12-06 05:04:48.326427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:17:10.122 [2024-12-06 05:04:48.326437] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:17:10.122 [2024-12-06 05:04:48.326450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:17:10.122 [2024-12-06 05:04:48.326458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:17:10.122 [2024-12-06 05:04:48.326468] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:17:10.122 [2024-12-06 05:04:48.326476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:17:10.122 [2024-12-06 05:04:48.326488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:17:10.122 [2024-12-06 05:04:48.326495] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:17:10.122 [2024-12-06 05:04:48.326505] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:17:10.122 [2024-12-06 05:04:48.326512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:17:10.122 [2024-12-06 05:04:48.326521] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11 through Band 84: 0 / 261120 wr_cnt: 0 state: free (74 identical per-band entries collapsed; the captured output breaks off at the Band 85 entry)
0 / 261120 wr_cnt: 0 state: free 00:17:10.123 [2024-12-06 05:04:48.327194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:17:10.123 [2024-12-06 05:04:48.327203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:17:10.123 [2024-12-06 05:04:48.327211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:17:10.123 [2024-12-06 05:04:48.327219] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:17:10.123 [2024-12-06 05:04:48.327227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:17:10.123 [2024-12-06 05:04:48.327237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:17:10.123 [2024-12-06 05:04:48.327245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:17:10.123 [2024-12-06 05:04:48.327254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:17:10.123 [2024-12-06 05:04:48.327262] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:17:10.123 [2024-12-06 05:04:48.327272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:17:10.123 [2024-12-06 05:04:48.327281] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:17:10.123 [2024-12-06 05:04:48.327290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:17:10.123 [2024-12-06 05:04:48.327298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:17:10.123 [2024-12-06 05:04:48.327309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:17:10.123 [2024-12-06 05:04:48.327316] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:17:10.123 [2024-12-06 05:04:48.327333] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:17:10.123 [2024-12-06 05:04:48.327341] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 73ef595a-5226-44c7-8985-04a18b3bbf5e 00:17:10.123 [2024-12-06 05:04:48.327350] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:17:10.123 [2024-12-06 05:04:48.327362] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:17:10.123 [2024-12-06 05:04:48.327373] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:17:10.123 [2024-12-06 05:04:48.327381] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:17:10.123 [2024-12-06 05:04:48.327389] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:17:10.123 [2024-12-06 05:04:48.327397] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:17:10.123 [2024-12-06 05:04:48.327410] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:17:10.123 [2024-12-06 05:04:48.327417] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:17:10.123 [2024-12-06 05:04:48.327425] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:17:10.123 [2024-12-06 05:04:48.327432] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
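For context on the dump just completed: each 'Band N: 0 / 261120 wr_cnt: 0 state: free' notice from ftl_debug.c reports a band's valid blocks against its usable blocks, its write count, and its state, so at this point every band is still free with nothing valid in it. The 'WAF: inf' in the statistics block is the expected readout for such a run. Taking the conventional definition of write amplification as media writes over host writes (an assumption about ftl_debug.c's exact formula, but one the dumped counters bear out), the values above give

    WAF = total writes / user writes = 960 / 0 -> inf

that is, all 960 writes were FTL metadata (superblock, band and trim metadata, P2L checkpoints), and with zero user writes the ratio is undefined and printed as 'inf'.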
00:17:10.123 [2024-12-06 05:04:48.327441] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:17:10.123 [2024-12-06 05:04:48.327449] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.023 ms 00:17:10.123 [2024-12-06 05:04:48.327460] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:10.123 [2024-12-06 05:04:48.329159] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:10.123 [2024-12-06 05:04:48.329286] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:17:10.123 [2024-12-06 05:04:48.329301] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.679 ms 00:17:10.123 [2024-12-06 05:04:48.329311] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:10.123 [2024-12-06 05:04:48.329430] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:10.123 [2024-12-06 05:04:48.329442] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:17:10.123 [2024-12-06 05:04:48.329451] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:17:10.123 [2024-12-06 05:04:48.329460] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:10.123 [2024-12-06 05:04:48.335630] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:10.123 [2024-12-06 05:04:48.335705] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:10.123 [2024-12-06 05:04:48.335716] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:10.123 [2024-12-06 05:04:48.335727] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:10.123 [2024-12-06 05:04:48.335793] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:10.123 [2024-12-06 05:04:48.335805] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:10.123 [2024-12-06 05:04:48.335816] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:10.123 [2024-12-06 05:04:48.335828] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:10.123 [2024-12-06 05:04:48.335892] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:10.123 [2024-12-06 05:04:48.335905] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:10.123 [2024-12-06 05:04:48.335914] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:10.123 [2024-12-06 05:04:48.335923] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:10.123 [2024-12-06 05:04:48.335943] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:10.123 [2024-12-06 05:04:48.335953] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:10.123 [2024-12-06 05:04:48.335960] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:10.123 [2024-12-06 05:04:48.335973] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:10.407 [2024-12-06 05:04:48.346812] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:10.407 [2024-12-06 05:04:48.346986] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:10.407 [2024-12-06 05:04:48.347003] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:10.407 [2024-12-06 05:04:48.347018] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:10.407 [2024-12-06 
05:04:48.355484] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:10.407 [2024-12-06 05:04:48.355538] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:10.407 [2024-12-06 05:04:48.355549] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:10.407 [2024-12-06 05:04:48.355561] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:10.407 [2024-12-06 05:04:48.355609] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:10.407 [2024-12-06 05:04:48.355627] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:10.407 [2024-12-06 05:04:48.355638] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:10.407 [2024-12-06 05:04:48.355647] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:10.407 [2024-12-06 05:04:48.355849] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:10.407 [2024-12-06 05:04:48.355884] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:10.407 [2024-12-06 05:04:48.355905] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:10.407 [2024-12-06 05:04:48.355924] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:10.407 [2024-12-06 05:04:48.356024] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:10.407 [2024-12-06 05:04:48.356052] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:10.407 [2024-12-06 05:04:48.356072] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:10.407 [2024-12-06 05:04:48.356137] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:10.407 [2024-12-06 05:04:48.356201] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:10.407 [2024-12-06 05:04:48.356228] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:17:10.407 [2024-12-06 05:04:48.356238] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:10.407 [2024-12-06 05:04:48.356250] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:10.407 [2024-12-06 05:04:48.356290] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:10.407 [2024-12-06 05:04:48.356300] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:10.407 [2024-12-06 05:04:48.356309] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:10.407 [2024-12-06 05:04:48.356321] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:10.407 [2024-12-06 05:04:48.356366] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:10.407 [2024-12-06 05:04:48.356378] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:10.407 [2024-12-06 05:04:48.356386] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:10.407 [2024-12-06 05:04:48.356396] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:10.407 [2024-12-06 05:04:48.356539] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 61.445 ms, result 0 00:17:10.407 05:04:48 ftl.ftl_trim -- ftl/trim.sh@84 -- # file=/home/vagrant/spdk_repo/spdk/test/ftl/data 00:17:10.407 05:04:48 ftl.ftl_trim -- ftl/trim.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 
--of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:17:10.668 [2024-12-06 05:04:48.643227] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:17:10.668 [2024-12-06 05:04:48.643398] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85559 ] 00:17:10.668 [2024-12-06 05:04:48.779592] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:10.668 [2024-12-06 05:04:48.834912] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:17:10.931 [2024-12-06 05:04:48.947700] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:10.932 [2024-12-06 05:04:48.947762] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:10.932 [2024-12-06 05:04:49.104598] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:10.932 [2024-12-06 05:04:49.104645] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:17:10.932 [2024-12-06 05:04:49.104662] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:10.932 [2024-12-06 05:04:49.104691] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:10.932 [2024-12-06 05:04:49.106955] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:10.932 [2024-12-06 05:04:49.106995] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:10.932 [2024-12-06 05:04:49.107007] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.246 ms 00:17:10.932 [2024-12-06 05:04:49.107015] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:10.932 [2024-12-06 05:04:49.107086] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:17:10.932 [2024-12-06 05:04:49.107321] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:17:10.932 [2024-12-06 05:04:49.107337] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:10.932 [2024-12-06 05:04:49.107345] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:10.932 [2024-12-06 05:04:49.107356] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.259 ms 00:17:10.932 [2024-12-06 05:04:49.107364] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:10.932 [2024-12-06 05:04:49.108514] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:17:10.932 [2024-12-06 05:04:49.111232] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:10.932 [2024-12-06 05:04:49.111267] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:17:10.932 [2024-12-06 05:04:49.111281] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.719 ms 00:17:10.932 [2024-12-06 05:04:49.111291] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:10.932 [2024-12-06 05:04:49.111349] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:10.932 [2024-12-06 05:04:49.111359] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:17:10.932 [2024-12-06 05:04:49.111367] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.017 ms 00:17:10.932 [2024-12-06 05:04:49.111374] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:10.932 [2024-12-06 05:04:49.116569] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:10.932 [2024-12-06 05:04:49.116693] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:10.932 [2024-12-06 05:04:49.116750] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.158 ms 00:17:10.932 [2024-12-06 05:04:49.116774] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:10.932 [2024-12-06 05:04:49.116894] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:10.932 [2024-12-06 05:04:49.116922] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:10.932 [2024-12-06 05:04:49.116942] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.062 ms 00:17:10.932 [2024-12-06 05:04:49.117013] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:10.932 [2024-12-06 05:04:49.117058] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:10.932 [2024-12-06 05:04:49.117081] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:17:10.932 [2024-12-06 05:04:49.117105] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:17:10.932 [2024-12-06 05:04:49.117113] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:10.932 [2024-12-06 05:04:49.117140] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:17:10.932 [2024-12-06 05:04:49.118528] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:10.932 [2024-12-06 05:04:49.118560] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:10.932 [2024-12-06 05:04:49.118575] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.393 ms 00:17:10.932 [2024-12-06 05:04:49.118582] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:10.932 [2024-12-06 05:04:49.118625] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:10.932 [2024-12-06 05:04:49.118637] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:17:10.932 [2024-12-06 05:04:49.118647] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:17:10.932 [2024-12-06 05:04:49.118654] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:10.932 [2024-12-06 05:04:49.118690] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:17:10.932 [2024-12-06 05:04:49.118712] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:17:10.932 [2024-12-06 05:04:49.118749] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:17:10.932 [2024-12-06 05:04:49.118770] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:17:10.932 [2024-12-06 05:04:49.118879] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:17:10.932 [2024-12-06 05:04:49.118894] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:17:10.932 [2024-12-06 05:04:49.118907] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: 
*NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:17:10.932 [2024-12-06 05:04:49.118921] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:17:10.932 [2024-12-06 05:04:49.118930] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:17:10.932 [2024-12-06 05:04:49.118938] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:17:10.932 [2024-12-06 05:04:49.118945] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:17:10.932 [2024-12-06 05:04:49.118952] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:17:10.932 [2024-12-06 05:04:49.118959] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:17:10.932 [2024-12-06 05:04:49.118966] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:10.932 [2024-12-06 05:04:49.118975] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:17:10.932 [2024-12-06 05:04:49.118985] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.278 ms 00:17:10.932 [2024-12-06 05:04:49.118992] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:10.932 [2024-12-06 05:04:49.119079] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:10.932 [2024-12-06 05:04:49.119087] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:17:10.932 [2024-12-06 05:04:49.119098] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:17:10.932 [2024-12-06 05:04:49.119105] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:10.932 [2024-12-06 05:04:49.119210] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:17:10.932 [2024-12-06 05:04:49.119221] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:17:10.932 [2024-12-06 05:04:49.119230] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:10.932 [2024-12-06 05:04:49.119241] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:10.932 [2024-12-06 05:04:49.119250] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:17:10.932 [2024-12-06 05:04:49.119258] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:17:10.932 [2024-12-06 05:04:49.119266] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:17:10.932 [2024-12-06 05:04:49.119274] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:17:10.932 [2024-12-06 05:04:49.119282] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:17:10.932 [2024-12-06 05:04:49.119292] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:10.932 [2024-12-06 05:04:49.119299] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:17:10.932 [2024-12-06 05:04:49.119307] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:17:10.932 [2024-12-06 05:04:49.119315] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:10.932 [2024-12-06 05:04:49.119323] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:17:10.932 [2024-12-06 05:04:49.119330] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:17:10.932 [2024-12-06 05:04:49.119337] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:10.932 [2024-12-06 05:04:49.119345] ftl_layout.c: 
130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:17:10.932 [2024-12-06 05:04:49.119353] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:17:10.932 [2024-12-06 05:04:49.119360] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:10.932 [2024-12-06 05:04:49.119368] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:17:10.932 [2024-12-06 05:04:49.119376] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:17:10.932 [2024-12-06 05:04:49.119384] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:10.932 [2024-12-06 05:04:49.119392] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:17:10.932 [2024-12-06 05:04:49.119400] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:17:10.932 [2024-12-06 05:04:49.119407] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:10.932 [2024-12-06 05:04:49.119418] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:17:10.932 [2024-12-06 05:04:49.119425] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:17:10.932 [2024-12-06 05:04:49.119433] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:10.932 [2024-12-06 05:04:49.119440] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:17:10.932 [2024-12-06 05:04:49.119448] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:17:10.932 [2024-12-06 05:04:49.119456] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:10.932 [2024-12-06 05:04:49.119463] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:17:10.932 [2024-12-06 05:04:49.119471] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:17:10.932 [2024-12-06 05:04:49.119478] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:10.932 [2024-12-06 05:04:49.119485] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:17:10.932 [2024-12-06 05:04:49.119493] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:17:10.932 [2024-12-06 05:04:49.119500] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:10.932 [2024-12-06 05:04:49.119508] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:17:10.933 [2024-12-06 05:04:49.119515] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:17:10.933 [2024-12-06 05:04:49.119523] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:10.933 [2024-12-06 05:04:49.119530] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:17:10.933 [2024-12-06 05:04:49.119540] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:17:10.933 [2024-12-06 05:04:49.119548] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:10.933 [2024-12-06 05:04:49.119554] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:17:10.933 [2024-12-06 05:04:49.119566] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:17:10.933 [2024-12-06 05:04:49.119573] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:10.933 [2024-12-06 05:04:49.119580] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:10.933 [2024-12-06 05:04:49.119588] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:17:10.933 
[2024-12-06 05:04:49.119594] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:17:10.933 [2024-12-06 05:04:49.119600] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:17:10.933 [2024-12-06 05:04:49.119607] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:17:10.933 [2024-12-06 05:04:49.119613] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:17:10.933 [2024-12-06 05:04:49.119621] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:17:10.933 [2024-12-06 05:04:49.119629] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:17:10.933 [2024-12-06 05:04:49.119638] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:10.933 [2024-12-06 05:04:49.119646] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:17:10.933 [2024-12-06 05:04:49.119653] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:17:10.933 [2024-12-06 05:04:49.119662] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:17:10.933 [2024-12-06 05:04:49.119876] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:17:10.933 [2024-12-06 05:04:49.119904] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:17:10.933 [2024-12-06 05:04:49.119932] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:17:10.933 [2024-12-06 05:04:49.119960] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:17:10.933 [2024-12-06 05:04:49.119995] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:17:10.933 [2024-12-06 05:04:49.120023] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:17:10.933 [2024-12-06 05:04:49.120050] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:17:10.933 [2024-12-06 05:04:49.120078] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:17:10.933 [2024-12-06 05:04:49.120174] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:17:10.933 [2024-12-06 05:04:49.120204] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:17:10.933 [2024-12-06 05:04:49.120232] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:17:10.933 [2024-12-06 05:04:49.120259] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:17:10.933 [2024-12-06 05:04:49.120294] upgrade/ftl_sb_v5.c: 
430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:10.933 [2024-12-06 05:04:49.120328] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:17:10.933 [2024-12-06 05:04:49.120366] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:17:10.933 [2024-12-06 05:04:49.120408] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:17:10.933 [2024-12-06 05:04:49.120441] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:17:10.933 [2024-12-06 05:04:49.120501] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:10.933 [2024-12-06 05:04:49.120521] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:17:10.933 [2024-12-06 05:04:49.120544] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.362 ms 00:17:10.933 [2024-12-06 05:04:49.120562] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:10.933 [2024-12-06 05:04:49.139908] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:10.933 [2024-12-06 05:04:49.140064] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:10.933 [2024-12-06 05:04:49.140124] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.266 ms 00:17:10.933 [2024-12-06 05:04:49.140150] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:10.933 [2024-12-06 05:04:49.140310] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:10.933 [2024-12-06 05:04:49.140382] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:17:10.933 [2024-12-06 05:04:49.140407] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.070 ms 00:17:10.933 [2024-12-06 05:04:49.140435] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:10.933 [2024-12-06 05:04:49.148780] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:10.933 [2024-12-06 05:04:49.148891] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:10.933 [2024-12-06 05:04:49.148941] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.278 ms 00:17:10.933 [2024-12-06 05:04:49.148980] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:10.933 [2024-12-06 05:04:49.149042] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:10.933 [2024-12-06 05:04:49.149098] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:10.933 [2024-12-06 05:04:49.149124] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:17:10.933 [2024-12-06 05:04:49.149143] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:10.933 [2024-12-06 05:04:49.149481] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:10.933 [2024-12-06 05:04:49.149528] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:10.933 [2024-12-06 05:04:49.149540] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.305 ms 00:17:10.933 [2024-12-06 05:04:49.149547] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:10.933 [2024-12-06 
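The region sizes in the layout and superblock dumps above are mutually consistent once FTL's 4 KiB block size is assumed. The superblock metadata lists the L2P region as blk_sz:0x5a00, and

    0x5a00 = 23040 blocks; 23040 x 4096 B = 94371840 B = 90.00 MiB

which matches the 'Region l2p ... blocks: 90.00 MiB' line, and equally the 23592960 L2P entries at an address size of 4 bytes (23592960 x 4 B = 94371840 B). The same arithmetic on the 0x800-block P2L regions gives 2048 x 4096 B = 8.00 MiB per checkpoint region, again exactly as dumped.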
05:04:49.149700] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:10.933 [2024-12-06 05:04:49.149714] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:10.933 [2024-12-06 05:04:49.149722] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.129 ms 00:17:10.933 [2024-12-06 05:04:49.149732] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:10.933 [2024-12-06 05:04:49.154710] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:10.933 [2024-12-06 05:04:49.154812] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:10.933 [2024-12-06 05:04:49.154825] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.958 ms 00:17:10.933 [2024-12-06 05:04:49.154832] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:10.933 [2024-12-06 05:04:49.157472] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:17:10.933 [2024-12-06 05:04:49.157512] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:17:10.933 [2024-12-06 05:04:49.157522] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:10.933 [2024-12-06 05:04:49.157530] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:17:10.933 [2024-12-06 05:04:49.157538] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.589 ms 00:17:10.933 [2024-12-06 05:04:49.157545] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:11.194 [2024-12-06 05:04:49.172182] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:11.194 [2024-12-06 05:04:49.172216] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:17:11.194 [2024-12-06 05:04:49.172227] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.594 ms 00:17:11.194 [2024-12-06 05:04:49.172235] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:11.194 [2024-12-06 05:04:49.174315] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:11.194 [2024-12-06 05:04:49.174428] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:17:11.194 [2024-12-06 05:04:49.174443] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.007 ms 00:17:11.194 [2024-12-06 05:04:49.174450] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:11.194 [2024-12-06 05:04:49.176377] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:11.194 [2024-12-06 05:04:49.176405] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:17:11.194 [2024-12-06 05:04:49.176414] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.892 ms 00:17:11.194 [2024-12-06 05:04:49.176428] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:11.194 [2024-12-06 05:04:49.176772] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:11.194 [2024-12-06 05:04:49.176784] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:17:11.194 [2024-12-06 05:04:49.176793] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.267 ms 00:17:11.194 [2024-12-06 05:04:49.176808] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:11.194 [2024-12-06 05:04:49.193564] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:17:11.194 [2024-12-06 05:04:49.193764] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:17:11.194 [2024-12-06 05:04:49.193783] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.733 ms 00:17:11.195 [2024-12-06 05:04:49.193791] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:11.195 [2024-12-06 05:04:49.201300] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:17:11.195 [2024-12-06 05:04:49.216539] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:11.195 [2024-12-06 05:04:49.216582] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:17:11.195 [2024-12-06 05:04:49.216600] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.689 ms 00:17:11.195 [2024-12-06 05:04:49.216608] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:11.195 [2024-12-06 05:04:49.216724] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:11.195 [2024-12-06 05:04:49.216736] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:17:11.195 [2024-12-06 05:04:49.216745] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:17:11.195 [2024-12-06 05:04:49.216760] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:11.195 [2024-12-06 05:04:49.216814] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:11.195 [2024-12-06 05:04:49.216823] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:17:11.195 [2024-12-06 05:04:49.216832] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:17:11.195 [2024-12-06 05:04:49.216840] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:11.195 [2024-12-06 05:04:49.216862] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:11.195 [2024-12-06 05:04:49.216872] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:17:11.195 [2024-12-06 05:04:49.216879] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:17:11.195 [2024-12-06 05:04:49.216887] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:11.195 [2024-12-06 05:04:49.216919] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:17:11.195 [2024-12-06 05:04:49.216929] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:11.195 [2024-12-06 05:04:49.216941] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:17:11.195 [2024-12-06 05:04:49.216949] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:17:11.195 [2024-12-06 05:04:49.216957] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:11.195 [2024-12-06 05:04:49.221535] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:11.195 [2024-12-06 05:04:49.221573] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:17:11.195 [2024-12-06 05:04:49.221584] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.558 ms 00:17:11.195 [2024-12-06 05:04:49.221591] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:11.195 [2024-12-06 05:04:49.221701] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:11.195 [2024-12-06 05:04:49.221715] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize 
initialization 00:17:11.195 [2024-12-06 05:04:49.221723] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:17:11.195 [2024-12-06 05:04:49.221731] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:11.195 [2024-12-06 05:04:49.222627] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:11.195 [2024-12-06 05:04:49.223747] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 117.749 ms, result 0 00:17:11.195 [2024-12-06 05:04:49.224766] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:11.195 [2024-12-06 05:04:49.233094] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:12.139  [2024-12-06T05:04:51.318Z] Copying: 14/256 [MB] (14 MBps) [2024-12-06T05:04:52.263Z] Copying: 24/256 [MB] (10 MBps) [2024-12-06T05:04:53.653Z] Copying: 34/256 [MB] (10 MBps) [2024-12-06T05:04:54.598Z] Copying: 44/256 [MB] (10 MBps) [2024-12-06T05:04:55.541Z] Copying: 55/256 [MB] (10 MBps) [2024-12-06T05:04:56.483Z] Copying: 70/256 [MB] (14 MBps) [2024-12-06T05:04:57.428Z] Copying: 80/256 [MB] (10 MBps) [2024-12-06T05:04:58.370Z] Copying: 94/256 [MB] (13 MBps) [2024-12-06T05:04:59.314Z] Copying: 110/256 [MB] (16 MBps) [2024-12-06T05:05:00.259Z] Copying: 122/256 [MB] (11 MBps) [2024-12-06T05:05:01.648Z] Copying: 138/256 [MB] (16 MBps) [2024-12-06T05:05:02.591Z] Copying: 157/256 [MB] (19 MBps) [2024-12-06T05:05:03.536Z] Copying: 168/256 [MB] (10 MBps) [2024-12-06T05:05:04.481Z] Copying: 186/256 [MB] (18 MBps) [2024-12-06T05:05:05.426Z] Copying: 196/256 [MB] (10 MBps) [2024-12-06T05:05:06.370Z] Copying: 207/256 [MB] (11 MBps) [2024-12-06T05:05:07.316Z] Copying: 220/256 [MB] (12 MBps) [2024-12-06T05:05:08.261Z] Copying: 231/256 [MB] (11 MBps) [2024-12-06T05:05:08.524Z] Copying: 248/256 [MB] (16 MBps) [2024-12-06T05:05:08.524Z] Copying: 256/256 [MB] (average 13 MBps)[2024-12-06 05:05:08.478888] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:30.292 [2024-12-06 05:05:08.481193] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:30.292 [2024-12-06 05:05:08.481576] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:17:30.292 [2024-12-06 05:05:08.481629] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:17:30.292 [2024-12-06 05:05:08.481745] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.292 [2024-12-06 05:05:08.481831] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:17:30.292 [2024-12-06 05:05:08.482452] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:30.292 [2024-12-06 05:05:08.482479] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:17:30.292 [2024-12-06 05:05:08.482489] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.588 ms 00:17:30.292 [2024-12-06 05:05:08.482497] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.292 [2024-12-06 05:05:08.482768] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:30.292 [2024-12-06 05:05:08.482786] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:17:30.292 [2024-12-06 05:05:08.482796] mngt/ftl_mngt.c: 430:trace_step: 
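The spdk_dd transfer bracketed by the startup and shutdown notices is likewise self-consistent: assuming --count is given in the FTL bdev's 4 KiB blocks (which the totals bear out),

    65536 blocks x 4096 B = 268435456 B = 256 MiB

matching the 'Copying: 256/256 [MB]' progress above; and with the copy running from roughly 05:04:49 to 05:05:08, 256 MB over about 19 s lands on the reported 13 MBps average.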
*NOTICE*: [FTL][ftl0] duration: 0.250 ms 00:17:30.292 [2024-12-06 05:05:08.482805] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.292 [2024-12-06 05:05:08.486492] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:30.292 [2024-12-06 05:05:08.486517] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:17:30.292 [2024-12-06 05:05:08.486527] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.671 ms 00:17:30.292 [2024-12-06 05:05:08.486534] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.292 [2024-12-06 05:05:08.493461] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:30.292 [2024-12-06 05:05:08.493489] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:17:30.292 [2024-12-06 05:05:08.493500] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.898 ms 00:17:30.292 [2024-12-06 05:05:08.493513] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.292 [2024-12-06 05:05:08.495935] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:30.292 [2024-12-06 05:05:08.496062] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:17:30.292 [2024-12-06 05:05:08.496077] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.374 ms 00:17:30.292 [2024-12-06 05:05:08.496084] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.292 [2024-12-06 05:05:08.500612] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:30.292 [2024-12-06 05:05:08.500655] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:17:30.292 [2024-12-06 05:05:08.500678] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.497 ms 00:17:30.292 [2024-12-06 05:05:08.500686] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.292 [2024-12-06 05:05:08.500808] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:30.292 [2024-12-06 05:05:08.500819] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:17:30.292 [2024-12-06 05:05:08.500828] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.087 ms 00:17:30.292 [2024-12-06 05:05:08.500835] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.292 [2024-12-06 05:05:08.503496] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:30.292 [2024-12-06 05:05:08.503616] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:17:30.292 [2024-12-06 05:05:08.503630] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.644 ms 00:17:30.292 [2024-12-06 05:05:08.503638] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.292 [2024-12-06 05:05:08.506075] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:30.292 [2024-12-06 05:05:08.506101] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:17:30.292 [2024-12-06 05:05:08.506110] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.394 ms 00:17:30.292 [2024-12-06 05:05:08.506116] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.292 [2024-12-06 05:05:08.508220] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:30.292 [2024-12-06 05:05:08.508251] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:17:30.292 
[2024-12-06 05:05:08.508260] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.072 ms 00:17:30.292 [2024-12-06 05:05:08.508267] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.292 [2024-12-06 05:05:08.510544] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:30.292 [2024-12-06 05:05:08.510662] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:17:30.292 [2024-12-06 05:05:08.510689] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.219 ms 00:17:30.292 [2024-12-06 05:05:08.510695] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.292 [2024-12-06 05:05:08.510725] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:17:30.292 [2024-12-06 05:05:08.510744] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:17:30.292 [2024-12-06 05:05:08.510754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:17:30.292 [2024-12-06 05:05:08.510762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:17:30.292 [2024-12-06 05:05:08.510770] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:17:30.292 [2024-12-06 05:05:08.510777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:17:30.292 [2024-12-06 05:05:08.510784] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:17:30.292 [2024-12-06 05:05:08.510792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:17:30.292 [2024-12-06 05:05:08.510800] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:17:30.292 [2024-12-06 05:05:08.510809] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:17:30.292 [2024-12-06 05:05:08.510817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:17:30.292 [2024-12-06 05:05:08.510825] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:17:30.292 [2024-12-06 05:05:08.510832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:17:30.292 [2024-12-06 05:05:08.510840] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:17:30.292 [2024-12-06 05:05:08.510847] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:17:30.292 [2024-12-06 05:05:08.510854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:17:30.292 [2024-12-06 05:05:08.510861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:17:30.292 [2024-12-06 05:05:08.510868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:17:30.292 [2024-12-06 05:05:08.510875] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:17:30.292 [2024-12-06 05:05:08.510882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:17:30.292 [2024-12-06 05:05:08.510890] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:17:30.292 [2024-12-06 05:05:08.510897] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:17:30.292 [2024-12-06 05:05:08.510905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:17:30.292 [2024-12-06 05:05:08.510912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:17:30.292 [2024-12-06 05:05:08.510919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:17:30.292 [2024-12-06 05:05:08.510926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:17:30.292 [2024-12-06 05:05:08.510934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:17:30.292 [2024-12-06 05:05:08.510941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:17:30.292 [2024-12-06 05:05:08.510949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:17:30.292 [2024-12-06 05:05:08.510957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:17:30.292 [2024-12-06 05:05:08.510964] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:17:30.292 [2024-12-06 05:05:08.510973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:17:30.292 [2024-12-06 05:05:08.510980] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:17:30.293 [2024-12-06 05:05:08.510988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:17:30.293 [2024-12-06 05:05:08.510995] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:17:30.293 [2024-12-06 05:05:08.511003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:17:30.293 [2024-12-06 05:05:08.511018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:17:30.293 [2024-12-06 05:05:08.511025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:17:30.293 [2024-12-06 05:05:08.511032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:17:30.293 [2024-12-06 05:05:08.511040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:17:30.293 [2024-12-06 05:05:08.511047] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:17:30.293 [2024-12-06 05:05:08.511055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:17:30.293 [2024-12-06 05:05:08.511063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:17:30.293 [2024-12-06 05:05:08.511070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:17:30.293 [2024-12-06 05:05:08.511078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:17:30.293 [2024-12-06 
05:05:08.511085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:17:30.293 [2024-12-06 05:05:08.511092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:17:30.293 [2024-12-06 05:05:08.511100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:17:30.293 [2024-12-06 05:05:08.511108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:17:30.293 [2024-12-06 05:05:08.511115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:17:30.293 [2024-12-06 05:05:08.511123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:17:30.293 [2024-12-06 05:05:08.511130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:17:30.293 [2024-12-06 05:05:08.511139] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:17:30.293 [2024-12-06 05:05:08.511147] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:17:30.293 [2024-12-06 05:05:08.511155] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:17:30.293 [2024-12-06 05:05:08.511162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:17:30.293 [2024-12-06 05:05:08.511170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:17:30.293 [2024-12-06 05:05:08.511177] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:17:30.293 [2024-12-06 05:05:08.511184] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:17:30.293 [2024-12-06 05:05:08.511192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:17:30.293 [2024-12-06 05:05:08.511201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:17:30.293 [2024-12-06 05:05:08.511208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:17:30.293 [2024-12-06 05:05:08.511216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:17:30.293 [2024-12-06 05:05:08.511223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:17:30.293 [2024-12-06 05:05:08.511230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:17:30.293 [2024-12-06 05:05:08.511238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:17:30.293 [2024-12-06 05:05:08.511246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:17:30.293 [2024-12-06 05:05:08.511253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:17:30.293 [2024-12-06 05:05:08.511260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:17:30.293 [2024-12-06 05:05:08.511267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 
00:17:30.293 [2024-12-06 05:05:08.511275] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:17:30.293 [2024-12-06 05:05:08.511282] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:17:30.293 [2024-12-06 05:05:08.511290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:17:30.293 [2024-12-06 05:05:08.511297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:17:30.293 [2024-12-06 05:05:08.511304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:17:30.293 [2024-12-06 05:05:08.511311] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:17:30.293 [2024-12-06 05:05:08.511319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:17:30.293 [2024-12-06 05:05:08.511326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:17:30.293 [2024-12-06 05:05:08.511334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:17:30.293 [2024-12-06 05:05:08.511341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:17:30.293 [2024-12-06 05:05:08.511349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:17:30.293 [2024-12-06 05:05:08.511356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:17:30.293 [2024-12-06 05:05:08.511363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:17:30.293 [2024-12-06 05:05:08.511370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:17:30.293 [2024-12-06 05:05:08.511378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:17:30.293 [2024-12-06 05:05:08.511386] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:17:30.293 [2024-12-06 05:05:08.511395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:17:30.293 [2024-12-06 05:05:08.511402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:17:30.293 [2024-12-06 05:05:08.511409] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:17:30.293 [2024-12-06 05:05:08.511416] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:17:30.293 [2024-12-06 05:05:08.511423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:17:30.293 [2024-12-06 05:05:08.511430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:17:30.293 [2024-12-06 05:05:08.511437] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:17:30.293 [2024-12-06 05:05:08.511446] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:17:30.293 [2024-12-06 05:05:08.511453] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 
wr_cnt: 0 state: free 00:17:30.293 [2024-12-06 05:05:08.511461] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:17:30.293 [2024-12-06 05:05:08.511468] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:17:30.293 [2024-12-06 05:05:08.511475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:17:30.293 [2024-12-06 05:05:08.511482] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:17:30.293 [2024-12-06 05:05:08.511489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:17:30.293 [2024-12-06 05:05:08.511497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:17:30.293 [2024-12-06 05:05:08.511512] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:17:30.293 [2024-12-06 05:05:08.511520] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 73ef595a-5226-44c7-8985-04a18b3bbf5e 00:17:30.293 [2024-12-06 05:05:08.511528] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:17:30.293 [2024-12-06 05:05:08.511536] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:17:30.293 [2024-12-06 05:05:08.511543] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:17:30.293 [2024-12-06 05:05:08.511551] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:17:30.293 [2024-12-06 05:05:08.511558] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:17:30.293 [2024-12-06 05:05:08.511565] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:17:30.293 [2024-12-06 05:05:08.511572] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:17:30.293 [2024-12-06 05:05:08.511579] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:17:30.293 [2024-12-06 05:05:08.511586] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:17:30.293 [2024-12-06 05:05:08.511592] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:30.293 [2024-12-06 05:05:08.511600] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:17:30.293 [2024-12-06 05:05:08.511610] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.868 ms 00:17:30.293 [2024-12-06 05:05:08.511626] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.293 [2024-12-06 05:05:08.513581] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:30.293 [2024-12-06 05:05:08.513605] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:17:30.293 [2024-12-06 05:05:08.513622] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.938 ms 00:17:30.293 [2024-12-06 05:05:08.513629] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.293 [2024-12-06 05:05:08.513881] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:30.293 [2024-12-06 05:05:08.513906] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:17:30.293 [2024-12-06 05:05:08.513915] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.233 ms 00:17:30.293 [2024-12-06 05:05:08.513922] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.293 [2024-12-06 05:05:08.520184] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:30.293 [2024-12-06 05:05:08.520218] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:30.293 [2024-12-06 05:05:08.520228] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:30.293 [2024-12-06 05:05:08.520235] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.294 [2024-12-06 05:05:08.520328] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:30.294 [2024-12-06 05:05:08.520344] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:30.294 [2024-12-06 05:05:08.520352] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:30.294 [2024-12-06 05:05:08.520364] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.294 [2024-12-06 05:05:08.520407] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:30.294 [2024-12-06 05:05:08.520419] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:30.294 [2024-12-06 05:05:08.520427] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:30.294 [2024-12-06 05:05:08.520439] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.294 [2024-12-06 05:05:08.520456] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:30.294 [2024-12-06 05:05:08.520464] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:30.294 [2024-12-06 05:05:08.520475] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:30.294 [2024-12-06 05:05:08.520482] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.555 [2024-12-06 05:05:08.533124] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:30.555 [2024-12-06 05:05:08.533169] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:30.555 [2024-12-06 05:05:08.533180] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:30.555 [2024-12-06 05:05:08.533188] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.555 [2024-12-06 05:05:08.543128] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:30.555 [2024-12-06 05:05:08.543175] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:30.555 [2024-12-06 05:05:08.543186] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:30.555 [2024-12-06 05:05:08.543195] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.555 [2024-12-06 05:05:08.543241] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:30.556 [2024-12-06 05:05:08.543256] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:30.556 [2024-12-06 05:05:08.543268] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:30.556 [2024-12-06 05:05:08.543276] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.556 [2024-12-06 05:05:08.543311] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:30.556 [2024-12-06 05:05:08.543319] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:30.556 [2024-12-06 05:05:08.543328] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:30.556 [2024-12-06 05:05:08.543338] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:17:30.556 [2024-12-06 05:05:08.543413] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:30.556 [2024-12-06 05:05:08.543424] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:30.556 [2024-12-06 05:05:08.543433] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:30.556 [2024-12-06 05:05:08.543440] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.556 [2024-12-06 05:05:08.543470] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:30.556 [2024-12-06 05:05:08.543483] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:17:30.556 [2024-12-06 05:05:08.543492] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:30.556 [2024-12-06 05:05:08.543499] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.556 [2024-12-06 05:05:08.543550] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:30.556 [2024-12-06 05:05:08.543560] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:30.556 [2024-12-06 05:05:08.543569] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:30.556 [2024-12-06 05:05:08.543577] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.556 [2024-12-06 05:05:08.543626] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:30.556 [2024-12-06 05:05:08.543637] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:30.556 [2024-12-06 05:05:08.543646] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:30.556 [2024-12-06 05:05:08.543657] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.556 [2024-12-06 05:05:08.543830] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 62.637 ms, result 0 00:17:30.556 00:17:30.556 00:17:30.816 05:05:08 ftl.ftl_trim -- ftl/trim.sh@86 -- # cmp --bytes=4194304 /home/vagrant/spdk_repo/spdk/test/ftl/data /dev/zero 00:17:30.816 05:05:08 ftl.ftl_trim -- ftl/trim.sh@87 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/data 00:17:31.387 05:05:09 ftl.ftl_trim -- ftl/trim.sh@90 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --count=1024 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:17:31.387 [2024-12-06 05:05:09.437924] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
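The three ftl.ftl_trim commands just above do the post-shutdown verification and the rewrite: cmp checks the first 4194304 bytes of the readback file against /dev/zero (so the trimmed extent is expected to read back as zeros), md5sum fingerprints the same file, and spdk_dd then writes 1024 blocks of random pattern back through the ftl0 bdev. A minimal sketch of the size bookkeeping, assuming only what this log shows (the 4 KiB block size is inferred from 1024 blocks matching the "Copying: 4096/4096 [kB]" progress later in the log, not taken from trim.sh itself):

    # hypothetical helper, not part of the SPDK repo
    blocks=1024                          # spdk_dd --count=1024
    block_sz=4096                        # 4 KiB per FTL block (inferred, see above)
    bytes=$((blocks * block_sz))         # 4194304, the --bytes argument passed to cmp
    echo "$bytes bytes = $((bytes / 1024)) kB"   # 4096 kB, matching the copy progress

In other words, cmp, md5sum and spdk_dd all operate on the same single 4 MiB test extent.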
00:17:31.387 [2024-12-06 05:05:09.438069] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85783 ] 00:17:31.387 [2024-12-06 05:05:09.574683] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:31.648 [2024-12-06 05:05:09.621230] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:17:31.648 [2024-12-06 05:05:09.731412] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:31.648 [2024-12-06 05:05:09.731484] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:31.910 [2024-12-06 05:05:09.891019] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:31.911 [2024-12-06 05:05:09.891306] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:17:31.911 [2024-12-06 05:05:09.891334] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:17:31.911 [2024-12-06 05:05:09.891350] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.911 [2024-12-06 05:05:09.894012] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:31.911 [2024-12-06 05:05:09.894069] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:31.911 [2024-12-06 05:05:09.894083] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.631 ms 00:17:31.911 [2024-12-06 05:05:09.894091] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.911 [2024-12-06 05:05:09.894196] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:17:31.911 [2024-12-06 05:05:09.894475] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:17:31.911 [2024-12-06 05:05:09.894495] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:31.911 [2024-12-06 05:05:09.894503] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:31.911 [2024-12-06 05:05:09.894517] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.308 ms 00:17:31.911 [2024-12-06 05:05:09.894525] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.911 [2024-12-06 05:05:09.896801] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:17:31.911 [2024-12-06 05:05:09.901525] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:31.911 [2024-12-06 05:05:09.901747] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:17:31.911 [2024-12-06 05:05:09.901901] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.726 ms 00:17:31.911 [2024-12-06 05:05:09.901933] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.911 [2024-12-06 05:05:09.902014] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:31.911 [2024-12-06 05:05:09.902027] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:17:31.911 [2024-12-06 05:05:09.902037] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:17:31.911 [2024-12-06 05:05:09.902045] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.911 [2024-12-06 05:05:09.913066] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:17:31.911 [2024-12-06 05:05:09.913116] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:31.911 [2024-12-06 05:05:09.913128] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.964 ms 00:17:31.911 [2024-12-06 05:05:09.913136] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.911 [2024-12-06 05:05:09.913290] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:31.911 [2024-12-06 05:05:09.913302] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:31.911 [2024-12-06 05:05:09.913313] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.082 ms 00:17:31.911 [2024-12-06 05:05:09.913322] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.911 [2024-12-06 05:05:09.913350] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:31.911 [2024-12-06 05:05:09.913359] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:17:31.911 [2024-12-06 05:05:09.913371] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:17:31.911 [2024-12-06 05:05:09.913379] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.911 [2024-12-06 05:05:09.913402] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:17:31.911 [2024-12-06 05:05:09.916064] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:31.911 [2024-12-06 05:05:09.916107] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:31.911 [2024-12-06 05:05:09.916118] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.668 ms 00:17:31.911 [2024-12-06 05:05:09.916127] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.911 [2024-12-06 05:05:09.916178] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:31.911 [2024-12-06 05:05:09.916191] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:17:31.911 [2024-12-06 05:05:09.916202] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:17:31.911 [2024-12-06 05:05:09.916210] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.911 [2024-12-06 05:05:09.916234] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:17:31.911 [2024-12-06 05:05:09.916262] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:17:31.911 [2024-12-06 05:05:09.916307] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:17:31.911 [2024-12-06 05:05:09.916328] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:17:31.911 [2024-12-06 05:05:09.916443] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:17:31.911 [2024-12-06 05:05:09.916458] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:17:31.911 [2024-12-06 05:05:09.916474] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:17:31.911 [2024-12-06 05:05:09.916489] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:17:31.911 [2024-12-06 05:05:09.916499] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:17:31.911 [2024-12-06 05:05:09.916508] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:17:31.911 [2024-12-06 05:05:09.916518] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:17:31.911 [2024-12-06 05:05:09.916526] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:17:31.911 [2024-12-06 05:05:09.916533] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:17:31.911 [2024-12-06 05:05:09.916542] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:31.911 [2024-12-06 05:05:09.916553] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:17:31.911 [2024-12-06 05:05:09.916563] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.311 ms 00:17:31.911 [2024-12-06 05:05:09.916578] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.911 [2024-12-06 05:05:09.916691] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:31.911 [2024-12-06 05:05:09.916702] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:17:31.911 [2024-12-06 05:05:09.916712] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.092 ms 00:17:31.911 [2024-12-06 05:05:09.916721] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.911 [2024-12-06 05:05:09.916831] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:17:31.911 [2024-12-06 05:05:09.916846] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:17:31.911 [2024-12-06 05:05:09.916859] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:31.911 [2024-12-06 05:05:09.916872] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:31.911 [2024-12-06 05:05:09.916882] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:17:31.911 [2024-12-06 05:05:09.916891] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:17:31.911 [2024-12-06 05:05:09.916900] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:17:31.911 [2024-12-06 05:05:09.916910] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:17:31.911 [2024-12-06 05:05:09.916923] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:17:31.911 [2024-12-06 05:05:09.916932] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:31.911 [2024-12-06 05:05:09.916940] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:17:31.911 [2024-12-06 05:05:09.916950] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:17:31.912 [2024-12-06 05:05:09.916958] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:31.912 [2024-12-06 05:05:09.916971] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:17:31.912 [2024-12-06 05:05:09.916981] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:17:31.912 [2024-12-06 05:05:09.916990] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:31.912 [2024-12-06 05:05:09.916998] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:17:31.912 [2024-12-06 05:05:09.917006] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:17:31.912 [2024-12-06 05:05:09.917014] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:31.912 [2024-12-06 05:05:09.917023] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:17:31.912 [2024-12-06 05:05:09.917032] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:17:31.912 [2024-12-06 05:05:09.917040] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:31.912 [2024-12-06 05:05:09.917048] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:17:31.912 [2024-12-06 05:05:09.917057] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:17:31.912 [2024-12-06 05:05:09.917070] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:31.912 [2024-12-06 05:05:09.917077] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:17:31.912 [2024-12-06 05:05:09.917084] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:17:31.912 [2024-12-06 05:05:09.917092] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:31.912 [2024-12-06 05:05:09.917099] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:17:31.912 [2024-12-06 05:05:09.917107] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:17:31.912 [2024-12-06 05:05:09.917113] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:31.912 [2024-12-06 05:05:09.917121] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:17:31.912 [2024-12-06 05:05:09.917128] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:17:31.912 [2024-12-06 05:05:09.917135] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:31.912 [2024-12-06 05:05:09.917142] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:17:31.912 [2024-12-06 05:05:09.917149] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:17:31.912 [2024-12-06 05:05:09.917158] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:31.912 [2024-12-06 05:05:09.917166] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:17:31.912 [2024-12-06 05:05:09.917173] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:17:31.912 [2024-12-06 05:05:09.917180] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:31.912 [2024-12-06 05:05:09.917189] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:17:31.912 [2024-12-06 05:05:09.917197] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:17:31.912 [2024-12-06 05:05:09.917204] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:31.912 [2024-12-06 05:05:09.917212] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:17:31.912 [2024-12-06 05:05:09.917221] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:17:31.912 [2024-12-06 05:05:09.917232] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:31.912 [2024-12-06 05:05:09.917241] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:31.912 [2024-12-06 05:05:09.917249] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:17:31.912 [2024-12-06 05:05:09.917256] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:17:31.912 [2024-12-06 05:05:09.917263] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:17:31.912 
[2024-12-06 05:05:09.917271] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:17:31.912 [2024-12-06 05:05:09.917278] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:17:31.912 [2024-12-06 05:05:09.917285] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:17:31.912 [2024-12-06 05:05:09.917295] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:17:31.912 [2024-12-06 05:05:09.917306] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:31.912 [2024-12-06 05:05:09.917316] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:17:31.912 [2024-12-06 05:05:09.917326] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:17:31.912 [2024-12-06 05:05:09.917335] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:17:31.912 [2024-12-06 05:05:09.917343] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:17:31.912 [2024-12-06 05:05:09.917351] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:17:31.912 [2024-12-06 05:05:09.917359] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:17:31.912 [2024-12-06 05:05:09.917369] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:17:31.912 [2024-12-06 05:05:09.917383] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:17:31.912 [2024-12-06 05:05:09.917390] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:17:31.912 [2024-12-06 05:05:09.917399] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:17:31.912 [2024-12-06 05:05:09.917406] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:17:31.912 [2024-12-06 05:05:09.917414] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:17:31.912 [2024-12-06 05:05:09.917423] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:17:31.912 [2024-12-06 05:05:09.917434] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:17:31.912 [2024-12-06 05:05:09.917442] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:17:31.912 [2024-12-06 05:05:09.917451] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:31.912 [2024-12-06 05:05:09.917464] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:17:31.912 [2024-12-06 05:05:09.917475] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:17:31.912 [2024-12-06 05:05:09.917484] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:17:31.912 [2024-12-06 05:05:09.917493] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:17:31.912 [2024-12-06 05:05:09.917502] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:31.912 [2024-12-06 05:05:09.917512] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:17:31.912 [2024-12-06 05:05:09.917524] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.741 ms 00:17:31.912 [2024-12-06 05:05:09.917533] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.912 [2024-12-06 05:05:09.944869] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:31.912 [2024-12-06 05:05:09.945134] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:31.912 [2024-12-06 05:05:09.945166] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.260 ms 00:17:31.912 [2024-12-06 05:05:09.945177] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.912 [2024-12-06 05:05:09.945361] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:31.912 [2024-12-06 05:05:09.945376] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:17:31.913 [2024-12-06 05:05:09.945387] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.078 ms 00:17:31.913 [2024-12-06 05:05:09.945401] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.913 [2024-12-06 05:05:09.961935] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:31.913 [2024-12-06 05:05:09.961992] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:31.913 [2024-12-06 05:05:09.962005] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.508 ms 00:17:31.913 [2024-12-06 05:05:09.962014] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.913 [2024-12-06 05:05:09.962104] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:31.913 [2024-12-06 05:05:09.962115] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:31.913 [2024-12-06 05:05:09.962129] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:31.913 [2024-12-06 05:05:09.962141] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.913 [2024-12-06 05:05:09.962886] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:31.913 [2024-12-06 05:05:09.962924] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:31.913 [2024-12-06 05:05:09.962937] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.718 ms 00:17:31.913 [2024-12-06 05:05:09.962946] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.913 [2024-12-06 05:05:09.963123] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:31.913 [2024-12-06 05:05:09.963146] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:31.913 [2024-12-06 05:05:09.963162] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.144 ms 00:17:31.913 [2024-12-06 05:05:09.963174] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.913 [2024-12-06 05:05:09.973907] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:31.913 [2024-12-06 05:05:09.973956] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:31.913 [2024-12-06 05:05:09.973969] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.708 ms 00:17:31.913 [2024-12-06 05:05:09.973979] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.913 [2024-12-06 05:05:09.978985] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:17:31.913 [2024-12-06 05:05:09.979050] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:17:31.913 [2024-12-06 05:05:09.979064] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:31.913 [2024-12-06 05:05:09.979074] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:17:31.913 [2024-12-06 05:05:09.979084] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.944 ms 00:17:31.913 [2024-12-06 05:05:09.979093] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.913 [2024-12-06 05:05:09.995835] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:31.913 [2024-12-06 05:05:09.996069] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:17:31.913 [2024-12-06 05:05:09.996091] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.654 ms 00:17:31.913 [2024-12-06 05:05:09.996101] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.913 [2024-12-06 05:05:09.999218] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:31.913 [2024-12-06 05:05:09.999546] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:17:31.913 [2024-12-06 05:05:09.999572] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.955 ms 00:17:31.913 [2024-12-06 05:05:09.999580] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.913 [2024-12-06 05:05:10.002398] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:31.913 [2024-12-06 05:05:10.002461] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:17:31.913 [2024-12-06 05:05:10.002483] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.757 ms 00:17:31.913 [2024-12-06 05:05:10.002492] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.913 [2024-12-06 05:05:10.002891] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:31.913 [2024-12-06 05:05:10.002920] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:17:31.913 [2024-12-06 05:05:10.002935] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.308 ms 00:17:31.913 [2024-12-06 05:05:10.002944] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.913 [2024-12-06 05:05:10.035656] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:31.913 [2024-12-06 05:05:10.035759] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:17:31.913 [2024-12-06 05:05:10.035774] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
32.682 ms 00:17:31.913 [2024-12-06 05:05:10.035784] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.913 [2024-12-06 05:05:10.045219] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:17:31.913 [2024-12-06 05:05:10.071608] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:31.913 [2024-12-06 05:05:10.071700] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:17:31.913 [2024-12-06 05:05:10.071719] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.703 ms 00:17:31.913 [2024-12-06 05:05:10.071729] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.913 [2024-12-06 05:05:10.071885] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:31.913 [2024-12-06 05:05:10.071899] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:17:31.913 [2024-12-06 05:05:10.071919] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.024 ms 00:17:31.913 [2024-12-06 05:05:10.071939] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.913 [2024-12-06 05:05:10.072031] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:31.913 [2024-12-06 05:05:10.072047] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:17:31.913 [2024-12-06 05:05:10.072057] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.062 ms 00:17:31.913 [2024-12-06 05:05:10.072066] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.913 [2024-12-06 05:05:10.072099] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:31.913 [2024-12-06 05:05:10.072111] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:17:31.913 [2024-12-06 05:05:10.072124] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:17:31.913 [2024-12-06 05:05:10.072133] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.913 [2024-12-06 05:05:10.072178] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:17:31.913 [2024-12-06 05:05:10.072191] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:31.913 [2024-12-06 05:05:10.072203] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:17:31.913 [2024-12-06 05:05:10.072219] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:17:31.913 [2024-12-06 05:05:10.072228] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.913 [2024-12-06 05:05:10.079361] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:31.913 [2024-12-06 05:05:10.079634] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:17:31.913 [2024-12-06 05:05:10.079702] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.102 ms 00:17:31.913 [2024-12-06 05:05:10.079712] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.913 [2024-12-06 05:05:10.079821] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:31.913 [2024-12-06 05:05:10.079839] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:17:31.913 [2024-12-06 05:05:10.079851] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:17:31.913 [2024-12-06 05:05:10.079861] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.913 
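Every management step in the startup sequence above is traced as the same quadruplet of NOTICE records from mngt/ftl_mngt.c (Action or Rollback, then name:, duration:, status:), and finish_msg then reports the total for the whole process. The per-step durations come in under that total because time spent between steps is not attributed to any step; here they sum to roughly 181 ms against the 189.701 ms 'FTL startup' total reported just below. A throwaway shell sketch for tallying them, not an SPDK tool; startup.log stands for a hypothetical capture of this console output with one record per line:

    # sum the per-step "duration: X ms" values emitted by trace_step
    awk '/trace_step/ && /duration:/ { sum += $(NF-1) }
         END { printf "traced steps: %.3f ms total\n", sum }' startup.log

    # list the traced step names in execution order
    grep -o 'name: .*' startup.log

The finish_msg lines ("Management process finished, name ..., duration = ..., result ...") are not double-counted: they use "duration =" rather than "duration:" and do not contain trace_step, so the awk filter skips them.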
[2024-12-06 05:05:10.081170] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:31.913 [2024-12-06 05:05:10.082776] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 189.701 ms, result 0 00:17:31.913 [2024-12-06 05:05:10.084025] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:31.913 [2024-12-06 05:05:10.091643] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:32.487  [2024-12-06T05:05:10.719Z] Copying: 4096/4096 [kB] (average 10138 kBps) [2024-12-06 05:05:10.497777] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:32.487 [2024-12-06 05:05:10.499093] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:32.487 [2024-12-06 05:05:10.499150] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:17:32.487 [2024-12-06 05:05:10.499170] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:17:32.487 [2024-12-06 05:05:10.499180] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.487 [2024-12-06 05:05:10.499204] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:17:32.487 [2024-12-06 05:05:10.500191] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:32.487 [2024-12-06 05:05:10.500229] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:17:32.487 [2024-12-06 05:05:10.500242] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.970 ms 00:17:32.487 [2024-12-06 05:05:10.500252] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.487 [2024-12-06 05:05:10.503789] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:32.487 [2024-12-06 05:05:10.503840] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:17:32.487 [2024-12-06 05:05:10.503852] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.506 ms 00:17:32.487 [2024-12-06 05:05:10.503860] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.487 [2024-12-06 05:05:10.508315] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:32.487 [2024-12-06 05:05:10.508372] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:17:32.487 [2024-12-06 05:05:10.508385] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.430 ms 00:17:32.488 [2024-12-06 05:05:10.508400] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.488 [2024-12-06 05:05:10.515410] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:32.488 [2024-12-06 05:05:10.515458] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:17:32.488 [2024-12-06 05:05:10.515470] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.973 ms 00:17:32.488 [2024-12-06 05:05:10.515479] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.488 [2024-12-06 05:05:10.518764] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:32.488 [2024-12-06 05:05:10.518815] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:17:32.488 [2024-12-06 05:05:10.518828] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*:
[FTL][ftl0] duration: 3.219 ms 00:17:32.488 [2024-12-06 05:05:10.518837] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.488 [2024-12-06 05:05:10.525215] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:32.488 [2024-12-06 05:05:10.525271] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:17:32.488 [2024-12-06 05:05:10.525293] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.323 ms 00:17:32.488 [2024-12-06 05:05:10.525304] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.488 [2024-12-06 05:05:10.525457] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:32.488 [2024-12-06 05:05:10.525471] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:17:32.488 [2024-12-06 05:05:10.525483] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.094 ms 00:17:32.488 [2024-12-06 05:05:10.525493] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.488 [2024-12-06 05:05:10.529363] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:32.488 [2024-12-06 05:05:10.529417] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:17:32.488 [2024-12-06 05:05:10.529427] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.837 ms 00:17:32.488 [2024-12-06 05:05:10.529435] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.488 [2024-12-06 05:05:10.532481] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:32.488 [2024-12-06 05:05:10.532785] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:17:32.488 [2024-12-06 05:05:10.532807] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.998 ms 00:17:32.488 [2024-12-06 05:05:10.532815] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.488 [2024-12-06 05:05:10.535440] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:32.488 [2024-12-06 05:05:10.535493] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:17:32.488 [2024-12-06 05:05:10.535504] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.395 ms 00:17:32.488 [2024-12-06 05:05:10.535512] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.488 [2024-12-06 05:05:10.538065] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:32.488 [2024-12-06 05:05:10.538117] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:17:32.488 [2024-12-06 05:05:10.538128] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.459 ms 00:17:32.488 [2024-12-06 05:05:10.538135] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.488 [2024-12-06 05:05:10.538183] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:17:32.488 [2024-12-06 05:05:10.538210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:17:32.488 [2024-12-06 05:05:10.538221] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:17:32.488 [2024-12-06 05:05:10.538229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:17:32.488 [2024-12-06 05:05:10.538238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:17:32.488 
[2024-12-06 05:05:10.538246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:17:32.488 [2024-12-06 05:05:10.538254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:17:32.488 [2024-12-06 05:05:10.538263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:17:32.488 [2024-12-06 05:05:10.538272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:17:32.488 [2024-12-06 05:05:10.538279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:17:32.488 [2024-12-06 05:05:10.538287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:17:32.488 [2024-12-06 05:05:10.538296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:17:32.488 [2024-12-06 05:05:10.538304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:17:32.488 [2024-12-06 05:05:10.538312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:17:32.488 [2024-12-06 05:05:10.538320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:17:32.488 [2024-12-06 05:05:10.538328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:17:32.488 [2024-12-06 05:05:10.538335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:17:32.488 [2024-12-06 05:05:10.538343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:17:32.488 [2024-12-06 05:05:10.538351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:17:32.488 [2024-12-06 05:05:10.538358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:17:32.488 [2024-12-06 05:05:10.538366] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:17:32.488 [2024-12-06 05:05:10.538374] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:17:32.488 [2024-12-06 05:05:10.538381] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:17:32.488 [2024-12-06 05:05:10.538389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:17:32.488 [2024-12-06 05:05:10.538396] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:17:32.488 [2024-12-06 05:05:10.538404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:17:32.488 [2024-12-06 05:05:10.538411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:17:32.488 [2024-12-06 05:05:10.538421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:17:32.488 [2024-12-06 05:05:10.538430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:17:32.488 [2024-12-06 05:05:10.538438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: 
free 00:17:32.488 [2024-12-06 05:05:10.538445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:17:32.488 [2024-12-06 05:05:10.538455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:17:32.488 [2024-12-06 05:05:10.538463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:17:32.488 [2024-12-06 05:05:10.538470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:17:32.488 [2024-12-06 05:05:10.538479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:17:32.488 [2024-12-06 05:05:10.538488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:17:32.488 [2024-12-06 05:05:10.538505] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:17:32.488 [2024-12-06 05:05:10.538513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:17:32.488 [2024-12-06 05:05:10.538521] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:17:32.488 [2024-12-06 05:05:10.538530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:17:32.488 [2024-12-06 05:05:10.538538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:17:32.488 [2024-12-06 05:05:10.538546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:17:32.489 [2024-12-06 05:05:10.538553] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:17:32.489 [2024-12-06 05:05:10.538561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:17:32.489 [2024-12-06 05:05:10.538570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:17:32.489 [2024-12-06 05:05:10.538578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:17:32.489 [2024-12-06 05:05:10.538586] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:17:32.489 [2024-12-06 05:05:10.538593] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:17:32.489 [2024-12-06 05:05:10.538601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:17:32.489 [2024-12-06 05:05:10.538609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:17:32.489 [2024-12-06 05:05:10.538618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:17:32.489 [2024-12-06 05:05:10.538626] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:17:32.489 [2024-12-06 05:05:10.538633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:17:32.489 [2024-12-06 05:05:10.538642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:17:32.489 [2024-12-06 05:05:10.538649] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 
261120 wr_cnt: 0 state: free 00:17:32.489 [2024-12-06 05:05:10.538657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:17:32.489 [2024-12-06 05:05:10.538690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:17:32.489 [2024-12-06 05:05:10.538698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:17:32.489 [2024-12-06 05:05:10.538708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:17:32.489 [2024-12-06 05:05:10.538716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:17:32.489 [2024-12-06 05:05:10.538724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:17:32.489 [2024-12-06 05:05:10.538732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:17:32.489 [2024-12-06 05:05:10.538740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:17:32.489 [2024-12-06 05:05:10.538752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:17:32.489 [2024-12-06 05:05:10.538761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:17:32.489 [2024-12-06 05:05:10.538769] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:17:32.489 [2024-12-06 05:05:10.538778] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:17:32.489 [2024-12-06 05:05:10.538785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:17:32.489 [2024-12-06 05:05:10.538795] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:17:32.489 [2024-12-06 05:05:10.538803] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:17:32.489 [2024-12-06 05:05:10.538811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:17:32.489 [2024-12-06 05:05:10.538820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:17:32.489 [2024-12-06 05:05:10.538829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:17:32.489 [2024-12-06 05:05:10.538837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:17:32.489 [2024-12-06 05:05:10.538847] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:17:32.489 [2024-12-06 05:05:10.538854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:17:32.489 [2024-12-06 05:05:10.538862] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:17:32.489 [2024-12-06 05:05:10.538869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:17:32.489 [2024-12-06 05:05:10.538877] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:17:32.489 [2024-12-06 05:05:10.538885] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:17:32.489 [2024-12-06 05:05:10.538893] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:17:32.489 [2024-12-06 05:05:10.538901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:17:32.489 [2024-12-06 05:05:10.538909] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:17:32.489 [2024-12-06 05:05:10.538917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:17:32.489 [2024-12-06 05:05:10.538928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:17:32.489 [2024-12-06 05:05:10.538938] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:17:32.489 [2024-12-06 05:05:10.538945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:17:32.489 [2024-12-06 05:05:10.538953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:17:32.489 [2024-12-06 05:05:10.538960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:17:32.489 [2024-12-06 05:05:10.538970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:17:32.489 [2024-12-06 05:05:10.538978] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:17:32.489 [2024-12-06 05:05:10.538985] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:17:32.489 [2024-12-06 05:05:10.538992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:17:32.489 [2024-12-06 05:05:10.539000] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:17:32.489 [2024-12-06 05:05:10.539009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:17:32.489 [2024-12-06 05:05:10.539018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:17:32.489 [2024-12-06 05:05:10.539035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:17:32.489 [2024-12-06 05:05:10.539043] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:17:32.489 [2024-12-06 05:05:10.539052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:17:32.489 [2024-12-06 05:05:10.539060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:17:32.489 [2024-12-06 05:05:10.539068] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:17:32.489 [2024-12-06 05:05:10.539084] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:17:32.489 [2024-12-06 05:05:10.539094] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 73ef595a-5226-44c7-8985-04a18b3bbf5e 00:17:32.489 [2024-12-06 05:05:10.539102] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:17:32.489 [2024-12-06 05:05:10.539109] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 
00:17:32.489 [2024-12-06 05:05:10.539117] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:17:32.489 [2024-12-06 05:05:10.539126] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:17:32.489 [2024-12-06 05:05:10.539135] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:17:32.489 [2024-12-06 05:05:10.539144] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:17:32.489 [2024-12-06 05:05:10.539160] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:17:32.489 [2024-12-06 05:05:10.539168] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:17:32.489 [2024-12-06 05:05:10.539175] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:17:32.489 [2024-12-06 05:05:10.539182] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:32.490 [2024-12-06 05:05:10.539190] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:17:32.490 [2024-12-06 05:05:10.539205] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.001 ms 00:17:32.490 [2024-12-06 05:05:10.539215] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.490 [2024-12-06 05:05:10.542329] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:32.490 [2024-12-06 05:05:10.542366] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:17:32.490 [2024-12-06 05:05:10.542377] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.092 ms 00:17:32.490 [2024-12-06 05:05:10.542385] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.490 [2024-12-06 05:05:10.542545] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:32.490 [2024-12-06 05:05:10.542565] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:17:32.490 [2024-12-06 05:05:10.542575] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.134 ms 00:17:32.490 [2024-12-06 05:05:10.542584] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.490 [2024-12-06 05:05:10.552699] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:32.490 [2024-12-06 05:05:10.552748] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:32.490 [2024-12-06 05:05:10.552759] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:32.490 [2024-12-06 05:05:10.552768] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.490 [2024-12-06 05:05:10.552854] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:32.490 [2024-12-06 05:05:10.552872] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:32.490 [2024-12-06 05:05:10.552881] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:32.490 [2024-12-06 05:05:10.552888] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.490 [2024-12-06 05:05:10.552941] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:32.490 [2024-12-06 05:05:10.552952] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:32.490 [2024-12-06 05:05:10.552963] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:32.490 [2024-12-06 05:05:10.552971] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.490 [2024-12-06 05:05:10.552991] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:32.490 [2024-12-06 05:05:10.552999] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:32.490 [2024-12-06 05:05:10.553012] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:32.490 [2024-12-06 05:05:10.553027] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.490 [2024-12-06 05:05:10.574011] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:32.490 [2024-12-06 05:05:10.574275] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:32.490 [2024-12-06 05:05:10.574305] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:32.490 [2024-12-06 05:05:10.574315] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.490 [2024-12-06 05:05:10.590589] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:32.490 [2024-12-06 05:05:10.590661] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:32.490 [2024-12-06 05:05:10.590695] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:32.490 [2024-12-06 05:05:10.590704] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.490 [2024-12-06 05:05:10.590770] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:32.490 [2024-12-06 05:05:10.590781] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:32.490 [2024-12-06 05:05:10.590791] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:32.490 [2024-12-06 05:05:10.590801] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.490 [2024-12-06 05:05:10.590838] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:32.490 [2024-12-06 05:05:10.590849] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:32.490 [2024-12-06 05:05:10.590858] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:32.490 [2024-12-06 05:05:10.590872] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.490 [2024-12-06 05:05:10.590957] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:32.490 [2024-12-06 05:05:10.590971] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:32.490 [2024-12-06 05:05:10.590981] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:32.490 [2024-12-06 05:05:10.590991] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.490 [2024-12-06 05:05:10.591028] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:32.490 [2024-12-06 05:05:10.591041] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:17:32.490 [2024-12-06 05:05:10.591052] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:32.490 [2024-12-06 05:05:10.591062] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.490 [2024-12-06 05:05:10.591134] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:32.490 [2024-12-06 05:05:10.591147] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:32.490 [2024-12-06 05:05:10.591164] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:32.490 [2024-12-06 05:05:10.591176] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:17:32.490 [2024-12-06 05:05:10.591238] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:32.490 [2024-12-06 05:05:10.591253] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:32.490 [2024-12-06 05:05:10.591264] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:32.490 [2024-12-06 05:05:10.591277] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.490 [2024-12-06 05:05:10.591468] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 92.336 ms, result 0 00:17:32.751 00:17:32.751 00:17:32.751 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:32.751 05:05:10 ftl.ftl_trim -- ftl/trim.sh@93 -- # svcpid=85805 00:17:32.751 05:05:10 ftl.ftl_trim -- ftl/trim.sh@94 -- # waitforlisten 85805 00:17:32.751 05:05:10 ftl.ftl_trim -- common/autotest_common.sh@831 -- # '[' -z 85805 ']' 00:17:32.751 05:05:10 ftl.ftl_trim -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:32.751 05:05:10 ftl.ftl_trim -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:32.751 05:05:10 ftl.ftl_trim -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:32.751 05:05:10 ftl.ftl_trim -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:32.751 05:05:10 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:17:32.751 05:05:10 ftl.ftl_trim -- ftl/trim.sh@92 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init 00:17:33.011 [2024-12-06 05:05:10.992436] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:17:33.012 [2024-12-06 05:05:10.992983] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85805 ] 00:17:33.012 [2024-12-06 05:05:11.131450] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:33.012 [2024-12-06 05:05:11.204873] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:17:33.954 05:05:11 ftl.ftl_trim -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:33.954 05:05:11 ftl.ftl_trim -- common/autotest_common.sh@864 -- # return 0 00:17:33.954 05:05:11 ftl.ftl_trim -- ftl/trim.sh@96 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:17:33.954 [2024-12-06 05:05:12.058515] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:33.954 [2024-12-06 05:05:12.058617] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:34.218 [2024-12-06 05:05:12.238520] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:34.218 [2024-12-06 05:05:12.238588] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:17:34.218 [2024-12-06 05:05:12.238606] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:17:34.218 [2024-12-06 05:05:12.238618] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:34.218 [2024-12-06 05:05:12.241335] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:34.218 [2024-12-06 05:05:12.241640] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:34.218 [2024-12-06 05:05:12.241701] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.694 ms 00:17:34.218 [2024-12-06 05:05:12.241713] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:34.218 [2024-12-06 05:05:12.242314] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:17:34.218 [2024-12-06 05:05:12.242874] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:17:34.218 [2024-12-06 05:05:12.242935] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:34.218 [2024-12-06 05:05:12.242957] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:34.218 [2024-12-06 05:05:12.242978] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.642 ms 00:17:34.218 [2024-12-06 05:05:12.242990] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:34.218 [2024-12-06 05:05:12.245377] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:17:34.218 [2024-12-06 05:05:12.250525] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:34.218 [2024-12-06 05:05:12.250583] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:17:34.218 [2024-12-06 05:05:12.250599] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.143 ms 00:17:34.218 [2024-12-06 05:05:12.250607] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:34.218 [2024-12-06 05:05:12.250735] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:34.218 [2024-12-06 05:05:12.250752] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:17:34.218 [2024-12-06 05:05:12.250768] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:17:34.218 [2024-12-06 05:05:12.250778] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:34.218 [2024-12-06 05:05:12.262538] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:34.218 [2024-12-06 05:05:12.262589] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:34.218 [2024-12-06 05:05:12.262605] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.704 ms 00:17:34.218 [2024-12-06 05:05:12.262613] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:34.218 [2024-12-06 05:05:12.262796] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:34.218 [2024-12-06 05:05:12.262814] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:34.218 [2024-12-06 05:05:12.262828] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.085 ms 00:17:34.218 [2024-12-06 05:05:12.262836] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:34.218 [2024-12-06 05:05:12.262876] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:34.218 [2024-12-06 05:05:12.262886] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:17:34.218 [2024-12-06 05:05:12.262902] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:17:34.218 [2024-12-06 05:05:12.262913] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:34.218 [2024-12-06 05:05:12.262941] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:17:34.218 [2024-12-06 05:05:12.265705] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:17:34.218 [2024-12-06 05:05:12.265754] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:34.218 [2024-12-06 05:05:12.265765] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.773 ms 00:17:34.218 [2024-12-06 05:05:12.265776] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:34.218 [2024-12-06 05:05:12.265825] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:34.218 [2024-12-06 05:05:12.265836] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:17:34.218 [2024-12-06 05:05:12.265846] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:17:34.218 [2024-12-06 05:05:12.265856] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:34.218 [2024-12-06 05:05:12.265879] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:17:34.218 [2024-12-06 05:05:12.265912] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:17:34.218 [2024-12-06 05:05:12.265952] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:17:34.218 [2024-12-06 05:05:12.265979] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:17:34.218 [2024-12-06 05:05:12.266092] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:17:34.218 [2024-12-06 05:05:12.266109] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:17:34.218 [2024-12-06 05:05:12.266125] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:17:34.218 [2024-12-06 05:05:12.266142] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:17:34.218 [2024-12-06 05:05:12.266153] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:17:34.218 [2024-12-06 05:05:12.266171] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:17:34.218 [2024-12-06 05:05:12.266179] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:17:34.218 [2024-12-06 05:05:12.266189] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:17:34.218 [2024-12-06 05:05:12.266197] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:17:34.218 [2024-12-06 05:05:12.266209] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:34.218 [2024-12-06 05:05:12.266222] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:17:34.218 [2024-12-06 05:05:12.266233] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.330 ms 00:17:34.218 [2024-12-06 05:05:12.266241] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:34.218 [2024-12-06 05:05:12.266333] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:34.218 [2024-12-06 05:05:12.266346] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:17:34.218 [2024-12-06 05:05:12.266359] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.070 ms 00:17:34.218 [2024-12-06 05:05:12.266368] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:34.218 
[2024-12-06 05:05:12.266475] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:17:34.218 [2024-12-06 05:05:12.266487] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:17:34.218 [2024-12-06 05:05:12.266502] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:34.218 [2024-12-06 05:05:12.266513] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:34.218 [2024-12-06 05:05:12.266530] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:17:34.218 [2024-12-06 05:05:12.266537] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:17:34.218 [2024-12-06 05:05:12.266548] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:17:34.218 [2024-12-06 05:05:12.266559] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:17:34.218 [2024-12-06 05:05:12.266572] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:17:34.218 [2024-12-06 05:05:12.266581] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:34.218 [2024-12-06 05:05:12.266591] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:17:34.218 [2024-12-06 05:05:12.266600] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:17:34.218 [2024-12-06 05:05:12.266611] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:34.218 [2024-12-06 05:05:12.266621] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:17:34.218 [2024-12-06 05:05:12.266632] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:17:34.218 [2024-12-06 05:05:12.266642] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:34.218 [2024-12-06 05:05:12.266653] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:17:34.218 [2024-12-06 05:05:12.266662] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:17:34.218 [2024-12-06 05:05:12.266695] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:34.218 [2024-12-06 05:05:12.266703] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:17:34.218 [2024-12-06 05:05:12.266715] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:17:34.218 [2024-12-06 05:05:12.266723] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:34.218 [2024-12-06 05:05:12.266732] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:17:34.218 [2024-12-06 05:05:12.266740] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:17:34.218 [2024-12-06 05:05:12.266750] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:34.218 [2024-12-06 05:05:12.266757] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:17:34.218 [2024-12-06 05:05:12.266767] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:17:34.218 [2024-12-06 05:05:12.266774] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:34.218 [2024-12-06 05:05:12.266785] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:17:34.218 [2024-12-06 05:05:12.266792] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:17:34.218 [2024-12-06 05:05:12.266803] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:34.219 [2024-12-06 05:05:12.266811] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region 
trim_md 00:17:34.219 [2024-12-06 05:05:12.266822] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:17:34.219 [2024-12-06 05:05:12.266832] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:34.219 [2024-12-06 05:05:12.266842] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:17:34.219 [2024-12-06 05:05:12.266850] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:17:34.219 [2024-12-06 05:05:12.266862] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:34.219 [2024-12-06 05:05:12.266869] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:17:34.219 [2024-12-06 05:05:12.266881] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:17:34.219 [2024-12-06 05:05:12.266889] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:34.219 [2024-12-06 05:05:12.266898] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:17:34.219 [2024-12-06 05:05:12.266904] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:17:34.219 [2024-12-06 05:05:12.266914] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:34.219 [2024-12-06 05:05:12.266920] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:17:34.219 [2024-12-06 05:05:12.266931] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:17:34.219 [2024-12-06 05:05:12.266945] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:34.219 [2024-12-06 05:05:12.266955] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:34.219 [2024-12-06 05:05:12.266965] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:17:34.219 [2024-12-06 05:05:12.266974] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:17:34.219 [2024-12-06 05:05:12.266981] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:17:34.219 [2024-12-06 05:05:12.266990] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:17:34.219 [2024-12-06 05:05:12.266997] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:17:34.219 [2024-12-06 05:05:12.267008] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:17:34.219 [2024-12-06 05:05:12.267018] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:17:34.219 [2024-12-06 05:05:12.267032] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:34.219 [2024-12-06 05:05:12.267041] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:17:34.219 [2024-12-06 05:05:12.267052] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:17:34.219 [2024-12-06 05:05:12.267060] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:17:34.219 [2024-12-06 05:05:12.267069] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:17:34.219 [2024-12-06 05:05:12.267077] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 
blk_offs:0x6320 blk_sz:0x800 00:17:34.219 [2024-12-06 05:05:12.267087] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:17:34.219 [2024-12-06 05:05:12.267095] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:17:34.219 [2024-12-06 05:05:12.267104] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:17:34.219 [2024-12-06 05:05:12.267112] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:17:34.219 [2024-12-06 05:05:12.267121] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:17:34.219 [2024-12-06 05:05:12.267128] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:17:34.219 [2024-12-06 05:05:12.267137] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:17:34.219 [2024-12-06 05:05:12.267144] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:17:34.219 [2024-12-06 05:05:12.267157] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:17:34.219 [2024-12-06 05:05:12.267171] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:17:34.219 [2024-12-06 05:05:12.267181] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:34.219 [2024-12-06 05:05:12.267189] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:17:34.219 [2024-12-06 05:05:12.267200] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:17:34.219 [2024-12-06 05:05:12.267207] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:17:34.219 [2024-12-06 05:05:12.267217] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:17:34.219 [2024-12-06 05:05:12.267232] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:34.219 [2024-12-06 05:05:12.267245] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:17:34.219 [2024-12-06 05:05:12.267260] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.829 ms 00:17:34.219 [2024-12-06 05:05:12.267274] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:34.219 [2024-12-06 05:05:12.288224] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:34.219 [2024-12-06 05:05:12.288515] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:34.219 [2024-12-06 05:05:12.288537] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.860 ms 00:17:34.219 [2024-12-06 05:05:12.288549] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:34.219 
[2024-12-06 05:05:12.288719] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:34.219 [2024-12-06 05:05:12.288739] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:17:34.219 [2024-12-06 05:05:12.288752] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.096 ms 00:17:34.219 [2024-12-06 05:05:12.288763] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:34.219 [2024-12-06 05:05:12.305794] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:34.219 [2024-12-06 05:05:12.305846] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:34.219 [2024-12-06 05:05:12.305859] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.004 ms 00:17:34.219 [2024-12-06 05:05:12.305871] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:34.219 [2024-12-06 05:05:12.305951] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:34.219 [2024-12-06 05:05:12.305967] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:34.219 [2024-12-06 05:05:12.305976] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:17:34.219 [2024-12-06 05:05:12.305991] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:34.219 [2024-12-06 05:05:12.306732] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:34.219 [2024-12-06 05:05:12.306764] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:34.219 [2024-12-06 05:05:12.306777] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.715 ms 00:17:34.219 [2024-12-06 05:05:12.306816] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:34.219 [2024-12-06 05:05:12.306988] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:34.219 [2024-12-06 05:05:12.307006] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:34.219 [2024-12-06 05:05:12.307019] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.139 ms 00:17:34.219 [2024-12-06 05:05:12.307035] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:34.219 [2024-12-06 05:05:12.334703] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:34.219 [2024-12-06 05:05:12.334774] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:34.219 [2024-12-06 05:05:12.334791] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.638 ms 00:17:34.219 [2024-12-06 05:05:12.334804] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:34.219 [2024-12-06 05:05:12.339805] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:17:34.219 [2024-12-06 05:05:12.339866] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:17:34.219 [2024-12-06 05:05:12.339881] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:34.219 [2024-12-06 05:05:12.339894] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:17:34.219 [2024-12-06 05:05:12.339904] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.884 ms 00:17:34.219 [2024-12-06 05:05:12.339915] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:34.219 [2024-12-06 05:05:12.357255] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:17:34.219 [2024-12-06 05:05:12.357319] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:17:34.219 [2024-12-06 05:05:12.357334] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.248 ms 00:17:34.219 [2024-12-06 05:05:12.357348] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:34.219 [2024-12-06 05:05:12.360972] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:34.219 [2024-12-06 05:05:12.361158] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:17:34.219 [2024-12-06 05:05:12.361224] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.523 ms 00:17:34.219 [2024-12-06 05:05:12.361252] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:34.219 [2024-12-06 05:05:12.364203] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:34.219 [2024-12-06 05:05:12.364381] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:17:34.219 [2024-12-06 05:05:12.364400] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.892 ms 00:17:34.219 [2024-12-06 05:05:12.364411] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:34.219 [2024-12-06 05:05:12.364804] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:34.219 [2024-12-06 05:05:12.364834] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:17:34.219 [2024-12-06 05:05:12.364844] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.313 ms 00:17:34.219 [2024-12-06 05:05:12.364855] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:34.219 [2024-12-06 05:05:12.396894] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:34.219 [2024-12-06 05:05:12.396956] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:17:34.219 [2024-12-06 05:05:12.396970] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.011 ms 00:17:34.220 [2024-12-06 05:05:12.396984] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:34.220 [2024-12-06 05:05:12.405648] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:17:34.220 [2024-12-06 05:05:12.430489] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:34.220 [2024-12-06 05:05:12.430550] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:17:34.220 [2024-12-06 05:05:12.430566] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.399 ms 00:17:34.220 [2024-12-06 05:05:12.430580] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:34.220 [2024-12-06 05:05:12.430723] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:34.220 [2024-12-06 05:05:12.430748] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:17:34.220 [2024-12-06 05:05:12.430761] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:17:34.220 [2024-12-06 05:05:12.430774] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:34.220 [2024-12-06 05:05:12.430860] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:34.220 [2024-12-06 05:05:12.430870] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:17:34.220 [2024-12-06 05:05:12.430887] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 0.044 ms 00:17:34.220 [2024-12-06 05:05:12.430895] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:34.220 [2024-12-06 05:05:12.430936] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:34.220 [2024-12-06 05:05:12.430947] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:17:34.220 [2024-12-06 05:05:12.430961] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:17:34.220 [2024-12-06 05:05:12.430969] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:34.220 [2024-12-06 05:05:12.431019] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:17:34.220 [2024-12-06 05:05:12.431032] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:34.220 [2024-12-06 05:05:12.431052] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:17:34.220 [2024-12-06 05:05:12.431061] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:17:34.220 [2024-12-06 05:05:12.431072] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:34.220 [2024-12-06 05:05:12.438582] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:34.220 [2024-12-06 05:05:12.438873] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:17:34.220 [2024-12-06 05:05:12.438899] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.487 ms 00:17:34.220 [2024-12-06 05:05:12.438911] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:34.220 [2024-12-06 05:05:12.439016] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:34.220 [2024-12-06 05:05:12.439031] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:17:34.220 [2024-12-06 05:05:12.439042] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:17:34.220 [2024-12-06 05:05:12.439054] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:34.220 [2024-12-06 05:05:12.440391] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:34.220 [2024-12-06 05:05:12.441984] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 201.472 ms, result 0 00:17:34.220 [2024-12-06 05:05:12.444591] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:34.482 Some configs were skipped because the RPC state that can call them passed over. 
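For readers following the trace, the unmap traffic issued next (trim.sh@99 and @100) can be reproduced against a standalone target. The lines below are a minimal sketch, not the autotest's trim.sh itself: they reuse the repo path, the -L ftl_init flag, the /var/tmp/spdk.sock socket, the ftl0 bdev name, and the exact bdev_ftl_unmap arguments visible in this log, while ftl.json is a hypothetical file standing in for whatever saved configuration load_config replays. Note the second unmap's start LBA, 23591936, is 23592960 - 1024, i.e. the final 1024 blocks of the L2P range reported in the layout dump above.

#!/usr/bin/env bash
SPDK=/home/vagrant/spdk_repo/spdk
RPC="$SPDK/scripts/rpc.py"

# Launch the target with FTL init tracing enabled, as in the trace above.
"$SPDK/build/bin/spdk_tgt" -L ftl_init &
svcpid=$!

# Wait for the RPC UNIX-domain socket to appear instead of sleeping blindly.
while [ ! -S /var/tmp/spdk.sock ]; do sleep 0.1; done

# Replay the saved bdev/FTL configuration (ftl.json is a hypothetical name).
"$RPC" load_config < ftl.json

# Trim 1024 blocks at the start and at the end of the 23592960-entry L2P range.
"$RPC" bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024
"$RPC" bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024

# Shut down; the target persists FTL state and emits the band/stats dump
# of the kind seen in this log.
kill "$svcpid"
wait "$svcpid"

Each successful unmap prints a bare "true", which is what appears between the management-trace blocks below. The "WAF: inf" in the statistics dumps is also expected in this run: user writes are 0 while the 960 recorded writes are all metadata, so write amplification is unbounded.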
00:17:34.482 05:05:12 ftl.ftl_trim -- ftl/trim.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024 00:17:34.482 [2024-12-06 05:05:12.677953] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:34.482 [2024-12-06 05:05:12.678156] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:17:34.482 [2024-12-06 05:05:12.678410] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.133 ms 00:17:34.482 [2024-12-06 05:05:12.678455] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:34.482 [2024-12-06 05:05:12.678527] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 3.714 ms, result 0 00:17:34.482 true 00:17:34.482 05:05:12 ftl.ftl_trim -- ftl/trim.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024 00:17:34.743 [2024-12-06 05:05:12.893983] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:34.743 [2024-12-06 05:05:12.894171] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:17:34.743 [2024-12-06 05:05:12.894234] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.933 ms 00:17:34.743 [2024-12-06 05:05:12.894260] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:34.743 [2024-12-06 05:05:12.894316] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 3.264 ms, result 0 00:17:34.743 true 00:17:34.743 05:05:12 ftl.ftl_trim -- ftl/trim.sh@102 -- # killprocess 85805 00:17:34.743 05:05:12 ftl.ftl_trim -- common/autotest_common.sh@950 -- # '[' -z 85805 ']' 00:17:34.743 05:05:12 ftl.ftl_trim -- common/autotest_common.sh@954 -- # kill -0 85805 00:17:34.743 05:05:12 ftl.ftl_trim -- common/autotest_common.sh@955 -- # uname 00:17:34.743 05:05:12 ftl.ftl_trim -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:34.743 05:05:12 ftl.ftl_trim -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 85805 00:17:34.743 killing process with pid 85805 00:17:34.743 05:05:12 ftl.ftl_trim -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:34.743 05:05:12 ftl.ftl_trim -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:34.743 05:05:12 ftl.ftl_trim -- common/autotest_common.sh@968 -- # echo 'killing process with pid 85805' 00:17:34.743 05:05:12 ftl.ftl_trim -- common/autotest_common.sh@969 -- # kill 85805 00:17:34.743 05:05:12 ftl.ftl_trim -- common/autotest_common.sh@974 -- # wait 85805 00:17:35.006 [2024-12-06 05:05:13.153195] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:35.006 [2024-12-06 05:05:13.153280] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:17:35.006 [2024-12-06 05:05:13.153299] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:17:35.006 [2024-12-06 05:05:13.153308] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:35.006 [2024-12-06 05:05:13.153339] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:17:35.006 [2024-12-06 05:05:13.154339] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:35.006 [2024-12-06 05:05:13.154389] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:17:35.006 [2024-12-06 05:05:13.154416] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.982 ms 00:17:35.006 [2024-12-06 05:05:13.154428] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:35.006 [2024-12-06 05:05:13.154766] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:35.006 [2024-12-06 05:05:13.154796] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:17:35.006 [2024-12-06 05:05:13.154807] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.296 ms 00:17:35.006 [2024-12-06 05:05:13.154818] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:35.006 [2024-12-06 05:05:13.159473] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:35.006 [2024-12-06 05:05:13.159527] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:17:35.006 [2024-12-06 05:05:13.159540] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.630 ms 00:17:35.006 [2024-12-06 05:05:13.159551] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:35.006 [2024-12-06 05:05:13.166630] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:35.006 [2024-12-06 05:05:13.166715] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:17:35.006 [2024-12-06 05:05:13.166728] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.032 ms 00:17:35.006 [2024-12-06 05:05:13.166741] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:35.006 [2024-12-06 05:05:13.169840] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:35.006 [2024-12-06 05:05:13.170134] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:17:35.006 [2024-12-06 05:05:13.170156] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.000 ms 00:17:35.006 [2024-12-06 05:05:13.170166] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:35.006 [2024-12-06 05:05:13.176507] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:35.006 [2024-12-06 05:05:13.176574] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:17:35.006 [2024-12-06 05:05:13.176586] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.290 ms 00:17:35.006 [2024-12-06 05:05:13.176597] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:35.006 [2024-12-06 05:05:13.176781] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:35.006 [2024-12-06 05:05:13.176798] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:17:35.006 [2024-12-06 05:05:13.176808] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.124 ms 00:17:35.006 [2024-12-06 05:05:13.176818] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:35.006 [2024-12-06 05:05:13.180849] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:35.006 [2024-12-06 05:05:13.180908] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:17:35.006 [2024-12-06 05:05:13.180919] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.980 ms 00:17:35.006 [2024-12-06 05:05:13.180934] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:35.006 [2024-12-06 05:05:13.184028] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:35.006 [2024-12-06 05:05:13.184225] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:17:35.006 [2024-12-06 
05:05:13.184245] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.039 ms 00:17:35.006 [2024-12-06 05:05:13.184259] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:35.006 [2024-12-06 05:05:13.187650] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:35.007 [2024-12-06 05:05:13.187814] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:17:35.007 [2024-12-06 05:05:13.187851] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.104 ms 00:17:35.007 [2024-12-06 05:05:13.187879] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:35.007 [2024-12-06 05:05:13.191365] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:35.007 [2024-12-06 05:05:13.191774] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:17:35.007 [2024-12-06 05:05:13.191823] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.140 ms 00:17:35.007 [2024-12-06 05:05:13.191852] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:35.007 [2024-12-06 05:05:13.191947] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:17:35.007 [2024-12-06 05:05:13.191997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:17:35.007 [2024-12-06 05:05:13.192028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:17:35.007 [2024-12-06 05:05:13.192064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:17:35.007 [2024-12-06 05:05:13.192089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:17:35.007 [2024-12-06 05:05:13.192120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:17:35.007 [2024-12-06 05:05:13.192146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:17:35.007 [2024-12-06 05:05:13.192174] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:17:35.007 [2024-12-06 05:05:13.192199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:17:35.007 [2024-12-06 05:05:13.192229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:17:35.007 [2024-12-06 05:05:13.192253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:17:35.007 [2024-12-06 05:05:13.192283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:17:35.007 [2024-12-06 05:05:13.192308] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:17:35.007 [2024-12-06 05:05:13.192338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:17:35.007 [2024-12-06 05:05:13.192363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:17:35.007 [2024-12-06 05:05:13.192392] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:17:35.007 [2024-12-06 05:05:13.192416] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:17:35.007 [2024-12-06 05:05:13.192452] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:17:35.007 [2024-12-06 05:05:13.192476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:17:35.007 [2024-12-06 05:05:13.192510] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:17:35.007 [2024-12-06 05:05:13.192536] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:17:35.007 [2024-12-06 05:05:13.192583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:17:35.007 [2024-12-06 05:05:13.192609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:17:35.007 [2024-12-06 05:05:13.192637] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:17:35.007 [2024-12-06 05:05:13.192663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:17:35.007 [2024-12-06 05:05:13.192732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:17:35.007 [2024-12-06 05:05:13.192758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:17:35.007 [2024-12-06 05:05:13.192786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:17:35.007 [2024-12-06 05:05:13.192815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:17:35.007 [2024-12-06 05:05:13.192843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:17:35.007 [2024-12-06 05:05:13.192869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:17:35.007 [2024-12-06 05:05:13.192899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:17:35.007 [2024-12-06 05:05:13.192922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:17:35.007 [2024-12-06 05:05:13.192953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:17:35.007 [2024-12-06 05:05:13.192977] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:17:35.007 [2024-12-06 05:05:13.193010] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:17:35.007 [2024-12-06 05:05:13.193034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:17:35.007 [2024-12-06 05:05:13.193063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:17:35.007 [2024-12-06 05:05:13.193088] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:17:35.007 [2024-12-06 05:05:13.193119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:17:35.007 [2024-12-06 05:05:13.193144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:17:35.007 [2024-12-06 05:05:13.193178] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:17:35.007 [2024-12-06 
05:05:13.193203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:17:35.007 [2024-12-06 05:05:13.193231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:17:35.007 [2024-12-06 05:05:13.193256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:17:35.007 [2024-12-06 05:05:13.193286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:17:35.007 [2024-12-06 05:05:13.193310] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:17:35.007 [2024-12-06 05:05:13.193341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:17:35.007 [2024-12-06 05:05:13.193366] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:17:35.007 [2024-12-06 05:05:13.193394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:17:35.007 [2024-12-06 05:05:13.193420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:17:35.007 [2024-12-06 05:05:13.193455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:17:35.007 [2024-12-06 05:05:13.193480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:17:35.007 [2024-12-06 05:05:13.193508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:17:35.007 [2024-12-06 05:05:13.193533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:17:35.007 [2024-12-06 05:05:13.193564] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:17:35.007 [2024-12-06 05:05:13.193587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:17:35.007 [2024-12-06 05:05:13.193619] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:17:35.007 [2024-12-06 05:05:13.193644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:17:35.007 [2024-12-06 05:05:13.193722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:17:35.007 [2024-12-06 05:05:13.193747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:17:35.007 [2024-12-06 05:05:13.193779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:17:35.007 [2024-12-06 05:05:13.193803] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:17:35.008 [2024-12-06 05:05:13.193832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:17:35.008 [2024-12-06 05:05:13.193856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:17:35.008 [2024-12-06 05:05:13.193887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:17:35.008 [2024-12-06 05:05:13.193911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 
00:17:35.008 [2024-12-06 05:05:13.193951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:17:35.008 [2024-12-06 05:05:13.193977] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:17:35.008 [2024-12-06 05:05:13.194008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:17:35.008 [2024-12-06 05:05:13.194033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:17:35.008 [2024-12-06 05:05:13.194064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:17:35.008 [2024-12-06 05:05:13.194089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:17:35.008 [2024-12-06 05:05:13.194118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:17:35.008 [2024-12-06 05:05:13.194142] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:17:35.008 [2024-12-06 05:05:13.194172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:17:35.008 [2024-12-06 05:05:13.194198] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:17:35.008 [2024-12-06 05:05:13.194227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:17:35.008 [2024-12-06 05:05:13.194252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:17:35.008 [2024-12-06 05:05:13.194284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:17:35.008 [2024-12-06 05:05:13.194310] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:17:35.008 [2024-12-06 05:05:13.194356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:17:35.008 [2024-12-06 05:05:13.194382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:17:35.008 [2024-12-06 05:05:13.194416] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:17:35.008 [2024-12-06 05:05:13.194441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:17:35.008 [2024-12-06 05:05:13.194471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:17:35.008 [2024-12-06 05:05:13.194499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:17:35.008 [2024-12-06 05:05:13.194532] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:17:35.008 [2024-12-06 05:05:13.194557] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:17:35.008 [2024-12-06 05:05:13.194588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:17:35.008 [2024-12-06 05:05:13.194614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:17:35.008 [2024-12-06 05:05:13.194643] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 
wr_cnt: 0 state: free 00:17:35.008 [2024-12-06 05:05:13.194689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:17:35.008 [2024-12-06 05:05:13.194720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:17:35.008 [2024-12-06 05:05:13.194745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:17:35.008 [2024-12-06 05:05:13.194778] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:17:35.008 [2024-12-06 05:05:13.194803] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:17:35.008 [2024-12-06 05:05:13.194831] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:17:35.008 [2024-12-06 05:05:13.194855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:17:35.008 [2024-12-06 05:05:13.194888] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:17:35.008 [2024-12-06 05:05:13.194914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:17:35.008 [2024-12-06 05:05:13.194967] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:17:35.008 [2024-12-06 05:05:13.194993] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 73ef595a-5226-44c7-8985-04a18b3bbf5e 00:17:35.008 [2024-12-06 05:05:13.195021] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:17:35.008 [2024-12-06 05:05:13.195046] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:17:35.008 [2024-12-06 05:05:13.195073] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:17:35.008 [2024-12-06 05:05:13.195102] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:17:35.008 [2024-12-06 05:05:13.195130] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:17:35.008 [2024-12-06 05:05:13.195156] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:17:35.008 [2024-12-06 05:05:13.195195] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:17:35.008 [2024-12-06 05:05:13.195216] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:17:35.008 [2024-12-06 05:05:13.195241] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:17:35.008 [2024-12-06 05:05:13.195264] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:35.008 [2024-12-06 05:05:13.195299] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:17:35.008 [2024-12-06 05:05:13.195327] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.318 ms 00:17:35.008 [2024-12-06 05:05:13.195364] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:35.008 [2024-12-06 05:05:13.199224] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:35.008 [2024-12-06 05:05:13.199423] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:17:35.008 [2024-12-06 05:05:13.199441] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.766 ms 00:17:35.008 [2024-12-06 05:05:13.199452] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:35.008 [2024-12-06 05:05:13.199627] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:17:35.008 [2024-12-06 05:05:13.199643] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:17:35.008 [2024-12-06 05:05:13.199653] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.131 ms 00:17:35.008 [2024-12-06 05:05:13.199663] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:35.008 [2024-12-06 05:05:13.210725] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:35.008 [2024-12-06 05:05:13.210784] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:35.008 [2024-12-06 05:05:13.210795] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:35.008 [2024-12-06 05:05:13.210806] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:35.008 [2024-12-06 05:05:13.210906] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:35.008 [2024-12-06 05:05:13.210921] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:35.008 [2024-12-06 05:05:13.210931] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:35.008 [2024-12-06 05:05:13.210947] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:35.008 [2024-12-06 05:05:13.210997] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:35.008 [2024-12-06 05:05:13.211010] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:35.008 [2024-12-06 05:05:13.211024] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:35.008 [2024-12-06 05:05:13.211036] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:35.008 [2024-12-06 05:05:13.211058] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:35.009 [2024-12-06 05:05:13.211071] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:35.009 [2024-12-06 05:05:13.211079] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:35.009 [2024-12-06 05:05:13.211092] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:35.009 [2024-12-06 05:05:13.231062] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:35.009 [2024-12-06 05:05:13.231130] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:35.009 [2024-12-06 05:05:13.231142] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:35.009 [2024-12-06 05:05:13.231154] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:35.270 [2024-12-06 05:05:13.247034] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:35.270 [2024-12-06 05:05:13.247101] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:35.270 [2024-12-06 05:05:13.247116] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:35.270 [2024-12-06 05:05:13.247131] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:35.270 [2024-12-06 05:05:13.247212] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:35.270 [2024-12-06 05:05:13.247238] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:35.270 [2024-12-06 05:05:13.247249] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:35.270 [2024-12-06 05:05:13.247264] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:17:35.270 [2024-12-06 05:05:13.247305] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:35.270 [2024-12-06 05:05:13.247318] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:35.270 [2024-12-06 05:05:13.247327] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:35.270 [2024-12-06 05:05:13.247338] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:35.270 [2024-12-06 05:05:13.247429] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:35.270 [2024-12-06 05:05:13.247445] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:35.270 [2024-12-06 05:05:13.247456] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:35.270 [2024-12-06 05:05:13.247467] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:35.270 [2024-12-06 05:05:13.247511] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:35.270 [2024-12-06 05:05:13.247526] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:17:35.270 [2024-12-06 05:05:13.247534] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:35.270 [2024-12-06 05:05:13.247549] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:35.270 [2024-12-06 05:05:13.247606] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:35.270 [2024-12-06 05:05:13.247635] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:35.270 [2024-12-06 05:05:13.247647] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:35.270 [2024-12-06 05:05:13.247660] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:35.270 [2024-12-06 05:05:13.247801] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:35.270 [2024-12-06 05:05:13.247820] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:35.270 [2024-12-06 05:05:13.247831] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:35.270 [2024-12-06 05:05:13.247849] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:35.270 [2024-12-06 05:05:13.248055] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 94.816 ms, result 0 00:17:35.531 05:05:13 ftl.ftl_trim -- ftl/trim.sh@105 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:17:35.531 [2024-12-06 05:05:13.661408] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
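The spdk_dd step above reads 65536 units from the ftl0 bdev into test/ftl/data, using the ftl.json bdev config. Two figures printed by this run can be cross-checked with a couple of lines of shell. This is only a sketch: the 4096-byte logical block size is an assumption (the log never states it), chosen because it makes both totals line up with what the run reports.

  # Sketch: cross-check figures from this run (4096 B logical block size assumed).
  blocks=65536; block_size=4096
  echo "dd payload: $((blocks * block_size / 1024 / 1024)) MiB"
  # -> 256 MiB, matching the "Copying: 256/256 [MB]" progress further down
  l2p_entries=23592960; l2p_addr_size=4
  echo "l2p table: $((l2p_entries * l2p_addr_size / 1024 / 1024)) MiB"
  # -> 90 MiB, matching "Region l2p ... blocks: 90.00 MiB" in the layout dump below

At the reported average of 12 MBps, a 256 MB copy takes roughly 21 seconds, which agrees with the spread of the copy progress timestamps below (05:05:16 through 05:05:35).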
00:17:35.531 [2024-12-06 05:05:13.661762] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85851 ] 00:17:35.793 [2024-12-06 05:05:13.799006] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:35.793 [2024-12-06 05:05:13.868637] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:17:35.793 [2024-12-06 05:05:14.018536] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:35.793 [2024-12-06 05:05:14.018636] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:36.055 [2024-12-06 05:05:14.182642] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:36.055 [2024-12-06 05:05:14.182724] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:17:36.056 [2024-12-06 05:05:14.182742] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:17:36.056 [2024-12-06 05:05:14.182750] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:36.056 [2024-12-06 05:05:14.185449] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:36.056 [2024-12-06 05:05:14.185511] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:36.056 [2024-12-06 05:05:14.185526] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.669 ms 00:17:36.056 [2024-12-06 05:05:14.185535] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:36.056 [2024-12-06 05:05:14.185647] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:17:36.056 [2024-12-06 05:05:14.186300] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:17:36.056 [2024-12-06 05:05:14.186372] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:36.056 [2024-12-06 05:05:14.186383] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:36.056 [2024-12-06 05:05:14.186397] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.739 ms 00:17:36.056 [2024-12-06 05:05:14.186405] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:36.056 [2024-12-06 05:05:14.188834] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:17:36.056 [2024-12-06 05:05:14.194046] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:36.056 [2024-12-06 05:05:14.194099] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:17:36.056 [2024-12-06 05:05:14.194118] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.214 ms 00:17:36.056 [2024-12-06 05:05:14.194131] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:36.056 [2024-12-06 05:05:14.194224] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:36.056 [2024-12-06 05:05:14.194235] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:17:36.056 [2024-12-06 05:05:14.194249] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:17:36.056 [2024-12-06 05:05:14.194257] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:36.056 [2024-12-06 05:05:14.205853] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:17:36.056 [2024-12-06 05:05:14.205896] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:36.056 [2024-12-06 05:05:14.205908] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.548 ms 00:17:36.056 [2024-12-06 05:05:14.205916] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:36.056 [2024-12-06 05:05:14.206080] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:36.056 [2024-12-06 05:05:14.206093] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:36.056 [2024-12-06 05:05:14.206103] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.094 ms 00:17:36.056 [2024-12-06 05:05:14.206112] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:36.056 [2024-12-06 05:05:14.206145] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:36.056 [2024-12-06 05:05:14.206155] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:17:36.056 [2024-12-06 05:05:14.206171] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:17:36.056 [2024-12-06 05:05:14.206181] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:36.056 [2024-12-06 05:05:14.206208] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:17:36.056 [2024-12-06 05:05:14.208933] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:36.056 [2024-12-06 05:05:14.208977] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:36.056 [2024-12-06 05:05:14.208989] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.732 ms 00:17:36.056 [2024-12-06 05:05:14.208999] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:36.056 [2024-12-06 05:05:14.209049] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:36.056 [2024-12-06 05:05:14.209067] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:17:36.056 [2024-12-06 05:05:14.209083] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:17:36.056 [2024-12-06 05:05:14.209091] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:36.056 [2024-12-06 05:05:14.209112] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:17:36.056 [2024-12-06 05:05:14.209135] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:17:36.056 [2024-12-06 05:05:14.209182] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:17:36.056 [2024-12-06 05:05:14.209200] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:17:36.056 [2024-12-06 05:05:14.209313] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:17:36.056 [2024-12-06 05:05:14.209326] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:17:36.056 [2024-12-06 05:05:14.209337] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:17:36.056 [2024-12-06 05:05:14.209352] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:17:36.056 [2024-12-06 05:05:14.209365] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:17:36.056 [2024-12-06 05:05:14.209373] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:17:36.056 [2024-12-06 05:05:14.209381] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:17:36.056 [2024-12-06 05:05:14.209389] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:17:36.056 [2024-12-06 05:05:14.209397] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:17:36.056 [2024-12-06 05:05:14.209406] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:36.056 [2024-12-06 05:05:14.209419] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:17:36.056 [2024-12-06 05:05:14.209430] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.298 ms 00:17:36.056 [2024-12-06 05:05:14.209438] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:36.056 [2024-12-06 05:05:14.209526] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:36.056 [2024-12-06 05:05:14.209537] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:17:36.056 [2024-12-06 05:05:14.209547] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.070 ms 00:17:36.056 [2024-12-06 05:05:14.209555] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:36.056 [2024-12-06 05:05:14.209657] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:17:36.056 [2024-12-06 05:05:14.209725] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:17:36.056 [2024-12-06 05:05:14.209736] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:36.056 [2024-12-06 05:05:14.209751] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:36.056 [2024-12-06 05:05:14.209762] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:17:36.056 [2024-12-06 05:05:14.209771] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:17:36.056 [2024-12-06 05:05:14.209780] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:17:36.056 [2024-12-06 05:05:14.209789] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:17:36.056 [2024-12-06 05:05:14.209801] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:17:36.056 [2024-12-06 05:05:14.209813] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:36.056 [2024-12-06 05:05:14.209821] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:17:36.056 [2024-12-06 05:05:14.209829] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:17:36.056 [2024-12-06 05:05:14.209837] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:36.056 [2024-12-06 05:05:14.209848] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:17:36.056 [2024-12-06 05:05:14.209859] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:17:36.057 [2024-12-06 05:05:14.209868] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:36.057 [2024-12-06 05:05:14.209879] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:17:36.057 [2024-12-06 05:05:14.209889] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:17:36.057 [2024-12-06 05:05:14.209897] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:36.057 [2024-12-06 05:05:14.209906] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:17:36.057 [2024-12-06 05:05:14.209914] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:17:36.057 [2024-12-06 05:05:14.209923] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:36.057 [2024-12-06 05:05:14.209931] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:17:36.057 [2024-12-06 05:05:14.209939] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:17:36.057 [2024-12-06 05:05:14.209956] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:36.057 [2024-12-06 05:05:14.209963] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:17:36.057 [2024-12-06 05:05:14.209970] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:17:36.057 [2024-12-06 05:05:14.209977] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:36.057 [2024-12-06 05:05:14.209985] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:17:36.057 [2024-12-06 05:05:14.209992] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:17:36.057 [2024-12-06 05:05:14.209999] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:36.057 [2024-12-06 05:05:14.210007] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:17:36.057 [2024-12-06 05:05:14.210014] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:17:36.057 [2024-12-06 05:05:14.210020] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:36.057 [2024-12-06 05:05:14.210039] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:17:36.057 [2024-12-06 05:05:14.210046] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:17:36.057 [2024-12-06 05:05:14.210053] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:36.057 [2024-12-06 05:05:14.210060] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:17:36.057 [2024-12-06 05:05:14.210066] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:17:36.057 [2024-12-06 05:05:14.210073] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:36.057 [2024-12-06 05:05:14.210083] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:17:36.057 [2024-12-06 05:05:14.210091] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:17:36.057 [2024-12-06 05:05:14.210098] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:36.057 [2024-12-06 05:05:14.210105] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:17:36.057 [2024-12-06 05:05:14.210113] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:17:36.057 [2024-12-06 05:05:14.210121] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:36.057 [2024-12-06 05:05:14.210130] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:36.057 [2024-12-06 05:05:14.210144] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:17:36.057 [2024-12-06 05:05:14.210153] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:17:36.057 [2024-12-06 05:05:14.210161] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:17:36.057 
[2024-12-06 05:05:14.210168] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:17:36.057 [2024-12-06 05:05:14.210175] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:17:36.057 [2024-12-06 05:05:14.210183] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:17:36.057 [2024-12-06 05:05:14.210194] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:17:36.057 [2024-12-06 05:05:14.210207] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:36.057 [2024-12-06 05:05:14.210216] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:17:36.057 [2024-12-06 05:05:14.210226] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:17:36.057 [2024-12-06 05:05:14.210236] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:17:36.057 [2024-12-06 05:05:14.210244] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:17:36.057 [2024-12-06 05:05:14.210251] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:17:36.057 [2024-12-06 05:05:14.210259] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:17:36.057 [2024-12-06 05:05:14.210266] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:17:36.057 [2024-12-06 05:05:14.210281] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:17:36.057 [2024-12-06 05:05:14.210290] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:17:36.057 [2024-12-06 05:05:14.210297] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:17:36.057 [2024-12-06 05:05:14.210304] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:17:36.057 [2024-12-06 05:05:14.210310] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:17:36.057 [2024-12-06 05:05:14.210317] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:17:36.057 [2024-12-06 05:05:14.210325] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:17:36.057 [2024-12-06 05:05:14.210336] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:17:36.057 [2024-12-06 05:05:14.210346] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:36.057 [2024-12-06 05:05:14.210357] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:17:36.057 [2024-12-06 05:05:14.210367] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:17:36.057 [2024-12-06 05:05:14.210378] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:17:36.057 [2024-12-06 05:05:14.210385] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:17:36.057 [2024-12-06 05:05:14.210393] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:36.057 [2024-12-06 05:05:14.210404] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:17:36.057 [2024-12-06 05:05:14.210417] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.805 ms 00:17:36.057 [2024-12-06 05:05:14.210426] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:36.057 [2024-12-06 05:05:14.239123] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:36.057 [2024-12-06 05:05:14.239452] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:36.057 [2024-12-06 05:05:14.239483] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.614 ms 00:17:36.057 [2024-12-06 05:05:14.239498] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:36.057 [2024-12-06 05:05:14.239760] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:36.057 [2024-12-06 05:05:14.239784] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:17:36.057 [2024-12-06 05:05:14.239799] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.132 ms 00:17:36.057 [2024-12-06 05:05:14.239820] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:36.057 [2024-12-06 05:05:14.256172] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:36.057 [2024-12-06 05:05:14.256231] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:36.057 [2024-12-06 05:05:14.256244] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.315 ms 00:17:36.057 [2024-12-06 05:05:14.256254] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:36.058 [2024-12-06 05:05:14.256336] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:36.058 [2024-12-06 05:05:14.256347] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:36.058 [2024-12-06 05:05:14.256362] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:36.058 [2024-12-06 05:05:14.256371] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:36.058 [2024-12-06 05:05:14.257103] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:36.058 [2024-12-06 05:05:14.257138] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:36.058 [2024-12-06 05:05:14.257149] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.706 ms 00:17:36.058 [2024-12-06 05:05:14.257158] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:36.058 [2024-12-06 05:05:14.257330] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:36.058 [2024-12-06 05:05:14.257354] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:36.058 [2024-12-06 05:05:14.257365] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.139 ms 00:17:36.058 [2024-12-06 05:05:14.257377] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:36.058 [2024-12-06 05:05:14.267865] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:36.058 [2024-12-06 05:05:14.267911] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:36.058 [2024-12-06 05:05:14.267933] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.462 ms 00:17:36.058 [2024-12-06 05:05:14.267947] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:36.058 [2024-12-06 05:05:14.272950] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:17:36.058 [2024-12-06 05:05:14.273202] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:17:36.058 [2024-12-06 05:05:14.273221] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:36.058 [2024-12-06 05:05:14.273231] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:17:36.058 [2024-12-06 05:05:14.273241] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.136 ms 00:17:36.058 [2024-12-06 05:05:14.273249] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:36.382 [2024-12-06 05:05:14.289935] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:36.382 [2024-12-06 05:05:14.289987] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:17:36.382 [2024-12-06 05:05:14.290000] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.486 ms 00:17:36.382 [2024-12-06 05:05:14.290009] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:36.382 [2024-12-06 05:05:14.293352] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:36.382 [2024-12-06 05:05:14.293401] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:17:36.382 [2024-12-06 05:05:14.293412] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.243 ms 00:17:36.382 [2024-12-06 05:05:14.293420] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:36.382 [2024-12-06 05:05:14.296289] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:36.382 [2024-12-06 05:05:14.296477] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:17:36.382 [2024-12-06 05:05:14.296509] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.812 ms 00:17:36.382 [2024-12-06 05:05:14.296517] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:36.382 [2024-12-06 05:05:14.296900] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:36.382 [2024-12-06 05:05:14.296918] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:17:36.382 [2024-12-06 05:05:14.296932] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.304 ms 00:17:36.382 [2024-12-06 05:05:14.296941] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:36.382 [2024-12-06 05:05:14.328738] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:36.382 [2024-12-06 05:05:14.328793] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:17:36.382 [2024-12-06 05:05:14.328807] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
31.763 ms 00:17:36.382 [2024-12-06 05:05:14.328817] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:36.382 [2024-12-06 05:05:14.337937] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:17:36.382 [2024-12-06 05:05:14.363431] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:36.382 [2024-12-06 05:05:14.363702] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:17:36.382 [2024-12-06 05:05:14.363724] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.512 ms 00:17:36.382 [2024-12-06 05:05:14.363734] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:36.382 [2024-12-06 05:05:14.363853] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:36.382 [2024-12-06 05:05:14.363865] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:17:36.382 [2024-12-06 05:05:14.363876] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:17:36.382 [2024-12-06 05:05:14.363896] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:36.382 [2024-12-06 05:05:14.363972] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:36.382 [2024-12-06 05:05:14.363983] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:17:36.382 [2024-12-06 05:05:14.363992] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms 00:17:36.382 [2024-12-06 05:05:14.364001] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:36.383 [2024-12-06 05:05:14.364036] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:36.383 [2024-12-06 05:05:14.364047] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:17:36.383 [2024-12-06 05:05:14.364056] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:17:36.383 [2024-12-06 05:05:14.364066] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:36.383 [2024-12-06 05:05:14.364109] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:17:36.383 [2024-12-06 05:05:14.364120] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:36.383 [2024-12-06 05:05:14.364134] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:17:36.383 [2024-12-06 05:05:14.364145] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:17:36.383 [2024-12-06 05:05:14.364154] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:36.383 [2024-12-06 05:05:14.371307] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:36.383 [2024-12-06 05:05:14.371497] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:17:36.383 [2024-12-06 05:05:14.371516] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.128 ms 00:17:36.383 [2024-12-06 05:05:14.371526] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:36.383 [2024-12-06 05:05:14.371632] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:36.383 [2024-12-06 05:05:14.371647] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:17:36.383 [2024-12-06 05:05:14.371657] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:17:36.383 [2024-12-06 05:05:14.371694] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:36.383 
[2024-12-06 05:05:14.372951] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:36.383 [2024-12-06 05:05:14.374448] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 189.920 ms, result 0 00:17:36.383 [2024-12-06 05:05:14.376164] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:36.383 [2024-12-06 05:05:14.383157] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:37.344
[2024-12-06T05:05:16.518Z] Copying: 13/256 [MB] (13 MBps)
[2024-12-06T05:05:17.464Z] Copying: 25/256 [MB] (11 MBps)
[2024-12-06T05:05:18.853Z] Copying: 37/256 [MB] (11 MBps)
[2024-12-06T05:05:19.799Z] Copying: 47/256 [MB] (10 MBps)
[2024-12-06T05:05:20.739Z] Copying: 57/256 [MB] (10 MBps)
[2024-12-06T05:05:21.685Z] Copying: 68/256 [MB] (10 MBps)
[2024-12-06T05:05:22.630Z] Copying: 79/256 [MB] (10 MBps)
[2024-12-06T05:05:23.577Z] Copying: 90/256 [MB] (11 MBps)
[2024-12-06T05:05:24.523Z] Copying: 100/256 [MB] (10 MBps)
[2024-12-06T05:05:25.467Z] Copying: 110/256 [MB] (10 MBps)
[2024-12-06T05:05:26.854Z] Copying: 126/256 [MB] (15 MBps)
[2024-12-06T05:05:27.800Z] Copying: 136/256 [MB] (10 MBps)
[2024-12-06T05:05:28.747Z] Copying: 146/256 [MB] (10 MBps)
[2024-12-06T05:05:29.692Z] Copying: 157/256 [MB] (10 MBps)
[2024-12-06T05:05:30.636Z] Copying: 168/256 [MB] (10 MBps)
[2024-12-06T05:05:31.582Z] Copying: 178/256 [MB] (10 MBps)
[2024-12-06T05:05:32.526Z] Copying: 193/256 [MB] (14 MBps)
[2024-12-06T05:05:33.470Z] Copying: 213/256 [MB] (20 MBps)
[2024-12-06T05:05:34.861Z] Copying: 231/256 [MB] (17 MBps)
[2024-12-06T05:05:35.434Z] Copying: 247/256 [MB] (16 MBps)
[2024-12-06T05:05:35.695Z] Copying: 256/256 [MB] (average 12 MBps)
[2024-12-06 05:05:35.497029] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:57.463 [2024-12-06 05:05:35.499076] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:57.463 [2024-12-06 05:05:35.499129] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:17:57.463 [2024-12-06 05:05:35.499145] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:17:57.463 [2024-12-06 05:05:35.499160] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.463 [2024-12-06 05:05:35.499186] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:17:57.463 [2024-12-06 05:05:35.499898] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:57.463 [2024-12-06 05:05:35.499941] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:17:57.463 [2024-12-06 05:05:35.499954] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.697 ms 00:17:57.463 [2024-12-06 05:05:35.499964] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.463 [2024-12-06 05:05:35.500269] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:57.463 [2024-12-06 05:05:35.500287] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:17:57.463 [2024-12-06 05:05:35.500298] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.278 ms 00:17:57.463 [2024-12-06 05:05:35.500307] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.463 [2024-12-06 05:05:35.504134]
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:57.463 [2024-12-06 05:05:35.504316] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:17:57.463 [2024-12-06 05:05:35.504341] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.806 ms 00:17:57.463 [2024-12-06 05:05:35.504353] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.463 [2024-12-06 05:05:35.512327] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:57.463 [2024-12-06 05:05:35.512369] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:17:57.463 [2024-12-06 05:05:35.512381] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.946 ms 00:17:57.463 [2024-12-06 05:05:35.512389] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.463 [2024-12-06 05:05:35.514874] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:57.463 [2024-12-06 05:05:35.514924] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:17:57.463 [2024-12-06 05:05:35.514934] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.402 ms 00:17:57.463 [2024-12-06 05:05:35.514942] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.463 [2024-12-06 05:05:35.520139] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:57.463 [2024-12-06 05:05:35.520190] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:17:57.463 [2024-12-06 05:05:35.520209] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.149 ms 00:17:57.463 [2024-12-06 05:05:35.520219] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.463 [2024-12-06 05:05:35.520358] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:57.463 [2024-12-06 05:05:35.520370] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:17:57.463 [2024-12-06 05:05:35.520379] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.091 ms 00:17:57.463 [2024-12-06 05:05:35.520393] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.463 [2024-12-06 05:05:35.523601] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:57.463 [2024-12-06 05:05:35.523803] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:17:57.463 [2024-12-06 05:05:35.523823] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.190 ms 00:17:57.463 [2024-12-06 05:05:35.523857] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.463 [2024-12-06 05:05:35.527206] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:57.463 [2024-12-06 05:05:35.527255] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:17:57.463 [2024-12-06 05:05:35.527266] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.477 ms 00:17:57.463 [2024-12-06 05:05:35.527273] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.463 [2024-12-06 05:05:35.529427] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:57.463 [2024-12-06 05:05:35.529476] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:17:57.463 [2024-12-06 05:05:35.529486] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.109 ms 00:17:57.463 [2024-12-06 05:05:35.529494] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:17:57.463 [2024-12-06 05:05:35.531891] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:57.463 [2024-12-06 05:05:35.532055] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:17:57.463 [2024-12-06 05:05:35.532072] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.313 ms 00:17:57.463 [2024-12-06 05:05:35.532079] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.463 [2024-12-06 05:05:35.532190] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:17:57.463 [2024-12-06 05:05:35.532233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:17:57.463 [2024-12-06 05:05:35.532245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:17:57.463 [2024-12-06 05:05:35.532254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:17:57.464 [2024-12-06 05:05:35.532262] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:17:57.464 [2024-12-06 05:05:35.532271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:17:57.464 [2024-12-06 05:05:35.532279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:17:57.464 [2024-12-06 05:05:35.532287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:17:57.464 [2024-12-06 05:05:35.532295] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:17:57.464 [2024-12-06 05:05:35.532303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:17:57.464 [2024-12-06 05:05:35.532310] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:17:57.464 [2024-12-06 05:05:35.532318] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:17:57.464 [2024-12-06 05:05:35.532326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:17:57.464 [2024-12-06 05:05:35.532334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:17:57.464 [2024-12-06 05:05:35.532342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:17:57.464 [2024-12-06 05:05:35.532350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:17:57.464 [2024-12-06 05:05:35.532357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:17:57.464 [2024-12-06 05:05:35.532364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:17:57.464 [2024-12-06 05:05:35.532372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:17:57.464 [2024-12-06 05:05:35.532380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:17:57.464 [2024-12-06 05:05:35.532387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:17:57.464 [2024-12-06 05:05:35.532395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 
261120 wr_cnt: 0 state: free 00:17:57.464 [2024-12-06 05:05:35.532402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:17:57.464 [2024-12-06 05:05:35.532409] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:17:57.464 [2024-12-06 05:05:35.532416] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:17:57.464 [2024-12-06 05:05:35.532425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:17:57.464 [2024-12-06 05:05:35.532432] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:17:57.464 [2024-12-06 05:05:35.532440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:17:57.464 [2024-12-06 05:05:35.532449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:17:57.464 [2024-12-06 05:05:35.532457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:17:57.464 [2024-12-06 05:05:35.532465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:17:57.464 [2024-12-06 05:05:35.532473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:17:57.464 [2024-12-06 05:05:35.532482] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:17:57.464 [2024-12-06 05:05:35.532490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:17:57.464 [2024-12-06 05:05:35.532498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:17:57.464 [2024-12-06 05:05:35.532505] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:17:57.464 [2024-12-06 05:05:35.532522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:17:57.464 [2024-12-06 05:05:35.532530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:17:57.464 [2024-12-06 05:05:35.532538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:17:57.464 [2024-12-06 05:05:35.532546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:17:57.464 [2024-12-06 05:05:35.532554] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:17:57.464 [2024-12-06 05:05:35.532563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:17:57.464 [2024-12-06 05:05:35.532571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:17:57.464 [2024-12-06 05:05:35.532579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:17:57.464 [2024-12-06 05:05:35.532586] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:17:57.464 [2024-12-06 05:05:35.532594] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:17:57.464 [2024-12-06 05:05:35.532601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:17:57.464 [2024-12-06 05:05:35.532609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:17:57.464 [2024-12-06 05:05:35.532616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:17:57.464 [2024-12-06 05:05:35.532624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:17:57.464 [2024-12-06 05:05:35.532632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:17:57.464 [2024-12-06 05:05:35.532639] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:17:57.464 [2024-12-06 05:05:35.532647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:17:57.464 [2024-12-06 05:05:35.532654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:17:57.464 [2024-12-06 05:05:35.532681] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:17:57.464 [2024-12-06 05:05:35.532690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:17:57.464 [2024-12-06 05:05:35.532697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:17:57.464 [2024-12-06 05:05:35.532707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:17:57.464 [2024-12-06 05:05:35.532715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:17:57.464 [2024-12-06 05:05:35.532723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:17:57.464 [2024-12-06 05:05:35.532731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:17:57.464 [2024-12-06 05:05:35.532740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:17:57.464 [2024-12-06 05:05:35.532748] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:17:57.464 [2024-12-06 05:05:35.532756] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:17:57.464 [2024-12-06 05:05:35.532766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:17:57.464 [2024-12-06 05:05:35.532775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:17:57.464 [2024-12-06 05:05:35.532783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:17:57.464 [2024-12-06 05:05:35.532792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:17:57.464 [2024-12-06 05:05:35.532801] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:17:57.464 [2024-12-06 05:05:35.532810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:17:57.464 [2024-12-06 05:05:35.532817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:17:57.464 [2024-12-06 05:05:35.532825] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:17:57.464 [2024-12-06 05:05:35.532833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:17:57.464 [2024-12-06 05:05:35.532841] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:17:57.464 [2024-12-06 05:05:35.532849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:17:57.464 [2024-12-06 05:05:35.532857] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:17:57.464 [2024-12-06 05:05:35.532865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:17:57.464 [2024-12-06 05:05:35.532873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:17:57.464 [2024-12-06 05:05:35.532880] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:17:57.464 [2024-12-06 05:05:35.532887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:17:57.464 [2024-12-06 05:05:35.532896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:17:57.464 [2024-12-06 05:05:35.532903] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:17:57.464 [2024-12-06 05:05:35.532911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:17:57.464 [2024-12-06 05:05:35.532918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:17:57.464 [2024-12-06 05:05:35.532926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:17:57.464 [2024-12-06 05:05:35.532935] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:17:57.464 [2024-12-06 05:05:35.532943] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:17:57.464 [2024-12-06 05:05:35.532950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:17:57.464 [2024-12-06 05:05:35.532957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:17:57.464 [2024-12-06 05:05:35.532965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:17:57.464 [2024-12-06 05:05:35.532973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:17:57.464 [2024-12-06 05:05:35.532981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:17:57.464 [2024-12-06 05:05:35.532989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:17:57.464 [2024-12-06 05:05:35.533004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:17:57.464 [2024-12-06 05:05:35.533013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:17:57.464 [2024-12-06 05:05:35.533022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:17:57.464 [2024-12-06 
05:05:35.533030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:17:57.464 [2024-12-06 05:05:35.533038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:17:57.464 [2024-12-06 05:05:35.533046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:17:57.464 [2024-12-06 05:05:35.533055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:17:57.464 [2024-12-06 05:05:35.533063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:17:57.464 [2024-12-06 05:05:35.533080] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:17:57.464 [2024-12-06 05:05:35.533089] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 73ef595a-5226-44c7-8985-04a18b3bbf5e 00:17:57.464 [2024-12-06 05:05:35.533097] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:17:57.464 [2024-12-06 05:05:35.533105] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:17:57.464 [2024-12-06 05:05:35.533114] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:17:57.464 [2024-12-06 05:05:35.533123] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:17:57.464 [2024-12-06 05:05:35.533131] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:17:57.464 [2024-12-06 05:05:35.533139] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:17:57.464 [2024-12-06 05:05:35.533146] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:17:57.464 [2024-12-06 05:05:35.533153] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:17:57.464 [2024-12-06 05:05:35.533160] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:17:57.464 [2024-12-06 05:05:35.533169] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:57.464 [2024-12-06 05:05:35.533176] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:17:57.464 [2024-12-06 05:05:35.533192] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.982 ms 00:17:57.464 [2024-12-06 05:05:35.533199] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.464 [2024-12-06 05:05:35.535532] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:57.464 [2024-12-06 05:05:35.535705] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:17:57.464 [2024-12-06 05:05:35.535724] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.296 ms 00:17:57.464 [2024-12-06 05:05:35.535733] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.464 [2024-12-06 05:05:35.535852] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:57.464 [2024-12-06 05:05:35.535869] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:17:57.464 [2024-12-06 05:05:35.535879] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.092 ms 00:17:57.464 [2024-12-06 05:05:35.535888] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.464 [2024-12-06 05:05:35.543896] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:57.464 [2024-12-06 05:05:35.543943] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:57.464 
[2024-12-06 05:05:35.543955] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:57.464 [2024-12-06 05:05:35.543963] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.464 [2024-12-06 05:05:35.544055] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:57.464 [2024-12-06 05:05:35.544068] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:57.464 [2024-12-06 05:05:35.544076] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:57.464 [2024-12-06 05:05:35.544085] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.464 [2024-12-06 05:05:35.544133] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:57.464 [2024-12-06 05:05:35.544143] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:57.464 [2024-12-06 05:05:35.544151] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:57.464 [2024-12-06 05:05:35.544159] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.464 [2024-12-06 05:05:35.544181] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:57.464 [2024-12-06 05:05:35.544190] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:57.464 [2024-12-06 05:05:35.544200] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:57.464 [2024-12-06 05:05:35.544208] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.464 [2024-12-06 05:05:35.558867] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:57.464 [2024-12-06 05:05:35.559041] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:57.464 [2024-12-06 05:05:35.559069] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:57.464 [2024-12-06 05:05:35.559079] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.464 [2024-12-06 05:05:35.569745] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:57.464 [2024-12-06 05:05:35.569914] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:57.464 [2024-12-06 05:05:35.569931] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:57.464 [2024-12-06 05:05:35.569939] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.464 [2024-12-06 05:05:35.569994] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:57.464 [2024-12-06 05:05:35.570004] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:57.464 [2024-12-06 05:05:35.570013] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:57.464 [2024-12-06 05:05:35.570021] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.464 [2024-12-06 05:05:35.570061] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:57.464 [2024-12-06 05:05:35.570071] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:57.464 [2024-12-06 05:05:35.570080] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:57.464 [2024-12-06 05:05:35.570090] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.464 [2024-12-06 05:05:35.570165] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:57.464 [2024-12-06 05:05:35.570175] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:57.464 [2024-12-06 05:05:35.570188] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:57.464 [2024-12-06 05:05:35.570197] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.464 [2024-12-06 05:05:35.570229] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:57.464 [2024-12-06 05:05:35.570240] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:17:57.464 [2024-12-06 05:05:35.570248] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:57.464 [2024-12-06 05:05:35.570256] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.464 [2024-12-06 05:05:35.570310] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:57.465 [2024-12-06 05:05:35.570320] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:57.465 [2024-12-06 05:05:35.570328] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:57.465 [2024-12-06 05:05:35.570336] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.465 [2024-12-06 05:05:35.570384] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:57.465 [2024-12-06 05:05:35.570395] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:57.465 [2024-12-06 05:05:35.570403] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:57.465 [2024-12-06 05:05:35.570419] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.465 [2024-12-06 05:05:35.570571] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 71.471 ms, result 0 00:17:57.725 00:17:57.725 00:17:57.725 05:05:35 ftl.ftl_trim -- ftl/trim.sh@106 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:17:58.296 /home/vagrant/spdk_repo/spdk/test/ftl/data: OK 00:17:58.296 05:05:36 ftl.ftl_trim -- ftl/trim.sh@108 -- # trap - SIGINT SIGTERM EXIT 00:17:58.296 05:05:36 ftl.ftl_trim -- ftl/trim.sh@109 -- # fio_kill 00:17:58.296 05:05:36 ftl.ftl_trim -- ftl/trim.sh@15 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:17:58.296 05:05:36 ftl.ftl_trim -- ftl/trim.sh@16 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:17:58.296 05:05:36 ftl.ftl_trim -- ftl/trim.sh@17 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/random_pattern 00:17:58.296 05:05:36 ftl.ftl_trim -- ftl/trim.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/data 00:17:58.296 Process with pid 85805 is not found 00:17:58.296 05:05:36 ftl.ftl_trim -- ftl/trim.sh@20 -- # killprocess 85805 00:17:58.296 05:05:36 ftl.ftl_trim -- common/autotest_common.sh@950 -- # '[' -z 85805 ']' 00:17:58.296 05:05:36 ftl.ftl_trim -- common/autotest_common.sh@954 -- # kill -0 85805 00:17:58.296 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (85805) - No such process 00:17:58.296 05:05:36 ftl.ftl_trim -- common/autotest_common.sh@977 -- # echo 'Process with pid 85805 is not found' 00:17:58.297 00:17:58.297 real 1m26.023s 00:17:58.297 user 1m39.621s 00:17:58.297 sys 0m15.736s 00:17:58.297 ************************************ 00:17:58.297 END TEST ftl_trim 00:17:58.297 ************************************ 00:17:58.297 05:05:36 ftl.ftl_trim -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:58.297 05:05:36 ftl.ftl_trim -- common/autotest_common.sh@10 -- 
# set +x 00:17:58.297 05:05:36 ftl -- ftl/ftl.sh@76 -- # run_test ftl_restore /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0 00:17:58.297 05:05:36 ftl -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:17:58.297 05:05:36 ftl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:58.297 05:05:36 ftl -- common/autotest_common.sh@10 -- # set +x 00:17:58.558 ************************************ 00:17:58.558 START TEST ftl_restore 00:17:58.558 ************************************ 00:17:58.558 05:05:36 ftl.ftl_restore -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0 00:17:58.558 * Looking for test storage... 00:17:58.558 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:17:58.558 05:05:36 ftl.ftl_restore -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:17:58.558 05:05:36 ftl.ftl_restore -- common/autotest_common.sh@1681 -- # lcov --version 00:17:58.558 05:05:36 ftl.ftl_restore -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:17:58.558 05:05:36 ftl.ftl_restore -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:17:58.558 05:05:36 ftl.ftl_restore -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:58.558 05:05:36 ftl.ftl_restore -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:58.558 05:05:36 ftl.ftl_restore -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:58.558 05:05:36 ftl.ftl_restore -- scripts/common.sh@336 -- # IFS=.-: 00:17:58.558 05:05:36 ftl.ftl_restore -- scripts/common.sh@336 -- # read -ra ver1 00:17:58.558 05:05:36 ftl.ftl_restore -- scripts/common.sh@337 -- # IFS=.-: 00:17:58.558 05:05:36 ftl.ftl_restore -- scripts/common.sh@337 -- # read -ra ver2 00:17:58.558 05:05:36 ftl.ftl_restore -- scripts/common.sh@338 -- # local 'op=<' 00:17:58.558 05:05:36 ftl.ftl_restore -- scripts/common.sh@340 -- # ver1_l=2 00:17:58.558 05:05:36 ftl.ftl_restore -- scripts/common.sh@341 -- # ver2_l=1 00:17:58.558 05:05:36 ftl.ftl_restore -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:58.558 05:05:36 ftl.ftl_restore -- scripts/common.sh@344 -- # case "$op" in 00:17:58.558 05:05:36 ftl.ftl_restore -- scripts/common.sh@345 -- # : 1 00:17:58.558 05:05:36 ftl.ftl_restore -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:58.558 05:05:36 ftl.ftl_restore -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:58.558 05:05:36 ftl.ftl_restore -- scripts/common.sh@365 -- # decimal 1 00:17:58.558 05:05:36 ftl.ftl_restore -- scripts/common.sh@353 -- # local d=1 00:17:58.558 05:05:36 ftl.ftl_restore -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:58.558 05:05:36 ftl.ftl_restore -- scripts/common.sh@355 -- # echo 1 00:17:58.558 05:05:36 ftl.ftl_restore -- scripts/common.sh@365 -- # ver1[v]=1 00:17:58.558 05:05:36 ftl.ftl_restore -- scripts/common.sh@366 -- # decimal 2 00:17:58.558 05:05:36 ftl.ftl_restore -- scripts/common.sh@353 -- # local d=2 00:17:58.558 05:05:36 ftl.ftl_restore -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:58.558 05:05:36 ftl.ftl_restore -- scripts/common.sh@355 -- # echo 2 00:17:58.558 05:05:36 ftl.ftl_restore -- scripts/common.sh@366 -- # ver2[v]=2 00:17:58.558 05:05:36 ftl.ftl_restore -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:58.558 05:05:36 ftl.ftl_restore -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:58.558 05:05:36 ftl.ftl_restore -- scripts/common.sh@368 -- # return 0 00:17:58.558 05:05:36 ftl.ftl_restore -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:58.558 05:05:36 ftl.ftl_restore -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:17:58.558 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:58.558 --rc genhtml_branch_coverage=1 00:17:58.558 --rc genhtml_function_coverage=1 00:17:58.558 --rc genhtml_legend=1 00:17:58.558 --rc geninfo_all_blocks=1 00:17:58.558 --rc geninfo_unexecuted_blocks=1 00:17:58.558 00:17:58.558 ' 00:17:58.558 05:05:36 ftl.ftl_restore -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:17:58.558 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:58.558 --rc genhtml_branch_coverage=1 00:17:58.558 --rc genhtml_function_coverage=1 00:17:58.558 --rc genhtml_legend=1 00:17:58.559 --rc geninfo_all_blocks=1 00:17:58.559 --rc geninfo_unexecuted_blocks=1 00:17:58.559 00:17:58.559 ' 00:17:58.559 05:05:36 ftl.ftl_restore -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:17:58.559 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:58.559 --rc genhtml_branch_coverage=1 00:17:58.559 --rc genhtml_function_coverage=1 00:17:58.559 --rc genhtml_legend=1 00:17:58.559 --rc geninfo_all_blocks=1 00:17:58.559 --rc geninfo_unexecuted_blocks=1 00:17:58.559 00:17:58.559 ' 00:17:58.559 05:05:36 ftl.ftl_restore -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:17:58.559 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:58.559 --rc genhtml_branch_coverage=1 00:17:58.559 --rc genhtml_function_coverage=1 00:17:58.559 --rc genhtml_legend=1 00:17:58.559 --rc geninfo_all_blocks=1 00:17:58.559 --rc geninfo_unexecuted_blocks=1 00:17:58.559 00:17:58.559 ' 00:17:58.559 05:05:36 ftl.ftl_restore -- ftl/restore.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:17:58.559 05:05:36 ftl.ftl_restore -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh 00:17:58.559 05:05:36 ftl.ftl_restore -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:17:58.559 05:05:36 ftl.ftl_restore -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:17:58.559 05:05:36 ftl.ftl_restore -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
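Note on the trace above: scripts/common.sh decides whether the installed lcov predates 2.x by splitting each version string on '.', '-' and ':' and comparing component-wise; `lt 1.15 2` returns true here, which is why the branch/function-coverage LCOV flags are exported next. A minimal standalone sketch of that comparison logic (the name version_lt is hypothetical; the real helpers are lt/cmp_versions in scripts/common.sh, and defaulting missing components to 0 is an assumption of this sketch):

    # Sketch: component-wise "less than" over dotted version strings,
    # mirroring the IFS split and loop visible in the xtrace above.
    version_lt() {
        local -a ver1 ver2
        IFS='.-:' read -ra ver1 <<< "$1"
        IFS='.-:' read -ra ver2 <<< "$2"
        local v
        for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0   # missing parts count as 0
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
        done
        return 1   # equal is not less-than
    }
    version_lt 1.15 2 && echo "lcov 1.15 predates 2.x"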
00:17:58.559 05:05:36 ftl.ftl_restore -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:17:58.559 05:05:36 ftl.ftl_restore -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:58.559 05:05:36 ftl.ftl_restore -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:17:58.559 05:05:36 ftl.ftl_restore -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:17:58.559 05:05:36 ftl.ftl_restore -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:58.559 05:05:36 ftl.ftl_restore -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:58.559 05:05:36 ftl.ftl_restore -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:17:58.559 05:05:36 ftl.ftl_restore -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:17:58.559 05:05:36 ftl.ftl_restore -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:17:58.559 05:05:36 ftl.ftl_restore -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:17:58.559 05:05:36 ftl.ftl_restore -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:17:58.559 05:05:36 ftl.ftl_restore -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:17:58.559 05:05:36 ftl.ftl_restore -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:58.559 05:05:36 ftl.ftl_restore -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:58.559 05:05:36 ftl.ftl_restore -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:17:58.559 05:05:36 ftl.ftl_restore -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:17:58.559 05:05:36 ftl.ftl_restore -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:17:58.559 05:05:36 ftl.ftl_restore -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:17:58.559 05:05:36 ftl.ftl_restore -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:17:58.559 05:05:36 ftl.ftl_restore -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:17:58.559 05:05:36 ftl.ftl_restore -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:17:58.559 05:05:36 ftl.ftl_restore -- ftl/common.sh@23 -- # spdk_ini_pid= 00:17:58.559 05:05:36 ftl.ftl_restore -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:17:58.559 05:05:36 ftl.ftl_restore -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:17:58.559 05:05:36 ftl.ftl_restore -- ftl/restore.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:58.559 05:05:36 ftl.ftl_restore -- ftl/restore.sh@13 -- # mktemp -d 00:17:58.559 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:17:58.559 05:05:36 ftl.ftl_restore -- ftl/restore.sh@13 -- # mount_dir=/tmp/tmp.vvfenxyGei 00:17:58.559 05:05:36 ftl.ftl_restore -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:17:58.559 05:05:36 ftl.ftl_restore -- ftl/restore.sh@16 -- # case $opt in 00:17:58.559 05:05:36 ftl.ftl_restore -- ftl/restore.sh@18 -- # nv_cache=0000:00:10.0 00:17:58.559 05:05:36 ftl.ftl_restore -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:17:58.559 05:05:36 ftl.ftl_restore -- ftl/restore.sh@23 -- # shift 2 00:17:58.559 05:05:36 ftl.ftl_restore -- ftl/restore.sh@24 -- # device=0000:00:11.0 00:17:58.559 05:05:36 ftl.ftl_restore -- ftl/restore.sh@25 -- # timeout=240 00:17:58.559 05:05:36 ftl.ftl_restore -- ftl/restore.sh@36 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:17:58.559 05:05:36 ftl.ftl_restore -- ftl/restore.sh@39 -- # svcpid=86153 00:17:58.559 05:05:36 ftl.ftl_restore -- ftl/restore.sh@41 -- # waitforlisten 86153 00:17:58.559 05:05:36 ftl.ftl_restore -- common/autotest_common.sh@831 -- # '[' -z 86153 ']' 00:17:58.559 05:05:36 ftl.ftl_restore -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:58.559 05:05:36 ftl.ftl_restore -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:58.559 05:05:36 ftl.ftl_restore -- ftl/restore.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:58.559 05:05:36 ftl.ftl_restore -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:58.559 05:05:36 ftl.ftl_restore -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:58.559 05:05:36 ftl.ftl_restore -- common/autotest_common.sh@10 -- # set +x 00:17:58.559 [2024-12-06 05:05:36.782357] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
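For orientation, the bring-up traced here follows the usual autotest pattern: install the cleanup trap, start spdk_tgt in the background, stash its PID (86153 in this run), and block until the RPC socket answers. A condensed sketch using the helpers referenced in the log (restore_kill is defined in restore.sh; waitforlisten and killprocess live in autotest_common.sh):

    trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT     # as set by restore.sh@36
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt &   # restore.sh@38
    svcpid=$!                                           # pid 86153 in this run
    waitforlisten "$svcpid"                             # polls /var/tmp/spdk.sock
    # ... test body drives rpc.py; teardown later calls killprocess "$svcpid"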
00:17:58.559 [2024-12-06 05:05:36.782516] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86153 ] 00:17:58.820 [2024-12-06 05:05:36.920129] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:58.820 [2024-12-06 05:05:36.970016] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:17:59.763 05:05:37 ftl.ftl_restore -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:59.763 05:05:37 ftl.ftl_restore -- common/autotest_common.sh@864 -- # return 0 00:17:59.763 05:05:37 ftl.ftl_restore -- ftl/restore.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:17:59.763 05:05:37 ftl.ftl_restore -- ftl/common.sh@54 -- # local name=nvme0 00:17:59.763 05:05:37 ftl.ftl_restore -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:17:59.763 05:05:37 ftl.ftl_restore -- ftl/common.sh@56 -- # local size=103424 00:17:59.763 05:05:37 ftl.ftl_restore -- ftl/common.sh@59 -- # local base_bdev 00:17:59.763 05:05:37 ftl.ftl_restore -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:17:59.763 05:05:37 ftl.ftl_restore -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:17:59.763 05:05:37 ftl.ftl_restore -- ftl/common.sh@62 -- # local base_size 00:18:00.024 05:05:37 ftl.ftl_restore -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:18:00.024 05:05:37 ftl.ftl_restore -- common/autotest_common.sh@1378 -- # local bdev_name=nvme0n1 00:18:00.024 05:05:37 ftl.ftl_restore -- common/autotest_common.sh@1379 -- # local bdev_info 00:18:00.024 05:05:37 ftl.ftl_restore -- common/autotest_common.sh@1380 -- # local bs 00:18:00.024 05:05:37 ftl.ftl_restore -- common/autotest_common.sh@1381 -- # local nb 00:18:00.024 05:05:37 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:18:00.024 05:05:38 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:18:00.024 { 00:18:00.024 "name": "nvme0n1", 00:18:00.024 "aliases": [ 00:18:00.024 "fd7e9b4d-efef-4da0-83bb-bbe048a6162e" 00:18:00.024 ], 00:18:00.024 "product_name": "NVMe disk", 00:18:00.024 "block_size": 4096, 00:18:00.024 "num_blocks": 1310720, 00:18:00.024 "uuid": "fd7e9b4d-efef-4da0-83bb-bbe048a6162e", 00:18:00.024 "numa_id": -1, 00:18:00.024 "assigned_rate_limits": { 00:18:00.024 "rw_ios_per_sec": 0, 00:18:00.024 "rw_mbytes_per_sec": 0, 00:18:00.024 "r_mbytes_per_sec": 0, 00:18:00.024 "w_mbytes_per_sec": 0 00:18:00.024 }, 00:18:00.024 "claimed": true, 00:18:00.024 "claim_type": "read_many_write_one", 00:18:00.024 "zoned": false, 00:18:00.024 "supported_io_types": { 00:18:00.024 "read": true, 00:18:00.024 "write": true, 00:18:00.024 "unmap": true, 00:18:00.024 "flush": true, 00:18:00.024 "reset": true, 00:18:00.024 "nvme_admin": true, 00:18:00.024 "nvme_io": true, 00:18:00.024 "nvme_io_md": false, 00:18:00.024 "write_zeroes": true, 00:18:00.024 "zcopy": false, 00:18:00.024 "get_zone_info": false, 00:18:00.024 "zone_management": false, 00:18:00.024 "zone_append": false, 00:18:00.024 "compare": true, 00:18:00.024 "compare_and_write": false, 00:18:00.024 "abort": true, 00:18:00.024 "seek_hole": false, 00:18:00.024 "seek_data": false, 00:18:00.024 "copy": true, 00:18:00.024 "nvme_iov_md": false 00:18:00.024 }, 00:18:00.024 "driver_specific": { 00:18:00.024 "nvme": [ 
00:18:00.024 { 00:18:00.024 "pci_address": "0000:00:11.0", 00:18:00.024 "trid": { 00:18:00.024 "trtype": "PCIe", 00:18:00.024 "traddr": "0000:00:11.0" 00:18:00.024 }, 00:18:00.024 "ctrlr_data": { 00:18:00.024 "cntlid": 0, 00:18:00.024 "vendor_id": "0x1b36", 00:18:00.024 "model_number": "QEMU NVMe Ctrl", 00:18:00.024 "serial_number": "12341", 00:18:00.024 "firmware_revision": "8.0.0", 00:18:00.024 "subnqn": "nqn.2019-08.org.qemu:12341", 00:18:00.024 "oacs": { 00:18:00.024 "security": 0, 00:18:00.024 "format": 1, 00:18:00.024 "firmware": 0, 00:18:00.024 "ns_manage": 1 00:18:00.024 }, 00:18:00.024 "multi_ctrlr": false, 00:18:00.024 "ana_reporting": false 00:18:00.024 }, 00:18:00.024 "vs": { 00:18:00.024 "nvme_version": "1.4" 00:18:00.024 }, 00:18:00.024 "ns_data": { 00:18:00.024 "id": 1, 00:18:00.024 "can_share": false 00:18:00.024 } 00:18:00.024 } 00:18:00.024 ], 00:18:00.024 "mp_policy": "active_passive" 00:18:00.024 } 00:18:00.024 } 00:18:00.024 ]' 00:18:00.024 05:05:38 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:18:00.024 05:05:38 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # bs=4096 00:18:00.024 05:05:38 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:18:00.285 05:05:38 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # nb=1310720 00:18:00.285 05:05:38 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bdev_size=5120 00:18:00.285 05:05:38 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # echo 5120 00:18:00.285 05:05:38 ftl.ftl_restore -- ftl/common.sh@63 -- # base_size=5120 00:18:00.285 05:05:38 ftl.ftl_restore -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:18:00.285 05:05:38 ftl.ftl_restore -- ftl/common.sh@67 -- # clear_lvols 00:18:00.285 05:05:38 ftl.ftl_restore -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:18:00.285 05:05:38 ftl.ftl_restore -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:18:00.285 05:05:38 ftl.ftl_restore -- ftl/common.sh@28 -- # stores=40238c0f-69a5-445f-af7f-17ad69dff3c6 00:18:00.285 05:05:38 ftl.ftl_restore -- ftl/common.sh@29 -- # for lvs in $stores 00:18:00.285 05:05:38 ftl.ftl_restore -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 40238c0f-69a5-445f-af7f-17ad69dff3c6 00:18:00.542 05:05:38 ftl.ftl_restore -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:18:00.799 05:05:38 ftl.ftl_restore -- ftl/common.sh@68 -- # lvs=6a3e1662-ef36-4aa4-8576-a276798167a9 00:18:00.799 05:05:38 ftl.ftl_restore -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 6a3e1662-ef36-4aa4-8576-a276798167a9 00:18:01.056 05:05:39 ftl.ftl_restore -- ftl/restore.sh@43 -- # split_bdev=684d3019-df32-4a22-aa4c-0493b6f1f865 00:18:01.056 05:05:39 ftl.ftl_restore -- ftl/restore.sh@44 -- # '[' -n 0000:00:10.0 ']' 00:18:01.056 05:05:39 ftl.ftl_restore -- ftl/restore.sh@45 -- # create_nv_cache_bdev nvc0 0000:00:10.0 684d3019-df32-4a22-aa4c-0493b6f1f865 00:18:01.056 05:05:39 ftl.ftl_restore -- ftl/common.sh@35 -- # local name=nvc0 00:18:01.056 05:05:39 ftl.ftl_restore -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:18:01.056 05:05:39 ftl.ftl_restore -- ftl/common.sh@37 -- # local base_bdev=684d3019-df32-4a22-aa4c-0493b6f1f865 00:18:01.056 05:05:39 ftl.ftl_restore -- ftl/common.sh@38 -- # local cache_size= 00:18:01.056 05:05:39 ftl.ftl_restore -- ftl/common.sh@41 -- # get_bdev_size 
684d3019-df32-4a22-aa4c-0493b6f1f865 00:18:01.056 05:05:39 ftl.ftl_restore -- common/autotest_common.sh@1378 -- # local bdev_name=684d3019-df32-4a22-aa4c-0493b6f1f865 00:18:01.056 05:05:39 ftl.ftl_restore -- common/autotest_common.sh@1379 -- # local bdev_info 00:18:01.056 05:05:39 ftl.ftl_restore -- common/autotest_common.sh@1380 -- # local bs 00:18:01.056 05:05:39 ftl.ftl_restore -- common/autotest_common.sh@1381 -- # local nb 00:18:01.056 05:05:39 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 684d3019-df32-4a22-aa4c-0493b6f1f865 00:18:01.336 05:05:39 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:18:01.336 { 00:18:01.336 "name": "684d3019-df32-4a22-aa4c-0493b6f1f865", 00:18:01.336 "aliases": [ 00:18:01.336 "lvs/nvme0n1p0" 00:18:01.336 ], 00:18:01.336 "product_name": "Logical Volume", 00:18:01.336 "block_size": 4096, 00:18:01.336 "num_blocks": 26476544, 00:18:01.336 "uuid": "684d3019-df32-4a22-aa4c-0493b6f1f865", 00:18:01.336 "assigned_rate_limits": { 00:18:01.336 "rw_ios_per_sec": 0, 00:18:01.336 "rw_mbytes_per_sec": 0, 00:18:01.336 "r_mbytes_per_sec": 0, 00:18:01.336 "w_mbytes_per_sec": 0 00:18:01.336 }, 00:18:01.336 "claimed": false, 00:18:01.336 "zoned": false, 00:18:01.336 "supported_io_types": { 00:18:01.336 "read": true, 00:18:01.336 "write": true, 00:18:01.336 "unmap": true, 00:18:01.336 "flush": false, 00:18:01.336 "reset": true, 00:18:01.336 "nvme_admin": false, 00:18:01.336 "nvme_io": false, 00:18:01.336 "nvme_io_md": false, 00:18:01.336 "write_zeroes": true, 00:18:01.336 "zcopy": false, 00:18:01.336 "get_zone_info": false, 00:18:01.336 "zone_management": false, 00:18:01.336 "zone_append": false, 00:18:01.336 "compare": false, 00:18:01.336 "compare_and_write": false, 00:18:01.336 "abort": false, 00:18:01.336 "seek_hole": true, 00:18:01.336 "seek_data": true, 00:18:01.336 "copy": false, 00:18:01.336 "nvme_iov_md": false 00:18:01.336 }, 00:18:01.336 "driver_specific": { 00:18:01.336 "lvol": { 00:18:01.336 "lvol_store_uuid": "6a3e1662-ef36-4aa4-8576-a276798167a9", 00:18:01.336 "base_bdev": "nvme0n1", 00:18:01.336 "thin_provision": true, 00:18:01.336 "num_allocated_clusters": 0, 00:18:01.336 "snapshot": false, 00:18:01.336 "clone": false, 00:18:01.336 "esnap_clone": false 00:18:01.336 } 00:18:01.336 } 00:18:01.336 } 00:18:01.336 ]' 00:18:01.336 05:05:39 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:18:01.336 05:05:39 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # bs=4096 00:18:01.336 05:05:39 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:18:01.336 05:05:39 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # nb=26476544 00:18:01.336 05:05:39 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:18:01.336 05:05:39 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # echo 103424 00:18:01.336 05:05:39 ftl.ftl_restore -- ftl/common.sh@41 -- # local base_size=5171 00:18:01.336 05:05:39 ftl.ftl_restore -- ftl/common.sh@44 -- # local nvc_bdev 00:18:01.336 05:05:39 ftl.ftl_restore -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:18:01.594 05:05:39 ftl.ftl_restore -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:18:01.594 05:05:39 ftl.ftl_restore -- ftl/common.sh@47 -- # [[ -z '' ]] 00:18:01.594 05:05:39 ftl.ftl_restore -- ftl/common.sh@48 -- # get_bdev_size 684d3019-df32-4a22-aa4c-0493b6f1f865 00:18:01.594 05:05:39 
ftl.ftl_restore -- common/autotest_common.sh@1378 -- # local bdev_name=684d3019-df32-4a22-aa4c-0493b6f1f865 00:18:01.594 05:05:39 ftl.ftl_restore -- common/autotest_common.sh@1379 -- # local bdev_info 00:18:01.594 05:05:39 ftl.ftl_restore -- common/autotest_common.sh@1380 -- # local bs 00:18:01.594 05:05:39 ftl.ftl_restore -- common/autotest_common.sh@1381 -- # local nb 00:18:01.594 05:05:39 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 684d3019-df32-4a22-aa4c-0493b6f1f865 00:18:01.851 05:05:39 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:18:01.851 { 00:18:01.851 "name": "684d3019-df32-4a22-aa4c-0493b6f1f865", 00:18:01.851 "aliases": [ 00:18:01.851 "lvs/nvme0n1p0" 00:18:01.851 ], 00:18:01.851 "product_name": "Logical Volume", 00:18:01.851 "block_size": 4096, 00:18:01.851 "num_blocks": 26476544, 00:18:01.851 "uuid": "684d3019-df32-4a22-aa4c-0493b6f1f865", 00:18:01.851 "assigned_rate_limits": { 00:18:01.851 "rw_ios_per_sec": 0, 00:18:01.851 "rw_mbytes_per_sec": 0, 00:18:01.851 "r_mbytes_per_sec": 0, 00:18:01.851 "w_mbytes_per_sec": 0 00:18:01.851 }, 00:18:01.851 "claimed": false, 00:18:01.851 "zoned": false, 00:18:01.851 "supported_io_types": { 00:18:01.851 "read": true, 00:18:01.851 "write": true, 00:18:01.851 "unmap": true, 00:18:01.851 "flush": false, 00:18:01.851 "reset": true, 00:18:01.851 "nvme_admin": false, 00:18:01.851 "nvme_io": false, 00:18:01.851 "nvme_io_md": false, 00:18:01.851 "write_zeroes": true, 00:18:01.851 "zcopy": false, 00:18:01.851 "get_zone_info": false, 00:18:01.851 "zone_management": false, 00:18:01.851 "zone_append": false, 00:18:01.851 "compare": false, 00:18:01.851 "compare_and_write": false, 00:18:01.851 "abort": false, 00:18:01.851 "seek_hole": true, 00:18:01.851 "seek_data": true, 00:18:01.851 "copy": false, 00:18:01.851 "nvme_iov_md": false 00:18:01.851 }, 00:18:01.851 "driver_specific": { 00:18:01.851 "lvol": { 00:18:01.851 "lvol_store_uuid": "6a3e1662-ef36-4aa4-8576-a276798167a9", 00:18:01.851 "base_bdev": "nvme0n1", 00:18:01.851 "thin_provision": true, 00:18:01.851 "num_allocated_clusters": 0, 00:18:01.851 "snapshot": false, 00:18:01.851 "clone": false, 00:18:01.851 "esnap_clone": false 00:18:01.851 } 00:18:01.851 } 00:18:01.851 } 00:18:01.851 ]' 00:18:01.851 05:05:39 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:18:01.851 05:05:39 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # bs=4096 00:18:01.851 05:05:39 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:18:01.851 05:05:39 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # nb=26476544 00:18:01.851 05:05:39 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:18:01.851 05:05:39 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # echo 103424 00:18:01.851 05:05:39 ftl.ftl_restore -- ftl/common.sh@48 -- # cache_size=5171 00:18:01.851 05:05:39 ftl.ftl_restore -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:18:02.109 05:05:40 ftl.ftl_restore -- ftl/restore.sh@45 -- # nvc_bdev=nvc0n1p0 00:18:02.109 05:05:40 ftl.ftl_restore -- ftl/restore.sh@48 -- # get_bdev_size 684d3019-df32-4a22-aa4c-0493b6f1f865 00:18:02.109 05:05:40 ftl.ftl_restore -- common/autotest_common.sh@1378 -- # local bdev_name=684d3019-df32-4a22-aa4c-0493b6f1f865 00:18:02.109 05:05:40 ftl.ftl_restore -- common/autotest_common.sh@1379 -- # local bdev_info 00:18:02.109 05:05:40 ftl.ftl_restore -- 
common/autotest_common.sh@1380 -- # local bs 00:18:02.109 05:05:40 ftl.ftl_restore -- common/autotest_common.sh@1381 -- # local nb 00:18:02.109 05:05:40 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 684d3019-df32-4a22-aa4c-0493b6f1f865 00:18:02.109 05:05:40 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:18:02.109 { 00:18:02.109 "name": "684d3019-df32-4a22-aa4c-0493b6f1f865", 00:18:02.109 "aliases": [ 00:18:02.109 "lvs/nvme0n1p0" 00:18:02.109 ], 00:18:02.109 "product_name": "Logical Volume", 00:18:02.109 "block_size": 4096, 00:18:02.109 "num_blocks": 26476544, 00:18:02.109 "uuid": "684d3019-df32-4a22-aa4c-0493b6f1f865", 00:18:02.109 "assigned_rate_limits": { 00:18:02.109 "rw_ios_per_sec": 0, 00:18:02.109 "rw_mbytes_per_sec": 0, 00:18:02.109 "r_mbytes_per_sec": 0, 00:18:02.109 "w_mbytes_per_sec": 0 00:18:02.109 }, 00:18:02.109 "claimed": false, 00:18:02.109 "zoned": false, 00:18:02.109 "supported_io_types": { 00:18:02.109 "read": true, 00:18:02.109 "write": true, 00:18:02.109 "unmap": true, 00:18:02.109 "flush": false, 00:18:02.109 "reset": true, 00:18:02.109 "nvme_admin": false, 00:18:02.109 "nvme_io": false, 00:18:02.109 "nvme_io_md": false, 00:18:02.109 "write_zeroes": true, 00:18:02.109 "zcopy": false, 00:18:02.109 "get_zone_info": false, 00:18:02.109 "zone_management": false, 00:18:02.109 "zone_append": false, 00:18:02.109 "compare": false, 00:18:02.109 "compare_and_write": false, 00:18:02.109 "abort": false, 00:18:02.109 "seek_hole": true, 00:18:02.109 "seek_data": true, 00:18:02.109 "copy": false, 00:18:02.109 "nvme_iov_md": false 00:18:02.109 }, 00:18:02.109 "driver_specific": { 00:18:02.109 "lvol": { 00:18:02.109 "lvol_store_uuid": "6a3e1662-ef36-4aa4-8576-a276798167a9", 00:18:02.109 "base_bdev": "nvme0n1", 00:18:02.109 "thin_provision": true, 00:18:02.109 "num_allocated_clusters": 0, 00:18:02.109 "snapshot": false, 00:18:02.109 "clone": false, 00:18:02.109 "esnap_clone": false 00:18:02.109 } 00:18:02.109 } 00:18:02.109 } 00:18:02.109 ]' 00:18:02.109 05:05:40 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:18:02.368 05:05:40 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # bs=4096 00:18:02.368 05:05:40 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:18:02.368 05:05:40 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # nb=26476544 00:18:02.368 05:05:40 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:18:02.368 05:05:40 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # echo 103424 00:18:02.368 05:05:40 ftl.ftl_restore -- ftl/restore.sh@48 -- # l2p_dram_size_mb=10 00:18:02.368 05:05:40 ftl.ftl_restore -- ftl/restore.sh@49 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d 684d3019-df32-4a22-aa4c-0493b6f1f865 --l2p_dram_limit 10' 00:18:02.368 05:05:40 ftl.ftl_restore -- ftl/restore.sh@51 -- # '[' -n '' ']' 00:18:02.368 05:05:40 ftl.ftl_restore -- ftl/restore.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:18:02.368 05:05:40 ftl.ftl_restore -- ftl/restore.sh@52 -- # ftl_construct_args+=' -c nvc0n1p0' 00:18:02.368 05:05:40 ftl.ftl_restore -- ftl/restore.sh@54 -- # '[' '' -eq 1 ']' 00:18:02.368 /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh: line 54: [: : integer expression expected 00:18:02.368 05:05:40 ftl.ftl_restore -- ftl/restore.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 684d3019-df32-4a22-aa4c-0493b6f1f865 --l2p_dram_limit 10 -c nvc0n1p0 00:18:02.368 
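Taken together, the calls above assemble the device under test: the base NVMe at 0000:00:11.0 becomes nvme0, a thin-provisioned 103424 MiB lvol is carved from it, the NVMe at 0000:00:10.0 becomes the cache controller, a 5171 MiB split of nvc0n1 serves as the write-buffer cache, and bdev_ftl_create ties them into ftl0 with a 10 MiB L2P DRAM limit. (The "line 54: [: : integer expression expected" complaint is restore.sh numerically testing an option that was never set, effectively `[ '' -eq 1 ]`; it is apparently benign in this run, since the script proceeds straight to bdev_ftl_create.) The same construction, condensed to the rpc.py calls actually traced (a sketch; the UUIDs are the ones from this run):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0
    $rpc bdev_lvol_create_lvstore nvme0n1 lvs
    $rpc bdev_lvol_create nvme0n1p0 103424 -t -u 6a3e1662-ef36-4aa4-8576-a276798167a9
    $rpc bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0
    $rpc bdev_split_create nvc0n1 -s 5171 1
    $rpc -t 240 bdev_ftl_create -b ftl0 -d 684d3019-df32-4a22-aa4c-0493b6f1f865 \
        --l2p_dram_limit 10 -c nvc0n1p0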
[2024-12-06 05:05:40.551428] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:02.368 [2024-12-06 05:05:40.551552] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:18:02.368 [2024-12-06 05:05:40.551568] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:18:02.368 [2024-12-06 05:05:40.551576] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:02.368 [2024-12-06 05:05:40.551624] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:02.368 [2024-12-06 05:05:40.551637] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:02.368 [2024-12-06 05:05:40.551645] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:18:02.368 [2024-12-06 05:05:40.551654] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:02.368 [2024-12-06 05:05:40.551686] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:18:02.368 [2024-12-06 05:05:40.551885] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:18:02.368 [2024-12-06 05:05:40.551897] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:02.368 [2024-12-06 05:05:40.551905] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:02.368 [2024-12-06 05:05:40.551918] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.215 ms 00:18:02.368 [2024-12-06 05:05:40.551925] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:02.368 [2024-12-06 05:05:40.551974] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID d8aca738-c2cb-42fc-8bd7-b1e6d40ace6a 00:18:02.368 [2024-12-06 05:05:40.552942] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:02.368 [2024-12-06 05:05:40.552962] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:18:02.368 [2024-12-06 05:05:40.552971] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:18:02.368 [2024-12-06 05:05:40.552977] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:02.368 [2024-12-06 05:05:40.557741] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:02.368 [2024-12-06 05:05:40.557842] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:02.368 [2024-12-06 05:05:40.557856] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.728 ms 00:18:02.368 [2024-12-06 05:05:40.557866] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:02.368 [2024-12-06 05:05:40.557927] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:02.368 [2024-12-06 05:05:40.557933] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:02.368 [2024-12-06 05:05:40.557941] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:18:02.368 [2024-12-06 05:05:40.557949] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:02.368 [2024-12-06 05:05:40.557987] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:02.368 [2024-12-06 05:05:40.557995] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:18:02.368 [2024-12-06 05:05:40.558006] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:18:02.368 [2024-12-06 05:05:40.558014] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:02.368 [2024-12-06 05:05:40.558032] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:18:02.368 [2024-12-06 05:05:40.559275] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:02.368 [2024-12-06 05:05:40.559304] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:02.368 [2024-12-06 05:05:40.559313] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.249 ms 00:18:02.368 [2024-12-06 05:05:40.559320] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:02.368 [2024-12-06 05:05:40.559344] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:02.368 [2024-12-06 05:05:40.559353] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:18:02.368 [2024-12-06 05:05:40.559360] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:18:02.368 [2024-12-06 05:05:40.559369] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:02.368 [2024-12-06 05:05:40.559382] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:18:02.368 [2024-12-06 05:05:40.559492] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:18:02.368 [2024-12-06 05:05:40.559502] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:18:02.368 [2024-12-06 05:05:40.559514] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:18:02.368 [2024-12-06 05:05:40.559522] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:18:02.368 [2024-12-06 05:05:40.559531] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:18:02.368 [2024-12-06 05:05:40.559537] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:18:02.368 [2024-12-06 05:05:40.559547] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:18:02.368 [2024-12-06 05:05:40.559553] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:18:02.368 [2024-12-06 05:05:40.559560] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:18:02.368 [2024-12-06 05:05:40.559567] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:02.368 [2024-12-06 05:05:40.559574] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:18:02.368 [2024-12-06 05:05:40.559580] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.186 ms 00:18:02.368 [2024-12-06 05:05:40.559586] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:02.368 [2024-12-06 05:05:40.559650] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:02.368 [2024-12-06 05:05:40.559659] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:18:02.368 [2024-12-06 05:05:40.559682] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:18:02.368 [2024-12-06 05:05:40.559689] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:02.368 [2024-12-06 05:05:40.559764] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:18:02.368 [2024-12-06 05:05:40.559774] ftl_layout.c: 130:dump_region: 
*NOTICE*: [FTL][ftl0] Region sb 00:18:02.368 [2024-12-06 05:05:40.559781] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:02.368 [2024-12-06 05:05:40.559789] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:02.368 [2024-12-06 05:05:40.559794] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:18:02.368 [2024-12-06 05:05:40.559801] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:18:02.368 [2024-12-06 05:05:40.559806] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:18:02.368 [2024-12-06 05:05:40.559812] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:18:02.368 [2024-12-06 05:05:40.559818] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:18:02.368 [2024-12-06 05:05:40.559826] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:02.368 [2024-12-06 05:05:40.559831] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:18:02.368 [2024-12-06 05:05:40.559838] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:18:02.368 [2024-12-06 05:05:40.559843] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:02.368 [2024-12-06 05:05:40.559852] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:18:02.368 [2024-12-06 05:05:40.559857] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:18:02.368 [2024-12-06 05:05:40.559864] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:02.368 [2024-12-06 05:05:40.559869] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:18:02.368 [2024-12-06 05:05:40.559875] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:18:02.368 [2024-12-06 05:05:40.559880] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:02.368 [2024-12-06 05:05:40.559888] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:18:02.368 [2024-12-06 05:05:40.559893] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:18:02.368 [2024-12-06 05:05:40.559899] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:02.368 [2024-12-06 05:05:40.559904] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:18:02.369 [2024-12-06 05:05:40.559910] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:18:02.369 [2024-12-06 05:05:40.559915] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:02.369 [2024-12-06 05:05:40.559921] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:18:02.369 [2024-12-06 05:05:40.559927] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:18:02.369 [2024-12-06 05:05:40.559934] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:02.369 [2024-12-06 05:05:40.559940] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:18:02.369 [2024-12-06 05:05:40.559949] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:18:02.369 [2024-12-06 05:05:40.559954] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:02.369 [2024-12-06 05:05:40.559962] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:18:02.369 [2024-12-06 05:05:40.559968] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:18:02.369 [2024-12-06 05:05:40.559974] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:02.369 [2024-12-06 05:05:40.559980] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:18:02.369 [2024-12-06 05:05:40.559988] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:18:02.369 [2024-12-06 05:05:40.559994] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:02.369 [2024-12-06 05:05:40.560001] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:18:02.369 [2024-12-06 05:05:40.560007] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:18:02.369 [2024-12-06 05:05:40.560014] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:02.369 [2024-12-06 05:05:40.560020] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:18:02.369 [2024-12-06 05:05:40.560028] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:18:02.369 [2024-12-06 05:05:40.560034] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:02.369 [2024-12-06 05:05:40.560041] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:18:02.369 [2024-12-06 05:05:40.560052] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:18:02.369 [2024-12-06 05:05:40.560064] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:02.369 [2024-12-06 05:05:40.560071] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:02.369 [2024-12-06 05:05:40.560079] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:18:02.369 [2024-12-06 05:05:40.560085] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:18:02.369 [2024-12-06 05:05:40.560092] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:18:02.369 [2024-12-06 05:05:40.560098] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:18:02.369 [2024-12-06 05:05:40.560105] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:18:02.369 [2024-12-06 05:05:40.560111] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:18:02.369 [2024-12-06 05:05:40.560121] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:18:02.369 [2024-12-06 05:05:40.560129] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:02.369 [2024-12-06 05:05:40.560138] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:18:02.369 [2024-12-06 05:05:40.560144] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:18:02.369 [2024-12-06 05:05:40.560153] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:18:02.369 [2024-12-06 05:05:40.560159] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:18:02.369 [2024-12-06 05:05:40.560166] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:18:02.369 [2024-12-06 05:05:40.560173] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 
blk_offs:0x6120 blk_sz:0x800 00:18:02.369 [2024-12-06 05:05:40.560182] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:18:02.369 [2024-12-06 05:05:40.560188] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:18:02.369 [2024-12-06 05:05:40.560196] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:18:02.369 [2024-12-06 05:05:40.560202] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:18:02.369 [2024-12-06 05:05:40.560210] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:18:02.369 [2024-12-06 05:05:40.560216] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:18:02.369 [2024-12-06 05:05:40.560223] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:18:02.369 [2024-12-06 05:05:40.560230] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:18:02.369 [2024-12-06 05:05:40.560238] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:18:02.369 [2024-12-06 05:05:40.560246] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:02.369 [2024-12-06 05:05:40.560254] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:18:02.369 [2024-12-06 05:05:40.560260] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:18:02.369 [2024-12-06 05:05:40.560267] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:18:02.369 [2024-12-06 05:05:40.560273] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:18:02.369 [2024-12-06 05:05:40.560281] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:02.369 [2024-12-06 05:05:40.560286] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:18:02.369 [2024-12-06 05:05:40.560295] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.567 ms 00:18:02.369 [2024-12-06 05:05:40.560300] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:02.369 [2024-12-06 05:05:40.560337] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
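The layout dump is internally consistent with the sizes chosen earlier: the 103424 MiB base capacity is exactly the lvol's 26476544 blocks of 4 KiB; 20971520 L2P entries at 4 bytes each account for the 80.00 MiB l2p region; and those entries map 80 GiB of user-addressable space. The --l2p_dram_limit 10 passed at creation caps how much of that table stays resident, which is why the log later reports an l2p maximum resident size of 9 (of 10) MiB. As a quick shell check:

    echo $(( 26476544 * 4096 / 1048576 ))     # 103424 -> base capacity, MiB
    echo $(( 20971520 * 4 / 1048576 ))        # 80     -> l2p region size, MiB
    echo $(( 20971520 * 4096 / 1073741824 ))  # 80     -> addressable space, GiB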
00:18:02.369 [2024-12-06 05:05:40.560345] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:18:05.780 [2024-12-06 05:05:43.247770] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:05.780 [2024-12-06 05:05:43.247837] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:18:05.780 [2024-12-06 05:05:43.247857] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2687.420 ms 00:18:05.780 [2024-12-06 05:05:43.247867] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:05.780 [2024-12-06 05:05:43.257083] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:05.780 [2024-12-06 05:05:43.257256] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:05.781 [2024-12-06 05:05:43.257279] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.129 ms 00:18:05.781 [2024-12-06 05:05:43.257292] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:05.781 [2024-12-06 05:05:43.257383] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:05.781 [2024-12-06 05:05:43.257394] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:18:05.781 [2024-12-06 05:05:43.257408] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 00:18:05.781 [2024-12-06 05:05:43.257415] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:05.781 [2024-12-06 05:05:43.266059] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:05.781 [2024-12-06 05:05:43.266097] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:05.781 [2024-12-06 05:05:43.266110] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.595 ms 00:18:05.781 [2024-12-06 05:05:43.266117] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:05.781 [2024-12-06 05:05:43.266145] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:05.781 [2024-12-06 05:05:43.266153] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:05.781 [2024-12-06 05:05:43.266166] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:18:05.781 [2024-12-06 05:05:43.266173] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:05.781 [2024-12-06 05:05:43.266551] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:05.781 [2024-12-06 05:05:43.266567] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:05.781 [2024-12-06 05:05:43.266578] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.343 ms 00:18:05.781 [2024-12-06 05:05:43.266586] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:05.781 [2024-12-06 05:05:43.266724] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:05.781 [2024-12-06 05:05:43.266735] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:05.781 [2024-12-06 05:05:43.266747] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.115 ms 00:18:05.781 [2024-12-06 05:05:43.266758] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:05.781 [2024-12-06 05:05:43.286973] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:05.781 [2024-12-06 05:05:43.287219] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:05.781 [2024-12-06 
05:05:43.287256] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.186 ms 00:18:05.781 [2024-12-06 05:05:43.287272] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:05.781 [2024-12-06 05:05:43.296681] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:18:05.781 [2024-12-06 05:05:43.299627] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:05.781 [2024-12-06 05:05:43.299680] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:18:05.781 [2024-12-06 05:05:43.299691] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.206 ms 00:18:05.781 [2024-12-06 05:05:43.299701] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:05.781 [2024-12-06 05:05:43.374151] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:05.781 [2024-12-06 05:05:43.374214] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:18:05.781 [2024-12-06 05:05:43.374229] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 74.422 ms 00:18:05.781 [2024-12-06 05:05:43.374243] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:05.781 [2024-12-06 05:05:43.374441] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:05.781 [2024-12-06 05:05:43.374455] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:18:05.781 [2024-12-06 05:05:43.374465] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.147 ms 00:18:05.781 [2024-12-06 05:05:43.374474] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:05.781 [2024-12-06 05:05:43.379375] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:05.781 [2024-12-06 05:05:43.379423] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:18:05.781 [2024-12-06 05:05:43.379434] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.880 ms 00:18:05.781 [2024-12-06 05:05:43.379450] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:05.781 [2024-12-06 05:05:43.383456] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:05.781 [2024-12-06 05:05:43.383503] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:18:05.781 [2024-12-06 05:05:43.383514] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.963 ms 00:18:05.781 [2024-12-06 05:05:43.383523] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:05.781 [2024-12-06 05:05:43.383877] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:05.781 [2024-12-06 05:05:43.383890] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:18:05.781 [2024-12-06 05:05:43.383900] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.316 ms 00:18:05.781 [2024-12-06 05:05:43.383911] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:05.781 [2024-12-06 05:05:43.417272] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:05.781 [2024-12-06 05:05:43.417445] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:18:05.781 [2024-12-06 05:05:43.417464] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.341 ms 00:18:05.781 [2024-12-06 05:05:43.417474] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:05.781 [2024-12-06 05:05:43.423304] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:05.781 [2024-12-06 05:05:43.423350] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map
00:18:05.781 [2024-12-06 05:05:43.423362] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.732 ms
00:18:05.781 [2024-12-06 05:05:43.423372] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:05.781 [2024-12-06 05:05:43.428092] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:05.781 [2024-12-06 05:05:43.428134] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log
00:18:05.781 [2024-12-06 05:05:43.428144] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.680 ms
00:18:05.781 [2024-12-06 05:05:43.428154] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:05.781 [2024-12-06 05:05:43.433503] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:05.781 [2024-12-06 05:05:43.433547] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state
00:18:05.781 [2024-12-06 05:05:43.433557] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.310 ms
00:18:05.781 [2024-12-06 05:05:43.433569] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:05.781 [2024-12-06 05:05:43.434540] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:05.781 [2024-12-06 05:05:43.434694] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller
00:18:05.781 [2024-12-06 05:05:43.434733] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.024 ms
00:18:05.781 [2024-12-06 05:05:43.434776] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:05.781 [2024-12-06 05:05:43.435021] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:05.781 [2024-12-06 05:05:43.435061] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization
00:18:05.781 [2024-12-06 05:05:43.435087] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.099 ms
00:18:05.782 [2024-12-06 05:05:43.435115] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:05.782 [2024-12-06 05:05:43.437504] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2884.879 ms, result 0
00:18:05.782 {
00:18:05.782 "name": "ftl0",
00:18:05.782 "uuid": "d8aca738-c2cb-42fc-8bd7-b1e6d40ace6a"
00:18:05.782 }
00:18:05.782 05:05:43 ftl.ftl_restore -- ftl/restore.sh@61 -- # echo '{"subsystems": ['
00:18:05.782 05:05:43 ftl.ftl_restore -- ftl/restore.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev
00:18:05.782 05:05:43 ftl.ftl_restore -- ftl/restore.sh@63 -- # echo ']}'
00:18:05.782 05:05:43 ftl.ftl_restore -- ftl/restore.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0
00:18:05.782 [2024-12-06 05:05:43.875215] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:05.782 [2024-12-06 05:05:43.875442] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel
00:18:05.782 [2024-12-06 05:05:43.875470] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms
00:18:05.782 [2024-12-06 05:05:43.875479] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:05.782 [2024-12-06 05:05:43.875513] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:18:05.782
[2024-12-06 05:05:43.876281] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:05.782 [2024-12-06 05:05:43.876328] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:18:05.782 [2024-12-06 05:05:43.876340] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.749 ms 00:18:05.782 [2024-12-06 05:05:43.876352] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:05.782 [2024-12-06 05:05:43.876620] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:05.782 [2024-12-06 05:05:43.876635] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:18:05.782 [2024-12-06 05:05:43.876645] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.243 ms 00:18:05.782 [2024-12-06 05:05:43.876657] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:05.782 [2024-12-06 05:05:43.880225] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:05.782 [2024-12-06 05:05:43.880257] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:18:05.782 [2024-12-06 05:05:43.880266] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.532 ms 00:18:05.782 [2024-12-06 05:05:43.880276] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:05.782 [2024-12-06 05:05:43.886532] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:05.782 [2024-12-06 05:05:43.886713] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:18:05.782 [2024-12-06 05:05:43.886733] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.237 ms 00:18:05.782 [2024-12-06 05:05:43.886744] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:05.782 [2024-12-06 05:05:43.889518] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:05.782 [2024-12-06 05:05:43.889577] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:18:05.782 [2024-12-06 05:05:43.889587] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.681 ms 00:18:05.782 [2024-12-06 05:05:43.889596] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:05.782 [2024-12-06 05:05:43.896274] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:05.782 [2024-12-06 05:05:43.896334] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:18:05.782 [2024-12-06 05:05:43.896346] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.632 ms 00:18:05.782 [2024-12-06 05:05:43.896356] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:05.782 [2024-12-06 05:05:43.896489] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:05.782 [2024-12-06 05:05:43.896501] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:18:05.782 [2024-12-06 05:05:43.896511] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.087 ms 00:18:05.782 [2024-12-06 05:05:43.896520] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:05.782 [2024-12-06 05:05:43.899516] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:05.782 [2024-12-06 05:05:43.899572] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:18:05.782 [2024-12-06 05:05:43.899581] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.975 ms 00:18:05.782 [2024-12-06 05:05:43.899591] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:05.782 [2024-12-06 05:05:43.902013] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:05.782 [2024-12-06 05:05:43.902069] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:18:05.782 [2024-12-06 05:05:43.902080] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.376 ms 00:18:05.782 [2024-12-06 05:05:43.902090] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:05.782 [2024-12-06 05:05:43.904394] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:05.782 [2024-12-06 05:05:43.904447] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:18:05.782 [2024-12-06 05:05:43.904457] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.258 ms 00:18:05.782 [2024-12-06 05:05:43.904466] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:05.782 [2024-12-06 05:05:43.906618] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:05.782 [2024-12-06 05:05:43.906698] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:18:05.782 [2024-12-06 05:05:43.906717] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.080 ms 00:18:05.782 [2024-12-06 05:05:43.906730] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:05.782 [2024-12-06 05:05:43.906776] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:18:05.782 [2024-12-06 05:05:43.906799] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:18:05.782 [2024-12-06 05:05:43.906810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:18:05.782 [2024-12-06 05:05:43.906820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:18:05.782 [2024-12-06 05:05:43.906828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:18:05.782 [2024-12-06 05:05:43.906842] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:18:05.782 [2024-12-06 05:05:43.906874] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:18:05.782 [2024-12-06 05:05:43.906884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:18:05.782 [2024-12-06 05:05:43.906892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:18:05.782 [2024-12-06 05:05:43.906903] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:18:05.782 [2024-12-06 05:05:43.906911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:18:05.782 [2024-12-06 05:05:43.906921] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:18:05.782 [2024-12-06 05:05:43.906929] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:18:05.782 [2024-12-06 05:05:43.906938] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:18:05.782 [2024-12-06 05:05:43.906946] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:18:05.783 [2024-12-06 05:05:43.906957] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:18:05.783 [2024-12-06 05:05:43.906964] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:18:05.783 [2024-12-06 05:05:43.906973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:18:05.783 [2024-12-06 05:05:43.906981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:18:05.783 [2024-12-06 05:05:43.906990] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:18:05.783 [2024-12-06 05:05:43.906997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:18:05.783 [2024-12-06 05:05:43.907009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:18:05.783 [2024-12-06 05:05:43.907017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:18:05.783 [2024-12-06 05:05:43.907029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:18:05.783 [2024-12-06 05:05:43.907036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:18:05.783 [2024-12-06 05:05:43.907046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:18:05.783 [2024-12-06 05:05:43.907053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:18:05.783 [2024-12-06 05:05:43.907064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:18:05.783 [2024-12-06 05:05:43.907072] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:18:05.783 [2024-12-06 05:05:43.907089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:18:05.783 [2024-12-06 05:05:43.907097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:18:05.783 [2024-12-06 05:05:43.907106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:18:05.783 [2024-12-06 05:05:43.907115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:18:05.783 [2024-12-06 05:05:43.907125] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:18:05.783 [2024-12-06 05:05:43.907133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:18:05.783 [2024-12-06 05:05:43.907143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:18:05.783 [2024-12-06 05:05:43.907151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:18:05.783 [2024-12-06 05:05:43.907162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:18:05.783 [2024-12-06 05:05:43.907170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:18:05.783 [2024-12-06 05:05:43.907181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:18:05.783 
[2024-12-06 05:05:43.907188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:18:05.783 [2024-12-06 05:05:43.907198] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:18:05.783 [2024-12-06 05:05:43.907205] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:18:05.783 [2024-12-06 05:05:43.907215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:18:05.783 [2024-12-06 05:05:43.907223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:18:05.783 [2024-12-06 05:05:43.907233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:18:05.783 [2024-12-06 05:05:43.907241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:18:05.783 [2024-12-06 05:05:43.907250] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:18:05.783 [2024-12-06 05:05:43.907258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:18:05.783 [2024-12-06 05:05:43.907268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:18:05.783 [2024-12-06 05:05:43.907275] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:18:05.783 [2024-12-06 05:05:43.907286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:18:05.783 [2024-12-06 05:05:43.907294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:18:05.783 [2024-12-06 05:05:43.907305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:18:05.783 [2024-12-06 05:05:43.907313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:18:05.783 [2024-12-06 05:05:43.907324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:18:05.783 [2024-12-06 05:05:43.907332] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:18:05.783 [2024-12-06 05:05:43.907341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:18:05.783 [2024-12-06 05:05:43.907348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:18:05.783 [2024-12-06 05:05:43.907358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:18:05.783 [2024-12-06 05:05:43.907365] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:18:05.783 [2024-12-06 05:05:43.907375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:18:05.783 [2024-12-06 05:05:43.907382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:18:05.783 [2024-12-06 05:05:43.907391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:18:05.783 [2024-12-06 05:05:43.907399] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 
state: free 00:18:05.783 [2024-12-06 05:05:43.907408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:18:05.783 [2024-12-06 05:05:43.907416] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:18:05.783 [2024-12-06 05:05:43.907427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:18:05.783 [2024-12-06 05:05:43.907434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:18:05.783 [2024-12-06 05:05:43.907446] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:18:05.783 [2024-12-06 05:05:43.907454] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:18:05.783 [2024-12-06 05:05:43.907463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:18:05.783 [2024-12-06 05:05:43.907471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:18:05.783 [2024-12-06 05:05:43.907480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:18:05.783 [2024-12-06 05:05:43.907488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:18:05.783 [2024-12-06 05:05:43.907497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:18:05.783 [2024-12-06 05:05:43.907505] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:18:05.783 [2024-12-06 05:05:43.907516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:18:05.783 [2024-12-06 05:05:43.907523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:18:05.783 [2024-12-06 05:05:43.907533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:18:05.783 [2024-12-06 05:05:43.907540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:18:05.783 [2024-12-06 05:05:43.907549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:18:05.783 [2024-12-06 05:05:43.907556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:18:05.783 [2024-12-06 05:05:43.907566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:18:05.783 [2024-12-06 05:05:43.907573] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:18:05.783 [2024-12-06 05:05:43.907584] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:18:05.783 [2024-12-06 05:05:43.907591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:18:05.783 [2024-12-06 05:05:43.907602] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:18:05.783 [2024-12-06 05:05:43.907610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:18:05.783 [2024-12-06 05:05:43.907619] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 
0 / 261120 wr_cnt: 0 state: free 00:18:05.783 [2024-12-06 05:05:43.907627] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:18:05.783 [2024-12-06 05:05:43.907636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:18:05.784 [2024-12-06 05:05:43.907643] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:18:05.784 [2024-12-06 05:05:43.907653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:18:05.784 [2024-12-06 05:05:43.907661] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:18:05.784 [2024-12-06 05:05:43.907685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:18:05.784 [2024-12-06 05:05:43.907693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:18:05.784 [2024-12-06 05:05:43.907703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:18:05.784 [2024-12-06 05:05:43.907711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:18:05.784 [2024-12-06 05:05:43.907722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:18:05.784 [2024-12-06 05:05:43.907730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:18:05.784 [2024-12-06 05:05:43.907751] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:18:05.784 [2024-12-06 05:05:43.907760] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: d8aca738-c2cb-42fc-8bd7-b1e6d40ace6a 00:18:05.784 [2024-12-06 05:05:43.907772] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:18:05.784 [2024-12-06 05:05:43.907780] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:18:05.784 [2024-12-06 05:05:43.907791] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:18:05.784 [2024-12-06 05:05:43.907799] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:18:05.784 [2024-12-06 05:05:43.907808] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:18:05.784 [2024-12-06 05:05:43.907816] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:18:05.784 [2024-12-06 05:05:43.907827] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:18:05.784 [2024-12-06 05:05:43.907834] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:18:05.784 [2024-12-06 05:05:43.907843] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:18:05.784 [2024-12-06 05:05:43.907850] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:05.784 [2024-12-06 05:05:43.907860] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:18:05.784 [2024-12-06 05:05:43.907872] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.076 ms 00:18:05.784 [2024-12-06 05:05:43.907882] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:05.784 [2024-12-06 05:05:43.910142] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:05.784 [2024-12-06 05:05:43.910313] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:18:05.784 
[2024-12-06 05:05:43.910333] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.240 ms 00:18:05.784 [2024-12-06 05:05:43.910345] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:05.784 [2024-12-06 05:05:43.910497] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:05.784 [2024-12-06 05:05:43.910509] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:18:05.784 [2024-12-06 05:05:43.910519] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.093 ms 00:18:05.784 [2024-12-06 05:05:43.910529] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:05.784 [2024-12-06 05:05:43.918475] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:05.784 [2024-12-06 05:05:43.918535] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:05.784 [2024-12-06 05:05:43.918553] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:05.784 [2024-12-06 05:05:43.918563] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:05.784 [2024-12-06 05:05:43.918636] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:05.784 [2024-12-06 05:05:43.918647] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:05.784 [2024-12-06 05:05:43.918655] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:05.784 [2024-12-06 05:05:43.918695] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:05.784 [2024-12-06 05:05:43.918774] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:05.784 [2024-12-06 05:05:43.918791] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:05.784 [2024-12-06 05:05:43.918803] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:05.784 [2024-12-06 05:05:43.918813] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:05.784 [2024-12-06 05:05:43.918831] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:05.784 [2024-12-06 05:05:43.918845] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:05.784 [2024-12-06 05:05:43.918852] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:05.784 [2024-12-06 05:05:43.918862] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:05.784 [2024-12-06 05:05:43.932624] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:05.784 [2024-12-06 05:05:43.932706] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:05.784 [2024-12-06 05:05:43.932718] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:05.784 [2024-12-06 05:05:43.932729] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:05.784 [2024-12-06 05:05:43.944600] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:05.784 [2024-12-06 05:05:43.944688] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:05.784 [2024-12-06 05:05:43.944700] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:05.784 [2024-12-06 05:05:43.944715] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:05.784 [2024-12-06 05:05:43.944794] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:05.784 [2024-12-06 05:05:43.944810] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:05.784 [2024-12-06 05:05:43.944819] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:05.784 [2024-12-06 05:05:43.944835] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:05.784 [2024-12-06 05:05:43.944912] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:05.784 [2024-12-06 05:05:43.944925] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:05.784 [2024-12-06 05:05:43.944934] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:05.784 [2024-12-06 05:05:43.944952] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:05.784 [2024-12-06 05:05:43.945030] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:05.784 [2024-12-06 05:05:43.945044] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:05.784 [2024-12-06 05:05:43.945056] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:05.784 [2024-12-06 05:05:43.945066] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:05.784 [2024-12-06 05:05:43.945098] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:05.784 [2024-12-06 05:05:43.945111] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:18:05.784 [2024-12-06 05:05:43.945120] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:05.784 [2024-12-06 05:05:43.945133] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:05.784 [2024-12-06 05:05:43.945179] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:05.784 [2024-12-06 05:05:43.945193] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:05.784 [2024-12-06 05:05:43.945201] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:05.784 [2024-12-06 05:05:43.945212] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:05.784 [2024-12-06 05:05:43.945264] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:05.784 [2024-12-06 05:05:43.945277] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:05.784 [2024-12-06 05:05:43.945287] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:05.784 [2024-12-06 05:05:43.945300] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:05.784 [2024-12-06 05:05:43.945452] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 70.195 ms, result 0 00:18:05.784 true 00:18:05.784 05:05:43 ftl.ftl_restore -- ftl/restore.sh@66 -- # killprocess 86153 00:18:05.784 05:05:43 ftl.ftl_restore -- common/autotest_common.sh@950 -- # '[' -z 86153 ']' 00:18:05.784 05:05:43 ftl.ftl_restore -- common/autotest_common.sh@954 -- # kill -0 86153 00:18:05.784 05:05:43 ftl.ftl_restore -- common/autotest_common.sh@955 -- # uname 00:18:05.784 05:05:43 ftl.ftl_restore -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:05.784 05:05:43 ftl.ftl_restore -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 86153 00:18:05.784 killing process with pid 86153 00:18:05.784 05:05:43 ftl.ftl_restore -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:18:05.784 05:05:43 ftl.ftl_restore -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:18:05.784 
05:05:43 ftl.ftl_restore -- common/autotest_common.sh@968 -- # echo 'killing process with pid 86153'
00:18:05.784 05:05:43 ftl.ftl_restore -- common/autotest_common.sh@969 -- # kill 86153
00:18:05.784 05:05:43 ftl.ftl_restore -- common/autotest_common.sh@974 -- # wait 86153
00:18:11.072 05:05:49 ftl.ftl_restore -- ftl/restore.sh@69 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile bs=4K count=256K
00:18:15.273 262144+0 records in
00:18:15.273 262144+0 records out
00:18:15.273 1073741824 bytes (1.1 GB, 1.0 GiB) copied, 3.71365 s, 289 MB/s
00:18:15.273 05:05:52 ftl.ftl_restore -- ftl/restore.sh@70 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile
00:18:17.181 05:05:54 ftl.ftl_restore -- ftl/restore.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
00:18:17.181 [2024-12-06 05:05:54.951377] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization...
00:18:17.181 [2024-12-06 05:05:54.951501] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86353 ]
00:18:17.181 [2024-12-06 05:05:55.086394] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:18:17.181 [2024-12-06 05:05:55.127754] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:18:17.181 [2024-12-06 05:05:55.239051] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
00:18:17.181 [2024-12-06 05:05:55.239143] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
00:18:17.181 [2024-12-06 05:05:55.398626] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:17.181 [2024-12-06 05:05:55.398870] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration
00:18:17.181 [2024-12-06 05:05:55.398906] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms
00:18:17.181 [2024-12-06 05:05:55.398915] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:17.181 [2024-12-06 05:05:55.398985] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:17.181 [2024-12-06 05:05:55.398996] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:18:17.181 [2024-12-06 05:05:55.399005] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.044 ms
00:18:17.181 [2024-12-06 05:05:55.399019] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:17.181 [2024-12-06 05:05:55.399047] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache
00:18:17.181 [2024-12-06 05:05:55.399300] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device
00:18:17.181 [2024-12-06 05:05:55.399316] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:17.181 [2024-12-06 05:05:55.399325] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev
00:18:17.181 [2024-12-06 05:05:55.399336] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.280 ms
00:18:17.181 [2024-12-06 05:05:55.399344] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:17.181 [2024-12-06 05:05:55.401023] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*:
[FTL][ftl0] SHM: clean 0, shm_clean 0 00:18:17.181 [2024-12-06 05:05:55.404634] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:17.181 [2024-12-06 05:05:55.404701] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:18:17.181 [2024-12-06 05:05:55.404714] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.613 ms 00:18:17.181 [2024-12-06 05:05:55.404722] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:17.181 [2024-12-06 05:05:55.404800] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:17.181 [2024-12-06 05:05:55.404813] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:18:17.181 [2024-12-06 05:05:55.404824] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:18:17.181 [2024-12-06 05:05:55.404835] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:17.440 [2024-12-06 05:05:55.413053] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:17.440 [2024-12-06 05:05:55.413114] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:17.440 [2024-12-06 05:05:55.413126] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.177 ms 00:18:17.440 [2024-12-06 05:05:55.413140] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:17.440 [2024-12-06 05:05:55.413268] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:17.440 [2024-12-06 05:05:55.413278] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:17.440 [2024-12-06 05:05:55.413290] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.092 ms 00:18:17.440 [2024-12-06 05:05:55.413301] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:17.440 [2024-12-06 05:05:55.413364] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:17.440 [2024-12-06 05:05:55.413378] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:18:17.440 [2024-12-06 05:05:55.413386] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:18:17.440 [2024-12-06 05:05:55.413393] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:17.440 [2024-12-06 05:05:55.413416] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:18:17.440 [2024-12-06 05:05:55.415487] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:17.440 [2024-12-06 05:05:55.415527] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:17.440 [2024-12-06 05:05:55.415538] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.076 ms 00:18:17.440 [2024-12-06 05:05:55.415552] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:17.440 [2024-12-06 05:05:55.415589] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:17.440 [2024-12-06 05:05:55.415601] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:18:17.440 [2024-12-06 05:05:55.415613] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:18:17.440 [2024-12-06 05:05:55.415620] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:17.440 [2024-12-06 05:05:55.415645] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:18:17.440 [2024-12-06 05:05:55.415693] upgrade/ftl_sb_v5.c: 
278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:18:17.440 [2024-12-06 05:05:55.415736] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:18:17.440 [2024-12-06 05:05:55.415753] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:18:17.440 [2024-12-06 05:05:55.415857] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:18:17.440 [2024-12-06 05:05:55.415868] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:18:17.440 [2024-12-06 05:05:55.415879] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:18:17.440 [2024-12-06 05:05:55.415889] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:18:17.440 [2024-12-06 05:05:55.415903] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:18:17.440 [2024-12-06 05:05:55.415912] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:18:17.440 [2024-12-06 05:05:55.415920] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:18:17.440 [2024-12-06 05:05:55.415928] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:18:17.440 [2024-12-06 05:05:55.415935] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:18:17.440 [2024-12-06 05:05:55.415943] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:17.440 [2024-12-06 05:05:55.415951] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:18:17.440 [2024-12-06 05:05:55.415960] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.301 ms 00:18:17.440 [2024-12-06 05:05:55.415967] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:17.440 [2024-12-06 05:05:55.416051] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:17.440 [2024-12-06 05:05:55.416061] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:18:17.440 [2024-12-06 05:05:55.416074] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:18:17.440 [2024-12-06 05:05:55.416081] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:17.440 [2024-12-06 05:05:55.416178] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:18:17.440 [2024-12-06 05:05:55.416189] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:18:17.440 [2024-12-06 05:05:55.416199] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:17.440 [2024-12-06 05:05:55.416208] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:17.440 [2024-12-06 05:05:55.416217] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:18:17.440 [2024-12-06 05:05:55.416225] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:18:17.440 [2024-12-06 05:05:55.416233] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:18:17.440 [2024-12-06 05:05:55.416241] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:18:17.440 [2024-12-06 05:05:55.416250] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:18:17.440 [2024-12-06 
05:05:55.416259] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:17.440 [2024-12-06 05:05:55.416266] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:18:17.440 [2024-12-06 05:05:55.416274] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:18:17.440 [2024-12-06 05:05:55.416284] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:17.440 [2024-12-06 05:05:55.416292] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:18:17.440 [2024-12-06 05:05:55.416302] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:18:17.440 [2024-12-06 05:05:55.416311] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:17.440 [2024-12-06 05:05:55.416319] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:18:17.440 [2024-12-06 05:05:55.416327] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:18:17.440 [2024-12-06 05:05:55.416334] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:17.440 [2024-12-06 05:05:55.416342] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:18:17.440 [2024-12-06 05:05:55.416350] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:18:17.440 [2024-12-06 05:05:55.416359] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:17.440 [2024-12-06 05:05:55.416367] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:18:17.440 [2024-12-06 05:05:55.416375] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:18:17.440 [2024-12-06 05:05:55.416383] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:17.440 [2024-12-06 05:05:55.416391] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:18:17.440 [2024-12-06 05:05:55.416399] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:18:17.440 [2024-12-06 05:05:55.416407] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:17.440 [2024-12-06 05:05:55.416420] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:18:17.440 [2024-12-06 05:05:55.416428] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:18:17.440 [2024-12-06 05:05:55.416435] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:17.440 [2024-12-06 05:05:55.416441] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:18:17.440 [2024-12-06 05:05:55.416448] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:18:17.440 [2024-12-06 05:05:55.416455] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:17.440 [2024-12-06 05:05:55.416462] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:18:17.440 [2024-12-06 05:05:55.416469] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:18:17.440 [2024-12-06 05:05:55.416476] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:17.440 [2024-12-06 05:05:55.416484] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:18:17.440 [2024-12-06 05:05:55.416491] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:18:17.440 [2024-12-06 05:05:55.416497] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:17.440 [2024-12-06 05:05:55.416504] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region 
trim_log_mirror 00:18:17.440 [2024-12-06 05:05:55.416511] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:18:17.440 [2024-12-06 05:05:55.416517] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:17.440 [2024-12-06 05:05:55.416523] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:18:17.440 [2024-12-06 05:05:55.416533] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:18:17.440 [2024-12-06 05:05:55.416541] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:17.440 [2024-12-06 05:05:55.416553] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:17.440 [2024-12-06 05:05:55.416561] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:18:17.440 [2024-12-06 05:05:55.416569] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:18:17.440 [2024-12-06 05:05:55.416575] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:18:17.440 [2024-12-06 05:05:55.416582] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:18:17.440 [2024-12-06 05:05:55.416588] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:18:17.440 [2024-12-06 05:05:55.416595] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:18:17.440 [2024-12-06 05:05:55.416603] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:18:17.440 [2024-12-06 05:05:55.416613] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:17.440 [2024-12-06 05:05:55.416621] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:18:17.440 [2024-12-06 05:05:55.416628] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:18:17.440 [2024-12-06 05:05:55.416635] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:18:17.440 [2024-12-06 05:05:55.416642] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:18:17.440 [2024-12-06 05:05:55.416650] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:18:17.440 [2024-12-06 05:05:55.416660] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:18:17.440 [2024-12-06 05:05:55.416683] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:18:17.440 [2024-12-06 05:05:55.416691] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:18:17.440 [2024-12-06 05:05:55.416698] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:18:17.440 [2024-12-06 05:05:55.416711] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:18:17.440 [2024-12-06 05:05:55.416718] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] 
Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:18:17.440 [2024-12-06 05:05:55.416725] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:18:17.440 [2024-12-06 05:05:55.416733] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:18:17.440 [2024-12-06 05:05:55.416740] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:18:17.440 [2024-12-06 05:05:55.416748] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:18:17.440 [2024-12-06 05:05:55.416756] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:17.440 [2024-12-06 05:05:55.416764] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:18:17.440 [2024-12-06 05:05:55.416771] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:18:17.440 [2024-12-06 05:05:55.416779] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:18:17.440 [2024-12-06 05:05:55.416787] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:18:17.440 [2024-12-06 05:05:55.416795] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:17.440 [2024-12-06 05:05:55.416805] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:18:17.440 [2024-12-06 05:05:55.416813] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.685 ms 00:18:17.440 [2024-12-06 05:05:55.416822] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:17.440 [2024-12-06 05:05:55.438807] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:17.440 [2024-12-06 05:05:55.438876] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:17.440 [2024-12-06 05:05:55.438899] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.939 ms 00:18:17.440 [2024-12-06 05:05:55.438912] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:17.440 [2024-12-06 05:05:55.439051] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:17.440 [2024-12-06 05:05:55.439066] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:18:17.440 [2024-12-06 05:05:55.439080] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.096 ms 00:18:17.440 [2024-12-06 05:05:55.439091] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:17.440 [2024-12-06 05:05:55.451281] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:17.440 [2024-12-06 05:05:55.451325] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:17.440 [2024-12-06 05:05:55.451336] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.103 ms 00:18:17.440 [2024-12-06 05:05:55.451344] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:17.440 [2024-12-06 05:05:55.451378] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:17.440 [2024-12-06 
05:05:55.451392] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:17.440 [2024-12-06 05:05:55.451404] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:18:17.440 [2024-12-06 05:05:55.451415] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:17.440 [2024-12-06 05:05:55.451952] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:17.440 [2024-12-06 05:05:55.451978] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:17.440 [2024-12-06 05:05:55.451989] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.481 ms 00:18:17.440 [2024-12-06 05:05:55.451998] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:17.440 [2024-12-06 05:05:55.452140] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:17.440 [2024-12-06 05:05:55.452158] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:17.440 [2024-12-06 05:05:55.452168] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.122 ms 00:18:17.440 [2024-12-06 05:05:55.452180] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:17.440 [2024-12-06 05:05:55.458539] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:17.440 [2024-12-06 05:05:55.458727] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:17.440 [2024-12-06 05:05:55.458756] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.334 ms 00:18:17.440 [2024-12-06 05:05:55.458764] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:17.440 [2024-12-06 05:05:55.462290] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:18:17.440 [2024-12-06 05:05:55.462453] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:18:17.440 [2024-12-06 05:05:55.462477] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:17.440 [2024-12-06 05:05:55.462486] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:18:17.440 [2024-12-06 05:05:55.462494] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.610 ms 00:18:17.440 [2024-12-06 05:05:55.462501] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:17.440 [2024-12-06 05:05:55.478027] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:17.440 [2024-12-06 05:05:55.478081] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:18:17.440 [2024-12-06 05:05:55.478093] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.483 ms 00:18:17.440 [2024-12-06 05:05:55.478105] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:17.440 [2024-12-06 05:05:55.480659] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:17.440 [2024-12-06 05:05:55.480716] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:18:17.440 [2024-12-06 05:05:55.480727] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.502 ms 00:18:17.440 [2024-12-06 05:05:55.480734] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:17.440 [2024-12-06 05:05:55.483217] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:17.440 [2024-12-06 05:05:55.483262] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 
name: Restore trim metadata 00:18:17.440 [2024-12-06 05:05:55.483272] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.440 ms 00:18:17.440 [2024-12-06 05:05:55.483279] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:17.440 [2024-12-06 05:05:55.483708] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:17.441 [2024-12-06 05:05:55.483725] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:18:17.441 [2024-12-06 05:05:55.483735] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.352 ms 00:18:17.441 [2024-12-06 05:05:55.483743] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:17.441 [2024-12-06 05:05:55.506366] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:17.441 [2024-12-06 05:05:55.506440] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:18:17.441 [2024-12-06 05:05:55.506459] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.603 ms 00:18:17.441 [2024-12-06 05:05:55.506468] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:17.441 [2024-12-06 05:05:55.514721] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:18:17.441 [2024-12-06 05:05:55.517773] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:17.441 [2024-12-06 05:05:55.517821] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:18:17.441 [2024-12-06 05:05:55.517834] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.250 ms 00:18:17.441 [2024-12-06 05:05:55.517847] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:17.441 [2024-12-06 05:05:55.517930] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:17.441 [2024-12-06 05:05:55.517942] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:18:17.441 [2024-12-06 05:05:55.517952] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:18:17.441 [2024-12-06 05:05:55.517960] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:17.441 [2024-12-06 05:05:55.518031] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:17.441 [2024-12-06 05:05:55.518042] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:18:17.441 [2024-12-06 05:05:55.518051] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:18:17.441 [2024-12-06 05:05:55.518059] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:17.441 [2024-12-06 05:05:55.518090] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:17.441 [2024-12-06 05:05:55.518107] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:18:17.441 [2024-12-06 05:05:55.518116] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:18:17.441 [2024-12-06 05:05:55.518124] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:17.441 [2024-12-06 05:05:55.518161] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:18:17.441 [2024-12-06 05:05:55.518178] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:17.441 [2024-12-06 05:05:55.518188] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:18:17.441 [2024-12-06 05:05:55.518196] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 0.016 ms 00:18:17.441 [2024-12-06 05:05:55.518205] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:17.441 [2024-12-06 05:05:55.523785] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:17.441 [2024-12-06 05:05:55.523971] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:18:17.441 [2024-12-06 05:05:55.523990] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.556 ms 00:18:17.441 [2024-12-06 05:05:55.523998] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:17.441 [2024-12-06 05:05:55.524077] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:17.441 [2024-12-06 05:05:55.524088] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:18:17.441 [2024-12-06 05:05:55.524097] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.044 ms 00:18:17.441 [2024-12-06 05:05:55.524111] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:17.441 [2024-12-06 05:05:55.525258] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 126.169 ms, result 0
00:18:18.382 [2024-12-06T05:05:57.553Z] Copying: 16/1024 [MB] (16 MBps) [... ~70 intermediate carriage-return progress updates (10-25 MBps) elided ...] [2024-12-06T05:07:06.860Z] Copying: 1024/1024 [MB] (average 14 MBps)
[2024-12-06 05:07:06.828424] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.628 [2024-12-06 05:07:06.828469] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:19:28.628 [2024-12-06 05:07:06.828487] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:19:28.628 [2024-12-06 05:07:06.828497] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.628 [2024-12-06 05:07:06.828514] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:28.628 [2024-12-06 05:07:06.829058] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.628 [2024-12-06 05:07:06.829076] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:19:28.628 [2024-12-06 05:07:06.829084] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.528 ms 00:19:28.628 [2024-12-06 05:07:06.829090] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.628 [2024-12-06 05:07:06.831097] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.628 [2024-12-06 05:07:06.831125] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:19:28.628 [2024-12-06 05:07:06.831133] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.991 ms 00:19:28.628 [2024-12-06 05:07:06.831139] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.628 [2024-12-06 05:07:06.845003] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.628 [2024-12-06 05:07:06.845037] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:19:28.628 [2024-12-06 05:07:06.845048] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.851 ms 00:19:28.628 [2024-12-06 05:07:06.845054] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.628 [2024-12-06 05:07:06.849985] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.628 [2024-12-06 05:07:06.850095] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:19:28.628 [2024-12-06 05:07:06.850107] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.904 ms 00:19:28.628 [2024-12-06 05:07:06.850113] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.628 [2024-12-06 05:07:06.851897] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.628 [2024-12-06 05:07:06.851924] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:19:28.628 [2024-12-06 05:07:06.851932] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.741 ms 00:19:28.628 [2024-12-06 05:07:06.851937] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.628 [2024-12-06 05:07:06.856225] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.628 [2024-12-06 05:07:06.856256] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:19:28.628 [2024-12-06 05:07:06.856263] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.264 ms 00:19:28.628 [2024-12-06 05:07:06.856270] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.628 [2024-12-06 05:07:06.856356] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.628 [2024-12-06 05:07:06.856363] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:19:28.628 [2024-12-06 05:07:06.856376] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:19:28.628 [2024-12-06 05:07:06.856382] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.890 [2024-12-06 05:07:06.859193] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.890 [2024-12-06 05:07:06.859218] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:19:28.890 [2024-12-06 05:07:06.859224] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.800 ms 00:19:28.890 [2024-12-06 05:07:06.859229] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.890 [2024-12-06 05:07:06.861614] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.890 [2024-12-06 05:07:06.861638] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:19:28.890 [2024-12-06 05:07:06.861645] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.362 ms 00:19:28.890 [2024-12-06 05:07:06.861650] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.890 [2024-12-06 05:07:06.863301] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.890 [2024-12-06 05:07:06.863327] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:19:28.890 [2024-12-06 05:07:06.863334] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.608 ms 00:19:28.890 [2024-12-06 05:07:06.863340] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:19:28.890 [2024-12-06 05:07:06.864842] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.890 [2024-12-06 05:07:06.864865] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:19:28.890 [2024-12-06 05:07:06.864872] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.460 ms 00:19:28.890 [2024-12-06 05:07:06.864877] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.890 [2024-12-06 05:07:06.864898] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:19:28.890 [2024-12-06 05:07:06.864911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:19:28.890 [2024-12-06 05:07:06.864922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:19:28.890 [2024-12-06 05:07:06.864928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:19:28.890 [2024-12-06 05:07:06.864934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:19:28.890 [2024-12-06 05:07:06.864940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:19:28.890 [2024-12-06 05:07:06.864946] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:19:28.890 [2024-12-06 05:07:06.864951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:19:28.890 [2024-12-06 05:07:06.864957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:19:28.890 [2024-12-06 05:07:06.864963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:19:28.890 [2024-12-06 05:07:06.864968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:19:28.890 [2024-12-06 05:07:06.864974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:19:28.890 [2024-12-06 05:07:06.864979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:19:28.890 [2024-12-06 05:07:06.864985] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:19:28.890 [2024-12-06 05:07:06.864991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:19:28.890 [2024-12-06 05:07:06.864996] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:19:28.890 [2024-12-06 05:07:06.865003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:19:28.890 [2024-12-06 05:07:06.865008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:19:28.890 [2024-12-06 05:07:06.865014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:19:28.890 [2024-12-06 05:07:06.865020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:19:28.891 [2024-12-06 05:07:06.865025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:19:28.891 [2024-12-06 05:07:06.865031] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 
state: free 00:19:28.891 [2024-12-06 05:07:06.865037] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:19:28.891 [2024-12-06 05:07:06.865043] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:19:28.891 [2024-12-06 05:07:06.865049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:19:28.891 [2024-12-06 05:07:06.865054] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:19:28.891 [2024-12-06 05:07:06.865060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:19:28.891 [2024-12-06 05:07:06.865066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:19:28.891 [2024-12-06 05:07:06.865072] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:19:28.891 [2024-12-06 05:07:06.865078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:19:28.891 [2024-12-06 05:07:06.865084] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:19:28.891 [2024-12-06 05:07:06.865090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:19:28.891 [2024-12-06 05:07:06.865095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:19:28.891 [2024-12-06 05:07:06.865101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:19:28.891 [2024-12-06 05:07:06.865106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:19:28.891 [2024-12-06 05:07:06.865112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:19:28.891 [2024-12-06 05:07:06.865118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:19:28.891 [2024-12-06 05:07:06.865124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:19:28.891 [2024-12-06 05:07:06.865129] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:19:28.891 [2024-12-06 05:07:06.865134] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:19:28.891 [2024-12-06 05:07:06.865140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:19:28.891 [2024-12-06 05:07:06.865145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:19:28.891 [2024-12-06 05:07:06.865151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:19:28.891 [2024-12-06 05:07:06.865156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:19:28.891 [2024-12-06 05:07:06.865162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:19:28.891 [2024-12-06 05:07:06.865169] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:19:28.891 [2024-12-06 05:07:06.865175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 
0 / 261120 wr_cnt: 0 state: free 00:19:28.891 [2024-12-06 05:07:06.865180] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:19:28.891 [2024-12-06 05:07:06.865186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:19:28.891 [2024-12-06 05:07:06.865191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:19:28.891 [2024-12-06 05:07:06.865203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:19:28.891 [2024-12-06 05:07:06.865209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:19:28.891 [2024-12-06 05:07:06.865216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:19:28.891 [2024-12-06 05:07:06.865221] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:19:28.891 [2024-12-06 05:07:06.865227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:19:28.891 [2024-12-06 05:07:06.865233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:19:28.891 [2024-12-06 05:07:06.865240] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:19:28.891 [2024-12-06 05:07:06.865246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:19:28.891 [2024-12-06 05:07:06.865252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:19:28.891 [2024-12-06 05:07:06.865258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:19:28.891 [2024-12-06 05:07:06.865266] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:19:28.891 [2024-12-06 05:07:06.865271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:19:28.891 [2024-12-06 05:07:06.865277] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:19:28.891 [2024-12-06 05:07:06.865283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:19:28.891 [2024-12-06 05:07:06.865288] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:19:28.891 [2024-12-06 05:07:06.865294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:19:28.891 [2024-12-06 05:07:06.865300] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:19:28.891 [2024-12-06 05:07:06.865306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:19:28.891 [2024-12-06 05:07:06.865312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:19:28.891 [2024-12-06 05:07:06.865318] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:19:28.891 [2024-12-06 05:07:06.865324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:19:28.891 [2024-12-06 05:07:06.865329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:19:28.891 [2024-12-06 05:07:06.865335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:19:28.891 [2024-12-06 05:07:06.865340] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:19:28.891 [2024-12-06 05:07:06.865346] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:19:28.891 [2024-12-06 05:07:06.865351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:19:28.891 [2024-12-06 05:07:06.865357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:19:28.891 [2024-12-06 05:07:06.865364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:19:28.891 [2024-12-06 05:07:06.865370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:19:28.891 [2024-12-06 05:07:06.865376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:19:28.891 [2024-12-06 05:07:06.865381] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:19:28.891 [2024-12-06 05:07:06.865387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:19:28.891 [2024-12-06 05:07:06.865392] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:19:28.891 [2024-12-06 05:07:06.865398] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:19:28.891 [2024-12-06 05:07:06.865403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:19:28.891 [2024-12-06 05:07:06.865409] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:19:28.891 [2024-12-06 05:07:06.865415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:19:28.891 [2024-12-06 05:07:06.865421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:19:28.891 [2024-12-06 05:07:06.865426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:19:28.891 [2024-12-06 05:07:06.865432] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:19:28.891 [2024-12-06 05:07:06.865437] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:19:28.891 [2024-12-06 05:07:06.865443] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:19:28.891 [2024-12-06 05:07:06.865449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:19:28.891 [2024-12-06 05:07:06.865455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:19:28.891 [2024-12-06 05:07:06.865461] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:19:28.891 [2024-12-06 05:07:06.865466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:19:28.891 [2024-12-06 05:07:06.865472] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:19:28.891 [2024-12-06 05:07:06.865477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:19:28.891 [2024-12-06 05:07:06.865483] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:19:28.891 [2024-12-06 05:07:06.865489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:19:28.891 [2024-12-06 05:07:06.865494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:19:28.891 [2024-12-06 05:07:06.865507] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:19:28.891 [2024-12-06 05:07:06.865513] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: d8aca738-c2cb-42fc-8bd7-b1e6d40ace6a 00:19:28.891 [2024-12-06 05:07:06.865519] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:19:28.891 [2024-12-06 05:07:06.865525] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:19:28.891 [2024-12-06 05:07:06.865530] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:19:28.891 [2024-12-06 05:07:06.865536] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:19:28.891 [2024-12-06 05:07:06.865547] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:19:28.891 [2024-12-06 05:07:06.865553] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:19:28.891 [2024-12-06 05:07:06.865559] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:19:28.891 [2024-12-06 05:07:06.865564] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:19:28.891 [2024-12-06 05:07:06.865569] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:19:28.891 [2024-12-06 05:07:06.865574] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.891 [2024-12-06 05:07:06.865580] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:19:28.891 [2024-12-06 05:07:06.865586] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.677 ms 00:19:28.891 [2024-12-06 05:07:06.865596] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.891 [2024-12-06 05:07:06.867323] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.891 [2024-12-06 05:07:06.867342] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:19:28.891 [2024-12-06 05:07:06.867355] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.716 ms 00:19:28.891 [2024-12-06 05:07:06.867362] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.891 [2024-12-06 05:07:06.867451] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.891 [2024-12-06 05:07:06.867458] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:19:28.891 [2024-12-06 05:07:06.867467] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.075 ms 00:19:28.891 [2024-12-06 05:07:06.867473] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.891 [2024-12-06 05:07:06.872592] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:28.891 [2024-12-06 05:07:06.872690] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:28.891 [2024-12-06 05:07:06.872732] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:28.891 [2024-12-06 05:07:06.872749] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.891 [2024-12-06 05:07:06.872806] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:28.891 [2024-12-06 05:07:06.872824] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:28.891 [2024-12-06 05:07:06.872844] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:28.891 [2024-12-06 05:07:06.872858] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.891 [2024-12-06 05:07:06.872912] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:28.891 [2024-12-06 05:07:06.872933] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:28.891 [2024-12-06 05:07:06.872950] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:28.891 [2024-12-06 05:07:06.872999] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.891 [2024-12-06 05:07:06.873023] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:28.891 [2024-12-06 05:07:06.873039] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:28.891 [2024-12-06 05:07:06.873055] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:28.891 [2024-12-06 05:07:06.873073] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.891 [2024-12-06 05:07:06.883583] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:28.892 [2024-12-06 05:07:06.883712] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:28.892 [2024-12-06 05:07:06.883751] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:28.892 [2024-12-06 05:07:06.883770] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.892 [2024-12-06 05:07:06.892155] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:28.892 [2024-12-06 05:07:06.892268] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:28.892 [2024-12-06 05:07:06.892307] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:28.892 [2024-12-06 05:07:06.892329] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.892 [2024-12-06 05:07:06.892381] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:28.892 [2024-12-06 05:07:06.892398] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:28.892 [2024-12-06 05:07:06.892414] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:28.892 [2024-12-06 05:07:06.892428] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.892 [2024-12-06 05:07:06.892459] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:28.892 [2024-12-06 05:07:06.892482] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:28.892 [2024-12-06 05:07:06.892502] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:28.892 [2024-12-06 05:07:06.892539] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.892 [2024-12-06 05:07:06.892616] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:28.892 [2024-12-06 05:07:06.892636] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Initialize memory pools 00:19:28.892 [2024-12-06 05:07:06.892652] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:28.892 [2024-12-06 05:07:06.892681] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.892 [2024-12-06 05:07:06.892722] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:28.892 [2024-12-06 05:07:06.892880] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:19:28.892 [2024-12-06 05:07:06.892900] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:28.892 [2024-12-06 05:07:06.892916] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.892 [2024-12-06 05:07:06.892965] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:28.892 [2024-12-06 05:07:06.892987] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:28.892 [2024-12-06 05:07:06.893006] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:28.892 [2024-12-06 05:07:06.893021] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.892 [2024-12-06 05:07:06.893071] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:28.892 [2024-12-06 05:07:06.893127] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:28.892 [2024-12-06 05:07:06.893146] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:28.892 [2024-12-06 05:07:06.893162] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.892 [2024-12-06 05:07:06.893283] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 64.830 ms, result 0 00:19:29.152 00:19:29.152 00:19:29.152 05:07:07 ftl.ftl_restore -- ftl/restore.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --count=262144 00:19:29.152 [2024-12-06 05:07:07.372218] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:19:29.152 [2024-12-06 05:07:07.372338] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87108 ] 00:19:29.412 [2024-12-06 05:07:07.505013] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:29.412 [2024-12-06 05:07:07.545845] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:19:29.675 [2024-12-06 05:07:07.644540] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:29.675 [2024-12-06 05:07:07.644762] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:29.675 [2024-12-06 05:07:07.798308] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:29.675 [2024-12-06 05:07:07.798343] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:19:29.675 [2024-12-06 05:07:07.798356] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:19:29.675 [2024-12-06 05:07:07.798365] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:29.675 [2024-12-06 05:07:07.798405] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:29.675 [2024-12-06 05:07:07.798414] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:29.675 [2024-12-06 05:07:07.798420] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:19:29.675 [2024-12-06 05:07:07.798430] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:29.675 [2024-12-06 05:07:07.798445] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:19:29.675 [2024-12-06 05:07:07.798636] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:19:29.675 [2024-12-06 05:07:07.798649] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:29.675 [2024-12-06 05:07:07.798658] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:29.675 [2024-12-06 05:07:07.798681] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.207 ms 00:19:29.675 [2024-12-06 05:07:07.798688] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:29.675 [2024-12-06 05:07:07.799986] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:19:29.675 [2024-12-06 05:07:07.802811] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:29.675 [2024-12-06 05:07:07.802843] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:19:29.675 [2024-12-06 05:07:07.802851] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.826 ms 00:19:29.675 [2024-12-06 05:07:07.802857] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:29.675 [2024-12-06 05:07:07.802904] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:29.675 [2024-12-06 05:07:07.802913] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:19:29.675 [2024-12-06 05:07:07.802921] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:19:29.675 [2024-12-06 05:07:07.802928] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:29.675 [2024-12-06 05:07:07.809028] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:19:29.675 [2024-12-06 05:07:07.809056] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:29.675 [2024-12-06 05:07:07.809063] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.062 ms 00:19:29.675 [2024-12-06 05:07:07.809069] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:29.675 [2024-12-06 05:07:07.809137] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:29.675 [2024-12-06 05:07:07.809145] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:29.675 [2024-12-06 05:07:07.809151] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:19:29.675 [2024-12-06 05:07:07.809157] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:29.675 [2024-12-06 05:07:07.809192] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:29.675 [2024-12-06 05:07:07.809202] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:19:29.675 [2024-12-06 05:07:07.809209] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:19:29.675 [2024-12-06 05:07:07.809215] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:29.675 [2024-12-06 05:07:07.809234] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:29.675 [2024-12-06 05:07:07.810804] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:29.675 [2024-12-06 05:07:07.810831] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:29.675 [2024-12-06 05:07:07.810839] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.574 ms 00:19:29.675 [2024-12-06 05:07:07.810847] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:29.675 [2024-12-06 05:07:07.810872] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:29.675 [2024-12-06 05:07:07.810878] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:19:29.675 [2024-12-06 05:07:07.810884] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:19:29.675 [2024-12-06 05:07:07.810890] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:29.675 [2024-12-06 05:07:07.810905] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:19:29.675 [2024-12-06 05:07:07.810923] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:19:29.675 [2024-12-06 05:07:07.810957] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:19:29.675 [2024-12-06 05:07:07.810971] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:19:29.675 [2024-12-06 05:07:07.811055] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:19:29.675 [2024-12-06 05:07:07.811064] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:19:29.675 [2024-12-06 05:07:07.811073] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:19:29.675 [2024-12-06 05:07:07.811081] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:19:29.675 [2024-12-06 05:07:07.811092] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:19:29.675 [2024-12-06 05:07:07.811098] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:19:29.675 [2024-12-06 05:07:07.811104] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:19:29.675 [2024-12-06 05:07:07.811109] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:19:29.675 [2024-12-06 05:07:07.811115] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:19:29.675 [2024-12-06 05:07:07.811122] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:29.675 [2024-12-06 05:07:07.811130] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:19:29.675 [2024-12-06 05:07:07.811136] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.219 ms 00:19:29.675 [2024-12-06 05:07:07.811142] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:29.675 [2024-12-06 05:07:07.811208] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:29.675 [2024-12-06 05:07:07.811216] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:19:29.675 [2024-12-06 05:07:07.811225] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:19:29.675 [2024-12-06 05:07:07.811230] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:29.675 [2024-12-06 05:07:07.811307] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:19:29.675 [2024-12-06 05:07:07.811316] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:19:29.675 [2024-12-06 05:07:07.811322] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:29.675 [2024-12-06 05:07:07.811329] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:29.675 [2024-12-06 05:07:07.811336] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:19:29.675 [2024-12-06 05:07:07.811341] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:19:29.675 [2024-12-06 05:07:07.811346] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:19:29.675 [2024-12-06 05:07:07.811353] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:19:29.675 [2024-12-06 05:07:07.811359] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:19:29.675 [2024-12-06 05:07:07.811364] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:29.675 [2024-12-06 05:07:07.811369] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:19:29.675 [2024-12-06 05:07:07.811375] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:19:29.675 [2024-12-06 05:07:07.811383] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:29.675 [2024-12-06 05:07:07.811388] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:19:29.675 [2024-12-06 05:07:07.811393] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:19:29.675 [2024-12-06 05:07:07.811400] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:29.675 [2024-12-06 05:07:07.811406] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:19:29.675 [2024-12-06 05:07:07.811411] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:19:29.675 [2024-12-06 05:07:07.811416] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:29.675 [2024-12-06 05:07:07.811421] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:19:29.675 [2024-12-06 05:07:07.811427] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:19:29.675 [2024-12-06 05:07:07.811432] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:29.675 [2024-12-06 05:07:07.811438] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:19:29.675 [2024-12-06 05:07:07.811443] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:19:29.675 [2024-12-06 05:07:07.811448] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:29.675 [2024-12-06 05:07:07.811454] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:19:29.675 [2024-12-06 05:07:07.811460] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:19:29.675 [2024-12-06 05:07:07.811466] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:29.675 [2024-12-06 05:07:07.811475] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:19:29.675 [2024-12-06 05:07:07.811482] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:19:29.675 [2024-12-06 05:07:07.811489] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:29.675 [2024-12-06 05:07:07.811494] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:19:29.675 [2024-12-06 05:07:07.811500] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:19:29.675 [2024-12-06 05:07:07.811506] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:29.675 [2024-12-06 05:07:07.811512] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:19:29.675 [2024-12-06 05:07:07.811518] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:19:29.675 [2024-12-06 05:07:07.811525] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:29.675 [2024-12-06 05:07:07.811531] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:19:29.675 [2024-12-06 05:07:07.811536] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:19:29.675 [2024-12-06 05:07:07.811542] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:29.675 [2024-12-06 05:07:07.811548] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:19:29.675 [2024-12-06 05:07:07.811555] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:19:29.675 [2024-12-06 05:07:07.811562] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:29.676 [2024-12-06 05:07:07.811568] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:19:29.676 [2024-12-06 05:07:07.811578] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:19:29.676 [2024-12-06 05:07:07.811584] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:29.676 [2024-12-06 05:07:07.811593] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:29.676 [2024-12-06 05:07:07.811602] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:19:29.676 [2024-12-06 05:07:07.811608] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:19:29.676 [2024-12-06 05:07:07.811615] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:19:29.676 
[2024-12-06 05:07:07.811621] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:19:29.676 [2024-12-06 05:07:07.811628] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:19:29.676 [2024-12-06 05:07:07.811634] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:19:29.676 [2024-12-06 05:07:07.811641] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:19:29.676 [2024-12-06 05:07:07.811649] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:29.676 [2024-12-06 05:07:07.811659] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:19:29.676 [2024-12-06 05:07:07.811680] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:19:29.676 [2024-12-06 05:07:07.811687] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:19:29.676 [2024-12-06 05:07:07.811693] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:19:29.676 [2024-12-06 05:07:07.811701] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:19:29.676 [2024-12-06 05:07:07.811709] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:19:29.676 [2024-12-06 05:07:07.811716] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:19:29.676 [2024-12-06 05:07:07.811723] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:19:29.676 [2024-12-06 05:07:07.811730] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:19:29.676 [2024-12-06 05:07:07.811741] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:19:29.676 [2024-12-06 05:07:07.811748] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:19:29.676 [2024-12-06 05:07:07.811754] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:19:29.676 [2024-12-06 05:07:07.811761] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:19:29.676 [2024-12-06 05:07:07.811768] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:19:29.676 [2024-12-06 05:07:07.811774] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:19:29.676 [2024-12-06 05:07:07.811782] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:29.676 [2024-12-06 05:07:07.811920] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:19:29.676 [2024-12-06 05:07:07.811927] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:19:29.676 [2024-12-06 05:07:07.811934] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:19:29.676 [2024-12-06 05:07:07.811940] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:19:29.676 [2024-12-06 05:07:07.811947] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:29.676 [2024-12-06 05:07:07.811954] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:19:29.676 [2024-12-06 05:07:07.811961] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.692 ms 00:19:29.676 [2024-12-06 05:07:07.811969] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:29.676 [2024-12-06 05:07:07.836193] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:29.676 [2024-12-06 05:07:07.836288] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:29.676 [2024-12-06 05:07:07.836337] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.184 ms 00:19:29.676 [2024-12-06 05:07:07.836365] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:29.676 [2024-12-06 05:07:07.836610] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:29.676 [2024-12-06 05:07:07.836655] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:19:29.676 [2024-12-06 05:07:07.836703] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.171 ms 00:19:29.676 [2024-12-06 05:07:07.836724] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:29.676 [2024-12-06 05:07:07.847323] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:29.676 [2024-12-06 05:07:07.847352] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:29.676 [2024-12-06 05:07:07.847360] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.463 ms 00:19:29.676 [2024-12-06 05:07:07.847366] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:29.676 [2024-12-06 05:07:07.847389] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:29.676 [2024-12-06 05:07:07.847396] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:29.676 [2024-12-06 05:07:07.847403] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:19:29.676 [2024-12-06 05:07:07.847408] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:29.676 [2024-12-06 05:07:07.847832] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:29.676 [2024-12-06 05:07:07.847853] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:29.676 [2024-12-06 05:07:07.847860] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.386 ms 00:19:29.676 [2024-12-06 05:07:07.847867] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:29.676 [2024-12-06 05:07:07.847978] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:29.676 [2024-12-06 05:07:07.847987] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:29.676 [2024-12-06 05:07:07.847994] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.097 ms 00:19:29.676 [2024-12-06 05:07:07.848001] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:29.676 [2024-12-06 05:07:07.853274] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:29.676 [2024-12-06 05:07:07.853299] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:29.676 [2024-12-06 05:07:07.853310] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.256 ms 00:19:29.676 [2024-12-06 05:07:07.853320] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:29.676 [2024-12-06 05:07:07.856244] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:19:29.676 [2024-12-06 05:07:07.856271] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:19:29.676 [2024-12-06 05:07:07.856283] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:29.676 [2024-12-06 05:07:07.856290] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:19:29.676 [2024-12-06 05:07:07.856297] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.890 ms 00:19:29.676 [2024-12-06 05:07:07.856302] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:29.676 [2024-12-06 05:07:07.867726] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:29.676 [2024-12-06 05:07:07.867757] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:19:29.676 [2024-12-06 05:07:07.867770] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.393 ms 00:19:29.676 [2024-12-06 05:07:07.867776] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:29.676 [2024-12-06 05:07:07.869550] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:29.676 [2024-12-06 05:07:07.869576] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:19:29.676 [2024-12-06 05:07:07.869583] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.745 ms 00:19:29.676 [2024-12-06 05:07:07.869589] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:29.676 [2024-12-06 05:07:07.871273] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:29.676 [2024-12-06 05:07:07.871298] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:19:29.676 [2024-12-06 05:07:07.871305] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.660 ms 00:19:29.676 [2024-12-06 05:07:07.871310] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:29.676 [2024-12-06 05:07:07.871562] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:29.676 [2024-12-06 05:07:07.871572] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:19:29.676 [2024-12-06 05:07:07.871579] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.201 ms 00:19:29.676 [2024-12-06 05:07:07.871584] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:29.676 [2024-12-06 05:07:07.889780] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:29.676 [2024-12-06 05:07:07.889819] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:19:29.676 [2024-12-06 05:07:07.889834] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
18.180 ms 00:19:29.676 [2024-12-06 05:07:07.889840] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:29.676 [2024-12-06 05:07:07.895685] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:19:29.676 [2024-12-06 05:07:07.897824] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:29.676 [2024-12-06 05:07:07.897853] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:19:29.676 [2024-12-06 05:07:07.897866] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.950 ms 00:19:29.676 [2024-12-06 05:07:07.897877] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:29.676 [2024-12-06 05:07:07.897919] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:29.676 [2024-12-06 05:07:07.897928] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:19:29.676 [2024-12-06 05:07:07.897935] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:19:29.676 [2024-12-06 05:07:07.897942] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:29.676 [2024-12-06 05:07:07.898020] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:29.676 [2024-12-06 05:07:07.898029] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:19:29.676 [2024-12-06 05:07:07.898037] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:19:29.676 [2024-12-06 05:07:07.898045] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:29.676 [2024-12-06 05:07:07.898070] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:29.676 [2024-12-06 05:07:07.898077] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:19:29.676 [2024-12-06 05:07:07.898084] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:19:29.676 [2024-12-06 05:07:07.898090] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:29.676 [2024-12-06 05:07:07.898117] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:19:29.676 [2024-12-06 05:07:07.898126] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:29.676 [2024-12-06 05:07:07.898133] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:19:29.676 [2024-12-06 05:07:07.898141] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:19:29.676 [2024-12-06 05:07:07.898147] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:29.676 [2024-12-06 05:07:07.902087] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:29.676 [2024-12-06 05:07:07.902113] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:19:29.676 [2024-12-06 05:07:07.902122] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.923 ms 00:19:29.676 [2024-12-06 05:07:07.902129] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:29.676 [2024-12-06 05:07:07.902185] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:29.676 [2024-12-06 05:07:07.902196] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:19:29.676 [2024-12-06 05:07:07.902203] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:19:29.676 [2024-12-06 05:07:07.902209] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:29.676 
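Every action in the startup trace above is reported through the same four-record pattern (Action, name, duration, status), which is characteristic of a table-driven step runner that times each callback. Below is a minimal, hypothetical sketch of that pattern in C; only the logged field names and step names are taken from the output above, while the types and helpers (struct ftl_step, run_steps, now_ms) are illustrative assumptions, not SPDK's actual internal API.

    #include <stdio.h>
    #include <time.h>

    /* One management step: a name plus a callback returning 0 on success. */
    struct ftl_step {
        const char *name;
        int (*fn)(void);
    };

    static double now_ms(void)
    {
        struct timespec ts;
        clock_gettime(CLOCK_MONOTONIC, &ts);
        return ts.tv_sec * 1e3 + ts.tv_nsec / 1e6;
    }

    /* Run steps in order, emitting the same Action/name/duration/status
     * quadruple seen in the trace above; stop on the first failure. */
    static int run_steps(const struct ftl_step *steps, size_t n)
    {
        for (size_t i = 0; i < n; i++) {
            double start = now_ms();
            int rc = steps[i].fn();
            printf("Action\nname: %s\nduration: %.3f ms\nstatus: %d\n",
                   steps[i].name, now_ms() - start, rc);
            if (rc != 0)
                return rc;  /* caller unwinds the steps already completed */
        }
        return 0;
    }

    static int demo_ok(void) { return 0; }  /* placeholder step body */

    int main(void)
    {
        const struct ftl_step steps[] = {
            { "Initialize metadata", demo_ok },  /* names mirror the log */
            { "Initialize NV cache", demo_ok },
        };
        return run_steps(steps, sizeof steps / sizeof steps[0]);
    }

On a non-zero status such a runner would execute the completed steps' teardown callbacks in reverse order, which is the same unwinding visible later in this log as the 'Rollback' records emitted during 'FTL shutdown'.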
[2024-12-06 05:07:07.903079] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 104.409 ms, result 0 00:19:31.064  [2024-12-06T05:07:10.238Z] Copying: 11/1024 [MB] (11 MBps) [intermediate per-second progress records, 10-24 MBps, elided] [2024-12-06T05:08:32.595Z] Copying: 1024/1024 [MB] (average 12 MBps)[2024-12-06 05:08:32.477955] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.363 [2024-12-06 05:08:32.478268] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:20:54.363 [2024-12-06 05:08:32.478303] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:20:54.363 [2024-12-06 05:08:32.478316] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.363 [2024-12-06 05:08:32.478367] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:54.363 [2024-12-06 05:08:32.479186] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.363 [2024-12-06 05:08:32.479220] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:20:54.363 [2024-12-06 05:08:32.479237] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.796 ms 00:20:54.363 [2024-12-06 05:08:32.479250] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.363 [2024-12-06 05:08:32.479579] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.363 [2024-12-06 05:08:32.479597] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:20:54.363 [2024-12-06 05:08:32.479611] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.296 ms 00:20:54.363 [2024-12-06 05:08:32.479622] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.363 [2024-12-06 05:08:32.485255] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.363 [2024-12-06 05:08:32.485306] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:20:54.363 [2024-12-06 05:08:32.485321] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0]
duration: 5.609 ms 00:20:54.363 [2024-12-06 05:08:32.485332] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.363 [2024-12-06 05:08:32.491966] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.363 [2024-12-06 05:08:32.492028] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:20:54.363 [2024-12-06 05:08:32.492041] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.606 ms 00:20:54.363 [2024-12-06 05:08:32.492050] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.363 [2024-12-06 05:08:32.495142] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.363 [2024-12-06 05:08:32.495195] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:20:54.363 [2024-12-06 05:08:32.495207] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.019 ms 00:20:54.363 [2024-12-06 05:08:32.495216] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.363 [2024-12-06 05:08:32.500126] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.363 [2024-12-06 05:08:32.500181] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:20:54.363 [2024-12-06 05:08:32.500195] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.863 ms 00:20:54.363 [2024-12-06 05:08:32.500204] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.363 [2024-12-06 05:08:32.500334] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.363 [2024-12-06 05:08:32.500346] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:20:54.363 [2024-12-06 05:08:32.500356] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.081 ms 00:20:54.363 [2024-12-06 05:08:32.500365] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.363 [2024-12-06 05:08:32.503774] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.363 [2024-12-06 05:08:32.503820] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:20:54.363 [2024-12-06 05:08:32.503831] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.380 ms 00:20:54.363 [2024-12-06 05:08:32.503839] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.363 [2024-12-06 05:08:32.506700] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.363 [2024-12-06 05:08:32.506744] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:20:54.363 [2024-12-06 05:08:32.506755] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.818 ms 00:20:54.363 [2024-12-06 05:08:32.506762] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.363 [2024-12-06 05:08:32.509469] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.363 [2024-12-06 05:08:32.509521] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:20:54.363 [2024-12-06 05:08:32.509534] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.665 ms 00:20:54.363 [2024-12-06 05:08:32.509543] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.363 [2024-12-06 05:08:32.511951] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.363 [2024-12-06 05:08:32.512014] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:20:54.363 [2024-12-06 
05:08:32.512027] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.329 ms 00:20:54.363 [2024-12-06 05:08:32.512036] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.363 [2024-12-06 05:08:32.512090] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:20:54.363 [2024-12-06 05:08:32.512118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:20:54.363 [2024-12-06 05:08:32.512128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:20:54.363 [2024-12-06 05:08:32.512137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:20:54.363 [2024-12-06 05:08:32.512146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:20:54.363 [2024-12-06 05:08:32.512156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:20:54.363 [2024-12-06 05:08:32.512164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:20:54.363 [2024-12-06 05:08:32.512173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:20:54.363 [2024-12-06 05:08:32.512181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:20:54.363 [2024-12-06 05:08:32.512189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:20:54.363 [2024-12-06 05:08:32.512199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:20:54.363 [2024-12-06 05:08:32.512207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:20:54.363 [2024-12-06 05:08:32.512215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:20:54.363 [2024-12-06 05:08:32.512223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:20:54.363 [2024-12-06 05:08:32.512231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:20:54.363 [2024-12-06 05:08:32.512239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:20:54.364 [2024-12-06 05:08:32.512246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:20:54.364 [2024-12-06 05:08:32.512254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:20:54.364 [2024-12-06 05:08:32.512262] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:20:54.364 [2024-12-06 05:08:32.512269] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:20:54.364 [2024-12-06 05:08:32.512277] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:20:54.364 [2024-12-06 05:08:32.512284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:20:54.364 [2024-12-06 05:08:32.512292] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:20:54.364 [2024-12-06 05:08:32.512299] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 23: 0 / 261120 wr_cnt: 0 state: free 00:20:54.364 [2024-12-06 05:08:32.512307] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:20:54.364 [2024-12-06 05:08:32.512314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:20:54.364 [2024-12-06 05:08:32.512322] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:20:54.364 [2024-12-06 05:08:32.512331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:20:54.364 [2024-12-06 05:08:32.512342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:20:54.364 [2024-12-06 05:08:32.512350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:20:54.364 [2024-12-06 05:08:32.512357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:20:54.364 [2024-12-06 05:08:32.512365] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:20:54.364 [2024-12-06 05:08:32.512373] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:20:54.364 [2024-12-06 05:08:32.512381] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:20:54.364 [2024-12-06 05:08:32.512388] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:20:54.364 [2024-12-06 05:08:32.512396] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:20:54.364 [2024-12-06 05:08:32.512406] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:20:54.364 [2024-12-06 05:08:32.512414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:20:54.364 [2024-12-06 05:08:32.512422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:20:54.364 [2024-12-06 05:08:32.512430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:20:54.364 [2024-12-06 05:08:32.512438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:20:54.364 [2024-12-06 05:08:32.512446] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:20:54.364 [2024-12-06 05:08:32.512453] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:20:54.364 [2024-12-06 05:08:32.512461] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:20:54.364 [2024-12-06 05:08:32.512469] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:20:54.364 [2024-12-06 05:08:32.512477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:20:54.364 [2024-12-06 05:08:32.512485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:20:54.364 [2024-12-06 05:08:32.512492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:20:54.364 [2024-12-06 05:08:32.512500] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:20:54.364 [2024-12-06 05:08:32.512516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:20:54.364 [2024-12-06 05:08:32.512524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:20:54.364 [2024-12-06 05:08:32.512534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:20:54.364 [2024-12-06 05:08:32.512541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:20:54.364 [2024-12-06 05:08:32.512549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:20:54.364 [2024-12-06 05:08:32.512556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:20:54.364 [2024-12-06 05:08:32.512564] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:20:54.364 [2024-12-06 05:08:32.512572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:20:54.364 [2024-12-06 05:08:32.512580] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:20:54.364 [2024-12-06 05:08:32.512588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:20:54.364 [2024-12-06 05:08:32.512598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:20:54.364 [2024-12-06 05:08:32.512606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:20:54.364 [2024-12-06 05:08:32.512615] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:20:54.364 [2024-12-06 05:08:32.512623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:20:54.364 [2024-12-06 05:08:32.512631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:20:54.364 [2024-12-06 05:08:32.512640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:20:54.364 [2024-12-06 05:08:32.512648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:20:54.364 [2024-12-06 05:08:32.512657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:20:54.364 [2024-12-06 05:08:32.512692] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:20:54.364 [2024-12-06 05:08:32.512702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:20:54.364 [2024-12-06 05:08:32.512710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:20:54.364 [2024-12-06 05:08:32.512718] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:20:54.364 [2024-12-06 05:08:32.512728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:20:54.364 [2024-12-06 05:08:32.512736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:20:54.364 [2024-12-06 05:08:32.512744] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:20:54.364 [2024-12-06 05:08:32.512752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:20:54.364 [2024-12-06 05:08:32.512760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:20:54.364 [2024-12-06 05:08:32.512769] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:20:54.364 [2024-12-06 05:08:32.512778] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:20:54.364 [2024-12-06 05:08:32.512786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:20:54.364 [2024-12-06 05:08:32.512795] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:20:54.364 [2024-12-06 05:08:32.512803] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:20:54.364 [2024-12-06 05:08:32.512811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:20:54.364 [2024-12-06 05:08:32.512818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:20:54.364 [2024-12-06 05:08:32.512828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:20:54.364 [2024-12-06 05:08:32.512837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:20:54.364 [2024-12-06 05:08:32.512846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:20:54.364 [2024-12-06 05:08:32.512854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:20:54.364 [2024-12-06 05:08:32.512862] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:20:54.364 [2024-12-06 05:08:32.512870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:20:54.364 [2024-12-06 05:08:32.512880] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:20:54.364 [2024-12-06 05:08:32.512888] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:20:54.364 [2024-12-06 05:08:32.512896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:20:54.364 [2024-12-06 05:08:32.512909] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:20:54.364 [2024-12-06 05:08:32.512919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:20:54.364 [2024-12-06 05:08:32.512929] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:20:54.364 [2024-12-06 05:08:32.512937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:20:54.364 [2024-12-06 05:08:32.512945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:20:54.364 [2024-12-06 05:08:32.512952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:20:54.364 [2024-12-06 
05:08:32.512960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:20:54.364 [2024-12-06 05:08:32.512968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:20:54.364 [2024-12-06 05:08:32.512978] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:20:54.364 [2024-12-06 05:08:32.512995] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:20:54.364 [2024-12-06 05:08:32.513018] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: d8aca738-c2cb-42fc-8bd7-b1e6d40ace6a 00:20:54.364 [2024-12-06 05:08:32.513028] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:20:54.364 [2024-12-06 05:08:32.513045] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:20:54.365 [2024-12-06 05:08:32.513053] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:20:54.365 [2024-12-06 05:08:32.513062] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:20:54.365 [2024-12-06 05:08:32.513070] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:20:54.365 [2024-12-06 05:08:32.513079] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:20:54.365 [2024-12-06 05:08:32.513087] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:20:54.365 [2024-12-06 05:08:32.513094] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:20:54.365 [2024-12-06 05:08:32.513100] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:20:54.365 [2024-12-06 05:08:32.513122] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.365 [2024-12-06 05:08:32.513135] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:20:54.365 [2024-12-06 05:08:32.513151] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.033 ms 00:20:54.365 [2024-12-06 05:08:32.513162] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.365 [2024-12-06 05:08:32.515606] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.365 [2024-12-06 05:08:32.515640] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:20:54.365 [2024-12-06 05:08:32.515692] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.421 ms 00:20:54.365 [2024-12-06 05:08:32.515703] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.365 [2024-12-06 05:08:32.515849] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.365 [2024-12-06 05:08:32.515866] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:20:54.365 [2024-12-06 05:08:32.515876] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.104 ms 00:20:54.365 [2024-12-06 05:08:32.515883] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.365 [2024-12-06 05:08:32.522821] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:54.365 [2024-12-06 05:08:32.522876] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:54.365 [2024-12-06 05:08:32.522889] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:54.365 [2024-12-06 05:08:32.522898] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.365 [2024-12-06 05:08:32.522959] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl0] Rollback 00:20:54.365 [2024-12-06 05:08:32.522974] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:54.365 [2024-12-06 05:08:32.522983] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:54.365 [2024-12-06 05:08:32.522991] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.365 [2024-12-06 05:08:32.523048] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:54.365 [2024-12-06 05:08:32.523060] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:54.365 [2024-12-06 05:08:32.523068] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:54.365 [2024-12-06 05:08:32.523081] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.365 [2024-12-06 05:08:32.523099] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:54.365 [2024-12-06 05:08:32.523107] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:54.365 [2024-12-06 05:08:32.523118] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:54.365 [2024-12-06 05:08:32.523126] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.365 [2024-12-06 05:08:32.537482] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:54.365 [2024-12-06 05:08:32.537711] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:54.365 [2024-12-06 05:08:32.537734] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:54.365 [2024-12-06 05:08:32.537742] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.365 [2024-12-06 05:08:32.549265] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:54.365 [2024-12-06 05:08:32.549454] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:54.365 [2024-12-06 05:08:32.549480] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:54.365 [2024-12-06 05:08:32.549490] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.365 [2024-12-06 05:08:32.549548] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:54.365 [2024-12-06 05:08:32.549558] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:54.365 [2024-12-06 05:08:32.549567] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:54.365 [2024-12-06 05:08:32.549575] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.365 [2024-12-06 05:08:32.549611] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:54.365 [2024-12-06 05:08:32.549622] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:54.365 [2024-12-06 05:08:32.549630] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:54.365 [2024-12-06 05:08:32.549642] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.365 [2024-12-06 05:08:32.549760] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:54.365 [2024-12-06 05:08:32.549772] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:54.365 [2024-12-06 05:08:32.549781] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:54.365 [2024-12-06 05:08:32.549790] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 
0 00:20:54.365 [2024-12-06 05:08:32.549829] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:54.365 [2024-12-06 05:08:32.549838] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:20:54.365 [2024-12-06 05:08:32.549848] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:54.365 [2024-12-06 05:08:32.549863] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.365 [2024-12-06 05:08:32.549911] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:54.365 [2024-12-06 05:08:32.549923] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:54.365 [2024-12-06 05:08:32.549933] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:54.365 [2024-12-06 05:08:32.549945] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.365 [2024-12-06 05:08:32.550000] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:54.365 [2024-12-06 05:08:32.550013] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:54.365 [2024-12-06 05:08:32.550024] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:54.365 [2024-12-06 05:08:32.550036] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.365 [2024-12-06 05:08:32.550181] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 72.198 ms, result 0 00:20:54.626 00:20:54.626 00:20:54.626 05:08:32 ftl.ftl_restore -- ftl/restore.sh@76 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:20:57.171 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:20:57.171 05:08:34 ftl.ftl_restore -- ftl/restore.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --seek=131072 00:20:57.171 [2024-12-06 05:08:34.998490] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:20:57.171 [2024-12-06 05:08:34.998699] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88015 ] 00:20:57.171 [2024-12-06 05:08:35.130828] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:57.171 [2024-12-06 05:08:35.169027] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:20:57.171 [2024-12-06 05:08:35.285402] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:57.171 [2024-12-06 05:08:35.285489] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:57.433 [2024-12-06 05:08:35.446092] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:57.433 [2024-12-06 05:08:35.446151] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:20:57.433 [2024-12-06 05:08:35.446169] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:20:57.433 [2024-12-06 05:08:35.446178] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:57.433 [2024-12-06 05:08:35.446233] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:57.433 [2024-12-06 05:08:35.446248] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:57.433 [2024-12-06 05:08:35.446258] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:20:57.433 [2024-12-06 05:08:35.446272] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:57.433 [2024-12-06 05:08:35.446298] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:20:57.433 [2024-12-06 05:08:35.446574] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:20:57.433 [2024-12-06 05:08:35.446592] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:57.433 [2024-12-06 05:08:35.446606] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:57.433 [2024-12-06 05:08:35.446618] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.299 ms 00:20:57.433 [2024-12-06 05:08:35.446626] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:57.434 [2024-12-06 05:08:35.448391] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:20:57.434 [2024-12-06 05:08:35.452224] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:57.434 [2024-12-06 05:08:35.452278] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:20:57.434 [2024-12-06 05:08:35.452290] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.835 ms 00:20:57.434 [2024-12-06 05:08:35.452298] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:57.434 [2024-12-06 05:08:35.452377] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:57.434 [2024-12-06 05:08:35.452390] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:20:57.434 [2024-12-06 05:08:35.452401] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.024 ms 00:20:57.434 [2024-12-06 05:08:35.452409] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:57.434 [2024-12-06 05:08:35.460544] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:20:57.434 [2024-12-06 05:08:35.460592] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:57.434 [2024-12-06 05:08:35.460604] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.091 ms 00:20:57.434 [2024-12-06 05:08:35.460618] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:57.434 [2024-12-06 05:08:35.460739] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:57.434 [2024-12-06 05:08:35.460751] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:57.434 [2024-12-06 05:08:35.460765] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.093 ms 00:20:57.434 [2024-12-06 05:08:35.460773] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:57.434 [2024-12-06 05:08:35.460834] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:57.434 [2024-12-06 05:08:35.460849] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:20:57.434 [2024-12-06 05:08:35.460858] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:20:57.434 [2024-12-06 05:08:35.460866] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:57.434 [2024-12-06 05:08:35.460892] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:57.434 [2024-12-06 05:08:35.462985] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:57.434 [2024-12-06 05:08:35.463024] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:57.434 [2024-12-06 05:08:35.463034] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.098 ms 00:20:57.434 [2024-12-06 05:08:35.463042] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:57.434 [2024-12-06 05:08:35.463085] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:57.434 [2024-12-06 05:08:35.463094] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:20:57.434 [2024-12-06 05:08:35.463103] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:20:57.434 [2024-12-06 05:08:35.463111] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:57.434 [2024-12-06 05:08:35.463137] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:20:57.434 [2024-12-06 05:08:35.463161] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:20:57.434 [2024-12-06 05:08:35.463207] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:20:57.434 [2024-12-06 05:08:35.463224] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:20:57.434 [2024-12-06 05:08:35.463330] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:20:57.434 [2024-12-06 05:08:35.463342] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:20:57.434 [2024-12-06 05:08:35.463352] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:20:57.434 [2024-12-06 05:08:35.463367] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:20:57.434 [2024-12-06 05:08:35.463377] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:20:57.434 [2024-12-06 05:08:35.463385] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:20:57.434 [2024-12-06 05:08:35.463397] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:20:57.434 [2024-12-06 05:08:35.463409] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:20:57.434 [2024-12-06 05:08:35.463418] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:20:57.434 [2024-12-06 05:08:35.463426] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:57.434 [2024-12-06 05:08:35.463435] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:20:57.434 [2024-12-06 05:08:35.463443] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.292 ms 00:20:57.434 [2024-12-06 05:08:35.463450] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:57.434 [2024-12-06 05:08:35.463535] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:57.434 [2024-12-06 05:08:35.463547] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:20:57.434 [2024-12-06 05:08:35.463555] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:20:57.434 [2024-12-06 05:08:35.463563] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:57.434 [2024-12-06 05:08:35.463659] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:20:57.434 [2024-12-06 05:08:35.463687] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:20:57.434 [2024-12-06 05:08:35.463701] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:57.434 [2024-12-06 05:08:35.463710] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:57.434 [2024-12-06 05:08:35.463720] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:20:57.434 [2024-12-06 05:08:35.463728] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:20:57.434 [2024-12-06 05:08:35.463736] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:20:57.434 [2024-12-06 05:08:35.463745] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:20:57.434 [2024-12-06 05:08:35.463754] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:20:57.434 [2024-12-06 05:08:35.463762] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:57.434 [2024-12-06 05:08:35.463771] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:20:57.434 [2024-12-06 05:08:35.463780] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:20:57.434 [2024-12-06 05:08:35.463790] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:57.434 [2024-12-06 05:08:35.463798] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:20:57.434 [2024-12-06 05:08:35.463807] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:20:57.434 [2024-12-06 05:08:35.463815] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:57.434 [2024-12-06 05:08:35.463822] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:20:57.434 [2024-12-06 05:08:35.463830] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:20:57.434 [2024-12-06 05:08:35.463839] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:57.434 [2024-12-06 05:08:35.463847] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:20:57.434 [2024-12-06 05:08:35.463858] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:20:57.434 [2024-12-06 05:08:35.463866] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:57.434 [2024-12-06 05:08:35.463875] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:20:57.434 [2024-12-06 05:08:35.463882] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:20:57.434 [2024-12-06 05:08:35.463890] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:57.434 [2024-12-06 05:08:35.463898] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:20:57.434 [2024-12-06 05:08:35.463906] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:20:57.434 [2024-12-06 05:08:35.463914] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:57.434 [2024-12-06 05:08:35.463929] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:20:57.434 [2024-12-06 05:08:35.463937] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:20:57.434 [2024-12-06 05:08:35.463945] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:57.434 [2024-12-06 05:08:35.463953] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:20:57.434 [2024-12-06 05:08:35.463961] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:20:57.434 [2024-12-06 05:08:35.463969] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:57.434 [2024-12-06 05:08:35.463977] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:20:57.434 [2024-12-06 05:08:35.463985] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:20:57.434 [2024-12-06 05:08:35.463993] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:57.434 [2024-12-06 05:08:35.464000] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:20:57.434 [2024-12-06 05:08:35.464008] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:20:57.434 [2024-12-06 05:08:35.464016] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:57.434 [2024-12-06 05:08:35.464025] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:20:57.434 [2024-12-06 05:08:35.464032] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:20:57.434 [2024-12-06 05:08:35.464040] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:57.434 [2024-12-06 05:08:35.464047] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:20:57.434 [2024-12-06 05:08:35.464057] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:20:57.434 [2024-12-06 05:08:35.464070] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:57.434 [2024-12-06 05:08:35.464081] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:57.434 [2024-12-06 05:08:35.464089] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:20:57.434 [2024-12-06 05:08:35.464095] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:20:57.434 [2024-12-06 05:08:35.464102] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:20:57.434 
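The region sizes in this dump can be cross-checked by hand: the sb region is 0x20 blocks and is printed as 0.12 MiB, so one FTL block is 4 KiB, and the l2p region (blk_sz 0x5000 = 20480 blocks in the superblock dump) is exactly the reported 20971520 L2P entries times the 4-byte address size. A small self-contained C check of that arithmetic follows; the variable names are ours, only the constants come from the dump:

    #include <assert.h>
    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        /* Constants copied from the layout dump; names are illustrative. */
        const uint64_t l2p_entries       = 20971520; /* "L2P entries: 20971520" */
        const uint64_t l2p_addr_size     = 4;        /* "L2P address size: 4" (bytes) */
        const uint64_t ftl_block         = 4096;     /* sb: 0x20 blocks printed as 0.12 MiB */
        const uint64_t l2p_region_blocks = 0x5000;   /* SB region type:0x2 blk_sz:0x5000 */

        /* One 4-byte entry per logical block -> exactly the 80.00 MiB l2p region. */
        uint64_t l2p_bytes = l2p_entries * l2p_addr_size;
        assert(l2p_bytes == 80ull * 1024 * 1024);
        assert(l2p_bytes == l2p_region_blocks * ftl_block);

        /* Mapped logical space: 20971520 blocks * 4 KiB = 81920 MiB, versus the
         * 102400.00 MiB data_btm region -- an 80/100 split, consistent with a
         * 20% over-provisioning reserve (inferred, not stated in the log). */
        printf("logical: %llu MiB of 102400 MiB\n",
               (unsigned long long)(l2p_entries * ftl_block >> 20));
        return 0;
    }

The 80/100 ratio between mapped logical space and the data_btm region is an inference from these numbers; the log itself does not name an over-provisioning setting.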
[2024-12-06 05:08:35.464109] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:20:57.434 [2024-12-06 05:08:35.464116] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:20:57.434 [2024-12-06 05:08:35.464125] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:20:57.434 [2024-12-06 05:08:35.464133] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:20:57.434 [2024-12-06 05:08:35.464143] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:57.434 [2024-12-06 05:08:35.464152] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:20:57.434 [2024-12-06 05:08:35.464160] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:20:57.434 [2024-12-06 05:08:35.464168] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:20:57.434 [2024-12-06 05:08:35.464175] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:20:57.434 [2024-12-06 05:08:35.464182] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:20:57.434 [2024-12-06 05:08:35.464192] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:20:57.434 [2024-12-06 05:08:35.464199] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:20:57.434 [2024-12-06 05:08:35.464207] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:20:57.434 [2024-12-06 05:08:35.464214] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:20:57.434 [2024-12-06 05:08:35.464228] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:20:57.434 [2024-12-06 05:08:35.464236] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:20:57.434 [2024-12-06 05:08:35.464243] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:20:57.434 [2024-12-06 05:08:35.464250] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:20:57.434 [2024-12-06 05:08:35.464257] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:20:57.434 [2024-12-06 05:08:35.464264] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:20:57.434 [2024-12-06 05:08:35.464277] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:57.434 [2024-12-06 05:08:35.464292] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:20:57.434 [2024-12-06 05:08:35.464299] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:20:57.434 [2024-12-06 05:08:35.464307] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:20:57.435 [2024-12-06 05:08:35.464315] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:20:57.435 [2024-12-06 05:08:35.464322] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:57.435 [2024-12-06 05:08:35.464335] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:20:57.435 [2024-12-06 05:08:35.464344] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.731 ms 00:20:57.435 [2024-12-06 05:08:35.464351] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:57.435 [2024-12-06 05:08:35.489791] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:57.435 [2024-12-06 05:08:35.490013] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:57.435 [2024-12-06 05:08:35.490098] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.392 ms 00:20:57.435 [2024-12-06 05:08:35.490129] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:57.435 [2024-12-06 05:08:35.490258] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:57.435 [2024-12-06 05:08:35.490488] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:20:57.435 [2024-12-06 05:08:35.490529] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.073 ms 00:20:57.435 [2024-12-06 05:08:35.490552] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:57.435 [2024-12-06 05:08:35.502777] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:57.435 [2024-12-06 05:08:35.502942] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:57.435 [2024-12-06 05:08:35.503000] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.122 ms 00:20:57.435 [2024-12-06 05:08:35.503022] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:57.435 [2024-12-06 05:08:35.503073] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:57.435 [2024-12-06 05:08:35.503094] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:57.435 [2024-12-06 05:08:35.503115] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:20:57.435 [2024-12-06 05:08:35.503134] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:57.435 [2024-12-06 05:08:35.503732] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:57.435 [2024-12-06 05:08:35.503970] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:57.435 [2024-12-06 05:08:35.504040] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.531 ms 00:20:57.435 [2024-12-06 05:08:35.504063] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:57.435 [2024-12-06 05:08:35.504238] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:57.435 [2024-12-06 05:08:35.504959] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:57.435 [2024-12-06 05:08:35.505017] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.132 ms 00:20:57.435 [2024-12-06 05:08:35.505039] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:57.435 [2024-12-06 05:08:35.512122] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:57.435 [2024-12-06 05:08:35.512277] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:57.435 [2024-12-06 05:08:35.512332] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.961 ms 00:20:57.435 [2024-12-06 05:08:35.512355] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:57.435 [2024-12-06 05:08:35.516255] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:20:57.435 [2024-12-06 05:08:35.516429] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:20:57.435 [2024-12-06 05:08:35.516495] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:57.435 [2024-12-06 05:08:35.516516] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:20:57.435 [2024-12-06 05:08:35.516537] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.005 ms 00:20:57.435 [2024-12-06 05:08:35.516555] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:57.435 [2024-12-06 05:08:35.532729] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:57.435 [2024-12-06 05:08:35.532910] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:20:57.435 [2024-12-06 05:08:35.532972] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.995 ms 00:20:57.435 [2024-12-06 05:08:35.533002] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:57.435 [2024-12-06 05:08:35.535855] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:57.435 [2024-12-06 05:08:35.536008] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:20:57.435 [2024-12-06 05:08:35.536063] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.713 ms 00:20:57.435 [2024-12-06 05:08:35.536085] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:57.435 [2024-12-06 05:08:35.538751] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:57.435 [2024-12-06 05:08:35.538901] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:20:57.435 [2024-12-06 05:08:35.538955] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.537 ms 00:20:57.435 [2024-12-06 05:08:35.538977] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:57.435 [2024-12-06 05:08:35.539654] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:57.435 [2024-12-06 05:08:35.539772] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:20:57.435 [2024-12-06 05:08:35.539879] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.272 ms 00:20:57.435 [2024-12-06 05:08:35.539905] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:57.435 [2024-12-06 05:08:35.563796] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:57.435 [2024-12-06 05:08:35.564025] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:20:57.435 [2024-12-06 05:08:35.564104] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
23.838 ms 00:20:57.435 [2024-12-06 05:08:35.564128] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:57.435 [2024-12-06 05:08:35.572677] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:20:57.435 [2024-12-06 05:08:35.576032] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:57.435 [2024-12-06 05:08:35.576189] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:20:57.435 [2024-12-06 05:08:35.576246] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.848 ms 00:20:57.435 [2024-12-06 05:08:35.576269] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:57.435 [2024-12-06 05:08:35.576372] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:57.435 [2024-12-06 05:08:35.576400] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:20:57.435 [2024-12-06 05:08:35.576420] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:20:57.435 [2024-12-06 05:08:35.576440] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:57.435 [2024-12-06 05:08:35.576590] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:57.435 [2024-12-06 05:08:35.576621] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:20:57.435 [2024-12-06 05:08:35.576649] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:20:57.435 [2024-12-06 05:08:35.576742] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:57.435 [2024-12-06 05:08:35.576795] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:57.435 [2024-12-06 05:08:35.576807] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:20:57.435 [2024-12-06 05:08:35.576818] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:20:57.435 [2024-12-06 05:08:35.576827] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:57.435 [2024-12-06 05:08:35.576865] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:20:57.435 [2024-12-06 05:08:35.576879] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:57.435 [2024-12-06 05:08:35.576888] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:20:57.435 [2024-12-06 05:08:35.576896] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:20:57.435 [2024-12-06 05:08:35.576908] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:57.435 [2024-12-06 05:08:35.582618] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:57.435 [2024-12-06 05:08:35.582690] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:20:57.435 [2024-12-06 05:08:35.582716] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.690 ms 00:20:57.435 [2024-12-06 05:08:35.582726] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:57.435 [2024-12-06 05:08:35.582815] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:57.435 [2024-12-06 05:08:35.582826] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:20:57.435 [2024-12-06 05:08:35.582834] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:20:57.435 [2024-12-06 05:08:35.582843] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:57.435 
[2024-12-06 05:08:35.584037] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 137.469 ms, result 0 00:20:58.376  [2024-12-06T05:08:37.992Z] Copying: 12/1024 [MB] (12 MBps) [... flattened spdk_dd progress-bar output from 2024-12-06T05:08:38Z through 05:09:32Z elided: incremental "Copying: N/1024 [MB]" updates at roughly 10-48 MBps ...] [2024-12-06T05:09:32.746Z] Copying: 1045112/1048576 [kB] (10064
kBps) [2024-12-06T05:09:33.007Z] Copying: 1048404/1048576 [kB] (3292 kBps) [2024-12-06T05:09:33.007Z] Copying: 1024/1024 [MB] (average 17 MBps)[2024-12-06 05:09:32.754045] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:54.775 [2024-12-06 05:09:32.754142] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:21:54.775 [2024-12-06 05:09:32.754159] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:21:54.775 [2024-12-06 05:09:32.754169] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:54.775 [2024-12-06 05:09:32.757731] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:21:54.775 [2024-12-06 05:09:32.760151] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:54.775 [2024-12-06 05:09:32.760357] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:21:54.775 [2024-12-06 05:09:32.760391] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.312 ms 00:21:54.775 [2024-12-06 05:09:32.760405] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:54.775 [2024-12-06 05:09:32.773014] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:54.775 [2024-12-06 05:09:32.773073] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:21:54.775 [2024-12-06 05:09:32.773088] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.246 ms 00:21:54.775 [2024-12-06 05:09:32.773097] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:54.775 [2024-12-06 05:09:32.797040] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:54.775 [2024-12-06 05:09:32.797095] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:21:54.775 [2024-12-06 05:09:32.797108] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.922 ms 00:21:54.775 [2024-12-06 05:09:32.797128] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:54.775 [2024-12-06 05:09:32.803351] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:54.775 [2024-12-06 05:09:32.803551] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:21:54.775 [2024-12-06 05:09:32.803574] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.172 ms 00:21:54.775 [2024-12-06 05:09:32.803583] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:54.775 [2024-12-06 05:09:32.806477] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:54.775 [2024-12-06 05:09:32.806528] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:21:54.775 [2024-12-06 05:09:32.806540] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.841 ms 00:21:54.775 [2024-12-06 05:09:32.806548] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:54.775 [2024-12-06 05:09:32.812141] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:54.775 [2024-12-06 05:09:32.812340] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:21:54.775 [2024-12-06 05:09:32.812412] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.544 ms 00:21:54.775 [2024-12-06 05:09:32.812439] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:55.037 [2024-12-06 05:09:33.102974] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:55.037 
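The averaged figure in the final "Copying" summary above can be sanity-checked against the surrounding timestamps (a back-of-the-envelope sketch, not part of the captured test output): FTL startup finished at 05:08:35.584 and the final summary was printed at 05:09:33.007Z, about 57.4 s for 1024 MiB, i.e. roughly 17.8 MBps, consistent with the reported "average 17 MBps".

  # sketch: average throughput over the copy window assumed above
  # (1024 MiB transferred in ~57.4 s, per the log timestamps)
  echo "scale=1; 1024 / 57.4" | bc   # ~17.8 MB/s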
[2024-12-06 05:09:33.103173] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:21:55.037 [2024-12-06 05:09:33.103267] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 290.468 ms 00:21:55.037 [2024-12-06 05:09:33.103293] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:55.037 [2024-12-06 05:09:33.106802] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:55.037 [2024-12-06 05:09:33.106982] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:21:55.037 [2024-12-06 05:09:33.107086] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.469 ms 00:21:55.037 [2024-12-06 05:09:33.107111] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:55.037 [2024-12-06 05:09:33.109955] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:55.037 [2024-12-06 05:09:33.110133] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:21:55.037 [2024-12-06 05:09:33.110203] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.792 ms 00:21:55.037 [2024-12-06 05:09:33.110226] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:55.037 [2024-12-06 05:09:33.112570] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:55.037 [2024-12-06 05:09:33.112775] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:21:55.037 [2024-12-06 05:09:33.112847] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.294 ms 00:21:55.037 [2024-12-06 05:09:33.112872] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:55.037 [2024-12-06 05:09:33.115833] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:55.037 [2024-12-06 05:09:33.116006] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:21:55.037 [2024-12-06 05:09:33.116067] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.881 ms 00:21:55.037 [2024-12-06 05:09:33.116088] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:55.037 [2024-12-06 05:09:33.116135] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:21:55.037 [2024-12-06 05:09:33.116164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 100608 / 261120 wr_cnt: 1 state: open 00:21:55.037 [2024-12-06 05:09:33.116198] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:21:55.037 [2024-12-06 05:09:33.116266] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:21:55.037 [2024-12-06 05:09:33.116299] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:21:55.037 [2024-12-06 05:09:33.116329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:21:55.037 [2024-12-06 05:09:33.116359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:21:55.037 [2024-12-06 05:09:33.116688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:21:55.037 [2024-12-06 05:09:33.116722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:21:55.037 [2024-12-06 05:09:33.116752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 
00:21:55.037 [2024-12-06 05:09:33.116814] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:21:55.037 [2024-12-06 05:09:33.116845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:21:55.037 [2024-12-06 05:09:33.116875] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:21:55.037 [2024-12-06 05:09:33.116924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:21:55.037 [2024-12-06 05:09:33.116956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:21:55.037 [2024-12-06 05:09:33.116985] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:21:55.037 [2024-12-06 05:09:33.117013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:21:55.037 [2024-12-06 05:09:33.117043] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:21:55.037 [2024-12-06 05:09:33.117073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:21:55.037 [2024-12-06 05:09:33.117101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:21:55.037 [2024-12-06 05:09:33.117166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:21:55.037 [2024-12-06 05:09:33.117199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:21:55.037 [2024-12-06 05:09:33.117229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:21:55.037 [2024-12-06 05:09:33.117257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:21:55.037 [2024-12-06 05:09:33.117312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:21:55.037 [2024-12-06 05:09:33.117344] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:21:55.037 [2024-12-06 05:09:33.117373] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:21:55.037 [2024-12-06 05:09:33.117402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:21:55.037 [2024-12-06 05:09:33.117432] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:21:55.037 [2024-12-06 05:09:33.117481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:21:55.037 [2024-12-06 05:09:33.117511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:21:55.037 [2024-12-06 05:09:33.117540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:21:55.037 [2024-12-06 05:09:33.117568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:21:55.037 [2024-12-06 05:09:33.117597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:21:55.037 [2024-12-06 05:09:33.117626] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 
wr_cnt: 0 state: free 00:21:55.037 [2024-12-06 05:09:33.117711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:21:55.037 [2024-12-06 05:09:33.117745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:21:55.037 [2024-12-06 05:09:33.117774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:21:55.037 [2024-12-06 05:09:33.117804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:21:55.037 [2024-12-06 05:09:33.117860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:21:55.037 [2024-12-06 05:09:33.117890] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:21:55.037 [2024-12-06 05:09:33.117920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:21:55.037 [2024-12-06 05:09:33.117949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:21:55.037 [2024-12-06 05:09:33.118003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:21:55.037 [2024-12-06 05:09:33.118148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:21:55.037 [2024-12-06 05:09:33.118177] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:21:55.037 [2024-12-06 05:09:33.118207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:21:55.037 [2024-12-06 05:09:33.118236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:21:55.037 [2024-12-06 05:09:33.118265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:21:55.038 [2024-12-06 05:09:33.118376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:21:55.038 [2024-12-06 05:09:33.118406] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:21:55.038 [2024-12-06 05:09:33.118436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:21:55.038 [2024-12-06 05:09:33.118465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:21:55.038 [2024-12-06 05:09:33.118511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:21:55.038 [2024-12-06 05:09:33.118522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:21:55.038 [2024-12-06 05:09:33.118530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:21:55.038 [2024-12-06 05:09:33.118538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:21:55.038 [2024-12-06 05:09:33.118547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:21:55.038 [2024-12-06 05:09:33.118556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:21:55.038 [2024-12-06 05:09:33.118564] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 59: 0 / 261120 wr_cnt: 0 state: free 00:21:55.038 [2024-12-06 05:09:33.118572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:21:55.038 [2024-12-06 05:09:33.118580] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:21:55.038 [2024-12-06 05:09:33.118589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:21:55.038 [2024-12-06 05:09:33.118598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:21:55.038 [2024-12-06 05:09:33.118606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:21:55.038 [2024-12-06 05:09:33.118620] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:21:55.038 [2024-12-06 05:09:33.118628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:21:55.038 [2024-12-06 05:09:33.118636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:21:55.038 [2024-12-06 05:09:33.118644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:21:55.038 [2024-12-06 05:09:33.118651] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:21:55.038 [2024-12-06 05:09:33.118659] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:21:55.038 [2024-12-06 05:09:33.118760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:21:55.038 [2024-12-06 05:09:33.118791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:21:55.038 [2024-12-06 05:09:33.118854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:21:55.038 [2024-12-06 05:09:33.118886] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:21:55.038 [2024-12-06 05:09:33.118896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:21:55.038 [2024-12-06 05:09:33.118905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:21:55.038 [2024-12-06 05:09:33.118913] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:21:55.038 [2024-12-06 05:09:33.118922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:21:55.038 [2024-12-06 05:09:33.118931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:21:55.038 [2024-12-06 05:09:33.118943] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:21:55.038 [2024-12-06 05:09:33.118951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:21:55.038 [2024-12-06 05:09:33.118959] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:21:55.038 [2024-12-06 05:09:33.118968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:21:55.038 [2024-12-06 05:09:33.118977] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:21:55.038 [2024-12-06 05:09:33.118985] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:21:55.038 [2024-12-06 05:09:33.118993] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:21:55.038 [2024-12-06 05:09:33.119001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:21:55.038 [2024-12-06 05:09:33.119009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:21:55.038 [2024-12-06 05:09:33.119017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:21:55.038 [2024-12-06 05:09:33.119025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:21:55.038 [2024-12-06 05:09:33.119032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:21:55.038 [2024-12-06 05:09:33.119040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:21:55.038 [2024-12-06 05:09:33.119048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:21:55.038 [2024-12-06 05:09:33.119056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:21:55.038 [2024-12-06 05:09:33.119063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:21:55.038 [2024-12-06 05:09:33.119071] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:21:55.038 [2024-12-06 05:09:33.119079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:21:55.038 [2024-12-06 05:09:33.119087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:21:55.038 [2024-12-06 05:09:33.119094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:21:55.038 [2024-12-06 05:09:33.119102] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:21:55.038 [2024-12-06 05:09:33.119119] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:21:55.038 [2024-12-06 05:09:33.119129] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: d8aca738-c2cb-42fc-8bd7-b1e6d40ace6a 00:21:55.038 [2024-12-06 05:09:33.119137] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 100608 00:21:55.038 [2024-12-06 05:09:33.119146] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 101568 00:21:55.038 [2024-12-06 05:09:33.119163] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 100608 00:21:55.038 [2024-12-06 05:09:33.119178] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0095 00:21:55.038 [2024-12-06 05:09:33.119190] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:21:55.038 [2024-12-06 05:09:33.119199] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:21:55.038 [2024-12-06 05:09:33.119207] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:21:55.038 [2024-12-06 05:09:33.119214] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:21:55.038 [2024-12-06 
05:09:33.119221] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:21:55.038 [2024-12-06 05:09:33.119237] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:55.038 [2024-12-06 05:09:33.119247] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:21:55.038 [2024-12-06 05:09:33.119256] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.103 ms 00:21:55.038 [2024-12-06 05:09:33.119264] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:55.038 [2024-12-06 05:09:33.121745] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:55.038 [2024-12-06 05:09:33.121791] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:21:55.038 [2024-12-06 05:09:33.121806] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.437 ms 00:21:55.038 [2024-12-06 05:09:33.121819] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:55.038 [2024-12-06 05:09:33.121962] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:55.038 [2024-12-06 05:09:33.121973] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:21:55.038 [2024-12-06 05:09:33.121983] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.106 ms 00:21:55.038 [2024-12-06 05:09:33.121991] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:55.038 [2024-12-06 05:09:33.129108] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:55.038 [2024-12-06 05:09:33.129294] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:55.038 [2024-12-06 05:09:33.129313] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:55.038 [2024-12-06 05:09:33.129321] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:55.038 [2024-12-06 05:09:33.129384] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:55.038 [2024-12-06 05:09:33.129403] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:55.038 [2024-12-06 05:09:33.129411] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:55.038 [2024-12-06 05:09:33.129419] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:55.038 [2024-12-06 05:09:33.129490] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:55.038 [2024-12-06 05:09:33.129501] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:55.038 [2024-12-06 05:09:33.129510] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:55.038 [2024-12-06 05:09:33.129518] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:55.038 [2024-12-06 05:09:33.129534] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:55.038 [2024-12-06 05:09:33.129542] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:55.038 [2024-12-06 05:09:33.129555] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:55.038 [2024-12-06 05:09:33.129563] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:55.038 [2024-12-06 05:09:33.143774] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:55.038 [2024-12-06 05:09:33.143966] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:55.038 [2024-12-06 05:09:33.143993] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:55.038 [2024-12-06 05:09:33.144002] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:55.038 [2024-12-06 05:09:33.154416] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:55.038 [2024-12-06 05:09:33.154472] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:55.038 [2024-12-06 05:09:33.154485] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:55.038 [2024-12-06 05:09:33.154494] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:55.038 [2024-12-06 05:09:33.154544] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:55.038 [2024-12-06 05:09:33.154558] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:55.038 [2024-12-06 05:09:33.154566] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:55.038 [2024-12-06 05:09:33.154575] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:55.038 [2024-12-06 05:09:33.154637] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:55.038 [2024-12-06 05:09:33.154647] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:55.039 [2024-12-06 05:09:33.154660] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:55.039 [2024-12-06 05:09:33.154696] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:55.039 [2024-12-06 05:09:33.154788] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:55.039 [2024-12-06 05:09:33.154800] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:55.039 [2024-12-06 05:09:33.154813] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:55.039 [2024-12-06 05:09:33.154821] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:55.039 [2024-12-06 05:09:33.154860] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:55.039 [2024-12-06 05:09:33.154871] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:21:55.039 [2024-12-06 05:09:33.154880] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:55.039 [2024-12-06 05:09:33.154888] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:55.039 [2024-12-06 05:09:33.154933] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:55.039 [2024-12-06 05:09:33.154942] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:55.039 [2024-12-06 05:09:33.154955] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:55.039 [2024-12-06 05:09:33.154966] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:55.039 [2024-12-06 05:09:33.155011] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:55.039 [2024-12-06 05:09:33.155021] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:55.039 [2024-12-06 05:09:33.155031] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:55.039 [2024-12-06 05:09:33.155040] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:55.039 [2024-12-06 05:09:33.155166] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 402.550 ms, result 0 00:21:55.982 00:21:55.982 00:21:55.982 05:09:33 
ftl.ftl_restore -- ftl/restore.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --skip=131072 --count=262144 00:21:55.982 [2024-12-06 05:09:33.973659] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:21:55.982 [2024-12-06 05:09:33.973841] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88621 ] 00:21:55.982 [2024-12-06 05:09:34.110963] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:55.982 [2024-12-06 05:09:34.163284] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:21:56.245 [2024-12-06 05:09:34.274341] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:21:56.245 [2024-12-06 05:09:34.274416] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:21:56.245 [2024-12-06 05:09:34.435154] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:56.245 [2024-12-06 05:09:34.435218] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:21:56.245 [2024-12-06 05:09:34.435239] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:21:56.245 [2024-12-06 05:09:34.435251] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:56.245 [2024-12-06 05:09:34.435315] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:56.245 [2024-12-06 05:09:34.435326] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:56.245 [2024-12-06 05:09:34.435334] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:21:56.245 [2024-12-06 05:09:34.435348] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:56.245 [2024-12-06 05:09:34.435370] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:21:56.245 [2024-12-06 05:09:34.435656] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:21:56.245 [2024-12-06 05:09:34.435711] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:56.245 [2024-12-06 05:09:34.435724] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:56.245 [2024-12-06 05:09:34.435739] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.346 ms 00:21:56.245 [2024-12-06 05:09:34.435757] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:56.245 [2024-12-06 05:09:34.437563] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:21:56.245 [2024-12-06 05:09:34.441288] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:56.245 [2024-12-06 05:09:34.441346] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:21:56.245 [2024-12-06 05:09:34.441366] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.727 ms 00:21:56.245 [2024-12-06 05:09:34.441374] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:56.245 [2024-12-06 05:09:34.441469] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:56.245 [2024-12-06 05:09:34.441482] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Validate super block 00:21:56.245 [2024-12-06 05:09:34.441491] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:21:56.245 [2024-12-06 05:09:34.441499] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:56.245 [2024-12-06 05:09:34.450030] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:56.245 [2024-12-06 05:09:34.450078] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:56.245 [2024-12-06 05:09:34.450089] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.489 ms 00:21:56.245 [2024-12-06 05:09:34.450097] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:56.245 [2024-12-06 05:09:34.450213] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:56.245 [2024-12-06 05:09:34.450224] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:56.245 [2024-12-06 05:09:34.450236] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.079 ms 00:21:56.246 [2024-12-06 05:09:34.450244] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:56.246 [2024-12-06 05:09:34.450304] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:56.246 [2024-12-06 05:09:34.450314] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:21:56.246 [2024-12-06 05:09:34.450324] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:21:56.246 [2024-12-06 05:09:34.450332] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:56.246 [2024-12-06 05:09:34.450355] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:21:56.246 [2024-12-06 05:09:34.452413] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:56.246 [2024-12-06 05:09:34.452613] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:56.246 [2024-12-06 05:09:34.452633] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.064 ms 00:21:56.246 [2024-12-06 05:09:34.452641] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:56.246 [2024-12-06 05:09:34.452694] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:56.246 [2024-12-06 05:09:34.452703] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:21:56.246 [2024-12-06 05:09:34.452716] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:21:56.246 [2024-12-06 05:09:34.452724] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:56.246 [2024-12-06 05:09:34.452747] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:21:56.246 [2024-12-06 05:09:34.452771] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:21:56.246 [2024-12-06 05:09:34.452816] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:21:56.246 [2024-12-06 05:09:34.452832] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:21:56.246 [2024-12-06 05:09:34.452939] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:21:56.246 [2024-12-06 05:09:34.452951] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 
0x48 bytes 00:21:56.246 [2024-12-06 05:09:34.452961] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:21:56.246 [2024-12-06 05:09:34.452972] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:21:56.246 [2024-12-06 05:09:34.452984] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:21:56.246 [2024-12-06 05:09:34.452993] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:21:56.246 [2024-12-06 05:09:34.453001] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:21:56.246 [2024-12-06 05:09:34.453008] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:21:56.246 [2024-12-06 05:09:34.453016] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:21:56.246 [2024-12-06 05:09:34.453024] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:56.246 [2024-12-06 05:09:34.453038] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:21:56.246 [2024-12-06 05:09:34.453046] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.279 ms 00:21:56.246 [2024-12-06 05:09:34.453053] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:56.246 [2024-12-06 05:09:34.453142] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:56.246 [2024-12-06 05:09:34.453151] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:21:56.246 [2024-12-06 05:09:34.453162] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.075 ms 00:21:56.246 [2024-12-06 05:09:34.453171] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:56.246 [2024-12-06 05:09:34.453276] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:21:56.246 [2024-12-06 05:09:34.453287] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:21:56.246 [2024-12-06 05:09:34.453296] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:56.246 [2024-12-06 05:09:34.453309] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:56.246 [2024-12-06 05:09:34.453318] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:21:56.246 [2024-12-06 05:09:34.453326] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:21:56.246 [2024-12-06 05:09:34.453334] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:21:56.246 [2024-12-06 05:09:34.453342] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:21:56.246 [2024-12-06 05:09:34.453351] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:21:56.246 [2024-12-06 05:09:34.453365] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:56.246 [2024-12-06 05:09:34.453374] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:21:56.246 [2024-12-06 05:09:34.453381] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:21:56.246 [2024-12-06 05:09:34.453389] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:56.246 [2024-12-06 05:09:34.453397] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:21:56.246 [2024-12-06 05:09:34.453406] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:21:56.246 [2024-12-06 05:09:34.453414] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:56.246 [2024-12-06 05:09:34.453422] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:21:56.246 [2024-12-06 05:09:34.453432] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:21:56.246 [2024-12-06 05:09:34.453440] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:56.246 [2024-12-06 05:09:34.453448] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:21:56.246 [2024-12-06 05:09:34.453456] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:21:56.246 [2024-12-06 05:09:34.453464] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:56.246 [2024-12-06 05:09:34.453471] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:21:56.246 [2024-12-06 05:09:34.453479] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:21:56.246 [2024-12-06 05:09:34.453487] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:56.246 [2024-12-06 05:09:34.453499] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:21:56.246 [2024-12-06 05:09:34.453508] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:21:56.246 [2024-12-06 05:09:34.453515] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:56.246 [2024-12-06 05:09:34.453522] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:21:56.246 [2024-12-06 05:09:34.453528] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:21:56.246 [2024-12-06 05:09:34.453535] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:56.246 [2024-12-06 05:09:34.453541] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:21:56.246 [2024-12-06 05:09:34.453549] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:21:56.246 [2024-12-06 05:09:34.453555] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:56.246 [2024-12-06 05:09:34.453561] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:21:56.246 [2024-12-06 05:09:34.453568] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:21:56.246 [2024-12-06 05:09:34.453574] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:56.246 [2024-12-06 05:09:34.453581] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:21:56.246 [2024-12-06 05:09:34.453587] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:21:56.246 [2024-12-06 05:09:34.453594] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:56.246 [2024-12-06 05:09:34.453601] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:21:56.246 [2024-12-06 05:09:34.453609] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:21:56.246 [2024-12-06 05:09:34.453617] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:56.247 [2024-12-06 05:09:34.453624] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:21:56.247 [2024-12-06 05:09:34.453631] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:21:56.247 [2024-12-06 05:09:34.453639] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:56.247 [2024-12-06 05:09:34.453651] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:56.247 
[2024-12-06 05:09:34.453659] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:21:56.247 [2024-12-06 05:09:34.453680] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:21:56.247 [2024-12-06 05:09:34.453704] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:21:56.247 [2024-12-06 05:09:34.453712] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:21:56.247 [2024-12-06 05:09:34.453719] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:21:56.247 [2024-12-06 05:09:34.453727] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:21:56.247 [2024-12-06 05:09:34.453735] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:21:56.247 [2024-12-06 05:09:34.453748] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:56.247 [2024-12-06 05:09:34.453758] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:21:56.247 [2024-12-06 05:09:34.453765] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:21:56.247 [2024-12-06 05:09:34.453775] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:21:56.247 [2024-12-06 05:09:34.453782] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:21:56.247 [2024-12-06 05:09:34.453790] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:21:56.247 [2024-12-06 05:09:34.453797] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:21:56.247 [2024-12-06 05:09:34.453805] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:21:56.247 [2024-12-06 05:09:34.453813] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:21:56.247 [2024-12-06 05:09:34.453820] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:21:56.247 [2024-12-06 05:09:34.453833] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:21:56.247 [2024-12-06 05:09:34.453840] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:21:56.247 [2024-12-06 05:09:34.453848] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:21:56.247 [2024-12-06 05:09:34.453856] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:21:56.247 [2024-12-06 05:09:34.453863] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:21:56.247 [2024-12-06 05:09:34.453871] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB 
metadata layout - base dev: 00:21:56.247 [2024-12-06 05:09:34.453879] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:56.247 [2024-12-06 05:09:34.453888] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:21:56.247 [2024-12-06 05:09:34.453896] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:21:56.247 [2024-12-06 05:09:34.453905] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:21:56.247 [2024-12-06 05:09:34.453912] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:21:56.247 [2024-12-06 05:09:34.453920] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:56.247 [2024-12-06 05:09:34.453927] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:21:56.247 [2024-12-06 05:09:34.453939] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.716 ms 00:21:56.247 [2024-12-06 05:09:34.453946] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:56.510 [2024-12-06 05:09:34.476550] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:56.510 [2024-12-06 05:09:34.476622] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:56.510 [2024-12-06 05:09:34.476639] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.556 ms 00:21:56.510 [2024-12-06 05:09:34.476649] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:56.510 [2024-12-06 05:09:34.476799] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:56.510 [2024-12-06 05:09:34.476813] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:21:56.510 [2024-12-06 05:09:34.476823] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.083 ms 00:21:56.510 [2024-12-06 05:09:34.476832] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:56.510 [2024-12-06 05:09:34.488236] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:56.510 [2024-12-06 05:09:34.488285] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:56.510 [2024-12-06 05:09:34.488301] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.331 ms 00:21:56.510 [2024-12-06 05:09:34.488309] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:56.510 [2024-12-06 05:09:34.488345] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:56.510 [2024-12-06 05:09:34.488353] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:56.510 [2024-12-06 05:09:34.488364] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:21:56.510 [2024-12-06 05:09:34.488372] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:56.510 [2024-12-06 05:09:34.488902] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:56.510 [2024-12-06 05:09:34.488938] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:56.510 [2024-12-06 05:09:34.488948] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.478 ms 00:21:56.510 [2024-12-06 05:09:34.488957] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:56.510 [2024-12-06 05:09:34.489100] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:56.510 [2024-12-06 05:09:34.489109] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:56.510 [2024-12-06 05:09:34.489119] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.120 ms 00:21:56.510 [2024-12-06 05:09:34.489128] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:56.510 [2024-12-06 05:09:34.495490] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:56.510 [2024-12-06 05:09:34.495697] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:56.510 [2024-12-06 05:09:34.495721] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.337 ms 00:21:56.510 [2024-12-06 05:09:34.495729] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:56.510 [2024-12-06 05:09:34.499304] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:21:56.510 [2024-12-06 05:09:34.499466] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:21:56.510 [2024-12-06 05:09:34.499484] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:56.510 [2024-12-06 05:09:34.499493] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:21:56.510 [2024-12-06 05:09:34.499502] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.659 ms 00:21:56.510 [2024-12-06 05:09:34.499510] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:56.510 [2024-12-06 05:09:34.515223] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:56.510 [2024-12-06 05:09:34.515271] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:21:56.510 [2024-12-06 05:09:34.515292] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.673 ms 00:21:56.510 [2024-12-06 05:09:34.515301] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:56.510 [2024-12-06 05:09:34.518468] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:56.510 [2024-12-06 05:09:34.518639] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:21:56.510 [2024-12-06 05:09:34.518657] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.112 ms 00:21:56.510 [2024-12-06 05:09:34.518687] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:56.510 [2024-12-06 05:09:34.521300] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:56.510 [2024-12-06 05:09:34.521347] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:21:56.510 [2024-12-06 05:09:34.521364] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.565 ms 00:21:56.510 [2024-12-06 05:09:34.521371] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:56.510 [2024-12-06 05:09:34.521867] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:56.510 [2024-12-06 05:09:34.521912] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:21:56.510 [2024-12-06 05:09:34.521935] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.419 ms 00:21:56.510 [2024-12-06 05:09:34.521954] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 
0 00:21:56.510 [2024-12-06 05:09:34.545642] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:56.510 [2024-12-06 05:09:34.545979] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:21:56.510 [2024-12-06 05:09:34.545998] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.609 ms 00:21:56.510 [2024-12-06 05:09:34.546007] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:56.510 [2024-12-06 05:09:34.554136] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:21:56.510 [2024-12-06 05:09:34.557012] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:56.510 [2024-12-06 05:09:34.557171] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:21:56.510 [2024-12-06 05:09:34.557191] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.966 ms 00:21:56.510 [2024-12-06 05:09:34.557199] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:56.510 [2024-12-06 05:09:34.557277] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:56.510 [2024-12-06 05:09:34.557288] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:21:56.510 [2024-12-06 05:09:34.557298] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:21:56.510 [2024-12-06 05:09:34.557306] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:56.510 [2024-12-06 05:09:34.558974] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:56.510 [2024-12-06 05:09:34.559018] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:21:56.510 [2024-12-06 05:09:34.559029] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.630 ms 00:21:56.510 [2024-12-06 05:09:34.559040] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:56.510 [2024-12-06 05:09:34.559078] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:56.510 [2024-12-06 05:09:34.559093] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:21:56.510 [2024-12-06 05:09:34.559103] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:21:56.510 [2024-12-06 05:09:34.559111] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:56.510 [2024-12-06 05:09:34.559148] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:21:56.510 [2024-12-06 05:09:34.559159] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:56.510 [2024-12-06 05:09:34.559167] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:21:56.511 [2024-12-06 05:09:34.559175] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:21:56.511 [2024-12-06 05:09:34.559183] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:56.511 [2024-12-06 05:09:34.564422] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:56.511 [2024-12-06 05:09:34.564581] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:21:56.511 [2024-12-06 05:09:34.564598] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.215 ms 00:21:56.511 [2024-12-06 05:09:34.564612] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:56.511 [2024-12-06 05:09:34.564794] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:56.511 [2024-12-06 
05:09:34.564820] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:21:56.511 [2024-12-06 05:09:34.564831] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:21:56.511 [2024-12-06 05:09:34.564838] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:56.511 [2024-12-06 05:09:34.565940] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 130.301 ms, result 0 00:21:57.899  [2024-12-06T05:09:37.078Z] Copying: 9136/1048576 [kB] (9136 kBps) [2024-12-06T05:09:38.023Z] Copying: 19/1024 [MB] (10 MBps) [2024-12-06T05:09:39.134Z] Copying: 33/1024 [MB] (14 MBps) [2024-12-06T05:09:40.077Z] Copying: 46/1024 [MB] (12 MBps) [2024-12-06T05:09:41.023Z] Copying: 60/1024 [MB] (13 MBps) [2024-12-06T05:09:41.968Z] Copying: 76/1024 [MB] (15 MBps) [2024-12-06T05:09:42.913Z] Copying: 89/1024 [MB] (12 MBps) [2024-12-06T05:09:43.858Z] Copying: 101/1024 [MB] (12 MBps) [2024-12-06T05:09:44.847Z] Copying: 112/1024 [MB] (10 MBps) [2024-12-06T05:09:45.786Z] Copying: 129/1024 [MB] (17 MBps) [2024-12-06T05:09:47.166Z] Copying: 144/1024 [MB] (14 MBps) [2024-12-06T05:09:48.102Z] Copying: 164/1024 [MB] (20 MBps) [2024-12-06T05:09:49.041Z] Copying: 187/1024 [MB] (22 MBps) [2024-12-06T05:09:49.983Z] Copying: 212/1024 [MB] (25 MBps) [2024-12-06T05:09:50.923Z] Copying: 228/1024 [MB] (15 MBps) [2024-12-06T05:09:51.868Z] Copying: 244/1024 [MB] (15 MBps) [2024-12-06T05:09:52.810Z] Copying: 267/1024 [MB] (23 MBps) [2024-12-06T05:09:54.196Z] Copying: 293/1024 [MB] (25 MBps) [2024-12-06T05:09:54.769Z] Copying: 315/1024 [MB] (22 MBps) [2024-12-06T05:09:56.150Z] Copying: 331/1024 [MB] (16 MBps) [2024-12-06T05:09:57.092Z] Copying: 345/1024 [MB] (13 MBps) [2024-12-06T05:09:58.033Z] Copying: 357/1024 [MB] (11 MBps) [2024-12-06T05:09:58.971Z] Copying: 372/1024 [MB] (15 MBps) [2024-12-06T05:09:59.908Z] Copying: 390/1024 [MB] (17 MBps) [2024-12-06T05:10:00.849Z] Copying: 411/1024 [MB] (20 MBps) [2024-12-06T05:10:01.785Z] Copying: 426/1024 [MB] (15 MBps) [2024-12-06T05:10:03.166Z] Copying: 451/1024 [MB] (24 MBps) [2024-12-06T05:10:04.105Z] Copying: 471/1024 [MB] (20 MBps) [2024-12-06T05:10:05.044Z] Copying: 490/1024 [MB] (19 MBps) [2024-12-06T05:10:05.982Z] Copying: 512/1024 [MB] (21 MBps) [2024-12-06T05:10:06.925Z] Copying: 528/1024 [MB] (16 MBps) [2024-12-06T05:10:07.869Z] Copying: 548/1024 [MB] (20 MBps) [2024-12-06T05:10:08.813Z] Copying: 559/1024 [MB] (10 MBps) [2024-12-06T05:10:09.758Z] Copying: 570/1024 [MB] (10 MBps) [2024-12-06T05:10:11.148Z] Copying: 582/1024 [MB] (11 MBps) [2024-12-06T05:10:12.094Z] Copying: 595/1024 [MB] (13 MBps) [2024-12-06T05:10:13.037Z] Copying: 606/1024 [MB] (10 MBps) [2024-12-06T05:10:13.981Z] Copying: 620/1024 [MB] (14 MBps) [2024-12-06T05:10:14.943Z] Copying: 637/1024 [MB] (17 MBps) [2024-12-06T05:10:15.886Z] Copying: 647/1024 [MB] (10 MBps) [2024-12-06T05:10:16.829Z] Copying: 658/1024 [MB] (10 MBps) [2024-12-06T05:10:17.770Z] Copying: 670/1024 [MB] (11 MBps) [2024-12-06T05:10:19.158Z] Copying: 681/1024 [MB] (10 MBps) [2024-12-06T05:10:20.104Z] Copying: 692/1024 [MB] (10 MBps) [2024-12-06T05:10:21.049Z] Copying: 703/1024 [MB] (11 MBps) [2024-12-06T05:10:21.993Z] Copying: 713/1024 [MB] (10 MBps) [2024-12-06T05:10:22.935Z] Copying: 724/1024 [MB] (10 MBps) [2024-12-06T05:10:23.876Z] Copying: 743/1024 [MB] (19 MBps) [2024-12-06T05:10:24.821Z] Copying: 754/1024 [MB] (10 MBps) [2024-12-06T05:10:25.787Z] Copying: 764/1024 [MB] (10 MBps) [2024-12-06T05:10:26.778Z] Copying: 
774/1024 [MB] (10 MBps) [2024-12-06T05:10:28.157Z] Copying: 790/1024 [MB] (15 MBps) [2024-12-06T05:10:29.096Z] Copying: 808/1024 [MB] (18 MBps) [2024-12-06T05:10:30.039Z] Copying: 830/1024 [MB] (21 MBps) [2024-12-06T05:10:30.972Z] Copying: 840/1024 [MB] (10 MBps) [2024-12-06T05:10:31.905Z] Copying: 862/1024 [MB] (21 MBps) [2024-12-06T05:10:32.842Z] Copying: 884/1024 [MB] (22 MBps) [2024-12-06T05:10:33.774Z] Copying: 900/1024 [MB] (15 MBps) [2024-12-06T05:10:35.151Z] Copying: 921/1024 [MB] (20 MBps) [2024-12-06T05:10:36.091Z] Copying: 942/1024 [MB] (21 MBps) [2024-12-06T05:10:37.030Z] Copying: 953/1024 [MB] (10 MBps) [2024-12-06T05:10:37.970Z] Copying: 965/1024 [MB] (12 MBps) [2024-12-06T05:10:38.911Z] Copying: 980/1024 [MB] (14 MBps) [2024-12-06T05:10:39.856Z] Copying: 997/1024 [MB] (17 MBps) [2024-12-06T05:10:40.799Z] Copying: 1012/1024 [MB] (15 MBps) [2024-12-06T05:10:40.799Z] Copying: 1024/1024 [MB] (average 15 MBps)[2024-12-06 05:10:40.604069] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:02.567 [2024-12-06 05:10:40.604259] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:23:02.567 [2024-12-06 05:10:40.604278] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:23:02.567 [2024-12-06 05:10:40.604288] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.567 [2024-12-06 05:10:40.604314] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:23:02.567 [2024-12-06 05:10:40.605108] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:02.567 [2024-12-06 05:10:40.605140] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:23:02.567 [2024-12-06 05:10:40.605153] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.776 ms 00:23:02.567 [2024-12-06 05:10:40.605163] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.567 [2024-12-06 05:10:40.605504] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:02.567 [2024-12-06 05:10:40.605538] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:23:02.567 [2024-12-06 05:10:40.605550] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.295 ms 00:23:02.567 [2024-12-06 05:10:40.605560] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.567 [2024-12-06 05:10:40.613951] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:02.567 [2024-12-06 05:10:40.614004] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:23:02.567 [2024-12-06 05:10:40.614019] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.370 ms 00:23:02.567 [2024-12-06 05:10:40.614032] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.567 [2024-12-06 05:10:40.624867] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:02.567 [2024-12-06 05:10:40.624987] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:23:02.567 [2024-12-06 05:10:40.625010] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.768 ms 00:23:02.567 [2024-12-06 05:10:40.625023] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.567 [2024-12-06 05:10:40.628986] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:02.567 [2024-12-06 05:10:40.629041] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 
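
Each management step in this stream is traced by mngt/ftl_mngt.c as a fixed quadruple: an Action (or Rollback) marker, the step name, its duration, and a status code. Given a saved copy of this console output with one entry per line, as the console emits it, the per-step durations can be tabulated and summed against the overall figure that finish_msg prints at the end of the sequence (a sketch; the file name console.log is assumed):

  awk -F 'name: |duration: ' '
    /428:trace_step/ { step = $2 }                # capture the step name
    /430:trace_step/ {                            # pair it with its duration
      sub(/ ms.*/, "", $2)
      printf "%-32s %8.3f ms\n", step, $2
      total += $2
    }
    END { printf "%-32s %8.3f ms\n", "sum of steps", total }
  ' console.log
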
00:23:02.567 [2024-12-06 05:10:40.629057] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.790 ms 00:23:02.567 [2024-12-06 05:10:40.629069] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.567 [2024-12-06 05:10:40.635311] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:02.567 [2024-12-06 05:10:40.635498] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:23:02.567 [2024-12-06 05:10:40.635576] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.177 ms 00:23:02.567 [2024-12-06 05:10:40.635605] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.828 [2024-12-06 05:10:40.851039] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:02.828 [2024-12-06 05:10:40.851251] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:23:02.828 [2024-12-06 05:10:40.851323] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 215.353 ms 00:23:02.828 [2024-12-06 05:10:40.851349] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.828 [2024-12-06 05:10:40.854555] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:02.828 [2024-12-06 05:10:40.854791] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:23:02.828 [2024-12-06 05:10:40.854874] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.165 ms 00:23:02.828 [2024-12-06 05:10:40.854902] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.828 [2024-12-06 05:10:40.857052] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:02.828 [2024-12-06 05:10:40.857225] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:23:02.828 [2024-12-06 05:10:40.857292] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.093 ms 00:23:02.828 [2024-12-06 05:10:40.857315] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.828 [2024-12-06 05:10:40.858986] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:02.828 [2024-12-06 05:10:40.859150] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:23:02.828 [2024-12-06 05:10:40.859214] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.622 ms 00:23:02.828 [2024-12-06 05:10:40.859240] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.828 [2024-12-06 05:10:40.860903] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:02.828 [2024-12-06 05:10:40.861061] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:23:02.828 [2024-12-06 05:10:40.861123] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.578 ms 00:23:02.828 [2024-12-06 05:10:40.861145] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.828 [2024-12-06 05:10:40.861190] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:23:02.828 [2024-12-06 05:10:40.861220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 131072 / 261120 wr_cnt: 1 state: open 00:23:02.828 [2024-12-06 05:10:40.861252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:23:02.828 [2024-12-06 05:10:40.861280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:23:02.828 [2024-12-06 05:10:40.861348] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:23:02.828 [2024-12-06 05:10:40.861377] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:23:02.828 [2024-12-06 05:10:40.861407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:23:02.828 [2024-12-06 05:10:40.861435] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:23:02.828 [2024-12-06 05:10:40.861463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:23:02.828 [2024-12-06 05:10:40.861591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:23:02.828 [2024-12-06 05:10:40.861618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:23:02.828 [2024-12-06 05:10:40.861646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:23:02.828 [2024-12-06 05:10:40.861702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:23:02.828 [2024-12-06 05:10:40.861775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:23:02.828 [2024-12-06 05:10:40.861805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:23:02.828 [2024-12-06 05:10:40.861833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:23:02.828 [2024-12-06 05:10:40.861861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:23:02.828 [2024-12-06 05:10:40.861897] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:23:02.828 [2024-12-06 05:10:40.861991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:23:02.828 [2024-12-06 05:10:40.862019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:23:02.828 [2024-12-06 05:10:40.862047] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:23:02.828 [2024-12-06 05:10:40.862075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:23:02.828 [2024-12-06 05:10:40.862130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:23:02.828 [2024-12-06 05:10:40.862160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:23:02.828 [2024-12-06 05:10:40.862188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:23:02.828 [2024-12-06 05:10:40.862215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:23:02.828 [2024-12-06 05:10:40.862243] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:23:02.828 [2024-12-06 05:10:40.862425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:23:02.828 [2024-12-06 05:10:40.862508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:23:02.828 [2024-12-06 
05:10:40.862537] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:23:02.828 [2024-12-06 05:10:40.862565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:23:02.828 [2024-12-06 05:10:40.862593] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:23:02.828 [2024-12-06 05:10:40.862622] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:23:02.828 [2024-12-06 05:10:40.862698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:23:02.828 [2024-12-06 05:10:40.862730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:23:02.828 [2024-12-06 05:10:40.862759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:23:02.828 [2024-12-06 05:10:40.862787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:23:02.828 [2024-12-06 05:10:40.862815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:23:02.828 [2024-12-06 05:10:40.862873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:23:02.828 [2024-12-06 05:10:40.862902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:23:02.828 [2024-12-06 05:10:40.862930] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:23:02.828 [2024-12-06 05:10:40.862959] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:23:02.828 [2024-12-06 05:10:40.862987] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:23:02.828 [2024-12-06 05:10:40.863040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:23:02.828 [2024-12-06 05:10:40.863069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:23:02.828 [2024-12-06 05:10:40.863078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:23:02.828 [2024-12-06 05:10:40.863087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:23:02.829 [2024-12-06 05:10:40.863095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:23:02.829 [2024-12-06 05:10:40.863103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:23:02.829 [2024-12-06 05:10:40.863127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:23:02.829 [2024-12-06 05:10:40.863148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:23:02.829 [2024-12-06 05:10:40.863156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:23:02.829 [2024-12-06 05:10:40.863164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:23:02.829 [2024-12-06 05:10:40.863172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 
00:23:02.829 [2024-12-06 05:10:40.863180] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:23:02.829 [2024-12-06 05:10:40.863188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:23:02.829 [2024-12-06 05:10:40.863196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:23:02.829 [2024-12-06 05:10:40.863204] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:23:02.829 [2024-12-06 05:10:40.863211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:23:02.829 [2024-12-06 05:10:40.863219] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:23:02.829 [2024-12-06 05:10:40.863227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:23:02.829 [2024-12-06 05:10:40.863235] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:23:02.829 [2024-12-06 05:10:40.863243] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:23:02.829 [2024-12-06 05:10:40.863251] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:23:02.829 [2024-12-06 05:10:40.863258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:23:02.829 [2024-12-06 05:10:40.863266] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:23:02.829 [2024-12-06 05:10:40.863274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:23:02.829 [2024-12-06 05:10:40.863282] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:23:02.829 [2024-12-06 05:10:40.863289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:23:02.829 [2024-12-06 05:10:40.863298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:23:02.829 [2024-12-06 05:10:40.863305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:23:02.829 [2024-12-06 05:10:40.863313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:23:02.829 [2024-12-06 05:10:40.863320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:23:02.829 [2024-12-06 05:10:40.863328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:23:02.829 [2024-12-06 05:10:40.863336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:23:02.829 [2024-12-06 05:10:40.863344] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:23:02.829 [2024-12-06 05:10:40.863351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:23:02.829 [2024-12-06 05:10:40.863359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:23:02.829 [2024-12-06 05:10:40.863366] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 
wr_cnt: 0 state: free 00:23:02.829 [2024-12-06 05:10:40.863374] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:23:02.829 [2024-12-06 05:10:40.863381] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:23:02.829 [2024-12-06 05:10:40.863395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:23:02.829 [2024-12-06 05:10:40.863402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:23:02.829 [2024-12-06 05:10:40.863410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:23:02.829 [2024-12-06 05:10:40.863418] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:23:02.829 [2024-12-06 05:10:40.863425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:23:02.829 [2024-12-06 05:10:40.863432] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:23:02.829 [2024-12-06 05:10:40.863440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:23:02.829 [2024-12-06 05:10:40.863448] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:23:02.829 [2024-12-06 05:10:40.863455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:23:02.829 [2024-12-06 05:10:40.863462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:23:02.829 [2024-12-06 05:10:40.863470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:23:02.829 [2024-12-06 05:10:40.863478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:23:02.829 [2024-12-06 05:10:40.863485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:23:02.829 [2024-12-06 05:10:40.863493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:23:02.829 [2024-12-06 05:10:40.863500] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:23:02.829 [2024-12-06 05:10:40.863508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:23:02.829 [2024-12-06 05:10:40.863515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:23:02.829 [2024-12-06 05:10:40.863523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:23:02.829 [2024-12-06 05:10:40.863530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:23:02.829 [2024-12-06 05:10:40.863538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:23:02.829 [2024-12-06 05:10:40.863555] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:23:02.829 [2024-12-06 05:10:40.863563] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: d8aca738-c2cb-42fc-8bd7-b1e6d40ace6a 00:23:02.829 [2024-12-06 05:10:40.863578] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 131072 
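
The statistics block around this point gives the write-amplification inputs directly: WAF is the ratio of total media writes to user-issued writes, so the figure the log reports can be reproduced from the two counters just below (total writes 31424, user writes 30464), with the roughly 3% overhead consistent with the metadata persistence steps (P2L, band info, superblock) traced above:

  awk 'BEGIN { printf "WAF = %.4f\n", 31424 / 30464 }'   # prints WAF = 1.0315, matching the log
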
00:23:02.829 [2024-12-06 05:10:40.863585] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 31424 00:23:02.829 [2024-12-06 05:10:40.863593] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 30464 00:23:02.829 [2024-12-06 05:10:40.863614] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0315 00:23:02.829 [2024-12-06 05:10:40.863622] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:23:02.829 [2024-12-06 05:10:40.863630] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:23:02.829 [2024-12-06 05:10:40.863637] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:23:02.829 [2024-12-06 05:10:40.863644] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:23:02.829 [2024-12-06 05:10:40.863651] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:23:02.829 [2024-12-06 05:10:40.863661] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:02.829 [2024-12-06 05:10:40.863688] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:23:02.829 [2024-12-06 05:10:40.863697] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.472 ms 00:23:02.829 [2024-12-06 05:10:40.863705] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.829 [2024-12-06 05:10:40.866339] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:02.829 [2024-12-06 05:10:40.866373] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:23:02.829 [2024-12-06 05:10:40.866392] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.606 ms 00:23:02.829 [2024-12-06 05:10:40.866400] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.829 [2024-12-06 05:10:40.866532] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:02.829 [2024-12-06 05:10:40.866541] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:23:02.829 [2024-12-06 05:10:40.866551] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.109 ms 00:23:02.829 [2024-12-06 05:10:40.866559] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.829 [2024-12-06 05:10:40.873577] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:02.829 [2024-12-06 05:10:40.876255] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:02.829 [2024-12-06 05:10:40.876281] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:02.829 [2024-12-06 05:10:40.876291] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.829 [2024-12-06 05:10:40.876359] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:02.829 [2024-12-06 05:10:40.876368] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:02.829 [2024-12-06 05:10:40.876377] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:02.829 [2024-12-06 05:10:40.876385] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.829 [2024-12-06 05:10:40.876441] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:02.829 [2024-12-06 05:10:40.876458] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:02.829 [2024-12-06 05:10:40.876467] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:02.829 [2024-12-06 05:10:40.876475] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.829 [2024-12-06 05:10:40.876492] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:02.829 [2024-12-06 05:10:40.876500] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:02.829 [2024-12-06 05:10:40.876509] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:02.829 [2024-12-06 05:10:40.876517] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.829 [2024-12-06 05:10:40.890729] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:02.829 [2024-12-06 05:10:40.890903] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:02.829 [2024-12-06 05:10:40.890921] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:02.829 [2024-12-06 05:10:40.890930] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.829 [2024-12-06 05:10:40.901783] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:02.829 [2024-12-06 05:10:40.901826] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:02.829 [2024-12-06 05:10:40.901837] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:02.829 [2024-12-06 05:10:40.901846] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.829 [2024-12-06 05:10:40.901900] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:02.830 [2024-12-06 05:10:40.901918] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:02.830 [2024-12-06 05:10:40.901930] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:02.830 [2024-12-06 05:10:40.901939] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.830 [2024-12-06 05:10:40.901978] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:02.830 [2024-12-06 05:10:40.901987] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:02.830 [2024-12-06 05:10:40.901996] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:02.830 [2024-12-06 05:10:40.902004] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.830 [2024-12-06 05:10:40.902068] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:02.830 [2024-12-06 05:10:40.902078] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:02.830 [2024-12-06 05:10:40.902087] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:02.830 [2024-12-06 05:10:40.902097] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.830 [2024-12-06 05:10:40.902130] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:02.830 [2024-12-06 05:10:40.902140] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:23:02.830 [2024-12-06 05:10:40.902149] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:02.830 [2024-12-06 05:10:40.902156] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.830 [2024-12-06 05:10:40.902197] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:02.830 [2024-12-06 05:10:40.902206] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:02.830 [2024-12-06 05:10:40.902214] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.000 ms 00:23:02.830 [2024-12-06 05:10:40.902225] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.830 [2024-12-06 05:10:40.902273] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:02.830 [2024-12-06 05:10:40.902283] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:02.830 [2024-12-06 05:10:40.902292] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:02.830 [2024-12-06 05:10:40.902300] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.830 [2024-12-06 05:10:40.902434] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 298.335 ms, result 0 00:23:03.090 00:23:03.090 00:23:03.090 05:10:41 ftl.ftl_restore -- ftl/restore.sh@82 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:23:05.638 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:23:05.638 05:10:43 ftl.ftl_restore -- ftl/restore.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:23:05.638 05:10:43 ftl.ftl_restore -- ftl/restore.sh@85 -- # restore_kill 00:23:05.638 05:10:43 ftl.ftl_restore -- ftl/restore.sh@28 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:23:05.638 05:10:43 ftl.ftl_restore -- ftl/restore.sh@29 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:23:05.638 05:10:43 ftl.ftl_restore -- ftl/restore.sh@30 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:23:05.638 Process with pid 86153 is not found 00:23:05.638 Remove shared memory files 00:23:05.638 05:10:43 ftl.ftl_restore -- ftl/restore.sh@32 -- # killprocess 86153 00:23:05.638 05:10:43 ftl.ftl_restore -- common/autotest_common.sh@950 -- # '[' -z 86153 ']' 00:23:05.638 05:10:43 ftl.ftl_restore -- common/autotest_common.sh@954 -- # kill -0 86153 00:23:05.638 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (86153) - No such process 00:23:05.638 05:10:43 ftl.ftl_restore -- common/autotest_common.sh@977 -- # echo 'Process with pid 86153 is not found' 00:23:05.638 05:10:43 ftl.ftl_restore -- ftl/restore.sh@33 -- # remove_shm 00:23:05.638 05:10:43 ftl.ftl_restore -- ftl/common.sh@204 -- # echo Remove shared memory files 00:23:05.638 05:10:43 ftl.ftl_restore -- ftl/common.sh@205 -- # rm -f rm -f 00:23:05.638 05:10:43 ftl.ftl_restore -- ftl/common.sh@206 -- # rm -f rm -f 00:23:05.638 05:10:43 ftl.ftl_restore -- ftl/common.sh@207 -- # rm -f rm -f 00:23:05.638 05:10:43 ftl.ftl_restore -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:23:05.638 05:10:43 ftl.ftl_restore -- ftl/common.sh@209 -- # rm -f rm -f 00:23:05.638 ************************************ 00:23:05.638 END TEST ftl_restore 00:23:05.638 ************************************ 00:23:05.638 00:23:05.638 real 5m7.107s 00:23:05.638 user 4m55.219s 00:23:05.638 sys 0m11.534s 00:23:05.638 05:10:43 ftl.ftl_restore -- common/autotest_common.sh@1126 -- # xtrace_disable 00:23:05.638 05:10:43 ftl.ftl_restore -- common/autotest_common.sh@10 -- # set +x 00:23:05.638 05:10:43 ftl -- ftl/ftl.sh@77 -- # run_test ftl_dirty_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0 00:23:05.638 05:10:43 ftl -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:23:05.638 05:10:43 ftl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:23:05.638 05:10:43 ftl -- common/autotest_common.sh@10 -- # set +x 00:23:05.638 ************************************ 00:23:05.638 START TEST ftl_dirty_shutdown 
00:23:05.638 ************************************ 00:23:05.638 05:10:43 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0 00:23:05.638 * Looking for test storage... 00:23:05.638 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:23:05.638 05:10:43 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:23:05.638 05:10:43 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:23:05.638 05:10:43 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1681 -- # lcov --version 00:23:05.638 05:10:43 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:23:05.638 05:10:43 ftl.ftl_dirty_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:05.638 05:10:43 ftl.ftl_dirty_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:05.638 05:10:43 ftl.ftl_dirty_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:05.638 05:10:43 ftl.ftl_dirty_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:23:05.638 05:10:43 ftl.ftl_dirty_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:23:05.638 05:10:43 ftl.ftl_dirty_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:23:05.638 05:10:43 ftl.ftl_dirty_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:23:05.638 05:10:43 ftl.ftl_dirty_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:23:05.638 05:10:43 ftl.ftl_dirty_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:23:05.638 05:10:43 ftl.ftl_dirty_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:23:05.638 05:10:43 ftl.ftl_dirty_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:05.638 05:10:43 ftl.ftl_dirty_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:23:05.638 05:10:43 ftl.ftl_dirty_shutdown -- scripts/common.sh@345 -- # : 1 00:23:05.638 05:10:43 ftl.ftl_dirty_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:05.638 05:10:43 ftl.ftl_dirty_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:05.638 05:10:43 ftl.ftl_dirty_shutdown -- scripts/common.sh@365 -- # decimal 1 00:23:05.638 05:10:43 ftl.ftl_dirty_shutdown -- scripts/common.sh@353 -- # local d=1 00:23:05.638 05:10:43 ftl.ftl_dirty_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:05.638 05:10:43 ftl.ftl_dirty_shutdown -- scripts/common.sh@355 -- # echo 1 00:23:05.638 05:10:43 ftl.ftl_dirty_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:23:05.638 05:10:43 ftl.ftl_dirty_shutdown -- scripts/common.sh@366 -- # decimal 2 00:23:05.638 05:10:43 ftl.ftl_dirty_shutdown -- scripts/common.sh@353 -- # local d=2 00:23:05.638 05:10:43 ftl.ftl_dirty_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:05.638 05:10:43 ftl.ftl_dirty_shutdown -- scripts/common.sh@355 -- # echo 2 00:23:05.638 05:10:43 ftl.ftl_dirty_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:23:05.638 05:10:43 ftl.ftl_dirty_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:05.638 05:10:43 ftl.ftl_dirty_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:05.638 05:10:43 ftl.ftl_dirty_shutdown -- scripts/common.sh@368 -- # return 0 00:23:05.638 05:10:43 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:05.638 05:10:43 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:23:05.638 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:05.638 --rc genhtml_branch_coverage=1 00:23:05.638 --rc genhtml_function_coverage=1 00:23:05.638 --rc genhtml_legend=1 00:23:05.638 --rc geninfo_all_blocks=1 00:23:05.638 --rc geninfo_unexecuted_blocks=1 00:23:05.638 00:23:05.638 ' 00:23:05.638 05:10:43 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:23:05.638 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:05.638 --rc genhtml_branch_coverage=1 00:23:05.638 --rc genhtml_function_coverage=1 00:23:05.638 --rc genhtml_legend=1 00:23:05.638 --rc geninfo_all_blocks=1 00:23:05.638 --rc geninfo_unexecuted_blocks=1 00:23:05.638 00:23:05.638 ' 00:23:05.638 05:10:43 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:23:05.638 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:05.638 --rc genhtml_branch_coverage=1 00:23:05.638 --rc genhtml_function_coverage=1 00:23:05.638 --rc genhtml_legend=1 00:23:05.638 --rc geninfo_all_blocks=1 00:23:05.638 --rc geninfo_unexecuted_blocks=1 00:23:05.638 00:23:05.638 ' 00:23:05.638 05:10:43 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:23:05.638 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:05.638 --rc genhtml_branch_coverage=1 00:23:05.638 --rc genhtml_function_coverage=1 00:23:05.638 --rc genhtml_legend=1 00:23:05.638 --rc geninfo_all_blocks=1 00:23:05.638 --rc geninfo_unexecuted_blocks=1 00:23:05.638 00:23:05.639 ' 00:23:05.639 05:10:43 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:23:05.639 05:10:43 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh 00:23:05.639 05:10:43 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:23:05.639 05:10:43 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:23:05.639 05:10:43 ftl.ftl_dirty_shutdown -- ftl/common.sh@9 -- # readlink -f 
/home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:23:05.639 05:10:43 ftl.ftl_dirty_shutdown -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:23:05.639 05:10:43 ftl.ftl_dirty_shutdown -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:05.639 05:10:43 ftl.ftl_dirty_shutdown -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:23:05.639 05:10:43 ftl.ftl_dirty_shutdown -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:23:05.639 05:10:43 ftl.ftl_dirty_shutdown -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:23:05.639 05:10:43 ftl.ftl_dirty_shutdown -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:23:05.639 05:10:43 ftl.ftl_dirty_shutdown -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:23:05.639 05:10:43 ftl.ftl_dirty_shutdown -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:23:05.639 05:10:43 ftl.ftl_dirty_shutdown -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:23:05.639 05:10:43 ftl.ftl_dirty_shutdown -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:23:05.639 05:10:43 ftl.ftl_dirty_shutdown -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:23:05.639 05:10:43 ftl.ftl_dirty_shutdown -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:23:05.639 05:10:43 ftl.ftl_dirty_shutdown -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:23:05.639 05:10:43 ftl.ftl_dirty_shutdown -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:23:05.639 05:10:43 ftl.ftl_dirty_shutdown -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:23:05.639 05:10:43 ftl.ftl_dirty_shutdown -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:23:05.639 05:10:43 ftl.ftl_dirty_shutdown -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:23:05.639 05:10:43 ftl.ftl_dirty_shutdown -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:23:05.639 05:10:43 ftl.ftl_dirty_shutdown -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:23:05.639 05:10:43 ftl.ftl_dirty_shutdown -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:23:05.639 05:10:43 ftl.ftl_dirty_shutdown -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:23:05.639 05:10:43 ftl.ftl_dirty_shutdown -- ftl/common.sh@23 -- # spdk_ini_pid= 00:23:05.639 05:10:43 ftl.ftl_dirty_shutdown -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:23:05.639 05:10:43 ftl.ftl_dirty_shutdown -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:23:05.639 05:10:43 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:05.639 05:10:43 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@12 -- # spdk_dd=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:23:05.639 05:10:43 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:23:05.639 05:10:43 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@15 -- # case $opt in 00:23:05.639 05:10:43 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@17 -- # nv_cache=0000:00:10.0 00:23:05.639 05:10:43 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:23:05.639 05:10:43 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@21 -- # shift 2 00:23:05.639 05:10:43 
ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@23 -- # device=0000:00:11.0 00:23:05.639 05:10:43 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@24 -- # timeout=240 00:23:05.639 05:10:43 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@26 -- # block_size=4096 00:23:05.639 05:10:43 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@27 -- # chunk_size=262144 00:23:05.639 05:10:43 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@28 -- # data_size=262144 00:23:05.639 05:10:43 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@42 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:23:05.639 05:10:43 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@45 -- # svcpid=89408 00:23:05.639 05:10:43 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@47 -- # waitforlisten 89408 00:23:05.639 05:10:43 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:23:05.639 05:10:43 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@831 -- # '[' -z 89408 ']' 00:23:05.639 05:10:43 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:05.639 05:10:43 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:05.639 05:10:43 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:05.639 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:05.639 05:10:43 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:05.639 05:10:43 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@10 -- # set +x 00:23:05.900 [2024-12-06 05:10:43.915042] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
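Note on the step traced above: dirty_shutdown.sh launches a dedicated spdk_tgt on core mask 0x1 and blocks in waitforlisten until the target's UNIX-domain RPC socket answers. A minimal sketch of that launch-and-wait pattern follows; the 100-retry cap matches max_retries=100 in the trace, but the rpc_get_methods probe and the 0.5 s poll interval are assumptions, not the verbatim autotest_common.sh helper.

    # Launch the SPDK target on core 0 and record its PID (cf. svcpid=89408 above).
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 &
    svcpid=$!
    # Poll the RPC socket until the target responds (what waitforlisten does).
    for ((i = 1; i <= 100; i++)); do
        if /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null; then
            break
        fi
        sleep 0.5  # assumed interval; the real helper's pacing may differ
    done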
00:23:05.900 [2024-12-06 05:10:43.915553] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89408 ] 00:23:05.900 [2024-12-06 05:10:44.047917] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:05.900 [2024-12-06 05:10:44.114352] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:23:06.845 05:10:44 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:06.845 05:10:44 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@864 -- # return 0 00:23:06.845 05:10:44 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@49 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:23:06.845 05:10:44 ftl.ftl_dirty_shutdown -- ftl/common.sh@54 -- # local name=nvme0 00:23:06.845 05:10:44 ftl.ftl_dirty_shutdown -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:23:06.845 05:10:44 ftl.ftl_dirty_shutdown -- ftl/common.sh@56 -- # local size=103424 00:23:06.845 05:10:44 ftl.ftl_dirty_shutdown -- ftl/common.sh@59 -- # local base_bdev 00:23:06.845 05:10:44 ftl.ftl_dirty_shutdown -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:23:07.104 05:10:45 ftl.ftl_dirty_shutdown -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:23:07.104 05:10:45 ftl.ftl_dirty_shutdown -- ftl/common.sh@62 -- # local base_size 00:23:07.104 05:10:45 ftl.ftl_dirty_shutdown -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:23:07.104 05:10:45 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1378 -- # local bdev_name=nvme0n1 00:23:07.104 05:10:45 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1379 -- # local bdev_info 00:23:07.104 05:10:45 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1380 -- # local bs 00:23:07.104 05:10:45 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1381 -- # local nb 00:23:07.104 05:10:45 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:23:07.104 05:10:45 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:23:07.104 { 00:23:07.104 "name": "nvme0n1", 00:23:07.104 "aliases": [ 00:23:07.104 "b8e4057d-dec9-4bd7-aae1-1042a0ea7f20" 00:23:07.104 ], 00:23:07.104 "product_name": "NVMe disk", 00:23:07.104 "block_size": 4096, 00:23:07.104 "num_blocks": 1310720, 00:23:07.104 "uuid": "b8e4057d-dec9-4bd7-aae1-1042a0ea7f20", 00:23:07.104 "numa_id": -1, 00:23:07.104 "assigned_rate_limits": { 00:23:07.104 "rw_ios_per_sec": 0, 00:23:07.104 "rw_mbytes_per_sec": 0, 00:23:07.104 "r_mbytes_per_sec": 0, 00:23:07.104 "w_mbytes_per_sec": 0 00:23:07.104 }, 00:23:07.104 "claimed": true, 00:23:07.104 "claim_type": "read_many_write_one", 00:23:07.104 "zoned": false, 00:23:07.104 "supported_io_types": { 00:23:07.104 "read": true, 00:23:07.104 "write": true, 00:23:07.104 "unmap": true, 00:23:07.104 "flush": true, 00:23:07.104 "reset": true, 00:23:07.104 "nvme_admin": true, 00:23:07.104 "nvme_io": true, 00:23:07.104 "nvme_io_md": false, 00:23:07.104 "write_zeroes": true, 00:23:07.104 "zcopy": false, 00:23:07.104 "get_zone_info": false, 00:23:07.104 "zone_management": false, 00:23:07.104 "zone_append": false, 00:23:07.104 "compare": true, 00:23:07.104 "compare_and_write": false, 00:23:07.104 "abort": true, 00:23:07.104 "seek_hole": false, 00:23:07.104 "seek_data": false, 00:23:07.104 
"copy": true, 00:23:07.104 "nvme_iov_md": false 00:23:07.104 }, 00:23:07.104 "driver_specific": { 00:23:07.104 "nvme": [ 00:23:07.104 { 00:23:07.104 "pci_address": "0000:00:11.0", 00:23:07.104 "trid": { 00:23:07.104 "trtype": "PCIe", 00:23:07.104 "traddr": "0000:00:11.0" 00:23:07.104 }, 00:23:07.104 "ctrlr_data": { 00:23:07.104 "cntlid": 0, 00:23:07.104 "vendor_id": "0x1b36", 00:23:07.104 "model_number": "QEMU NVMe Ctrl", 00:23:07.104 "serial_number": "12341", 00:23:07.104 "firmware_revision": "8.0.0", 00:23:07.104 "subnqn": "nqn.2019-08.org.qemu:12341", 00:23:07.104 "oacs": { 00:23:07.104 "security": 0, 00:23:07.104 "format": 1, 00:23:07.104 "firmware": 0, 00:23:07.104 "ns_manage": 1 00:23:07.104 }, 00:23:07.104 "multi_ctrlr": false, 00:23:07.104 "ana_reporting": false 00:23:07.104 }, 00:23:07.104 "vs": { 00:23:07.104 "nvme_version": "1.4" 00:23:07.104 }, 00:23:07.104 "ns_data": { 00:23:07.104 "id": 1, 00:23:07.104 "can_share": false 00:23:07.104 } 00:23:07.104 } 00:23:07.104 ], 00:23:07.104 "mp_policy": "active_passive" 00:23:07.104 } 00:23:07.105 } 00:23:07.105 ]' 00:23:07.105 05:10:45 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:23:07.105 05:10:45 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # bs=4096 00:23:07.105 05:10:45 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:23:07.105 05:10:45 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # nb=1310720 00:23:07.105 05:10:45 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bdev_size=5120 00:23:07.105 05:10:45 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # echo 5120 00:23:07.105 05:10:45 ftl.ftl_dirty_shutdown -- ftl/common.sh@63 -- # base_size=5120 00:23:07.105 05:10:45 ftl.ftl_dirty_shutdown -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:23:07.105 05:10:45 ftl.ftl_dirty_shutdown -- ftl/common.sh@67 -- # clear_lvols 00:23:07.105 05:10:45 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:23:07.105 05:10:45 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:23:07.363 05:10:45 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # stores=6a3e1662-ef36-4aa4-8576-a276798167a9 00:23:07.363 05:10:45 ftl.ftl_dirty_shutdown -- ftl/common.sh@29 -- # for lvs in $stores 00:23:07.364 05:10:45 ftl.ftl_dirty_shutdown -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 6a3e1662-ef36-4aa4-8576-a276798167a9 00:23:07.624 05:10:45 ftl.ftl_dirty_shutdown -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:23:07.884 05:10:45 ftl.ftl_dirty_shutdown -- ftl/common.sh@68 -- # lvs=a1687b15-5d96-461f-9a22-45c7b42bb861 00:23:07.884 05:10:45 ftl.ftl_dirty_shutdown -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u a1687b15-5d96-461f-9a22-45c7b42bb861 00:23:08.145 05:10:46 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@49 -- # split_bdev=0ffd438b-5a35-4489-8d08-a64afc910b6c 00:23:08.145 05:10:46 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@51 -- # '[' -n 0000:00:10.0 ']' 00:23:08.145 05:10:46 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # create_nv_cache_bdev nvc0 0000:00:10.0 0ffd438b-5a35-4489-8d08-a64afc910b6c 00:23:08.145 05:10:46 ftl.ftl_dirty_shutdown -- ftl/common.sh@35 -- # local name=nvc0 00:23:08.145 05:10:46 ftl.ftl_dirty_shutdown -- ftl/common.sh@36 -- # local 
cache_bdf=0000:00:10.0 00:23:08.145 05:10:46 ftl.ftl_dirty_shutdown -- ftl/common.sh@37 -- # local base_bdev=0ffd438b-5a35-4489-8d08-a64afc910b6c 00:23:08.145 05:10:46 ftl.ftl_dirty_shutdown -- ftl/common.sh@38 -- # local cache_size= 00:23:08.145 05:10:46 ftl.ftl_dirty_shutdown -- ftl/common.sh@41 -- # get_bdev_size 0ffd438b-5a35-4489-8d08-a64afc910b6c 00:23:08.145 05:10:46 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1378 -- # local bdev_name=0ffd438b-5a35-4489-8d08-a64afc910b6c 00:23:08.145 05:10:46 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1379 -- # local bdev_info 00:23:08.145 05:10:46 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1380 -- # local bs 00:23:08.145 05:10:46 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1381 -- # local nb 00:23:08.145 05:10:46 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 0ffd438b-5a35-4489-8d08-a64afc910b6c 00:23:08.406 05:10:46 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:23:08.406 { 00:23:08.406 "name": "0ffd438b-5a35-4489-8d08-a64afc910b6c", 00:23:08.406 "aliases": [ 00:23:08.406 "lvs/nvme0n1p0" 00:23:08.406 ], 00:23:08.406 "product_name": "Logical Volume", 00:23:08.406 "block_size": 4096, 00:23:08.406 "num_blocks": 26476544, 00:23:08.406 "uuid": "0ffd438b-5a35-4489-8d08-a64afc910b6c", 00:23:08.406 "assigned_rate_limits": { 00:23:08.406 "rw_ios_per_sec": 0, 00:23:08.406 "rw_mbytes_per_sec": 0, 00:23:08.406 "r_mbytes_per_sec": 0, 00:23:08.406 "w_mbytes_per_sec": 0 00:23:08.406 }, 00:23:08.406 "claimed": false, 00:23:08.406 "zoned": false, 00:23:08.406 "supported_io_types": { 00:23:08.406 "read": true, 00:23:08.406 "write": true, 00:23:08.406 "unmap": true, 00:23:08.406 "flush": false, 00:23:08.406 "reset": true, 00:23:08.406 "nvme_admin": false, 00:23:08.406 "nvme_io": false, 00:23:08.406 "nvme_io_md": false, 00:23:08.406 "write_zeroes": true, 00:23:08.406 "zcopy": false, 00:23:08.406 "get_zone_info": false, 00:23:08.406 "zone_management": false, 00:23:08.406 "zone_append": false, 00:23:08.406 "compare": false, 00:23:08.406 "compare_and_write": false, 00:23:08.406 "abort": false, 00:23:08.406 "seek_hole": true, 00:23:08.406 "seek_data": true, 00:23:08.406 "copy": false, 00:23:08.406 "nvme_iov_md": false 00:23:08.406 }, 00:23:08.406 "driver_specific": { 00:23:08.406 "lvol": { 00:23:08.406 "lvol_store_uuid": "a1687b15-5d96-461f-9a22-45c7b42bb861", 00:23:08.406 "base_bdev": "nvme0n1", 00:23:08.406 "thin_provision": true, 00:23:08.406 "num_allocated_clusters": 0, 00:23:08.406 "snapshot": false, 00:23:08.406 "clone": false, 00:23:08.406 "esnap_clone": false 00:23:08.406 } 00:23:08.406 } 00:23:08.406 } 00:23:08.406 ]' 00:23:08.406 05:10:46 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:23:08.406 05:10:46 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # bs=4096 00:23:08.406 05:10:46 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:23:08.406 05:10:46 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # nb=26476544 00:23:08.406 05:10:46 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:23:08.406 05:10:46 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # echo 103424 00:23:08.406 05:10:46 ftl.ftl_dirty_shutdown -- ftl/common.sh@41 -- # local base_size=5171 00:23:08.406 05:10:46 ftl.ftl_dirty_shutdown -- ftl/common.sh@44 -- # local nvc_bdev 00:23:08.406 05:10:46 ftl.ftl_dirty_shutdown -- 
ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:23:08.668 05:10:46 ftl.ftl_dirty_shutdown -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:23:08.668 05:10:46 ftl.ftl_dirty_shutdown -- ftl/common.sh@47 -- # [[ -z '' ]] 00:23:08.668 05:10:46 ftl.ftl_dirty_shutdown -- ftl/common.sh@48 -- # get_bdev_size 0ffd438b-5a35-4489-8d08-a64afc910b6c 00:23:08.668 05:10:46 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1378 -- # local bdev_name=0ffd438b-5a35-4489-8d08-a64afc910b6c 00:23:08.668 05:10:46 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1379 -- # local bdev_info 00:23:08.668 05:10:46 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1380 -- # local bs 00:23:08.668 05:10:46 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1381 -- # local nb 00:23:08.668 05:10:46 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 0ffd438b-5a35-4489-8d08-a64afc910b6c 00:23:08.930 05:10:46 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:23:08.930 { 00:23:08.930 "name": "0ffd438b-5a35-4489-8d08-a64afc910b6c", 00:23:08.930 "aliases": [ 00:23:08.930 "lvs/nvme0n1p0" 00:23:08.930 ], 00:23:08.930 "product_name": "Logical Volume", 00:23:08.930 "block_size": 4096, 00:23:08.930 "num_blocks": 26476544, 00:23:08.930 "uuid": "0ffd438b-5a35-4489-8d08-a64afc910b6c", 00:23:08.930 "assigned_rate_limits": { 00:23:08.930 "rw_ios_per_sec": 0, 00:23:08.930 "rw_mbytes_per_sec": 0, 00:23:08.930 "r_mbytes_per_sec": 0, 00:23:08.930 "w_mbytes_per_sec": 0 00:23:08.930 }, 00:23:08.930 "claimed": false, 00:23:08.930 "zoned": false, 00:23:08.930 "supported_io_types": { 00:23:08.930 "read": true, 00:23:08.930 "write": true, 00:23:08.930 "unmap": true, 00:23:08.930 "flush": false, 00:23:08.930 "reset": true, 00:23:08.930 "nvme_admin": false, 00:23:08.930 "nvme_io": false, 00:23:08.930 "nvme_io_md": false, 00:23:08.930 "write_zeroes": true, 00:23:08.930 "zcopy": false, 00:23:08.930 "get_zone_info": false, 00:23:08.930 "zone_management": false, 00:23:08.930 "zone_append": false, 00:23:08.930 "compare": false, 00:23:08.930 "compare_and_write": false, 00:23:08.930 "abort": false, 00:23:08.930 "seek_hole": true, 00:23:08.930 "seek_data": true, 00:23:08.930 "copy": false, 00:23:08.930 "nvme_iov_md": false 00:23:08.930 }, 00:23:08.930 "driver_specific": { 00:23:08.930 "lvol": { 00:23:08.930 "lvol_store_uuid": "a1687b15-5d96-461f-9a22-45c7b42bb861", 00:23:08.930 "base_bdev": "nvme0n1", 00:23:08.930 "thin_provision": true, 00:23:08.930 "num_allocated_clusters": 0, 00:23:08.930 "snapshot": false, 00:23:08.930 "clone": false, 00:23:08.930 "esnap_clone": false 00:23:08.930 } 00:23:08.930 } 00:23:08.930 } 00:23:08.930 ]' 00:23:08.930 05:10:46 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:23:08.930 05:10:47 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # bs=4096 00:23:08.930 05:10:47 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:23:08.930 05:10:47 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # nb=26476544 00:23:08.930 05:10:47 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:23:08.930 05:10:47 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # echo 103424 00:23:08.930 05:10:47 ftl.ftl_dirty_shutdown -- ftl/common.sh@48 -- # cache_size=5171 00:23:08.930 05:10:47 ftl.ftl_dirty_shutdown -- ftl/common.sh@50 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:23:09.192 05:10:47 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # nvc_bdev=nvc0n1p0 00:23:09.192 05:10:47 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@55 -- # get_bdev_size 0ffd438b-5a35-4489-8d08-a64afc910b6c 00:23:09.192 05:10:47 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1378 -- # local bdev_name=0ffd438b-5a35-4489-8d08-a64afc910b6c 00:23:09.193 05:10:47 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1379 -- # local bdev_info 00:23:09.193 05:10:47 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1380 -- # local bs 00:23:09.193 05:10:47 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1381 -- # local nb 00:23:09.193 05:10:47 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 0ffd438b-5a35-4489-8d08-a64afc910b6c 00:23:09.454 05:10:47 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:23:09.454 { 00:23:09.454 "name": "0ffd438b-5a35-4489-8d08-a64afc910b6c", 00:23:09.454 "aliases": [ 00:23:09.454 "lvs/nvme0n1p0" 00:23:09.454 ], 00:23:09.454 "product_name": "Logical Volume", 00:23:09.454 "block_size": 4096, 00:23:09.454 "num_blocks": 26476544, 00:23:09.454 "uuid": "0ffd438b-5a35-4489-8d08-a64afc910b6c", 00:23:09.454 "assigned_rate_limits": { 00:23:09.454 "rw_ios_per_sec": 0, 00:23:09.454 "rw_mbytes_per_sec": 0, 00:23:09.454 "r_mbytes_per_sec": 0, 00:23:09.454 "w_mbytes_per_sec": 0 00:23:09.454 }, 00:23:09.454 "claimed": false, 00:23:09.454 "zoned": false, 00:23:09.454 "supported_io_types": { 00:23:09.454 "read": true, 00:23:09.454 "write": true, 00:23:09.454 "unmap": true, 00:23:09.454 "flush": false, 00:23:09.454 "reset": true, 00:23:09.454 "nvme_admin": false, 00:23:09.454 "nvme_io": false, 00:23:09.454 "nvme_io_md": false, 00:23:09.454 "write_zeroes": true, 00:23:09.454 "zcopy": false, 00:23:09.454 "get_zone_info": false, 00:23:09.454 "zone_management": false, 00:23:09.454 "zone_append": false, 00:23:09.454 "compare": false, 00:23:09.454 "compare_and_write": false, 00:23:09.454 "abort": false, 00:23:09.454 "seek_hole": true, 00:23:09.454 "seek_data": true, 00:23:09.454 "copy": false, 00:23:09.454 "nvme_iov_md": false 00:23:09.454 }, 00:23:09.454 "driver_specific": { 00:23:09.454 "lvol": { 00:23:09.454 "lvol_store_uuid": "a1687b15-5d96-461f-9a22-45c7b42bb861", 00:23:09.454 "base_bdev": "nvme0n1", 00:23:09.454 "thin_provision": true, 00:23:09.454 "num_allocated_clusters": 0, 00:23:09.454 "snapshot": false, 00:23:09.454 "clone": false, 00:23:09.454 "esnap_clone": false 00:23:09.454 } 00:23:09.454 } 00:23:09.454 } 00:23:09.454 ]' 00:23:09.454 05:10:47 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:23:09.454 05:10:47 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # bs=4096 00:23:09.454 05:10:47 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:23:09.454 05:10:47 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # nb=26476544 00:23:09.454 05:10:47 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:23:09.454 05:10:47 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # echo 103424 00:23:09.454 05:10:47 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@55 -- # l2p_dram_size_mb=10 00:23:09.454 05:10:47 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@56 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d 0ffd438b-5a35-4489-8d08-a64afc910b6c 
--l2p_dram_limit 10' 00:23:09.454 05:10:47 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@58 -- # '[' -n '' ']' 00:23:09.454 05:10:47 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@59 -- # '[' -n 0000:00:10.0 ']' 00:23:09.454 05:10:47 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@59 -- # ftl_construct_args+=' -c nvc0n1p0' 00:23:09.454 05:10:47 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 0ffd438b-5a35-4489-8d08-a64afc910b6c --l2p_dram_limit 10 -c nvc0n1p0 00:23:09.714 [2024-12-06 05:10:47.771776] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:09.714 [2024-12-06 05:10:47.771837] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:23:09.714 [2024-12-06 05:10:47.771851] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:23:09.714 [2024-12-06 05:10:47.771859] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:09.714 [2024-12-06 05:10:47.771911] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:09.714 [2024-12-06 05:10:47.771922] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:09.714 [2024-12-06 05:10:47.771929] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:23:09.714 [2024-12-06 05:10:47.771940] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:09.714 [2024-12-06 05:10:47.771958] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:23:09.714 [2024-12-06 05:10:47.772222] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:23:09.714 [2024-12-06 05:10:47.772235] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:09.714 [2024-12-06 05:10:47.772245] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:09.714 [2024-12-06 05:10:47.772254] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.282 ms 00:23:09.714 [2024-12-06 05:10:47.772261] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:09.714 [2024-12-06 05:10:47.772288] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 84366e27-4363-4609-9553-2ee9629251d7 00:23:09.714 [2024-12-06 05:10:47.773713] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:09.714 [2024-12-06 05:10:47.773742] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:23:09.714 [2024-12-06 05:10:47.773753] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 00:23:09.714 [2024-12-06 05:10:47.773760] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:09.714 [2024-12-06 05:10:47.780633] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:09.714 [2024-12-06 05:10:47.780686] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:09.714 [2024-12-06 05:10:47.780698] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.803 ms 00:23:09.714 [2024-12-06 05:10:47.780709] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:09.714 [2024-12-06 05:10:47.780781] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:09.714 [2024-12-06 05:10:47.780789] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:09.714 [2024-12-06 05:10:47.780798] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.048 ms 00:23:09.714 [2024-12-06 05:10:47.780807] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:09.715 [2024-12-06 05:10:47.780853] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:09.715 [2024-12-06 05:10:47.780861] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:23:09.715 [2024-12-06 05:10:47.780870] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:23:09.715 [2024-12-06 05:10:47.780877] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:09.715 [2024-12-06 05:10:47.780897] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:23:09.715 [2024-12-06 05:10:47.782635] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:09.715 [2024-12-06 05:10:47.782687] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:09.715 [2024-12-06 05:10:47.782699] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.745 ms 00:23:09.715 [2024-12-06 05:10:47.782707] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:09.715 [2024-12-06 05:10:47.782738] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:09.715 [2024-12-06 05:10:47.782747] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:23:09.715 [2024-12-06 05:10:47.782754] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:23:09.715 [2024-12-06 05:10:47.782764] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:09.715 [2024-12-06 05:10:47.782784] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:23:09.715 [2024-12-06 05:10:47.782902] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:23:09.715 [2024-12-06 05:10:47.782917] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:23:09.715 [2024-12-06 05:10:47.782928] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:23:09.715 [2024-12-06 05:10:47.782939] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:23:09.715 [2024-12-06 05:10:47.782949] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:23:09.715 [2024-12-06 05:10:47.782956] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:23:09.715 [2024-12-06 05:10:47.782969] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:23:09.715 [2024-12-06 05:10:47.782976] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:23:09.715 [2024-12-06 05:10:47.782983] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:23:09.715 [2024-12-06 05:10:47.782992] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:09.715 [2024-12-06 05:10:47.783001] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:23:09.715 [2024-12-06 05:10:47.783008] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.209 ms 00:23:09.715 [2024-12-06 05:10:47.783018] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:09.715 [2024-12-06 05:10:47.783082] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:09.715 [2024-12-06 05:10:47.783093] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:23:09.715 [2024-12-06 05:10:47.783099] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:23:09.715 [2024-12-06 05:10:47.783107] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:09.715 [2024-12-06 05:10:47.783180] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:23:09.715 [2024-12-06 05:10:47.783422] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:23:09.715 [2024-12-06 05:10:47.783434] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:09.715 [2024-12-06 05:10:47.783443] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:09.715 [2024-12-06 05:10:47.783449] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:23:09.715 [2024-12-06 05:10:47.783457] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:23:09.715 [2024-12-06 05:10:47.783462] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:23:09.715 [2024-12-06 05:10:47.783469] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:23:09.715 [2024-12-06 05:10:47.783474] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:23:09.715 [2024-12-06 05:10:47.783482] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:09.715 [2024-12-06 05:10:47.783487] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:23:09.715 [2024-12-06 05:10:47.783496] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:23:09.715 [2024-12-06 05:10:47.783501] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:09.715 [2024-12-06 05:10:47.783510] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:23:09.715 [2024-12-06 05:10:47.783515] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:23:09.715 [2024-12-06 05:10:47.783522] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:09.715 [2024-12-06 05:10:47.783527] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:23:09.715 [2024-12-06 05:10:47.783534] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:23:09.715 [2024-12-06 05:10:47.783538] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:09.715 [2024-12-06 05:10:47.783546] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:23:09.715 [2024-12-06 05:10:47.783551] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:23:09.715 [2024-12-06 05:10:47.783558] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:09.715 [2024-12-06 05:10:47.783563] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:23:09.715 [2024-12-06 05:10:47.783569] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:23:09.715 [2024-12-06 05:10:47.783574] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:09.715 [2024-12-06 05:10:47.783581] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:23:09.715 [2024-12-06 05:10:47.783586] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:23:09.715 [2024-12-06 05:10:47.783593] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:09.715 [2024-12-06 05:10:47.783598] 
ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:23:09.715 [2024-12-06 05:10:47.783607] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:23:09.715 [2024-12-06 05:10:47.783612] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:09.715 [2024-12-06 05:10:47.783622] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:23:09.715 [2024-12-06 05:10:47.783628] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:23:09.715 [2024-12-06 05:10:47.783634] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:09.715 [2024-12-06 05:10:47.783640] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:23:09.715 [2024-12-06 05:10:47.783647] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:23:09.715 [2024-12-06 05:10:47.783651] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:09.715 [2024-12-06 05:10:47.783659] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:23:09.715 [2024-12-06 05:10:47.783679] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:23:09.715 [2024-12-06 05:10:47.783686] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:09.715 [2024-12-06 05:10:47.783691] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:23:09.715 [2024-12-06 05:10:47.783697] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:23:09.715 [2024-12-06 05:10:47.783703] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:09.715 [2024-12-06 05:10:47.783709] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:23:09.715 [2024-12-06 05:10:47.783722] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:23:09.715 [2024-12-06 05:10:47.783731] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:09.715 [2024-12-06 05:10:47.783737] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:09.715 [2024-12-06 05:10:47.783745] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:23:09.715 [2024-12-06 05:10:47.783751] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:23:09.715 [2024-12-06 05:10:47.783758] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:23:09.715 [2024-12-06 05:10:47.783763] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:23:09.715 [2024-12-06 05:10:47.783770] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:23:09.715 [2024-12-06 05:10:47.783775] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:23:09.715 [2024-12-06 05:10:47.783785] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:23:09.715 [2024-12-06 05:10:47.783796] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:09.715 [2024-12-06 05:10:47.783805] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:23:09.715 [2024-12-06 05:10:47.783810] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:23:09.715 [2024-12-06 05:10:47.783817] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: 
*NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:23:09.715 [2024-12-06 05:10:47.783823] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:23:09.715 [2024-12-06 05:10:47.783832] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:23:09.715 [2024-12-06 05:10:47.783837] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:23:09.715 [2024-12-06 05:10:47.783846] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:23:09.715 [2024-12-06 05:10:47.783851] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:23:09.715 [2024-12-06 05:10:47.783859] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:23:09.715 [2024-12-06 05:10:47.783865] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:23:09.715 [2024-12-06 05:10:47.783873] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:23:09.715 [2024-12-06 05:10:47.783880] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:23:09.715 [2024-12-06 05:10:47.783888] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:23:09.716 [2024-12-06 05:10:47.783896] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:23:09.716 [2024-12-06 05:10:47.783904] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:23:09.716 [2024-12-06 05:10:47.783914] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:09.716 [2024-12-06 05:10:47.783923] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:23:09.716 [2024-12-06 05:10:47.783929] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:23:09.716 [2024-12-06 05:10:47.783937] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:23:09.716 [2024-12-06 05:10:47.783944] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:23:09.716 [2024-12-06 05:10:47.783953] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:09.716 [2024-12-06 05:10:47.783960] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:23:09.716 [2024-12-06 05:10:47.783970] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.822 ms 00:23:09.716 [2024-12-06 05:10:47.783976] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:09.716 [2024-12-06 05:10:47.784017] mngt/ftl_mngt_misc.c: 
165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:23:09.716 [2024-12-06 05:10:47.784030] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:23:12.247 [2024-12-06 05:10:49.883828] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:12.247 [2024-12-06 05:10:49.883883] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:23:12.247 [2024-12-06 05:10:49.883898] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2099.800 ms 00:23:12.247 [2024-12-06 05:10:49.883905] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:12.247 [2024-12-06 05:10:49.891335] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:12.247 [2024-12-06 05:10:49.891490] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:12.247 [2024-12-06 05:10:49.891507] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.362 ms 00:23:12.247 [2024-12-06 05:10:49.891514] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:12.247 [2024-12-06 05:10:49.891591] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:12.247 [2024-12-06 05:10:49.891598] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:23:12.247 [2024-12-06 05:10:49.891609] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.046 ms 00:23:12.247 [2024-12-06 05:10:49.891615] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:12.247 [2024-12-06 05:10:49.898316] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:12.247 [2024-12-06 05:10:49.898420] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:12.247 [2024-12-06 05:10:49.898435] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.666 ms 00:23:12.247 [2024-12-06 05:10:49.898441] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:12.247 [2024-12-06 05:10:49.898463] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:12.247 [2024-12-06 05:10:49.898470] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:12.247 [2024-12-06 05:10:49.898479] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:23:12.247 [2024-12-06 05:10:49.898485] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:12.247 [2024-12-06 05:10:49.898786] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:12.247 [2024-12-06 05:10:49.898802] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:12.247 [2024-12-06 05:10:49.898810] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.270 ms 00:23:12.247 [2024-12-06 05:10:49.898816] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:12.247 [2024-12-06 05:10:49.898899] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:12.247 [2024-12-06 05:10:49.898908] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:12.247 [2024-12-06 05:10:49.898916] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:23:12.247 [2024-12-06 05:10:49.898925] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:12.247 [2024-12-06 05:10:49.912060] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:12.247 [2024-12-06 05:10:49.912091] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:12.247 [2024-12-06 05:10:49.912102] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.118 ms 00:23:12.247 [2024-12-06 05:10:49.912112] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:12.247 [2024-12-06 05:10:49.921286] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:23:12.247 [2024-12-06 05:10:49.924162] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:12.247 [2024-12-06 05:10:49.924198] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:23:12.247 [2024-12-06 05:10:49.924210] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.982 ms 00:23:12.247 [2024-12-06 05:10:49.924221] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:12.247 [2024-12-06 05:10:49.965503] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:12.247 [2024-12-06 05:10:49.965540] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:23:12.247 [2024-12-06 05:10:49.965550] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.255 ms 00:23:12.247 [2024-12-06 05:10:49.965559] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:12.247 [2024-12-06 05:10:49.965723] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:12.247 [2024-12-06 05:10:49.965734] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:23:12.247 [2024-12-06 05:10:49.965741] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.133 ms 00:23:12.247 [2024-12-06 05:10:49.965749] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:12.247 [2024-12-06 05:10:49.968187] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:12.247 [2024-12-06 05:10:49.968217] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:23:12.247 [2024-12-06 05:10:49.968225] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.423 ms 00:23:12.247 [2024-12-06 05:10:49.968232] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:12.247 [2024-12-06 05:10:49.970107] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:12.247 [2024-12-06 05:10:49.970135] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:23:12.247 [2024-12-06 05:10:49.970143] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.847 ms 00:23:12.247 [2024-12-06 05:10:49.970150] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:12.247 [2024-12-06 05:10:49.970383] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:12.247 [2024-12-06 05:10:49.970392] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:23:12.247 [2024-12-06 05:10:49.970398] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.208 ms 00:23:12.247 [2024-12-06 05:10:49.970407] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:12.247 [2024-12-06 05:10:49.994970] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:12.247 [2024-12-06 05:10:49.995087] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:23:12.247 [2024-12-06 05:10:49.995101] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.548 ms 00:23:12.247 [2024-12-06 05:10:49.995109] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:12.247 [2024-12-06 05:10:49.998150] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:12.247 [2024-12-06 05:10:49.998256] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:23:12.247 [2024-12-06 05:10:49.998268] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.004 ms 00:23:12.247 [2024-12-06 05:10:49.998281] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:12.247 [2024-12-06 05:10:50.000568] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:12.247 [2024-12-06 05:10:50.000598] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:23:12.247 [2024-12-06 05:10:50.000605] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.261 ms 00:23:12.247 [2024-12-06 05:10:50.000611] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:12.247 [2024-12-06 05:10:50.003327] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:12.247 [2024-12-06 05:10:50.003359] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:23:12.247 [2024-12-06 05:10:50.003367] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.690 ms 00:23:12.247 [2024-12-06 05:10:50.003378] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:12.247 [2024-12-06 05:10:50.003408] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:12.247 [2024-12-06 05:10:50.003417] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:23:12.247 [2024-12-06 05:10:50.003425] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:23:12.247 [2024-12-06 05:10:50.003433] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:12.247 [2024-12-06 05:10:50.003484] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:12.247 [2024-12-06 05:10:50.003493] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:23:12.247 [2024-12-06 05:10:50.003500] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:23:12.247 [2024-12-06 05:10:50.003508] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:12.247 [2024-12-06 05:10:50.004233] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2232.156 ms, result 0 00:23:12.247 { 00:23:12.247 "name": "ftl0", 00:23:12.247 "uuid": "84366e27-4363-4609-9553-2ee9629251d7" 00:23:12.247 } 00:23:12.247 05:10:50 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@64 -- # echo '{"subsystems": [' 00:23:12.247 05:10:50 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:23:12.247 05:10:50 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@66 -- # echo ']}' 00:23:12.247 05:10:50 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@70 -- # modprobe nbd 00:23:12.247 05:10:50 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_start_disk ftl0 /dev/nbd0 00:23:12.247 /dev/nbd0 00:23:12.247 05:10:50 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@72 -- # waitfornbd nbd0 00:23:12.248 05:10:50 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:23:12.248 05:10:50 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@869 -- # local i 00:23:12.248 05:10:50 ftl.ftl_dirty_shutdown -- 
common/autotest_common.sh@871 -- # (( i = 1 )) 00:23:12.248 05:10:50 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:23:12.248 05:10:50 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:23:12.248 05:10:50 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@873 -- # break 00:23:12.248 05:10:50 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:23:12.248 05:10:50 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:23:12.248 05:10:50 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/ftl/nbdtest bs=4096 count=1 iflag=direct 00:23:12.248 1+0 records in 00:23:12.248 1+0 records out 00:23:12.248 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000470158 s, 8.7 MB/s 00:23:12.248 05:10:50 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:23:12.248 05:10:50 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@886 -- # size=4096 00:23:12.248 05:10:50 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:23:12.248 05:10:50 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:23:12.248 05:10:50 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@889 -- # return 0 00:23:12.248 05:10:50 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@75 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --bs=4096 --count=262144 00:23:12.546 [2024-12-06 05:10:50.518352] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:23:12.546 [2024-12-06 05:10:50.518471] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89532 ] 00:23:12.546 [2024-12-06 05:10:50.654774] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:12.546 [2024-12-06 05:10:50.685351] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:23:13.935  [2024-12-06T05:10:52.734Z] Copying: 196/1024 [MB] (196 MBps) [2024-12-06T05:10:54.109Z] Copying: 430/1024 [MB] (233 MBps) [2024-12-06T05:10:55.044Z] Copying: 646/1024 [MB] (216 MBps) [2024-12-06T05:10:55.607Z] Copying: 839/1024 [MB] (193 MBps) [2024-12-06T05:10:55.864Z] Copying: 1024/1024 [MB] (average 216 MBps) 00:23:17.632 00:23:17.632 05:10:55 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@76 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:23:19.529 05:10:57 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@77 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --of=/dev/nbd0 --bs=4096 --count=262144 --oflag=direct 00:23:19.529 [2024-12-06 05:10:57.617818] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
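For reference, the nbd bring-up traced above (dirty_shutdown.sh@70-72 plus waitfornbd) reduces to the sequence below. This is a sketch that mirrors the traced commands rather than reproducing the helpers verbatim.

    # Expose the FTL bdev as a kernel block device and verify it is usable.
    modprobe nbd
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_start_disk ftl0 /dev/nbd0
    # waitfornbd: the device is ready once it appears in /proc/partitions...
    grep -q -w nbd0 /proc/partitions
    # ...and a single 4 KiB direct read succeeds (the "1+0 records" dd above).
    dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/ftl/nbdtest bs=4096 count=1 iflag=direct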
00:23:19.530 [2024-12-06 05:10:57.618049] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89612 ] 00:23:19.787 [2024-12-06 05:10:57.760542] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:19.787 [2024-12-06 05:10:57.800753] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:23:20.728  [2024-12-06T05:11:00.342Z] Copying: 21/1024 [MB] (21 MBps) [2024-12-06T05:11:00.912Z] Copying: 33/1024 [MB] (12 MBps) [2024-12-06T05:11:02.297Z] Copying: 52/1024 [MB] (19 MBps) [2024-12-06T05:11:02.863Z] Copying: 68/1024 [MB] (16 MBps) [2024-12-06T05:11:04.243Z] Copying: 97/1024 [MB] (28 MBps) [2024-12-06T05:11:05.188Z] Copying: 119/1024 [MB] (22 MBps) [2024-12-06T05:11:06.126Z] Copying: 135/1024 [MB] (15 MBps) [2024-12-06T05:11:07.066Z] Copying: 160/1024 [MB] (24 MBps) [2024-12-06T05:11:08.001Z] Copying: 175/1024 [MB] (15 MBps) [2024-12-06T05:11:08.935Z] Copying: 189/1024 [MB] (13 MBps) [2024-12-06T05:11:09.869Z] Copying: 225/1024 [MB] (35 MBps) [2024-12-06T05:11:11.252Z] Copying: 255/1024 [MB] (30 MBps) [2024-12-06T05:11:12.187Z] Copying: 269/1024 [MB] (13 MBps) [2024-12-06T05:11:13.128Z] Copying: 296/1024 [MB] (27 MBps) [2024-12-06T05:11:14.070Z] Copying: 317/1024 [MB] (21 MBps) [2024-12-06T05:11:15.015Z] Copying: 335/1024 [MB] (17 MBps) [2024-12-06T05:11:15.962Z] Copying: 354/1024 [MB] (18 MBps) [2024-12-06T05:11:16.895Z] Copying: 370/1024 [MB] (16 MBps) [2024-12-06T05:11:18.276Z] Copying: 404/1024 [MB] (34 MBps) [2024-12-06T05:11:19.219Z] Copying: 435/1024 [MB] (30 MBps) [2024-12-06T05:11:20.161Z] Copying: 451/1024 [MB] (16 MBps) [2024-12-06T05:11:21.102Z] Copying: 468/1024 [MB] (16 MBps) [2024-12-06T05:11:22.046Z] Copying: 491/1024 [MB] (22 MBps) [2024-12-06T05:11:22.992Z] Copying: 506/1024 [MB] (15 MBps) [2024-12-06T05:11:23.936Z] Copying: 521/1024 [MB] (14 MBps) [2024-12-06T05:11:24.874Z] Copying: 538/1024 [MB] (16 MBps) [2024-12-06T05:11:26.280Z] Copying: 564/1024 [MB] (25 MBps) [2024-12-06T05:11:27.221Z] Copying: 588/1024 [MB] (23 MBps) [2024-12-06T05:11:28.157Z] Copying: 607/1024 [MB] (19 MBps) [2024-12-06T05:11:29.099Z] Copying: 628/1024 [MB] (21 MBps) [2024-12-06T05:11:30.035Z] Copying: 648/1024 [MB] (19 MBps) [2024-12-06T05:11:30.972Z] Copying: 669/1024 [MB] (20 MBps) [2024-12-06T05:11:31.910Z] Copying: 693/1024 [MB] (24 MBps) [2024-12-06T05:11:33.291Z] Copying: 712/1024 [MB] (19 MBps) [2024-12-06T05:11:34.230Z] Copying: 731/1024 [MB] (18 MBps) [2024-12-06T05:11:35.171Z] Copying: 750/1024 [MB] (18 MBps) [2024-12-06T05:11:36.109Z] Copying: 774/1024 [MB] (24 MBps) [2024-12-06T05:11:37.046Z] Copying: 793/1024 [MB] (18 MBps) [2024-12-06T05:11:37.988Z] Copying: 813/1024 [MB] (20 MBps) [2024-12-06T05:11:38.930Z] Copying: 836/1024 [MB] (22 MBps) [2024-12-06T05:11:39.873Z] Copying: 855/1024 [MB] (18 MBps) [2024-12-06T05:11:41.260Z] Copying: 872/1024 [MB] (17 MBps) [2024-12-06T05:11:42.233Z] Copying: 892/1024 [MB] (19 MBps) [2024-12-06T05:11:43.171Z] Copying: 908/1024 [MB] (16 MBps) [2024-12-06T05:11:44.108Z] Copying: 927/1024 [MB] (18 MBps) [2024-12-06T05:11:45.052Z] Copying: 962/1024 [MB] (34 MBps) [2024-12-06T05:11:45.997Z] Copying: 980/1024 [MB] (18 MBps) [2024-12-06T05:11:46.936Z] Copying: 999/1024 [MB] (19 MBps) [2024-12-06T05:11:47.505Z] Copying: 1018/1024 [MB] (18 MBps) [2024-12-06T05:11:47.505Z] Copying: 1024/1024 [MB] (average 20 MBps) 00:24:09.273 
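The two spdk_dd runs above form the test's data path: the first fills a 1 GiB testfile from /dev/urandom (262144 x 4 KiB blocks), which is md5-summed as the integrity baseline, and the second pushes that file into the FTL bdev through the /dev/nbd0 export with O_DIRECT. In sketch form, mirroring the traced invocations at dirty_shutdown.sh@75-77:

    # Generate the 1 GiB random payload and record its checksum.
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/dev/urandom \
        --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --bs=4096 --count=262144
    md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile
    # Write the payload into the FTL device through the nbd export.
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 \
        --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --of=/dev/nbd0 \
        --bs=4096 --count=262144 --oflag=direct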
00:24:09.273 05:11:47 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@78 -- # sync /dev/nbd0 00:24:09.273 05:11:47 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_stop_disk /dev/nbd0 00:24:09.531 05:11:47 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:24:09.792 [2024-12-06 05:11:47.815457] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:09.792 [2024-12-06 05:11:47.815496] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:24:09.792 [2024-12-06 05:11:47.815508] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:24:09.792 [2024-12-06 05:11:47.815515] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:09.792 [2024-12-06 05:11:47.815536] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:24:09.792 [2024-12-06 05:11:47.815958] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:09.792 [2024-12-06 05:11:47.815980] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:24:09.792 [2024-12-06 05:11:47.815990] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.410 ms 00:24:09.792 [2024-12-06 05:11:47.815999] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:09.792 [2024-12-06 05:11:47.817821] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:09.792 [2024-12-06 05:11:47.817943] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:24:09.792 [2024-12-06 05:11:47.817956] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.803 ms 00:24:09.792 [2024-12-06 05:11:47.817964] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:09.792 [2024-12-06 05:11:47.831925] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:09.792 [2024-12-06 05:11:47.831956] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:24:09.792 [2024-12-06 05:11:47.831965] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.944 ms 00:24:09.792 [2024-12-06 05:11:47.831973] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:09.792 [2024-12-06 05:11:47.836756] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:09.792 [2024-12-06 05:11:47.836858] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:24:09.792 [2024-12-06 05:11:47.836870] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.754 ms 00:24:09.792 [2024-12-06 05:11:47.836878] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:09.792 [2024-12-06 05:11:47.837824] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:09.792 [2024-12-06 05:11:47.837855] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:24:09.792 [2024-12-06 05:11:47.837862] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.887 ms 00:24:09.792 [2024-12-06 05:11:47.837870] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:09.792 [2024-12-06 05:11:47.841815] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:09.792 [2024-12-06 05:11:47.841846] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:24:09.792 [2024-12-06 05:11:47.841854] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 3.917 ms 00:24:09.792 [2024-12-06 05:11:47.841862] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:09.792 [2024-12-06 05:11:47.841955] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:09.792 [2024-12-06 05:11:47.841965] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:24:09.792 [2024-12-06 05:11:47.841971] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:24:09.792 [2024-12-06 05:11:47.841978] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:09.792 [2024-12-06 05:11:47.843906] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:09.792 [2024-12-06 05:11:47.844016] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:24:09.792 [2024-12-06 05:11:47.844027] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.916 ms 00:24:09.792 [2024-12-06 05:11:47.844034] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:09.792 [2024-12-06 05:11:47.845419] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:09.792 [2024-12-06 05:11:47.845452] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:24:09.792 [2024-12-06 05:11:47.845458] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.359 ms 00:24:09.792 [2024-12-06 05:11:47.845465] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:09.792 [2024-12-06 05:11:47.846531] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:09.792 [2024-12-06 05:11:47.846561] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:24:09.792 [2024-12-06 05:11:47.846568] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.040 ms 00:24:09.792 [2024-12-06 05:11:47.846575] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:09.792 [2024-12-06 05:11:47.847431] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:09.792 [2024-12-06 05:11:47.847524] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:24:09.792 [2024-12-06 05:11:47.847535] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.811 ms 00:24:09.792 [2024-12-06 05:11:47.847542] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:09.792 [2024-12-06 05:11:47.847565] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:24:09.792 [2024-12-06 05:11:47.847577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:24:09.792 [2024-12-06 05:11:47.847584] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:24:09.792 [2024-12-06 05:11:47.847592] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:24:09.792 [2024-12-06 05:11:47.847598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:24:09.792 [2024-12-06 05:11:47.847606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:24:09.792 [2024-12-06 05:11:47.847612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:24:09.792 [2024-12-06 05:11:47.847620] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:24:09.792 [2024-12-06 05:11:47.847626] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:24:09.792 [2024-12-06 05:11:47.847633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:24:09.792 [2024-12-06 05:11:47.847638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:24:09.792 [2024-12-06 05:11:47.847652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:24:09.792 [2024-12-06 05:11:47.847657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:24:09.792 [2024-12-06 05:11:47.847676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:24:09.792 [2024-12-06 05:11:47.847682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:24:09.792 [2024-12-06 05:11:47.847689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:24:09.792 [2024-12-06 05:11:47.847695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:24:09.792 [2024-12-06 05:11:47.847702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:24:09.792 [2024-12-06 05:11:47.847707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:24:09.792 [2024-12-06 05:11:47.847714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:24:09.792 [2024-12-06 05:11:47.847720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:24:09.792 [2024-12-06 05:11:47.847729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:24:09.793 [2024-12-06 05:11:47.847734] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:24:09.793 [2024-12-06 05:11:47.847741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:24:09.793 [2024-12-06 05:11:47.847746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:24:09.793 [2024-12-06 05:11:47.847753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:24:09.793 [2024-12-06 05:11:47.847759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:24:09.793 [2024-12-06 05:11:47.847766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:24:09.793 [2024-12-06 05:11:47.847772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:24:09.793 [2024-12-06 05:11:47.847779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:24:09.793 [2024-12-06 05:11:47.847784] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:24:09.793 [2024-12-06 05:11:47.847791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:24:09.793 [2024-12-06 05:11:47.847797] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:24:09.793 
[2024-12-06 05:11:47.847804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:24:09.793 [2024-12-06 05:11:47.847810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:24:09.793 [2024-12-06 05:11:47.847818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:24:09.793 [2024-12-06 05:11:47.847824] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:24:09.793 [2024-12-06 05:11:47.847833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:24:09.793 [2024-12-06 05:11:47.847838] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:24:09.793 [2024-12-06 05:11:47.847845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:24:09.793 [2024-12-06 05:11:47.847851] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:24:09.793 [2024-12-06 05:11:47.847858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:24:09.793 [2024-12-06 05:11:47.847863] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:24:09.793 [2024-12-06 05:11:47.847872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:24:09.793 [2024-12-06 05:11:47.847877] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:24:09.793 [2024-12-06 05:11:47.847884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:24:09.793 [2024-12-06 05:11:47.847890] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:24:09.793 [2024-12-06 05:11:47.847897] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:24:09.793 [2024-12-06 05:11:47.847902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:24:09.793 [2024-12-06 05:11:47.847909] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:24:09.793 [2024-12-06 05:11:47.847914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:24:09.793 [2024-12-06 05:11:47.847921] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:24:09.793 [2024-12-06 05:11:47.847927] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:24:09.793 [2024-12-06 05:11:47.847935] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:24:09.793 [2024-12-06 05:11:47.847940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:24:09.793 [2024-12-06 05:11:47.847947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:24:09.793 [2024-12-06 05:11:47.847953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:24:09.793 [2024-12-06 05:11:47.847960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 
state: free 00:24:09.793 [2024-12-06 05:11:47.847966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:24:09.793 [2024-12-06 05:11:47.847973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:24:09.793 [2024-12-06 05:11:47.847978] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:24:09.793 [2024-12-06 05:11:47.847986] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:24:09.793 [2024-12-06 05:11:47.847991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:24:09.793 [2024-12-06 05:11:47.847998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:24:09.793 [2024-12-06 05:11:47.848003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:24:09.793 [2024-12-06 05:11:47.848011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:24:09.793 [2024-12-06 05:11:47.848016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:24:09.793 [2024-12-06 05:11:47.848023] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:24:09.793 [2024-12-06 05:11:47.848029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:24:09.793 [2024-12-06 05:11:47.848037] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:24:09.793 [2024-12-06 05:11:47.848042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:24:09.793 [2024-12-06 05:11:47.848049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:24:09.793 [2024-12-06 05:11:47.848054] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:24:09.793 [2024-12-06 05:11:47.848062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:24:09.793 [2024-12-06 05:11:47.848067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:24:09.793 [2024-12-06 05:11:47.848074] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:24:09.793 [2024-12-06 05:11:47.848079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:24:09.793 [2024-12-06 05:11:47.848086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:24:09.793 [2024-12-06 05:11:47.848092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:24:09.793 [2024-12-06 05:11:47.848099] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:24:09.793 [2024-12-06 05:11:47.848105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:24:09.793 [2024-12-06 05:11:47.848111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:24:09.793 [2024-12-06 05:11:47.848117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 
0 / 261120 wr_cnt: 0 state: free 00:24:09.793 [2024-12-06 05:11:47.848123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:24:09.793 [2024-12-06 05:11:47.848129] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:24:09.793 [2024-12-06 05:11:47.848137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:24:09.793 [2024-12-06 05:11:47.848143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:24:09.793 [2024-12-06 05:11:47.848151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:24:09.793 [2024-12-06 05:11:47.848156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:24:09.793 [2024-12-06 05:11:47.848163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:24:09.793 [2024-12-06 05:11:47.848168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:24:09.793 [2024-12-06 05:11:47.848175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:24:09.793 [2024-12-06 05:11:47.848180] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:24:09.793 [2024-12-06 05:11:47.848187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:24:09.793 [2024-12-06 05:11:47.848193] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:24:09.793 [2024-12-06 05:11:47.848199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:24:09.793 [2024-12-06 05:11:47.848205] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:24:09.793 [2024-12-06 05:11:47.848212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:24:09.793 [2024-12-06 05:11:47.848219] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:24:09.793 [2024-12-06 05:11:47.848226] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:24:09.793 [2024-12-06 05:11:47.848232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:24:09.793 [2024-12-06 05:11:47.848246] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:24:09.793 [2024-12-06 05:11:47.848252] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 84366e27-4363-4609-9553-2ee9629251d7 00:24:09.793 [2024-12-06 05:11:47.848259] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:24:09.793 [2024-12-06 05:11:47.848266] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:24:09.793 [2024-12-06 05:11:47.848273] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:24:09.793 [2024-12-06 05:11:47.848279] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:24:09.793 [2024-12-06 05:11:47.848285] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:24:09.793 [2024-12-06 05:11:47.848291] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 
00:24:09.793 [2024-12-06 05:11:47.848298] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:24:09.793 [2024-12-06 05:11:47.848302] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:24:09.793 [2024-12-06 05:11:47.848308] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:24:09.793 [2024-12-06 05:11:47.848314] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:09.793 [2024-12-06 05:11:47.848322] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:24:09.794 [2024-12-06 05:11:47.848328] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.750 ms 00:24:09.794 [2024-12-06 05:11:47.848335] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:09.794 [2024-12-06 05:11:47.849575] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:09.794 [2024-12-06 05:11:47.849597] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:24:09.794 [2024-12-06 05:11:47.849607] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.227 ms 00:24:09.794 [2024-12-06 05:11:47.849615] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:09.794 [2024-12-06 05:11:47.849705] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:09.794 [2024-12-06 05:11:47.849714] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:24:09.794 [2024-12-06 05:11:47.849720] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.076 ms 00:24:09.794 [2024-12-06 05:11:47.849727] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:09.794 [2024-12-06 05:11:47.854367] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:09.794 [2024-12-06 05:11:47.854472] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:09.794 [2024-12-06 05:11:47.854517] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:09.794 [2024-12-06 05:11:47.854536] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:09.794 [2024-12-06 05:11:47.854590] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:09.794 [2024-12-06 05:11:47.854627] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:09.794 [2024-12-06 05:11:47.854660] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:09.794 [2024-12-06 05:11:47.854686] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:09.794 [2024-12-06 05:11:47.854757] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:09.794 [2024-12-06 05:11:47.854780] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:09.794 [2024-12-06 05:11:47.854795] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:09.794 [2024-12-06 05:11:47.854884] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:09.794 [2024-12-06 05:11:47.854910] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:09.794 [2024-12-06 05:11:47.854927] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:09.794 [2024-12-06 05:11:47.854943] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:09.794 [2024-12-06 05:11:47.854962] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:09.794 [2024-12-06 05:11:47.862492] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:09.794 [2024-12-06 05:11:47.862613] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:09.794 [2024-12-06 05:11:47.862656] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:09.794 [2024-12-06 05:11:47.862690] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:09.794 [2024-12-06 05:11:47.869338] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:09.794 [2024-12-06 05:11:47.869468] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:09.794 [2024-12-06 05:11:47.869508] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:09.794 [2024-12-06 05:11:47.869527] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:09.794 [2024-12-06 05:11:47.869576] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:09.794 [2024-12-06 05:11:47.869659] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:09.794 [2024-12-06 05:11:47.869687] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:09.794 [2024-12-06 05:11:47.869712] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:09.794 [2024-12-06 05:11:47.869768] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:09.794 [2024-12-06 05:11:47.869792] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:09.794 [2024-12-06 05:11:47.869899] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:09.794 [2024-12-06 05:11:47.869919] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:09.794 [2024-12-06 05:11:47.869982] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:09.794 [2024-12-06 05:11:47.870008] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:09.794 [2024-12-06 05:11:47.870055] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:09.794 [2024-12-06 05:11:47.870074] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:09.794 [2024-12-06 05:11:47.870115] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:09.794 [2024-12-06 05:11:47.870138] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:24:09.794 [2024-12-06 05:11:47.870154] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:09.794 [2024-12-06 05:11:47.870194] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:09.794 [2024-12-06 05:11:47.870264] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:09.794 [2024-12-06 05:11:47.870290] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:09.794 [2024-12-06 05:11:47.870328] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:09.794 [2024-12-06 05:11:47.870347] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:09.794 [2024-12-06 05:11:47.870391] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:09.794 [2024-12-06 05:11:47.870474] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:09.794 [2024-12-06 05:11:47.870491] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:09.794 [2024-12-06 05:11:47.870507] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:24:09.794 [2024-12-06 05:11:47.870621] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 55.136 ms, result 0 00:24:09.794 true 00:24:09.794 05:11:47 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@83 -- # kill -9 89408 00:24:09.794 05:11:47 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@84 -- # rm -f /dev/shm/spdk_tgt_trace.pid89408 00:24:09.794 05:11:47 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --bs=4096 --count=262144 00:24:09.794 [2024-12-06 05:11:47.947071] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:24:09.794 [2024-12-06 05:11:47.947313] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90137 ] 00:24:10.052 [2024-12-06 05:11:48.077185] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:10.052 [2024-12-06 05:11:48.108410] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:24:10.984  [2024-12-06T05:11:50.588Z] Copying: 258/1024 [MB] (258 MBps) [2024-12-06T05:11:51.155Z] Copying: 519/1024 [MB] (260 MBps) [2024-12-06T05:11:52.530Z] Copying: 778/1024 [MB] (259 MBps) [2024-12-06T05:11:52.530Z] Copying: 1024/1024 [MB] (average 258 MBps) 00:24:14.298 00:24:14.298 /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh: line 87: 89408 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x1 00:24:14.298 05:11:52 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@88 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --ob=ftl0 --count=262144 --seek=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:24:14.298 [2024-12-06 05:11:52.321233] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
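Both spdk_dd passes above move exactly 1 GiB: 262144 blocks of 4096 bytes each. The first pass (script line 87) generates the random test file; the second (line 88) reuses it as input and, with --seek=262144, apparently lands it one gigabyte into ftl0 (--seek being counted in I/O units). A quick sanity check on the sizes:

  echo $((262144 * 4096))          # 1073741824 bytes = 1 GiB per pass
  echo $((1073741824 / 1048576))   # 1024, matching the Copying: .../1024 [MB] progress counters

Since the earlier spdk_tgt was killed, the second pass hands spdk_dd the saved --json=.../ftl.json config so it can bring ftl0 up inside its own process; the 'FTL startup' management trace that follows is that bring-up.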
00:24:14.298 [2024-12-06 05:11:52.321367] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90189 ] 00:24:14.298 [2024-12-06 05:11:52.455860] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:14.298 [2024-12-06 05:11:52.488170] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:24:14.556 [2024-12-06 05:11:52.569803] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:24:14.556 [2024-12-06 05:11:52.570025] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:24:14.556 [2024-12-06 05:11:52.631695] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:24:14.556 [2024-12-06 05:11:52.631967] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:24:14.556 [2024-12-06 05:11:52.632081] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:24:14.817 [2024-12-06 05:11:52.798841] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:14.817 [2024-12-06 05:11:52.798887] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:24:14.817 [2024-12-06 05:11:52.798899] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:24:14.817 [2024-12-06 05:11:52.798911] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:14.818 [2024-12-06 05:11:52.798952] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:14.818 [2024-12-06 05:11:52.798960] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:14.818 [2024-12-06 05:11:52.798969] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:24:14.818 [2024-12-06 05:11:52.798975] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:14.818 [2024-12-06 05:11:52.798988] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:24:14.818 [2024-12-06 05:11:52.799172] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:24:14.818 [2024-12-06 05:11:52.799184] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:14.818 [2024-12-06 05:11:52.799190] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:14.818 [2024-12-06 05:11:52.799196] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.199 ms 00:24:14.818 [2024-12-06 05:11:52.799205] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:14.818 [2024-12-06 05:11:52.800506] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:24:14.818 [2024-12-06 05:11:52.802579] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:14.818 [2024-12-06 05:11:52.802618] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:24:14.818 [2024-12-06 05:11:52.802627] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.077 ms 00:24:14.818 [2024-12-06 05:11:52.802633] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:14.818 [2024-12-06 05:11:52.802689] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:14.818 [2024-12-06 05:11:52.802697] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super 
block 00:24:14.818 [2024-12-06 05:11:52.802704] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:24:14.818 [2024-12-06 05:11:52.802710] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:14.818 [2024-12-06 05:11:52.807045] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:14.818 [2024-12-06 05:11:52.807071] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:14.818 [2024-12-06 05:11:52.807078] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.294 ms 00:24:14.818 [2024-12-06 05:11:52.807084] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:14.818 [2024-12-06 05:11:52.807149] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:14.818 [2024-12-06 05:11:52.807159] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:14.818 [2024-12-06 05:11:52.807165] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:24:14.818 [2024-12-06 05:11:52.807171] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:14.818 [2024-12-06 05:11:52.807209] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:14.818 [2024-12-06 05:11:52.807219] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:24:14.818 [2024-12-06 05:11:52.807226] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:24:14.818 [2024-12-06 05:11:52.807233] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:14.818 [2024-12-06 05:11:52.807255] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:24:14.818 [2024-12-06 05:11:52.808398] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:14.818 [2024-12-06 05:11:52.808424] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:14.818 [2024-12-06 05:11:52.808432] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.150 ms 00:24:14.818 [2024-12-06 05:11:52.808437] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:14.818 [2024-12-06 05:11:52.808459] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:14.818 [2024-12-06 05:11:52.808468] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:24:14.818 [2024-12-06 05:11:52.808478] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:24:14.818 [2024-12-06 05:11:52.808483] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:14.818 [2024-12-06 05:11:52.808498] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:24:14.818 [2024-12-06 05:11:52.808513] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:24:14.818 [2024-12-06 05:11:52.808544] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:24:14.818 [2024-12-06 05:11:52.808556] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:24:14.818 [2024-12-06 05:11:52.808639] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:24:14.818 [2024-12-06 05:11:52.808647] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:24:14.818 
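The layout records just below report L2P entries: 20971520 with L2P address size: 4 (bytes per entry); multiplying the two reproduces the 80.00 MiB that the l2p region occupies in the NV cache layout dump further on:

  echo $((20971520 * 4 / 1024 / 1024))   # 80 (MiB) -- size of the logical-to-physical table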
[2024-12-06 05:11:52.808656] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:24:14.818 [2024-12-06 05:11:52.808679] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:24:14.818 [2024-12-06 05:11:52.808686] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:24:14.818 [2024-12-06 05:11:52.808693] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:24:14.818 [2024-12-06 05:11:52.808699] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:24:14.818 [2024-12-06 05:11:52.808705] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:24:14.818 [2024-12-06 05:11:52.808711] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:24:14.818 [2024-12-06 05:11:52.808716] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:14.818 [2024-12-06 05:11:52.808724] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:24:14.818 [2024-12-06 05:11:52.808730] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.220 ms 00:24:14.818 [2024-12-06 05:11:52.808735] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:14.818 [2024-12-06 05:11:52.808799] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:14.818 [2024-12-06 05:11:52.808808] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:24:14.818 [2024-12-06 05:11:52.808814] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:24:14.818 [2024-12-06 05:11:52.808822] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:14.818 [2024-12-06 05:11:52.808893] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:24:14.818 [2024-12-06 05:11:52.808901] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:24:14.818 [2024-12-06 05:11:52.808910] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:14.818 [2024-12-06 05:11:52.808916] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:14.818 [2024-12-06 05:11:52.808922] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:24:14.818 [2024-12-06 05:11:52.808927] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:24:14.818 [2024-12-06 05:11:52.808932] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:24:14.818 [2024-12-06 05:11:52.808937] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:24:14.818 [2024-12-06 05:11:52.808942] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:24:14.818 [2024-12-06 05:11:52.808947] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:14.818 [2024-12-06 05:11:52.808953] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:24:14.818 [2024-12-06 05:11:52.808958] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:24:14.818 [2024-12-06 05:11:52.808963] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:14.818 [2024-12-06 05:11:52.808969] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:24:14.818 [2024-12-06 05:11:52.808974] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:24:14.818 [2024-12-06 05:11:52.808982] ftl_layout.c: 133:dump_region: 
*NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:14.818 [2024-12-06 05:11:52.808987] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:24:14.818 [2024-12-06 05:11:52.808992] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:24:14.818 [2024-12-06 05:11:52.808997] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:14.818 [2024-12-06 05:11:52.809002] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:24:14.818 [2024-12-06 05:11:52.809008] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:24:14.818 [2024-12-06 05:11:52.809013] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:14.818 [2024-12-06 05:11:52.809017] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:24:14.818 [2024-12-06 05:11:52.809022] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:24:14.818 [2024-12-06 05:11:52.809027] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:14.818 [2024-12-06 05:11:52.809032] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:24:14.818 [2024-12-06 05:11:52.809037] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:24:14.818 [2024-12-06 05:11:52.809041] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:14.818 [2024-12-06 05:11:52.809046] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:24:14.818 [2024-12-06 05:11:52.809052] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:24:14.818 [2024-12-06 05:11:52.809057] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:14.818 [2024-12-06 05:11:52.809067] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:24:14.818 [2024-12-06 05:11:52.809073] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:24:14.818 [2024-12-06 05:11:52.809079] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:14.818 [2024-12-06 05:11:52.809085] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:24:14.818 [2024-12-06 05:11:52.809090] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:24:14.818 [2024-12-06 05:11:52.809096] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:14.818 [2024-12-06 05:11:52.809102] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:24:14.818 [2024-12-06 05:11:52.809108] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:24:14.818 [2024-12-06 05:11:52.809113] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:14.818 [2024-12-06 05:11:52.809119] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:24:14.819 [2024-12-06 05:11:52.809125] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:24:14.819 [2024-12-06 05:11:52.809130] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:14.819 [2024-12-06 05:11:52.809135] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:24:14.819 [2024-12-06 05:11:52.809143] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:24:14.819 [2024-12-06 05:11:52.809149] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:14.819 [2024-12-06 05:11:52.809155] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:14.819 [2024-12-06 
05:11:52.809165] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:24:14.819 [2024-12-06 05:11:52.809171] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:24:14.819 [2024-12-06 05:11:52.809177] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:24:14.819 [2024-12-06 05:11:52.809183] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:24:14.819 [2024-12-06 05:11:52.809188] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:24:14.819 [2024-12-06 05:11:52.809194] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:24:14.819 [2024-12-06 05:11:52.809201] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:24:14.819 [2024-12-06 05:11:52.809209] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:14.819 [2024-12-06 05:11:52.809216] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:24:14.819 [2024-12-06 05:11:52.809222] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:24:14.819 [2024-12-06 05:11:52.809228] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:24:14.819 [2024-12-06 05:11:52.809234] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:24:14.819 [2024-12-06 05:11:52.809241] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:24:14.819 [2024-12-06 05:11:52.809250] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:24:14.819 [2024-12-06 05:11:52.809257] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:24:14.819 [2024-12-06 05:11:52.809263] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:24:14.819 [2024-12-06 05:11:52.809270] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:24:14.819 [2024-12-06 05:11:52.809276] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:24:14.819 [2024-12-06 05:11:52.809282] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:24:14.819 [2024-12-06 05:11:52.809288] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:24:14.819 [2024-12-06 05:11:52.809295] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:24:14.819 [2024-12-06 05:11:52.809301] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:24:14.819 [2024-12-06 05:11:52.809307] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - 
base dev: 00:24:14.819 [2024-12-06 05:11:52.809313] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:14.819 [2024-12-06 05:11:52.809320] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:24:14.819 [2024-12-06 05:11:52.809326] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:24:14.819 [2024-12-06 05:11:52.809332] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:24:14.819 [2024-12-06 05:11:52.809338] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:24:14.819 [2024-12-06 05:11:52.809345] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:14.819 [2024-12-06 05:11:52.809353] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:24:14.819 [2024-12-06 05:11:52.809360] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.503 ms 00:24:14.819 [2024-12-06 05:11:52.809366] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:14.819 [2024-12-06 05:11:52.825781] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:14.819 [2024-12-06 05:11:52.825938] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:14.819 [2024-12-06 05:11:52.825959] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.373 ms 00:24:14.819 [2024-12-06 05:11:52.825968] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:14.819 [2024-12-06 05:11:52.826076] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:14.819 [2024-12-06 05:11:52.826088] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:24:14.819 [2024-12-06 05:11:52.826101] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.071 ms 00:24:14.819 [2024-12-06 05:11:52.826110] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:14.819 [2024-12-06 05:11:52.834566] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:14.819 [2024-12-06 05:11:52.834595] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:14.819 [2024-12-06 05:11:52.834603] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.385 ms 00:24:14.819 [2024-12-06 05:11:52.834609] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:14.819 [2024-12-06 05:11:52.834633] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:14.819 [2024-12-06 05:11:52.834642] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:14.819 [2024-12-06 05:11:52.834649] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:24:14.819 [2024-12-06 05:11:52.834655] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:14.819 [2024-12-06 05:11:52.834969] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:14.819 [2024-12-06 05:11:52.834993] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:14.819 [2024-12-06 05:11:52.835000] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.272 ms 00:24:14.819 [2024-12-06 05:11:52.835009] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:14.819 [2024-12-06 05:11:52.835103] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:14.819 [2024-12-06 05:11:52.835110] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:14.819 [2024-12-06 05:11:52.835118] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.082 ms 00:24:14.819 [2024-12-06 05:11:52.835124] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:14.819 [2024-12-06 05:11:52.839115] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:14.819 [2024-12-06 05:11:52.839140] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:14.819 [2024-12-06 05:11:52.839147] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.974 ms 00:24:14.819 [2024-12-06 05:11:52.839153] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:14.819 [2024-12-06 05:11:52.841198] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:24:14.819 [2024-12-06 05:11:52.841302] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:24:14.819 [2024-12-06 05:11:52.841314] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:14.819 [2024-12-06 05:11:52.841319] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:24:14.819 [2024-12-06 05:11:52.841329] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.086 ms 00:24:14.819 [2024-12-06 05:11:52.841335] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:14.819 [2024-12-06 05:11:52.852457] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:14.819 [2024-12-06 05:11:52.852552] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:24:14.819 [2024-12-06 05:11:52.852565] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.095 ms 00:24:14.819 [2024-12-06 05:11:52.852571] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:14.819 [2024-12-06 05:11:52.853970] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:14.819 [2024-12-06 05:11:52.853996] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:24:14.819 [2024-12-06 05:11:52.854003] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.374 ms 00:24:14.819 [2024-12-06 05:11:52.854009] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:14.819 [2024-12-06 05:11:52.855074] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:14.819 [2024-12-06 05:11:52.855098] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:24:14.819 [2024-12-06 05:11:52.855105] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.037 ms 00:24:14.819 [2024-12-06 05:11:52.855111] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:14.819 [2024-12-06 05:11:52.855346] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:14.819 [2024-12-06 05:11:52.855358] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:24:14.819 [2024-12-06 05:11:52.855370] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.191 ms 00:24:14.819 [2024-12-06 05:11:52.855377] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:14.819 
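Every management step in these traces is emitted as the same four records: Action, name, duration, status. Given a saved one-record-per-line copy of such a log (the file name here is hypothetical), the per-step timings can be tabulated with standard tools:

  grep -oE '(name|duration): .*' ftl0_startup.log | paste - -   # pair each step name with its duration
  grep 'Management process finished' ftl0_startup.log           # overall summary, e.g. 'FTL startup', duration = 81.708 ms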
[2024-12-06 05:11:52.869057] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:14.819 [2024-12-06 05:11:52.869099] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:24:14.819 [2024-12-06 05:11:52.869110] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.668 ms 00:24:14.819 [2024-12-06 05:11:52.869116] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:14.819 [2024-12-06 05:11:52.874903] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:24:14.819 [2024-12-06 05:11:52.876944] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:14.819 [2024-12-06 05:11:52.876970] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:24:14.819 [2024-12-06 05:11:52.876979] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.795 ms 00:24:14.819 [2024-12-06 05:11:52.876985] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:14.819 [2024-12-06 05:11:52.877030] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:14.819 [2024-12-06 05:11:52.877039] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:24:14.819 [2024-12-06 05:11:52.877046] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:24:14.819 [2024-12-06 05:11:52.877053] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:14.820 [2024-12-06 05:11:52.877111] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:14.820 [2024-12-06 05:11:52.877120] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:24:14.820 [2024-12-06 05:11:52.877127] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:24:14.820 [2024-12-06 05:11:52.877133] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:14.820 [2024-12-06 05:11:52.877151] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:14.820 [2024-12-06 05:11:52.877159] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:24:14.820 [2024-12-06 05:11:52.877166] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:24:14.820 [2024-12-06 05:11:52.877172] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:14.820 [2024-12-06 05:11:52.877198] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:24:14.820 [2024-12-06 05:11:52.877207] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:14.820 [2024-12-06 05:11:52.877214] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:24:14.820 [2024-12-06 05:11:52.877220] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:24:14.820 [2024-12-06 05:11:52.877230] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:14.820 [2024-12-06 05:11:52.880038] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:14.820 [2024-12-06 05:11:52.880067] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:24:14.820 [2024-12-06 05:11:52.880075] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.794 ms 00:24:14.820 [2024-12-06 05:11:52.880082] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:14.820 [2024-12-06 05:11:52.880138] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:14.820 [2024-12-06 05:11:52.880149] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:24:14.820 [2024-12-06 05:11:52.880159] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:24:14.820 [2024-12-06 05:11:52.880166] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:14.820 [2024-12-06 05:11:52.880873] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 81.708 ms, result 0 00:24:15.762 [2024-12-06T05:11:54.938Z] Copying: 19/1024 [MB] (19 MBps) [... intermediate Copying progress redraws elided ...] [2024-12-06T05:13:10.999Z] Copying: 1024/1024 [MB] (average 13 MBps)[2024-12-06 05:13:10.852572] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:32.767 [2024-12-06 05:13:10.852633] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:25:32.767 [2024-12-06 05:13:10.852647] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:25:32.768 [2024-12-06 05:13:10.852655] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:32.768 [2024-12-06 05:13:10.853780] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:25:32.768 [2024-12-06 05:13:10.855192] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:32.768 [2024-12-06 05:13:10.855335] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:25:32.768 [2024-12-06 05:13:10.855354] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.385 ms 00:25:32.768 [2024-12-06 05:13:10.855369] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:32.768 [2024-12-06 05:13:10.868615] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:32.768 [2024-12-06 05:13:10.868654] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:25:32.768 [2024-12-06 05:13:10.868680] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.059 ms 00:25:32.768 [2024-12-06 05:13:10.868688] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:32.768 [2024-12-06 05:13:10.890302] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:32.768 [2024-12-06 05:13:10.890437] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:25:32.768 [2024-12-06 05:13:10.890461] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.597 ms 00:25:32.768 [2024-12-06 05:13:10.890469] mngt/ftl_mngt.c: 431:trace_step:
*NOTICE*: [FTL][ftl0] status: 0 00:25:32.768 [2024-12-06 05:13:10.896612] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:32.768 [2024-12-06 05:13:10.896651] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:25:32.768 [2024-12-06 05:13:10.896661] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.113 ms 00:25:32.768 [2024-12-06 05:13:10.896682] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:32.768 [2024-12-06 05:13:10.899121] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:32.768 [2024-12-06 05:13:10.899158] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:25:32.768 [2024-12-06 05:13:10.899168] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.382 ms 00:25:32.768 [2024-12-06 05:13:10.899176] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:32.768 [2024-12-06 05:13:10.903004] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:32.768 [2024-12-06 05:13:10.903042] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:25:32.768 [2024-12-06 05:13:10.903062] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.794 ms 00:25:32.768 [2024-12-06 05:13:10.903070] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:33.030 [2024-12-06 05:13:11.191006] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:33.030 [2024-12-06 05:13:11.191056] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:25:33.030 [2024-12-06 05:13:11.191070] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 287.897 ms 00:25:33.030 [2024-12-06 05:13:11.191079] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:33.030 [2024-12-06 05:13:11.194418] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:33.030 [2024-12-06 05:13:11.194465] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:25:33.030 [2024-12-06 05:13:11.194475] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.322 ms 00:25:33.030 [2024-12-06 05:13:11.194482] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:33.030 [2024-12-06 05:13:11.197310] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:33.030 [2024-12-06 05:13:11.197356] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:25:33.030 [2024-12-06 05:13:11.197366] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.786 ms 00:25:33.030 [2024-12-06 05:13:11.197374] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:33.030 [2024-12-06 05:13:11.199618] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:33.030 [2024-12-06 05:13:11.199680] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:25:33.030 [2024-12-06 05:13:11.199691] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.203 ms 00:25:33.030 [2024-12-06 05:13:11.199699] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:33.030 [2024-12-06 05:13:11.202047] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:33.030 [2024-12-06 05:13:11.202092] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:25:33.030 [2024-12-06 05:13:11.202102] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.281 ms 
00:25:33.030 [2024-12-06 05:13:11.202109] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:33.030 [2024-12-06 05:13:11.202147] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:25:33.030 [2024-12-06 05:13:11.202162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 104192 / 261120 wr_cnt: 1 state: open 00:25:33.030 [2024-12-06 05:13:11.202183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:25:33.030 [2024-12-06 05:13:11.202191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:25:33.030 [2024-12-06 05:13:11.202200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:25:33.030 [2024-12-06 05:13:11.202208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:25:33.030 [2024-12-06 05:13:11.202216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:25:33.030 [2024-12-06 05:13:11.202224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:25:33.030 [2024-12-06 05:13:11.202232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:25:33.030 [2024-12-06 05:13:11.202240] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:25:33.030 [2024-12-06 05:13:11.202248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:25:33.030 [2024-12-06 05:13:11.202257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:25:33.030 [2024-12-06 05:13:11.202265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:25:33.030 [2024-12-06 05:13:11.202273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:25:33.030 [2024-12-06 05:13:11.202282] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:25:33.031 [2024-12-06 05:13:11.202289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:25:33.031 [2024-12-06 05:13:11.202297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:25:33.031 [2024-12-06 05:13:11.202305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:25:33.031 [2024-12-06 05:13:11.202313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:25:33.031 [2024-12-06 05:13:11.202320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:25:33.031 [2024-12-06 05:13:11.202328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:25:33.031 [2024-12-06 05:13:11.202335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:25:33.031 [2024-12-06 05:13:11.202342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:25:33.031 [2024-12-06 05:13:11.202350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:25:33.031 [2024-12-06 05:13:11.202357] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:25:33.031 [2024-12-06 05:13:11.202365] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:25:33.031 [2024-12-06 05:13:11.202372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:25:33.031 [2024-12-06 05:13:11.202381] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:25:33.031 [2024-12-06 05:13:11.202390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:25:33.031 [2024-12-06 05:13:11.202398] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:25:33.031 [2024-12-06 05:13:11.202407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:25:33.031 [2024-12-06 05:13:11.202415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:25:33.031 [2024-12-06 05:13:11.202423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:25:33.031 [2024-12-06 05:13:11.202431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:25:33.031 [2024-12-06 05:13:11.202439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:25:33.031 [2024-12-06 05:13:11.202446] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:25:33.031 [2024-12-06 05:13:11.202454] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:25:33.031 [2024-12-06 05:13:11.202462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:25:33.031 [2024-12-06 05:13:11.202470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:25:33.031 [2024-12-06 05:13:11.202478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:25:33.031 [2024-12-06 05:13:11.202486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:25:33.031 [2024-12-06 05:13:11.202494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:25:33.031 [2024-12-06 05:13:11.202502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:25:33.031 [2024-12-06 05:13:11.202510] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:25:33.031 [2024-12-06 05:13:11.202518] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:25:33.031 [2024-12-06 05:13:11.202533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:25:33.031 [2024-12-06 05:13:11.202541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:25:33.031 [2024-12-06 05:13:11.202548] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:25:33.031 [2024-12-06 05:13:11.202558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:25:33.031 
[2024-12-06 05:13:11.202566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:25:33.031 [2024-12-06 05:13:11.202574] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:25:33.031 [2024-12-06 05:13:11.202583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:25:33.031 [2024-12-06 05:13:11.202590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:25:33.031 [2024-12-06 05:13:11.202598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:25:33.031 [2024-12-06 05:13:11.202606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:25:33.031 [2024-12-06 05:13:11.202614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:25:33.031 [2024-12-06 05:13:11.202622] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:25:33.031 [2024-12-06 05:13:11.202629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:25:33.031 [2024-12-06 05:13:11.202637] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:25:33.031 [2024-12-06 05:13:11.202645] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:25:33.031 [2024-12-06 05:13:11.202654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:25:33.031 [2024-12-06 05:13:11.202662] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:25:33.031 [2024-12-06 05:13:11.202686] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:25:33.031 [2024-12-06 05:13:11.202694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:25:33.031 [2024-12-06 05:13:11.202702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:25:33.031 [2024-12-06 05:13:11.202710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:25:33.031 [2024-12-06 05:13:11.202718] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:25:33.031 [2024-12-06 05:13:11.202727] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:25:33.031 [2024-12-06 05:13:11.202735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:25:33.031 [2024-12-06 05:13:11.202743] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:25:33.031 [2024-12-06 05:13:11.202751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:25:33.031 [2024-12-06 05:13:11.202759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:25:33.031 [2024-12-06 05:13:11.202767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:25:33.031 [2024-12-06 05:13:11.202775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 
state: free 00:25:33.031 [2024-12-06 05:13:11.202783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:25:33.031 [2024-12-06 05:13:11.202790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:25:33.031 [2024-12-06 05:13:11.202798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:25:33.031 [2024-12-06 05:13:11.202807] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:25:33.031 [2024-12-06 05:13:11.202815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:25:33.031 [2024-12-06 05:13:11.202823] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:25:33.031 [2024-12-06 05:13:11.202831] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:25:33.031 [2024-12-06 05:13:11.202839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:25:33.031 [2024-12-06 05:13:11.202847] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:25:33.031 [2024-12-06 05:13:11.202855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:25:33.031 [2024-12-06 05:13:11.202863] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:25:33.031 [2024-12-06 05:13:11.202870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:25:33.031 [2024-12-06 05:13:11.202878] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:25:33.031 [2024-12-06 05:13:11.202885] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:25:33.031 [2024-12-06 05:13:11.202892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:25:33.031 [2024-12-06 05:13:11.202900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:25:33.031 [2024-12-06 05:13:11.202908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:25:33.031 [2024-12-06 05:13:11.202916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:25:33.031 [2024-12-06 05:13:11.202924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:25:33.031 [2024-12-06 05:13:11.202932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:25:33.031 [2024-12-06 05:13:11.202940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:25:33.031 [2024-12-06 05:13:11.202948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:25:33.031 [2024-12-06 05:13:11.202955] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:25:33.031 [2024-12-06 05:13:11.202963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:25:33.031 [2024-12-06 05:13:11.202971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 
0 / 261120 wr_cnt: 0 state: free 00:25:33.031 [2024-12-06 05:13:11.202978] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:25:33.031 [2024-12-06 05:13:11.202986] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:25:33.031 [2024-12-06 05:13:11.203002] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:25:33.031 [2024-12-06 05:13:11.203014] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 84366e27-4363-4609-9553-2ee9629251d7 00:25:33.031 [2024-12-06 05:13:11.203035] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 104192 00:25:33.031 [2024-12-06 05:13:11.203043] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 105152 00:25:33.032 [2024-12-06 05:13:11.203051] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 104192 00:25:33.032 [2024-12-06 05:13:11.203060] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0092 00:25:33.032 [2024-12-06 05:13:11.203068] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:25:33.032 [2024-12-06 05:13:11.203076] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:25:33.032 [2024-12-06 05:13:11.203084] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:25:33.032 [2024-12-06 05:13:11.203091] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:25:33.032 [2024-12-06 05:13:11.203097] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:25:33.032 [2024-12-06 05:13:11.203104] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:33.032 [2024-12-06 05:13:11.203112] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:25:33.032 [2024-12-06 05:13:11.203127] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.959 ms 00:25:33.032 [2024-12-06 05:13:11.203135] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:33.032 [2024-12-06 05:13:11.205376] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:33.032 [2024-12-06 05:13:11.205408] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:25:33.032 [2024-12-06 05:13:11.205419] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.222 ms 00:25:33.032 [2024-12-06 05:13:11.205428] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:33.032 [2024-12-06 05:13:11.205548] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:33.032 [2024-12-06 05:13:11.205557] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:25:33.032 [2024-12-06 05:13:11.205573] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.100 ms 00:25:33.032 [2024-12-06 05:13:11.205581] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:33.032 [2024-12-06 05:13:11.212394] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:33.032 [2024-12-06 05:13:11.212443] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:33.032 [2024-12-06 05:13:11.212454] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:33.032 [2024-12-06 05:13:11.212462] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:33.032 [2024-12-06 05:13:11.212526] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:33.032 [2024-12-06 05:13:11.212534] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:33.032 [2024-12-06 05:13:11.212548] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:33.032 [2024-12-06 05:13:11.212556] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:33.032 [2024-12-06 05:13:11.212634] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:33.032 [2024-12-06 05:13:11.212644] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:33.032 [2024-12-06 05:13:11.212653] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:33.032 [2024-12-06 05:13:11.212660] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:33.032 [2024-12-06 05:13:11.212723] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:33.032 [2024-12-06 05:13:11.212732] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:33.032 [2024-12-06 05:13:11.212741] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:33.032 [2024-12-06 05:13:11.212754] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:33.032 [2024-12-06 05:13:11.226905] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:33.032 [2024-12-06 05:13:11.227110] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:33.032 [2024-12-06 05:13:11.227131] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:33.032 [2024-12-06 05:13:11.227139] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:33.032 [2024-12-06 05:13:11.238577] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:33.032 [2024-12-06 05:13:11.238776] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:33.032 [2024-12-06 05:13:11.238808] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:33.032 [2024-12-06 05:13:11.238817] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:33.032 [2024-12-06 05:13:11.238878] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:33.032 [2024-12-06 05:13:11.238889] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:33.032 [2024-12-06 05:13:11.238897] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:33.032 [2024-12-06 05:13:11.238905] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:33.032 [2024-12-06 05:13:11.238949] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:33.032 [2024-12-06 05:13:11.238959] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:33.032 [2024-12-06 05:13:11.238968] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:33.032 [2024-12-06 05:13:11.238977] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:33.032 [2024-12-06 05:13:11.239057] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:33.032 [2024-12-06 05:13:11.239066] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:33.032 [2024-12-06 05:13:11.239076] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:33.032 [2024-12-06 05:13:11.239085] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:33.032 [2024-12-06 05:13:11.239127] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:33.032 [2024-12-06 05:13:11.239137] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:25:33.032 [2024-12-06 05:13:11.239145] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:33.032 [2024-12-06 05:13:11.239154] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:33.032 [2024-12-06 05:13:11.239200] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:33.032 [2024-12-06 05:13:11.239211] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:33.032 [2024-12-06 05:13:11.239219] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:33.032 [2024-12-06 05:13:11.239227] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:33.032 [2024-12-06 05:13:11.239280] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:33.032 [2024-12-06 05:13:11.239292] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:33.032 [2024-12-06 05:13:11.239303] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:33.032 [2024-12-06 05:13:11.239312] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:33.032 [2024-12-06 05:13:11.239457] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 389.614 ms, result 0 00:25:33.974 00:25:33.974 00:25:34.235 05:13:12 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@90 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:25:36.152 05:13:14 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@93 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --count=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:25:36.152 [2024-12-06 05:13:14.362949] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
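[Editor's note: for orientation, the two commands the harness logs just above (markers ftl/dirty_shutdown.sh@90 and @93) reduce to the sketch below. The two command lines are copied verbatim from the log; the surrounding shell comments are a reconstruction, not the script's actual contents.]

    # Sketch assembled from the log markers above (dirty_shutdown.sh@90 / @93).
    # Checksum the reference file captured before the dirty shutdown:
    md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile2
    # Read the data back out of the remounted FTL bdev; 262144 blocks at the
    # 4 KiB block size implied by the "Copying: .../1024 [MB]" counters that
    # follow is exactly 1024 MiB. Presumably the checksums are compared later.
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 \
        --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile \
        --count=262144 \
        --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json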
00:25:36.152 [2024-12-06 05:13:14.363039] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91023 ] 00:25:36.413 [2024-12-06 05:13:14.497479] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:36.413 [2024-12-06 05:13:14.539391] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:25:36.674 [2024-12-06 05:13:14.648636] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:25:36.674 [2024-12-06 05:13:14.648736] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:25:36.674 [2024-12-06 05:13:14.809783] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:36.674 [2024-12-06 05:13:14.809847] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:25:36.674 [2024-12-06 05:13:14.809865] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:25:36.674 [2024-12-06 05:13:14.809874] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:36.674 [2024-12-06 05:13:14.809938] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:36.674 [2024-12-06 05:13:14.809949] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:36.674 [2024-12-06 05:13:14.809961] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.043 ms 00:25:36.674 [2024-12-06 05:13:14.809976] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:36.674 [2024-12-06 05:13:14.810003] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:25:36.674 [2024-12-06 05:13:14.810287] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:25:36.674 [2024-12-06 05:13:14.810304] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:36.674 [2024-12-06 05:13:14.810319] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:36.674 [2024-12-06 05:13:14.810328] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.313 ms 00:25:36.674 [2024-12-06 05:13:14.810340] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:36.674 [2024-12-06 05:13:14.812188] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:25:36.674 [2024-12-06 05:13:14.816266] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:36.674 [2024-12-06 05:13:14.816329] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:25:36.674 [2024-12-06 05:13:14.816342] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.080 ms 00:25:36.674 [2024-12-06 05:13:14.816350] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:36.674 [2024-12-06 05:13:14.816436] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:36.674 [2024-12-06 05:13:14.816454] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:25:36.674 [2024-12-06 05:13:14.816462] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:25:36.674 [2024-12-06 05:13:14.816470] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:36.674 [2024-12-06 05:13:14.824948] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:25:36.674 [2024-12-06 05:13:14.825002] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:36.674 [2024-12-06 05:13:14.825014] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.432 ms 00:25:36.674 [2024-12-06 05:13:14.825022] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:36.674 [2024-12-06 05:13:14.825143] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:36.674 [2024-12-06 05:13:14.825154] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:36.674 [2024-12-06 05:13:14.825163] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.089 ms 00:25:36.674 [2024-12-06 05:13:14.825171] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:36.674 [2024-12-06 05:13:14.825231] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:36.674 [2024-12-06 05:13:14.825241] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:25:36.674 [2024-12-06 05:13:14.825249] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:25:36.674 [2024-12-06 05:13:14.825257] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:36.674 [2024-12-06 05:13:14.825280] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:25:36.674 [2024-12-06 05:13:14.827465] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:36.674 [2024-12-06 05:13:14.827513] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:36.674 [2024-12-06 05:13:14.827530] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.190 ms 00:25:36.674 [2024-12-06 05:13:14.827538] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:36.674 [2024-12-06 05:13:14.827575] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:36.674 [2024-12-06 05:13:14.827585] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:25:36.674 [2024-12-06 05:13:14.827593] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:25:36.675 [2024-12-06 05:13:14.827600] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:36.675 [2024-12-06 05:13:14.827623] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:25:36.675 [2024-12-06 05:13:14.827648] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:25:36.675 [2024-12-06 05:13:14.827715] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:25:36.675 [2024-12-06 05:13:14.827733] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:25:36.675 [2024-12-06 05:13:14.827842] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:25:36.675 [2024-12-06 05:13:14.827853] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:25:36.675 [2024-12-06 05:13:14.827864] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:25:36.675 [2024-12-06 05:13:14.827875] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:25:36.675 [2024-12-06 05:13:14.827888] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:25:36.675 [2024-12-06 05:13:14.827897] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:25:36.675 [2024-12-06 05:13:14.827908] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:25:36.675 [2024-12-06 05:13:14.827916] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:25:36.675 [2024-12-06 05:13:14.827924] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:25:36.675 [2024-12-06 05:13:14.827937] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:36.675 [2024-12-06 05:13:14.827945] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:25:36.675 [2024-12-06 05:13:14.827958] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.316 ms 00:25:36.675 [2024-12-06 05:13:14.827965] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:36.675 [2024-12-06 05:13:14.828055] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:36.675 [2024-12-06 05:13:14.828064] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:25:36.675 [2024-12-06 05:13:14.828075] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.075 ms 00:25:36.675 [2024-12-06 05:13:14.828086] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:36.675 [2024-12-06 05:13:14.828189] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:25:36.675 [2024-12-06 05:13:14.828205] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:25:36.675 [2024-12-06 05:13:14.828215] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:36.675 [2024-12-06 05:13:14.828225] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:36.675 [2024-12-06 05:13:14.828240] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:25:36.675 [2024-12-06 05:13:14.828248] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:25:36.675 [2024-12-06 05:13:14.828256] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:25:36.675 [2024-12-06 05:13:14.828264] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:25:36.675 [2024-12-06 05:13:14.828272] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:25:36.675 [2024-12-06 05:13:14.828281] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:36.675 [2024-12-06 05:13:14.828289] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:25:36.675 [2024-12-06 05:13:14.828297] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:25:36.675 [2024-12-06 05:13:14.828305] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:36.675 [2024-12-06 05:13:14.828313] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:25:36.675 [2024-12-06 05:13:14.828320] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:25:36.675 [2024-12-06 05:13:14.828328] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:36.675 [2024-12-06 05:13:14.828337] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:25:36.675 [2024-12-06 05:13:14.828345] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:25:36.675 [2024-12-06 05:13:14.828353] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:36.675 [2024-12-06 05:13:14.828364] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:25:36.675 [2024-12-06 05:13:14.828375] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:25:36.675 [2024-12-06 05:13:14.828384] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:36.675 [2024-12-06 05:13:14.828392] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:25:36.675 [2024-12-06 05:13:14.828400] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:25:36.675 [2024-12-06 05:13:14.828408] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:36.675 [2024-12-06 05:13:14.828416] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:25:36.675 [2024-12-06 05:13:14.828423] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:25:36.675 [2024-12-06 05:13:14.828431] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:36.675 [2024-12-06 05:13:14.828439] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:25:36.675 [2024-12-06 05:13:14.828448] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:25:36.675 [2024-12-06 05:13:14.828456] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:36.675 [2024-12-06 05:13:14.828463] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:25:36.675 [2024-12-06 05:13:14.828471] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:25:36.675 [2024-12-06 05:13:14.828479] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:36.675 [2024-12-06 05:13:14.828487] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:25:36.675 [2024-12-06 05:13:14.828495] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:25:36.675 [2024-12-06 05:13:14.828505] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:36.675 [2024-12-06 05:13:14.828514] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:25:36.675 [2024-12-06 05:13:14.828522] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:25:36.675 [2024-12-06 05:13:14.828530] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:36.675 [2024-12-06 05:13:14.828537] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:25:36.675 [2024-12-06 05:13:14.828545] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:25:36.675 [2024-12-06 05:13:14.828553] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:36.675 [2024-12-06 05:13:14.828560] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:25:36.675 [2024-12-06 05:13:14.828569] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:25:36.675 [2024-12-06 05:13:14.828576] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:36.675 [2024-12-06 05:13:14.828590] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:36.675 [2024-12-06 05:13:14.828602] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:25:36.675 [2024-12-06 05:13:14.828609] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:25:36.675 [2024-12-06 05:13:14.828616] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:25:36.675 
[2024-12-06 05:13:14.828622] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:25:36.675 [2024-12-06 05:13:14.828630] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:25:36.675 [2024-12-06 05:13:14.828640] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:25:36.675 [2024-12-06 05:13:14.828648] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:25:36.675 [2024-12-06 05:13:14.828657] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:36.675 [2024-12-06 05:13:14.828679] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:25:36.675 [2024-12-06 05:13:14.828688] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:25:36.675 [2024-12-06 05:13:14.828695] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:25:36.675 [2024-12-06 05:13:14.828702] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:25:36.675 [2024-12-06 05:13:14.828709] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:25:36.675 [2024-12-06 05:13:14.828717] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:25:36.675 [2024-12-06 05:13:14.828725] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:25:36.675 [2024-12-06 05:13:14.828732] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:25:36.675 [2024-12-06 05:13:14.828740] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:25:36.675 [2024-12-06 05:13:14.828754] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:25:36.675 [2024-12-06 05:13:14.828762] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:25:36.675 [2024-12-06 05:13:14.828769] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:25:36.675 [2024-12-06 05:13:14.828778] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:25:36.675 [2024-12-06 05:13:14.828788] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:25:36.675 [2024-12-06 05:13:14.828795] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:25:36.675 [2024-12-06 05:13:14.828805] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:36.675 [2024-12-06 05:13:14.828813] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:25:36.675 [2024-12-06 05:13:14.828821] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:25:36.675 [2024-12-06 05:13:14.828828] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:25:36.675 [2024-12-06 05:13:14.828835] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:25:36.675 [2024-12-06 05:13:14.828843] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:36.676 [2024-12-06 05:13:14.828851] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:25:36.676 [2024-12-06 05:13:14.828859] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.726 ms 00:25:36.676 [2024-12-06 05:13:14.828867] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:36.676 [2024-12-06 05:13:14.851495] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:36.676 [2024-12-06 05:13:14.851761] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:36.676 [2024-12-06 05:13:14.852275] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.580 ms 00:25:36.676 [2024-12-06 05:13:14.852346] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:36.676 [2024-12-06 05:13:14.852707] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:36.676 [2024-12-06 05:13:14.852946] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:25:36.676 [2024-12-06 05:13:14.853036] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.127 ms 00:25:36.676 [2024-12-06 05:13:14.853069] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:36.676 [2024-12-06 05:13:14.864959] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:36.676 [2024-12-06 05:13:14.865117] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:36.676 [2024-12-06 05:13:14.865180] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.767 ms 00:25:36.676 [2024-12-06 05:13:14.865209] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:36.676 [2024-12-06 05:13:14.865258] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:36.676 [2024-12-06 05:13:14.865280] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:36.676 [2024-12-06 05:13:14.865300] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:25:36.676 [2024-12-06 05:13:14.865309] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:36.676 [2024-12-06 05:13:14.865869] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:36.676 [2024-12-06 05:13:14.865904] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:36.676 [2024-12-06 05:13:14.865915] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.501 ms 00:25:36.676 [2024-12-06 05:13:14.865923] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:36.676 [2024-12-06 05:13:14.866076] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:36.676 [2024-12-06 05:13:14.866087] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:36.676 [2024-12-06 05:13:14.866096] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.125 ms 00:25:36.676 [2024-12-06 05:13:14.866106] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:36.676 [2024-12-06 05:13:14.872512] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:36.676 [2024-12-06 05:13:14.872559] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:36.676 [2024-12-06 05:13:14.872573] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.379 ms 00:25:36.676 [2024-12-06 05:13:14.872581] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:36.676 [2024-12-06 05:13:14.876184] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:25:36.676 [2024-12-06 05:13:14.876231] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:25:36.676 [2024-12-06 05:13:14.876242] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:36.676 [2024-12-06 05:13:14.876251] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:25:36.676 [2024-12-06 05:13:14.876260] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.546 ms 00:25:36.676 [2024-12-06 05:13:14.876267] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:36.676 [2024-12-06 05:13:14.891928] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:36.676 [2024-12-06 05:13:14.891976] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:25:36.676 [2024-12-06 05:13:14.891995] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.607 ms 00:25:36.676 [2024-12-06 05:13:14.892002] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:36.676 [2024-12-06 05:13:14.895173] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:36.676 [2024-12-06 05:13:14.895338] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:25:36.676 [2024-12-06 05:13:14.895356] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.120 ms 00:25:36.676 [2024-12-06 05:13:14.895371] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:36.676 [2024-12-06 05:13:14.898161] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:36.676 [2024-12-06 05:13:14.898208] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:25:36.676 [2024-12-06 05:13:14.898217] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.747 ms 00:25:36.676 [2024-12-06 05:13:14.898225] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:36.676 [2024-12-06 05:13:14.898576] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:36.676 [2024-12-06 05:13:14.898587] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:25:36.676 [2024-12-06 05:13:14.898597] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.270 ms 00:25:36.676 [2024-12-06 05:13:14.898605] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:36.936 [2024-12-06 05:13:14.920756] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:36.936 [2024-12-06 05:13:14.920963] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:25:36.936 [2024-12-06 05:13:14.921022] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
22.133 ms 00:25:36.936 [2024-12-06 05:13:14.921052] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:36.936 [2024-12-06 05:13:14.929227] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:25:36.936 [2024-12-06 05:13:14.932308] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:36.936 [2024-12-06 05:13:14.932436] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:25:36.936 [2024-12-06 05:13:14.932464] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.133 ms 00:25:36.936 [2024-12-06 05:13:14.932472] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:36.936 [2024-12-06 05:13:14.932547] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:36.936 [2024-12-06 05:13:14.932559] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:25:36.936 [2024-12-06 05:13:14.932568] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:25:36.936 [2024-12-06 05:13:14.932576] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:36.936 [2024-12-06 05:13:14.934251] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:36.936 [2024-12-06 05:13:14.934389] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:25:36.936 [2024-12-06 05:13:14.934407] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.636 ms 00:25:36.936 [2024-12-06 05:13:14.934421] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:36.936 [2024-12-06 05:13:14.934469] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:36.936 [2024-12-06 05:13:14.934489] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:25:36.936 [2024-12-06 05:13:14.934501] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:25:36.936 [2024-12-06 05:13:14.934509] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:36.936 [2024-12-06 05:13:14.934547] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:25:36.936 [2024-12-06 05:13:14.934557] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:36.936 [2024-12-06 05:13:14.934564] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:25:36.936 [2024-12-06 05:13:14.934572] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:25:36.936 [2024-12-06 05:13:14.934580] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:36.936 [2024-12-06 05:13:14.939733] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:36.936 [2024-12-06 05:13:14.939774] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:25:36.936 [2024-12-06 05:13:14.939791] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.132 ms 00:25:36.936 [2024-12-06 05:13:14.939799] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:36.936 [2024-12-06 05:13:14.939880] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:36.936 [2024-12-06 05:13:14.939889] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:25:36.936 [2024-12-06 05:13:14.939898] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:25:36.936 [2024-12-06 05:13:14.939906] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:36.936 
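[Editor's note: two figures in this log can be cross-checked by hand. The statistics dump from the preceding shutdown reported total writes 105152 against user writes 104192, which yields the write amplification factor the driver printed, and the 80.00 MiB l2p region in this startup's layout dump is exactly what the logged L2P geometry (20971520 entries, address size 4) requires:

    \mathrm{WAF} = \frac{105152}{104192} \approx 1.0092
    20971520 \times 4\,\mathrm{B} = 83886080\,\mathrm{B} = 80.00\,\mathrm{MiB}

Both match the values in the NOTICE lines above.]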
[2024-12-06 05:13:14.940979] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 130.745 ms, result 0 00:25:38.315  [2024-12-06T05:13:17.490Z] Copying: 1128/1048576 [kB] (1128 kBps) [... intermediate copy progress omitted ...] [2024-12-06T05:13:55.138Z] Copying: 1024/1024 [MB] (average 25 MBps)[2024-12-06 05:13:55.044384] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:16.906 [2024-12-06 05:13:55.044449] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:26:16.906 [2024-12-06 05:13:55.044463] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:26:16.906 [2024-12-06 05:13:55.044471] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:16.906 [2024-12-06 05:13:55.044489] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:26:16.906 [2024-12-06 05:13:55.045094] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:16.906 [2024-12-06 05:13:55.045116] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:26:16.906 [2024-12-06 05:13:55.045129] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.593 ms 00:26:16.906 [2024-12-06
05:13:55.045136] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:16.906 [2024-12-06 05:13:55.045384] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:16.906 [2024-12-06 05:13:55.045394] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:26:16.906 [2024-12-06 05:13:55.045403] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.228 ms 00:26:16.907 [2024-12-06 05:13:55.045412] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:16.907 [2024-12-06 05:13:55.056822] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:16.907 [2024-12-06 05:13:55.056855] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:26:16.907 [2024-12-06 05:13:55.056864] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.394 ms 00:26:16.907 [2024-12-06 05:13:55.056871] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:16.907 [2024-12-06 05:13:55.061410] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:16.907 [2024-12-06 05:13:55.061434] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:26:16.907 [2024-12-06 05:13:55.061443] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.510 ms 00:26:16.907 [2024-12-06 05:13:55.061457] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:16.907 [2024-12-06 05:13:55.063636] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:16.907 [2024-12-06 05:13:55.063661] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:26:16.907 [2024-12-06 05:13:55.063680] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.147 ms 00:26:16.907 [2024-12-06 05:13:55.063686] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:16.907 [2024-12-06 05:13:55.067720] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:16.907 [2024-12-06 05:13:55.067822] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:26:16.907 [2024-12-06 05:13:55.067933] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.009 ms 00:26:16.907 [2024-12-06 05:13:55.067969] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:16.907 [2024-12-06 05:13:55.070347] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:16.907 [2024-12-06 05:13:55.070464] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:26:16.907 [2024-12-06 05:13:55.070516] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.337 ms 00:26:16.907 [2024-12-06 05:13:55.070535] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:16.907 [2024-12-06 05:13:55.072631] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:16.907 [2024-12-06 05:13:55.072689] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:26:16.907 [2024-12-06 05:13:55.072709] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.062 ms 00:26:16.907 [2024-12-06 05:13:55.072724] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:16.907 [2024-12-06 05:13:55.074915] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:16.907 [2024-12-06 05:13:55.075010] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:26:16.907 [2024-12-06 05:13:55.075053] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 2.157 ms 00:26:16.907 [2024-12-06 05:13:55.075069] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:16.907 [2024-12-06 05:13:55.076981] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:16.907 [2024-12-06 05:13:55.077207] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:26:16.907 [2024-12-06 05:13:55.077317] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.875 ms 00:26:16.907 [2024-12-06 05:13:55.077364] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:16.907 [2024-12-06 05:13:55.079706] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:16.907 [2024-12-06 05:13:55.079880] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:26:16.907 [2024-12-06 05:13:55.079974] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.212 ms 00:26:16.907 [2024-12-06 05:13:55.080019] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:16.907 [2024-12-06 05:13:55.080094] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:26:16.907 [2024-12-06 05:13:55.080150] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:26:16.907 [2024-12-06 05:13:55.080217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 1536 / 261120 wr_cnt: 1 state: open 00:26:16.907 [2024-12-06 05:13:55.080366] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:26:16.907 [2024-12-06 05:13:55.080487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:26:16.907 [2024-12-06 05:13:55.080592] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:26:16.907 [2024-12-06 05:13:55.080660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:26:16.907 [2024-12-06 05:13:55.080747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:26:16.907 [2024-12-06 05:13:55.080884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:26:16.907 [2024-12-06 05:13:55.080954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:26:16.907 [2024-12-06 05:13:55.081016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:26:16.907 [2024-12-06 05:13:55.081077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:26:16.907 [2024-12-06 05:13:55.081187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:26:16.907 [2024-12-06 05:13:55.081328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:26:16.907 [2024-12-06 05:13:55.081393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:26:16.907 [2024-12-06 05:13:55.081454] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:26:16.907 [2024-12-06 05:13:55.081514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:26:16.907 [2024-12-06 05:13:55.081575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 
/ 261120 wr_cnt: 0 state: free 00:26:16.907 [2024-12-06 05:13:55.081770] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:26:16.907 [2024-12-06 05:13:55.081949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:26:16.907 [2024-12-06 05:13:55.082063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:26:16.907 [2024-12-06 05:13:55.082131] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:26:16.907 [2024-12-06 05:13:55.082194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:26:16.907 [2024-12-06 05:13:55.082554] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:26:16.907 [2024-12-06 05:13:55.082619] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:26:16.907 [2024-12-06 05:13:55.082697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:26:16.907 [2024-12-06 05:13:55.082881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:26:16.907 [2024-12-06 05:13:55.083355] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:26:16.907 [2024-12-06 05:13:55.083393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:26:16.907 [2024-12-06 05:13:55.083411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:26:16.907 [2024-12-06 05:13:55.083428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:26:16.907 [2024-12-06 05:13:55.083444] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:26:16.907 [2024-12-06 05:13:55.083460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:26:16.907 [2024-12-06 05:13:55.083477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:26:16.907 [2024-12-06 05:13:55.083493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:26:16.907 [2024-12-06 05:13:55.083509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:26:16.907 [2024-12-06 05:13:55.083525] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:26:16.907 [2024-12-06 05:13:55.083541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:26:16.907 [2024-12-06 05:13:55.083557] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:26:16.907 [2024-12-06 05:13:55.083572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:26:16.907 [2024-12-06 05:13:55.083588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:26:16.907 [2024-12-06 05:13:55.083604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:26:16.907 [2024-12-06 05:13:55.083620] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:26:16.907 [2024-12-06 05:13:55.083636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:26:16.907 [2024-12-06 05:13:55.083652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:26:16.907 [2024-12-06 05:13:55.083692] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:26:16.907 [2024-12-06 05:13:55.083710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:26:16.907 [2024-12-06 05:13:55.083725] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:26:16.907 [2024-12-06 05:13:55.083742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:26:16.907 [2024-12-06 05:13:55.083758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:26:16.907 [2024-12-06 05:13:55.083798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:26:16.907 [2024-12-06 05:13:55.083814] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:26:16.907 [2024-12-06 05:13:55.083831] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:26:16.907 [2024-12-06 05:13:55.083847] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:26:16.907 [2024-12-06 05:13:55.083864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:26:16.907 [2024-12-06 05:13:55.083881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:26:16.907 [2024-12-06 05:13:55.083897] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:26:16.907 [2024-12-06 05:13:55.083914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:26:16.908 [2024-12-06 05:13:55.083938] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:26:16.908 [2024-12-06 05:13:55.083953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:26:16.908 [2024-12-06 05:13:55.083970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:26:16.908 [2024-12-06 05:13:55.083986] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:26:16.908 [2024-12-06 05:13:55.084001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:26:16.908 [2024-12-06 05:13:55.084018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:26:16.908 [2024-12-06 05:13:55.084033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:26:16.908 [2024-12-06 05:13:55.084049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:26:16.908 [2024-12-06 05:13:55.084066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:26:16.908 [2024-12-06 05:13:55.084081] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:26:16.908 [2024-12-06 05:13:55.084097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:26:16.908 [2024-12-06 05:13:55.084113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:26:16.908 [2024-12-06 05:13:55.084129] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:26:16.908 [2024-12-06 05:13:55.084145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:26:16.908 [2024-12-06 05:13:55.084161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:26:16.908 [2024-12-06 05:13:55.084177] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:26:16.908 [2024-12-06 05:13:55.084193] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:26:16.908 [2024-12-06 05:13:55.084208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:26:16.908 [2024-12-06 05:13:55.084223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:26:16.908 [2024-12-06 05:13:55.084239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:26:16.908 [2024-12-06 05:13:55.084254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:26:16.908 [2024-12-06 05:13:55.084270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:26:16.908 [2024-12-06 05:13:55.084286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:26:16.908 [2024-12-06 05:13:55.084301] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:26:16.908 [2024-12-06 05:13:55.084317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:26:16.908 [2024-12-06 05:13:55.084332] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:26:16.908 [2024-12-06 05:13:55.084347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:26:16.908 [2024-12-06 05:13:55.084363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:26:16.908 [2024-12-06 05:13:55.084378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:26:16.908 [2024-12-06 05:13:55.084393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:26:16.908 [2024-12-06 05:13:55.084409] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:26:16.908 [2024-12-06 05:13:55.084424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:26:16.908 [2024-12-06 05:13:55.084439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:26:16.908 [2024-12-06 05:13:55.084455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:26:16.908 [2024-12-06 
05:13:55.084470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:26:16.908 [2024-12-06 05:13:55.084485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:26:16.908 [2024-12-06 05:13:55.084500] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:26:16.908 [2024-12-06 05:13:55.084516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:26:16.908 [2024-12-06 05:13:55.084531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:26:16.908 [2024-12-06 05:13:55.084546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:26:16.908 [2024-12-06 05:13:55.084562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:26:16.908 [2024-12-06 05:13:55.084577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:26:16.908 [2024-12-06 05:13:55.084592] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:26:16.908 [2024-12-06 05:13:55.084625] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:26:16.908 [2024-12-06 05:13:55.084643] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 84366e27-4363-4609-9553-2ee9629251d7 00:26:16.908 [2024-12-06 05:13:55.084660] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 262656 00:26:16.908 [2024-12-06 05:13:55.084705] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 160448 00:26:16.908 [2024-12-06 05:13:55.084719] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 158464 00:26:16.908 [2024-12-06 05:13:55.084736] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0125 00:26:16.908 [2024-12-06 05:13:55.084752] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:26:16.908 [2024-12-06 05:13:55.084767] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:26:16.908 [2024-12-06 05:13:55.084784] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:26:16.908 [2024-12-06 05:13:55.084797] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:26:16.908 [2024-12-06 05:13:55.084810] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:26:16.908 [2024-12-06 05:13:55.084829] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:16.908 [2024-12-06 05:13:55.084846] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:26:16.908 [2024-12-06 05:13:55.084871] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.737 ms 00:26:16.908 [2024-12-06 05:13:55.084886] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:16.908 [2024-12-06 05:13:55.087338] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:16.908 [2024-12-06 05:13:55.087381] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:26:16.908 [2024-12-06 05:13:55.087399] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.405 ms 00:26:16.908 [2024-12-06 05:13:55.087413] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:16.908 [2024-12-06 05:13:55.087543] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:16.908 
[2024-12-06 05:13:55.087570] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:26:16.908 [2024-12-06 05:13:55.087593] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.097 ms 00:26:16.908 [2024-12-06 05:13:55.087609] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:16.908 [2024-12-06 05:13:55.094101] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:16.908 [2024-12-06 05:13:55.094132] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:26:16.908 [2024-12-06 05:13:55.094142] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:16.908 [2024-12-06 05:13:55.094156] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:16.908 [2024-12-06 05:13:55.094211] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:16.908 [2024-12-06 05:13:55.094219] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:26:16.908 [2024-12-06 05:13:55.094228] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:16.908 [2024-12-06 05:13:55.094239] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:16.908 [2024-12-06 05:13:55.094277] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:16.908 [2024-12-06 05:13:55.094288] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:26:16.908 [2024-12-06 05:13:55.094296] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:16.908 [2024-12-06 05:13:55.094304] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:16.908 [2024-12-06 05:13:55.094323] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:16.908 [2024-12-06 05:13:55.094331] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:26:16.908 [2024-12-06 05:13:55.094339] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:16.908 [2024-12-06 05:13:55.094346] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:16.908 [2024-12-06 05:13:55.105968] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:16.908 [2024-12-06 05:13:55.106003] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:26:16.908 [2024-12-06 05:13:55.106013] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:16.908 [2024-12-06 05:13:55.106021] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:16.908 [2024-12-06 05:13:55.115519] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:16.908 [2024-12-06 05:13:55.115558] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:26:16.908 [2024-12-06 05:13:55.115570] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:16.908 [2024-12-06 05:13:55.115578] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:16.908 [2024-12-06 05:13:55.115631] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:16.908 [2024-12-06 05:13:55.115640] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:26:16.908 [2024-12-06 05:13:55.115648] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:16.908 [2024-12-06 05:13:55.115656] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:16.908 [2024-12-06 
05:13:55.115696] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:16.908 [2024-12-06 05:13:55.115705] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:26:16.908 [2024-12-06 05:13:55.115718] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:16.908 [2024-12-06 05:13:55.115726] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:16.908 [2024-12-06 05:13:55.115794] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:16.908 [2024-12-06 05:13:55.115807] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:26:16.908 [2024-12-06 05:13:55.115816] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:16.908 [2024-12-06 05:13:55.115824] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:16.909 [2024-12-06 05:13:55.115861] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:16.909 [2024-12-06 05:13:55.115872] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:26:16.909 [2024-12-06 05:13:55.115880] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:16.909 [2024-12-06 05:13:55.115888] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:16.909 [2024-12-06 05:13:55.115937] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:16.909 [2024-12-06 05:13:55.115948] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:26:16.909 [2024-12-06 05:13:55.115958] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:16.909 [2024-12-06 05:13:55.115966] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:16.909 [2024-12-06 05:13:55.116017] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:16.909 [2024-12-06 05:13:55.116027] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:26:16.909 [2024-12-06 05:13:55.116035] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:16.909 [2024-12-06 05:13:55.116043] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:16.909 [2024-12-06 05:13:55.116188] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 71.766 ms, result 0 00:26:17.169 00:26:17.169 00:26:17.169 05:13:55 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@94 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:26:19.730 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:26:19.730 05:13:57 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@95 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --count=262144 --skip=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:26:19.730 [2024-12-06 05:13:57.596602] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:26:19.730 [2024-12-06 05:13:57.596775] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91464 ] 00:26:19.730 [2024-12-06 05:13:57.734965] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:19.730 [2024-12-06 05:13:57.784515] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:26:19.731 [2024-12-06 05:13:57.893509] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:26:19.731 [2024-12-06 05:13:57.893589] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:26:19.994 [2024-12-06 05:13:58.055516] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:19.994 [2024-12-06 05:13:58.055584] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:26:19.994 [2024-12-06 05:13:58.055603] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:26:19.994 [2024-12-06 05:13:58.055612] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:19.994 [2024-12-06 05:13:58.055686] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:19.994 [2024-12-06 05:13:58.055697] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:26:19.994 [2024-12-06 05:13:58.055707] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:26:19.994 [2024-12-06 05:13:58.055723] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:19.994 [2024-12-06 05:13:58.055749] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:26:19.994 [2024-12-06 05:13:58.056044] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:26:19.994 [2024-12-06 05:13:58.056067] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:19.994 [2024-12-06 05:13:58.056076] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:26:19.994 [2024-12-06 05:13:58.056086] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.324 ms 00:26:19.994 [2024-12-06 05:13:58.056094] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:19.994 [2024-12-06 05:13:58.058317] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:26:19.994 [2024-12-06 05:13:58.062229] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:19.994 [2024-12-06 05:13:58.062283] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:26:19.994 [2024-12-06 05:13:58.062295] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.916 ms 00:26:19.994 [2024-12-06 05:13:58.062304] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:19.994 [2024-12-06 05:13:58.062385] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:19.994 [2024-12-06 05:13:58.062400] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:26:19.994 [2024-12-06 05:13:58.062409] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:26:19.994 [2024-12-06 05:13:58.062417] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:19.994 [2024-12-06 05:13:58.070650] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:26:19.994 [2024-12-06 05:13:58.070721] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:26:19.994 [2024-12-06 05:13:58.070733] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.178 ms 00:26:19.994 [2024-12-06 05:13:58.070741] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:19.994 [2024-12-06 05:13:58.070853] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:19.994 [2024-12-06 05:13:58.070864] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:26:19.994 [2024-12-06 05:13:58.070873] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.079 ms 00:26:19.994 [2024-12-06 05:13:58.070887] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:19.994 [2024-12-06 05:13:58.070946] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:19.995 [2024-12-06 05:13:58.070957] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:26:19.995 [2024-12-06 05:13:58.070970] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:26:19.995 [2024-12-06 05:13:58.070977] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:19.995 [2024-12-06 05:13:58.071001] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:26:19.995 [2024-12-06 05:13:58.073169] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:19.995 [2024-12-06 05:13:58.073204] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:26:19.995 [2024-12-06 05:13:58.073214] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.174 ms 00:26:19.995 [2024-12-06 05:13:58.073222] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:19.995 [2024-12-06 05:13:58.073257] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:19.995 [2024-12-06 05:13:58.073266] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:26:19.995 [2024-12-06 05:13:58.073275] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:26:19.995 [2024-12-06 05:13:58.073283] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:19.995 [2024-12-06 05:13:58.073311] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:26:19.995 [2024-12-06 05:13:58.073338] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:26:19.995 [2024-12-06 05:13:58.073381] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:26:19.995 [2024-12-06 05:13:58.073399] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:26:19.995 [2024-12-06 05:13:58.073505] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:26:19.995 [2024-12-06 05:13:58.073516] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:26:19.995 [2024-12-06 05:13:58.073527] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:26:19.995 [2024-12-06 05:13:58.073541] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:26:19.995 [2024-12-06 05:13:58.073554] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:26:19.995 [2024-12-06 05:13:58.073565] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:26:19.995 [2024-12-06 05:13:58.073574] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:26:19.995 [2024-12-06 05:13:58.073584] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:26:19.995 [2024-12-06 05:13:58.073592] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:26:19.995 [2024-12-06 05:13:58.073601] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:19.995 [2024-12-06 05:13:58.073608] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:26:19.995 [2024-12-06 05:13:58.073616] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.298 ms 00:26:19.995 [2024-12-06 05:13:58.073626] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:19.995 [2024-12-06 05:13:58.073748] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:19.995 [2024-12-06 05:13:58.073764] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:26:19.995 [2024-12-06 05:13:58.073775] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.108 ms 00:26:19.995 [2024-12-06 05:13:58.073782] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:19.995 [2024-12-06 05:13:58.073882] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:26:19.995 [2024-12-06 05:13:58.073902] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:26:19.995 [2024-12-06 05:13:58.073912] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:26:19.995 [2024-12-06 05:13:58.073922] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:19.995 [2024-12-06 05:13:58.073932] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:26:19.995 [2024-12-06 05:13:58.073940] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:26:19.995 [2024-12-06 05:13:58.073950] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:26:19.995 [2024-12-06 05:13:58.073958] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:26:19.995 [2024-12-06 05:13:58.073966] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:26:19.995 [2024-12-06 05:13:58.073975] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:26:19.995 [2024-12-06 05:13:58.073986] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:26:19.995 [2024-12-06 05:13:58.073995] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:26:19.995 [2024-12-06 05:13:58.074006] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:26:19.995 [2024-12-06 05:13:58.074014] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:26:19.995 [2024-12-06 05:13:58.074023] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:26:19.995 [2024-12-06 05:13:58.074031] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:19.995 [2024-12-06 05:13:58.074039] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:26:19.995 [2024-12-06 05:13:58.074047] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:26:19.995 [2024-12-06 05:13:58.074055] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:19.995 [2024-12-06 05:13:58.074063] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:26:19.995 [2024-12-06 05:13:58.074072] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:26:19.995 [2024-12-06 05:13:58.074080] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:19.995 [2024-12-06 05:13:58.074088] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:26:19.995 [2024-12-06 05:13:58.074096] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:26:19.995 [2024-12-06 05:13:58.074104] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:19.995 [2024-12-06 05:13:58.074112] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:26:19.995 [2024-12-06 05:13:58.074128] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:26:19.995 [2024-12-06 05:13:58.074136] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:19.995 [2024-12-06 05:13:58.074144] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:26:19.995 [2024-12-06 05:13:58.074152] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:26:19.995 [2024-12-06 05:13:58.074160] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:19.995 [2024-12-06 05:13:58.074168] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:26:19.995 [2024-12-06 05:13:58.074176] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:26:19.995 [2024-12-06 05:13:58.074185] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:26:19.995 [2024-12-06 05:13:58.074192] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:26:19.995 [2024-12-06 05:13:58.074200] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:26:19.995 [2024-12-06 05:13:58.074207] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:26:19.995 [2024-12-06 05:13:58.074215] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:26:19.995 [2024-12-06 05:13:58.074224] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:26:19.995 [2024-12-06 05:13:58.074231] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:19.995 [2024-12-06 05:13:58.074239] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:26:19.995 [2024-12-06 05:13:58.074246] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:26:19.995 [2024-12-06 05:13:58.074256] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:19.995 [2024-12-06 05:13:58.074263] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:26:19.995 [2024-12-06 05:13:58.074275] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:26:19.995 [2024-12-06 05:13:58.074286] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:26:19.995 [2024-12-06 05:13:58.074296] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:19.995 [2024-12-06 05:13:58.074304] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:26:19.995 [2024-12-06 05:13:58.074312] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:26:19.995 [2024-12-06 05:13:58.074319] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:26:19.995 
[2024-12-06 05:13:58.074326] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:26:19.995 [2024-12-06 05:13:58.074332] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:26:19.995 [2024-12-06 05:13:58.074340] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:26:19.996 [2024-12-06 05:13:58.074348] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:26:19.996 [2024-12-06 05:13:58.074359] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:19.996 [2024-12-06 05:13:58.074368] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:26:19.996 [2024-12-06 05:13:58.074375] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:26:19.996 [2024-12-06 05:13:58.074382] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:26:19.996 [2024-12-06 05:13:58.074391] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:26:19.996 [2024-12-06 05:13:58.074399] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:26:19.996 [2024-12-06 05:13:58.074406] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:26:19.996 [2024-12-06 05:13:58.074413] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:26:19.996 [2024-12-06 05:13:58.074420] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:26:19.996 [2024-12-06 05:13:58.074428] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:26:19.996 [2024-12-06 05:13:58.074442] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:26:19.996 [2024-12-06 05:13:58.074449] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:26:19.996 [2024-12-06 05:13:58.074455] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:26:19.996 [2024-12-06 05:13:58.074462] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:26:19.996 [2024-12-06 05:13:58.074469] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:26:19.996 [2024-12-06 05:13:58.074476] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:26:19.996 [2024-12-06 05:13:58.074484] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:19.996 [2024-12-06 05:13:58.074493] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:26:19.996 [2024-12-06 05:13:58.074501] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:26:19.996 [2024-12-06 05:13:58.074507] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:26:19.996 [2024-12-06 05:13:58.074517] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:26:19.996 [2024-12-06 05:13:58.074524] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:19.996 [2024-12-06 05:13:58.074536] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:26:19.996 [2024-12-06 05:13:58.074544] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.711 ms 00:26:19.996 [2024-12-06 05:13:58.074552] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:19.996 [2024-12-06 05:13:58.103014] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:19.996 [2024-12-06 05:13:58.103111] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:26:19.996 [2024-12-06 05:13:58.103138] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.405 ms 00:26:19.996 [2024-12-06 05:13:58.103157] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:19.996 [2024-12-06 05:13:58.103355] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:19.996 [2024-12-06 05:13:58.103376] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:26:19.996 [2024-12-06 05:13:58.103396] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.136 ms 00:26:19.996 [2024-12-06 05:13:58.103412] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:19.996 [2024-12-06 05:13:58.117741] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:19.996 [2024-12-06 05:13:58.117787] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:26:19.996 [2024-12-06 05:13:58.117798] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.206 ms 00:26:19.996 [2024-12-06 05:13:58.117812] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:19.996 [2024-12-06 05:13:58.117850] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:19.996 [2024-12-06 05:13:58.117858] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:26:19.996 [2024-12-06 05:13:58.117868] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:26:19.996 [2024-12-06 05:13:58.117879] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:19.996 [2024-12-06 05:13:58.118472] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:19.996 [2024-12-06 05:13:58.118521] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:26:19.996 [2024-12-06 05:13:58.118533] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.539 ms 00:26:19.996 [2024-12-06 05:13:58.118542] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:19.996 [2024-12-06 05:13:58.118724] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:19.996 [2024-12-06 05:13:58.118737] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:26:19.996 [2024-12-06 05:13:58.118747] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.159 ms 00:26:19.996 [2024-12-06 05:13:58.118756] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:19.996 [2024-12-06 05:13:58.126348] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:19.996 [2024-12-06 05:13:58.126395] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:26:19.996 [2024-12-06 05:13:58.126412] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.567 ms 00:26:19.996 [2024-12-06 05:13:58.126425] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:19.996 [2024-12-06 05:13:58.130486] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:26:19.996 [2024-12-06 05:13:58.130539] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:26:19.996 [2024-12-06 05:13:58.130557] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:19.996 [2024-12-06 05:13:58.130564] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:26:19.996 [2024-12-06 05:13:58.130573] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.031 ms 00:26:19.996 [2024-12-06 05:13:58.130580] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:19.996 [2024-12-06 05:13:58.146695] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:19.996 [2024-12-06 05:13:58.146741] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:26:19.996 [2024-12-06 05:13:58.146756] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.053 ms 00:26:19.996 [2024-12-06 05:13:58.146770] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:19.996 [2024-12-06 05:13:58.149644] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:19.996 [2024-12-06 05:13:58.149719] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:26:19.996 [2024-12-06 05:13:58.149730] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.818 ms 00:26:19.996 [2024-12-06 05:13:58.149737] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:19.996 [2024-12-06 05:13:58.152597] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:19.996 [2024-12-06 05:13:58.152658] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:26:19.996 [2024-12-06 05:13:58.152688] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.810 ms 00:26:19.996 [2024-12-06 05:13:58.152697] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:19.996 [2024-12-06 05:13:58.153060] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:19.996 [2024-12-06 05:13:58.153096] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:26:19.996 [2024-12-06 05:13:58.153106] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.269 ms 00:26:19.996 [2024-12-06 05:13:58.153115] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:19.996 [2024-12-06 05:13:58.176390] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:19.996 [2024-12-06 05:13:58.176460] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:26:19.996 [2024-12-06 05:13:58.176474] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
23.255 ms 00:26:19.996 [2024-12-06 05:13:58.176482] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:19.996 [2024-12-06 05:13:58.184625] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:26:19.996 [2024-12-06 05:13:58.187575] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:19.997 [2024-12-06 05:13:58.187616] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:26:19.997 [2024-12-06 05:13:58.187634] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.043 ms 00:26:19.997 [2024-12-06 05:13:58.187650] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:19.997 [2024-12-06 05:13:58.187746] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:19.997 [2024-12-06 05:13:58.187758] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:26:19.997 [2024-12-06 05:13:58.187768] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:26:19.997 [2024-12-06 05:13:58.187776] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:19.997 [2024-12-06 05:13:58.188514] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:19.997 [2024-12-06 05:13:58.188559] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:26:19.997 [2024-12-06 05:13:58.188570] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.702 ms 00:26:19.997 [2024-12-06 05:13:58.188581] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:19.997 [2024-12-06 05:13:58.188620] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:19.997 [2024-12-06 05:13:58.188629] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:26:19.997 [2024-12-06 05:13:58.188638] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:26:19.997 [2024-12-06 05:13:58.188646] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:19.997 [2024-12-06 05:13:58.188699] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:26:19.997 [2024-12-06 05:13:58.188710] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:19.997 [2024-12-06 05:13:58.188719] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:26:19.997 [2024-12-06 05:13:58.188728] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:26:19.997 [2024-12-06 05:13:58.188740] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:19.997 [2024-12-06 05:13:58.194077] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:19.997 [2024-12-06 05:13:58.194120] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:26:19.997 [2024-12-06 05:13:58.194131] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.315 ms 00:26:19.997 [2024-12-06 05:13:58.194140] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:19.997 [2024-12-06 05:13:58.194222] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:19.997 [2024-12-06 05:13:58.194232] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:26:19.997 [2024-12-06 05:13:58.194241] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:26:19.997 [2024-12-06 05:13:58.194250] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:19.997 
[2024-12-06 05:13:58.195365] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 139.401 ms, result 0 00:26:21.387  [2024-12-06T05:14:00.564Z] Copying: 11/1024 [MB] (11 MBps) [2024-12-06T05:14:01.516Z] Copying: 22/1024 [MB] (11 MBps) [2024-12-06T05:14:02.498Z] Copying: 49/1024 [MB] (26 MBps) [2024-12-06T05:14:03.479Z] Copying: 68/1024 [MB] (19 MBps) [2024-12-06T05:14:04.424Z] Copying: 83/1024 [MB] (15 MBps) [2024-12-06T05:14:05.810Z] Copying: 94/1024 [MB] (10 MBps) [2024-12-06T05:14:06.382Z] Copying: 105/1024 [MB] (11 MBps) [2024-12-06T05:14:07.767Z] Copying: 120/1024 [MB] (14 MBps) [2024-12-06T05:14:08.710Z] Copying: 134/1024 [MB] (14 MBps) [2024-12-06T05:14:09.656Z] Copying: 148/1024 [MB] (13 MBps) [2024-12-06T05:14:10.599Z] Copying: 162/1024 [MB] (13 MBps) [2024-12-06T05:14:11.539Z] Copying: 183/1024 [MB] (21 MBps) [2024-12-06T05:14:12.482Z] Copying: 199/1024 [MB] (16 MBps) [2024-12-06T05:14:13.423Z] Copying: 217/1024 [MB] (17 MBps) [2024-12-06T05:14:14.807Z] Copying: 241/1024 [MB] (24 MBps) [2024-12-06T05:14:15.380Z] Copying: 272/1024 [MB] (31 MBps) [2024-12-06T05:14:16.767Z] Copying: 292/1024 [MB] (20 MBps) [2024-12-06T05:14:17.714Z] Copying: 308/1024 [MB] (15 MBps) [2024-12-06T05:14:18.660Z] Copying: 323/1024 [MB] (15 MBps) [2024-12-06T05:14:19.605Z] Copying: 346/1024 [MB] (23 MBps) [2024-12-06T05:14:20.549Z] Copying: 365/1024 [MB] (18 MBps) [2024-12-06T05:14:21.494Z] Copying: 378/1024 [MB] (13 MBps) [2024-12-06T05:14:22.441Z] Copying: 389/1024 [MB] (10 MBps) [2024-12-06T05:14:23.386Z] Copying: 400/1024 [MB] (11 MBps) [2024-12-06T05:14:24.777Z] Copying: 411/1024 [MB] (10 MBps) [2024-12-06T05:14:25.721Z] Copying: 422/1024 [MB] (10 MBps) [2024-12-06T05:14:26.664Z] Copying: 432/1024 [MB] (10 MBps) [2024-12-06T05:14:27.607Z] Copying: 443/1024 [MB] (10 MBps) [2024-12-06T05:14:28.553Z] Copying: 454/1024 [MB] (10 MBps) [2024-12-06T05:14:29.501Z] Copying: 465/1024 [MB] (11 MBps) [2024-12-06T05:14:30.448Z] Copying: 476/1024 [MB] (10 MBps) [2024-12-06T05:14:31.393Z] Copying: 486/1024 [MB] (10 MBps) [2024-12-06T05:14:32.781Z] Copying: 497/1024 [MB] (10 MBps) [2024-12-06T05:14:33.723Z] Copying: 508/1024 [MB] (10 MBps) [2024-12-06T05:14:34.740Z] Copying: 532/1024 [MB] (23 MBps) [2024-12-06T05:14:35.697Z] Copying: 543/1024 [MB] (10 MBps) [2024-12-06T05:14:36.638Z] Copying: 553/1024 [MB] (10 MBps) [2024-12-06T05:14:37.581Z] Copying: 564/1024 [MB] (10 MBps) [2024-12-06T05:14:38.558Z] Copying: 575/1024 [MB] (10 MBps) [2024-12-06T05:14:39.498Z] Copying: 594/1024 [MB] (19 MBps) [2024-12-06T05:14:40.439Z] Copying: 613/1024 [MB] (19 MBps) [2024-12-06T05:14:41.380Z] Copying: 627/1024 [MB] (13 MBps) [2024-12-06T05:14:42.761Z] Copying: 643/1024 [MB] (16 MBps) [2024-12-06T05:14:43.726Z] Copying: 663/1024 [MB] (19 MBps) [2024-12-06T05:14:44.672Z] Copying: 684/1024 [MB] (21 MBps) [2024-12-06T05:14:45.616Z] Copying: 697/1024 [MB] (12 MBps) [2024-12-06T05:14:46.558Z] Copying: 715/1024 [MB] (17 MBps) [2024-12-06T05:14:47.505Z] Copying: 734/1024 [MB] (18 MBps) [2024-12-06T05:14:48.453Z] Copying: 749/1024 [MB] (15 MBps) [2024-12-06T05:14:49.399Z] Copying: 759/1024 [MB] (10 MBps) [2024-12-06T05:14:50.789Z] Copying: 769/1024 [MB] (10 MBps) [2024-12-06T05:14:51.735Z] Copying: 780/1024 [MB] (10 MBps) [2024-12-06T05:14:52.680Z] Copying: 791/1024 [MB] (11 MBps) [2024-12-06T05:14:53.625Z] Copying: 801/1024 [MB] (10 MBps) [2024-12-06T05:14:54.570Z] Copying: 812/1024 [MB] (10 MBps) [2024-12-06T05:14:55.515Z] Copying: 823/1024 [MB] (11 MBps) 
[2024-12-06T05:14:56.457Z] Copying: 835/1024 [MB] (11 MBps) [2024-12-06T05:14:57.402Z] Copying: 846/1024 [MB] (11 MBps) [2024-12-06T05:14:58.788Z] Copying: 859/1024 [MB] (13 MBps) [2024-12-06T05:14:59.731Z] Copying: 875/1024 [MB] (15 MBps) [2024-12-06T05:15:00.672Z] Copying: 893/1024 [MB] (17 MBps) [2024-12-06T05:15:01.618Z] Copying: 914/1024 [MB] (20 MBps) [2024-12-06T05:15:02.565Z] Copying: 935/1024 [MB] (21 MBps) [2024-12-06T05:15:03.511Z] Copying: 955/1024 [MB] (19 MBps) [2024-12-06T05:15:04.458Z] Copying: 969/1024 [MB] (13 MBps) [2024-12-06T05:15:05.402Z] Copying: 982/1024 [MB] (13 MBps) [2024-12-06T05:15:06.398Z] Copying: 1000/1024 [MB] (17 MBps) [2024-12-06T05:15:06.970Z] Copying: 1014/1024 [MB] (13 MBps) [2024-12-06T05:15:06.970Z] Copying: 1024/1024 [MB] (average 14 MBps)[2024-12-06 05:15:06.879816] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:28.738 [2024-12-06 05:15:06.879913] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:27:28.738 [2024-12-06 05:15:06.879934] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:27:28.738 [2024-12-06 05:15:06.879946] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:28.738 [2024-12-06 05:15:06.879982] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:27:28.738 [2024-12-06 05:15:06.880946] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:28.738 [2024-12-06 05:15:06.880997] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:27:28.738 [2024-12-06 05:15:06.881012] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.944 ms 00:27:28.738 [2024-12-06 05:15:06.881024] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:28.738 [2024-12-06 05:15:06.881302] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:28.738 [2024-12-06 05:15:06.881318] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:27:28.738 [2024-12-06 05:15:06.881329] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.251 ms 00:27:28.738 [2024-12-06 05:15:06.881339] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:28.738 [2024-12-06 05:15:06.885645] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:28.738 [2024-12-06 05:15:06.885713] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:27:28.738 [2024-12-06 05:15:06.885731] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.282 ms 00:27:28.738 [2024-12-06 05:15:06.885741] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:28.738 [2024-12-06 05:15:06.893541] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:28.738 [2024-12-06 05:15:06.893592] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:27:28.738 [2024-12-06 05:15:06.893604] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.778 ms 00:27:28.738 [2024-12-06 05:15:06.893612] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:28.738 [2024-12-06 05:15:06.896053] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:28.738 [2024-12-06 05:15:06.896109] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:27:28.738 [2024-12-06 05:15:06.896121] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.342 ms 00:27:28.738 [2024-12-06 
05:15:06.896130] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:28.738 [2024-12-06 05:15:06.901132] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:28.738 [2024-12-06 05:15:06.901184] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:27:28.738 [2024-12-06 05:15:06.901197] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.955 ms 00:27:28.738 [2024-12-06 05:15:06.901207] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:28.738 [2024-12-06 05:15:06.906345] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:28.738 [2024-12-06 05:15:06.906395] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:27:28.738 [2024-12-06 05:15:06.906407] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.085 ms 00:27:28.738 [2024-12-06 05:15:06.906416] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:28.738 [2024-12-06 05:15:06.910036] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:28.738 [2024-12-06 05:15:06.910084] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:27:28.738 [2024-12-06 05:15:06.910095] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.602 ms 00:27:28.738 [2024-12-06 05:15:06.910102] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:28.738 [2024-12-06 05:15:06.913043] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:28.738 [2024-12-06 05:15:06.913088] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:27:28.738 [2024-12-06 05:15:06.913100] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.898 ms 00:27:28.738 [2024-12-06 05:15:06.913107] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:28.738 [2024-12-06 05:15:06.915626] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:28.738 [2024-12-06 05:15:06.915687] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:27:28.738 [2024-12-06 05:15:06.915698] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.477 ms 00:27:28.738 [2024-12-06 05:15:06.915706] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:28.738 [2024-12-06 05:15:06.918085] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:28.738 [2024-12-06 05:15:06.918133] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:27:28.739 [2024-12-06 05:15:06.918142] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.305 ms 00:27:28.739 [2024-12-06 05:15:06.918150] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:28.739 [2024-12-06 05:15:06.918190] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:27:28.739 [2024-12-06 05:15:06.918215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:27:28.739 [2024-12-06 05:15:06.918227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 1536 / 261120 wr_cnt: 1 state: open 00:27:28.739 [2024-12-06 05:15:06.918236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:27:28.739 [2024-12-06 05:15:06.918244] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:27:28.739 [2024-12-06 05:15:06.918252] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:27:28.739 [2024-12-06 05:15:06.918260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:27:28.739 [2024-12-06 05:15:06.918267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:27:28.739 [2024-12-06 05:15:06.918275] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:27:28.739 [2024-12-06 05:15:06.918283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:27:28.739 [2024-12-06 05:15:06.918292] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:27:28.739 [2024-12-06 05:15:06.918301] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:27:28.739 [2024-12-06 05:15:06.918314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:27:28.739 [2024-12-06 05:15:06.918322] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:27:28.739 [2024-12-06 05:15:06.918330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:27:28.739 [2024-12-06 05:15:06.918338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:27:28.739 [2024-12-06 05:15:06.918345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:27:28.739 [2024-12-06 05:15:06.918352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:27:28.739 [2024-12-06 05:15:06.918359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:27:28.739 [2024-12-06 05:15:06.918366] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:27:28.739 [2024-12-06 05:15:06.918374] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:27:28.739 [2024-12-06 05:15:06.918382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:27:28.739 [2024-12-06 05:15:06.918389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:27:28.739 [2024-12-06 05:15:06.918397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:27:28.739 [2024-12-06 05:15:06.918404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:27:28.739 [2024-12-06 05:15:06.918412] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:27:28.739 [2024-12-06 05:15:06.918419] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:27:28.739 [2024-12-06 05:15:06.918427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:27:28.739 [2024-12-06 05:15:06.918434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:27:28.739 [2024-12-06 05:15:06.918442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:27:28.739 [2024-12-06 
05:15:06.918452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:27:28.739 [2024-12-06 05:15:06.918461] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:27:28.739 [2024-12-06 05:15:06.918469] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:27:28.739 [2024-12-06 05:15:06.918477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:27:28.739 [2024-12-06 05:15:06.918484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:27:28.739 [2024-12-06 05:15:06.918492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:27:28.739 [2024-12-06 05:15:06.918500] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:27:28.739 [2024-12-06 05:15:06.918508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:27:28.739 [2024-12-06 05:15:06.918516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:27:28.739 [2024-12-06 05:15:06.918523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:27:28.739 [2024-12-06 05:15:06.918531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:27:28.739 [2024-12-06 05:15:06.918538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:27:28.739 [2024-12-06 05:15:06.918546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:27:28.739 [2024-12-06 05:15:06.918553] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:27:28.739 [2024-12-06 05:15:06.918561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:27:28.739 [2024-12-06 05:15:06.918569] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:27:28.739 [2024-12-06 05:15:06.918577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:27:28.739 [2024-12-06 05:15:06.918584] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:27:28.739 [2024-12-06 05:15:06.918592] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:27:28.739 [2024-12-06 05:15:06.918612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:27:28.739 [2024-12-06 05:15:06.918621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:27:28.739 [2024-12-06 05:15:06.918629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:27:28.739 [2024-12-06 05:15:06.918637] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:27:28.739 [2024-12-06 05:15:06.918644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:27:28.739 [2024-12-06 05:15:06.918652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 
00:27:28.739 [2024-12-06 05:15:06.918659] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:27:28.739 [2024-12-06 05:15:06.918691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:27:28.739 [2024-12-06 05:15:06.918699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:27:28.739 [2024-12-06 05:15:06.918707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:27:28.739 [2024-12-06 05:15:06.918715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:27:28.739 [2024-12-06 05:15:06.918723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:27:28.739 [2024-12-06 05:15:06.918731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:27:28.739 [2024-12-06 05:15:06.918739] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:27:28.739 [2024-12-06 05:15:06.918751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:27:28.739 [2024-12-06 05:15:06.918760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:27:28.739 [2024-12-06 05:15:06.918767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:27:28.739 [2024-12-06 05:15:06.918776] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:27:28.740 [2024-12-06 05:15:06.918785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:27:28.740 [2024-12-06 05:15:06.918795] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:27:28.740 [2024-12-06 05:15:06.918802] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:27:28.740 [2024-12-06 05:15:06.918810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:27:28.740 [2024-12-06 05:15:06.918819] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:27:28.740 [2024-12-06 05:15:06.918827] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:27:28.740 [2024-12-06 05:15:06.918835] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:27:28.740 [2024-12-06 05:15:06.918843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:27:28.740 [2024-12-06 05:15:06.918851] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:27:28.740 [2024-12-06 05:15:06.918859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:27:28.740 [2024-12-06 05:15:06.918866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:27:28.740 [2024-12-06 05:15:06.918873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:27:28.740 [2024-12-06 05:15:06.918881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 
wr_cnt: 0 state: free 00:27:28.740 [2024-12-06 05:15:06.918889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:27:28.740 [2024-12-06 05:15:06.918897] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:27:28.740 [2024-12-06 05:15:06.918905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:27:28.740 [2024-12-06 05:15:06.918912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:27:28.740 [2024-12-06 05:15:06.918920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:27:28.740 [2024-12-06 05:15:06.918928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:27:28.740 [2024-12-06 05:15:06.918937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:27:28.740 [2024-12-06 05:15:06.918945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:27:28.740 [2024-12-06 05:15:06.918952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:27:28.740 [2024-12-06 05:15:06.918961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:27:28.740 [2024-12-06 05:15:06.918969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:27:28.740 [2024-12-06 05:15:06.918977] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:27:28.740 [2024-12-06 05:15:06.918985] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:27:28.740 [2024-12-06 05:15:06.918993] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:27:28.740 [2024-12-06 05:15:06.919001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:27:28.740 [2024-12-06 05:15:06.919013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:27:28.740 [2024-12-06 05:15:06.919021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:27:28.740 [2024-12-06 05:15:06.919030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:27:28.740 [2024-12-06 05:15:06.919038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:27:28.740 [2024-12-06 05:15:06.919045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:27:28.740 [2024-12-06 05:15:06.919053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:27:28.740 [2024-12-06 05:15:06.919071] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:27:28.740 [2024-12-06 05:15:06.919086] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 84366e27-4363-4609-9553-2ee9629251d7 00:27:28.740 [2024-12-06 05:15:06.919095] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 262656 00:27:28.740 [2024-12-06 05:15:06.919104] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:27:28.740 [2024-12-06 
05:15:06.919113] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:27:28.740 [2024-12-06 05:15:06.919122] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:27:28.740 [2024-12-06 05:15:06.919130] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:27:28.740 [2024-12-06 05:15:06.919139] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:27:28.740 [2024-12-06 05:15:06.919147] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:27:28.740 [2024-12-06 05:15:06.919155] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:27:28.740 [2024-12-06 05:15:06.919162] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:27:28.740 [2024-12-06 05:15:06.919171] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:28.740 [2024-12-06 05:15:06.919182] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:27:28.740 [2024-12-06 05:15:06.919200] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.982 ms 00:27:28.740 [2024-12-06 05:15:06.919208] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:28.740 [2024-12-06 05:15:06.922216] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:28.740 [2024-12-06 05:15:06.922255] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:27:28.740 [2024-12-06 05:15:06.922266] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.987 ms 00:27:28.740 [2024-12-06 05:15:06.922275] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:28.740 [2024-12-06 05:15:06.922430] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:28.740 [2024-12-06 05:15:06.922442] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:27:28.740 [2024-12-06 05:15:06.922451] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.128 ms 00:27:28.740 [2024-12-06 05:15:06.922458] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:28.740 [2024-12-06 05:15:06.931090] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:28.740 [2024-12-06 05:15:06.931141] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:27:28.740 [2024-12-06 05:15:06.931152] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:28.740 [2024-12-06 05:15:06.931168] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:28.740 [2024-12-06 05:15:06.931241] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:28.740 [2024-12-06 05:15:06.931251] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:27:28.740 [2024-12-06 05:15:06.931259] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:28.740 [2024-12-06 05:15:06.931268] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:28.740 [2024-12-06 05:15:06.931334] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:28.740 [2024-12-06 05:15:06.931346] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:27:28.740 [2024-12-06 05:15:06.931356] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:28.740 [2024-12-06 05:15:06.931364] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:28.740 [2024-12-06 05:15:06.931381] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Rollback 00:27:28.740 [2024-12-06 05:15:06.931399] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:27:28.740 [2024-12-06 05:15:06.931407] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:28.740 [2024-12-06 05:15:06.931415] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:28.740 [2024-12-06 05:15:06.948993] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:28.740 [2024-12-06 05:15:06.949054] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:27:28.740 [2024-12-06 05:15:06.949066] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:28.740 [2024-12-06 05:15:06.949076] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:28.740 [2024-12-06 05:15:06.962475] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:28.741 [2024-12-06 05:15:06.962536] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:27:28.741 [2024-12-06 05:15:06.962549] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:28.741 [2024-12-06 05:15:06.962559] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:28.741 [2024-12-06 05:15:06.962617] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:28.741 [2024-12-06 05:15:06.962627] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:27:28.741 [2024-12-06 05:15:06.962636] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:28.741 [2024-12-06 05:15:06.962650] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:28.741 [2024-12-06 05:15:06.962707] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:28.741 [2024-12-06 05:15:06.962719] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:27:28.741 [2024-12-06 05:15:06.962731] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:28.741 [2024-12-06 05:15:06.962740] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:28.741 [2024-12-06 05:15:06.962816] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:28.741 [2024-12-06 05:15:06.962829] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:27:28.741 [2024-12-06 05:15:06.962839] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:28.741 [2024-12-06 05:15:06.962848] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:28.741 [2024-12-06 05:15:06.962881] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:28.741 [2024-12-06 05:15:06.962892] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:27:28.741 [2024-12-06 05:15:06.962903] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:28.741 [2024-12-06 05:15:06.962917] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:28.741 [2024-12-06 05:15:06.962965] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:28.741 [2024-12-06 05:15:06.962977] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:27:28.741 [2024-12-06 05:15:06.962988] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:28.741 [2024-12-06 05:15:06.962996] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:28.741 
[2024-12-06 05:15:06.963051] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:28.741 [2024-12-06 05:15:06.963063] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:27:28.741 [2024-12-06 05:15:06.963077] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:28.741 [2024-12-06 05:15:06.963088] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:28.741 [2024-12-06 05:15:06.963239] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 83.391 ms, result 0 00:27:29.001 00:27:29.001 00:27:29.001 05:15:07 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@96 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:27:31.547 /home/vagrant/spdk_repo/spdk/test/ftl/testfile2: OK 00:27:31.547 05:15:09 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@98 -- # trap - SIGINT SIGTERM EXIT 00:27:31.547 05:15:09 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@99 -- # restore_kill 00:27:31.547 05:15:09 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@31 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:27:31.547 05:15:09 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@32 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:27:31.547 05:15:09 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@33 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:27:31.547 05:15:09 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@34 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:27:31.547 05:15:09 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@35 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:27:31.547 Process with pid 89408 is not found 00:27:31.547 05:15:09 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@37 -- # killprocess 89408 00:27:31.547 05:15:09 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@950 -- # '[' -z 89408 ']' 00:27:31.547 05:15:09 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@954 -- # kill -0 89408 00:27:31.547 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (89408) - No such process 00:27:31.547 05:15:09 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@977 -- # echo 'Process with pid 89408 is not found' 00:27:31.547 05:15:09 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@38 -- # rmmod nbd 00:27:31.808 Remove shared memory files 00:27:31.808 05:15:09 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@39 -- # remove_shm 00:27:31.808 05:15:09 ftl.ftl_dirty_shutdown -- ftl/common.sh@204 -- # echo Remove shared memory files 00:27:31.808 05:15:09 ftl.ftl_dirty_shutdown -- ftl/common.sh@205 -- # rm -f rm -f 00:27:31.808 05:15:09 ftl.ftl_dirty_shutdown -- ftl/common.sh@206 -- # rm -f rm -f 00:27:31.808 05:15:09 ftl.ftl_dirty_shutdown -- ftl/common.sh@207 -- # rm -f rm -f 00:27:31.808 05:15:09 ftl.ftl_dirty_shutdown -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:27:31.808 05:15:09 ftl.ftl_dirty_shutdown -- ftl/common.sh@209 -- # rm -f rm -f 00:27:31.808 00:27:31.808 real 4m26.231s 00:27:31.808 user 4m52.271s 00:27:31.808 sys 0m26.833s 00:27:31.808 05:15:09 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1126 -- # xtrace_disable 00:27:31.808 ************************************ 00:27:31.808 END TEST ftl_dirty_shutdown 00:27:31.808 ************************************ 00:27:31.808 05:15:09 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@10 -- # set +x 00:27:31.808 05:15:09 ftl -- ftl/ftl.sh@78 -- # run_test ftl_upgrade_shutdown 
/home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0 00:27:31.808 05:15:09 ftl -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:27:31.808 05:15:09 ftl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:27:31.808 05:15:09 ftl -- common/autotest_common.sh@10 -- # set +x 00:27:31.808 ************************************ 00:27:31.808 START TEST ftl_upgrade_shutdown 00:27:31.808 ************************************ 00:27:31.808 05:15:09 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0 00:27:32.069 * Looking for test storage... 00:27:32.069 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:27:32.069 05:15:10 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:27:32.069 05:15:10 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1681 -- # lcov --version 00:27:32.069 05:15:10 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:27:32.069 05:15:10 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:27:32.069 05:15:10 ftl.ftl_upgrade_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:32.069 05:15:10 ftl.ftl_upgrade_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:32.069 05:15:10 ftl.ftl_upgrade_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:32.069 05:15:10 ftl.ftl_upgrade_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:27:32.069 05:15:10 ftl.ftl_upgrade_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:27:32.069 05:15:10 ftl.ftl_upgrade_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:27:32.069 05:15:10 ftl.ftl_upgrade_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:27:32.069 05:15:10 ftl.ftl_upgrade_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:27:32.069 05:15:10 ftl.ftl_upgrade_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:27:32.069 05:15:10 ftl.ftl_upgrade_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:27:32.069 05:15:10 ftl.ftl_upgrade_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:32.069 05:15:10 ftl.ftl_upgrade_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:27:32.069 05:15:10 ftl.ftl_upgrade_shutdown -- scripts/common.sh@345 -- # : 1 00:27:32.069 05:15:10 ftl.ftl_upgrade_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:32.069 05:15:10 ftl.ftl_upgrade_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:27:32.069 05:15:10 ftl.ftl_upgrade_shutdown -- scripts/common.sh@365 -- # decimal 1 00:27:32.069 05:15:10 ftl.ftl_upgrade_shutdown -- scripts/common.sh@353 -- # local d=1 00:27:32.069 05:15:10 ftl.ftl_upgrade_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:32.069 05:15:10 ftl.ftl_upgrade_shutdown -- scripts/common.sh@355 -- # echo 1 00:27:32.069 05:15:10 ftl.ftl_upgrade_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:27:32.069 05:15:10 ftl.ftl_upgrade_shutdown -- scripts/common.sh@366 -- # decimal 2 00:27:32.069 05:15:10 ftl.ftl_upgrade_shutdown -- scripts/common.sh@353 -- # local d=2 00:27:32.069 05:15:10 ftl.ftl_upgrade_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:32.069 05:15:10 ftl.ftl_upgrade_shutdown -- scripts/common.sh@355 -- # echo 2 00:27:32.069 05:15:10 ftl.ftl_upgrade_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:27:32.069 05:15:10 ftl.ftl_upgrade_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:32.069 05:15:10 ftl.ftl_upgrade_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:32.069 05:15:10 ftl.ftl_upgrade_shutdown -- scripts/common.sh@368 -- # return 0 00:27:32.069 05:15:10 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:32.069 05:15:10 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:27:32.069 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:32.069 --rc genhtml_branch_coverage=1 00:27:32.069 --rc genhtml_function_coverage=1 00:27:32.069 --rc genhtml_legend=1 00:27:32.069 --rc geninfo_all_blocks=1 00:27:32.069 --rc geninfo_unexecuted_blocks=1 00:27:32.069 00:27:32.069 ' 00:27:32.069 05:15:10 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:27:32.069 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:32.069 --rc genhtml_branch_coverage=1 00:27:32.069 --rc genhtml_function_coverage=1 00:27:32.069 --rc genhtml_legend=1 00:27:32.069 --rc geninfo_all_blocks=1 00:27:32.069 --rc geninfo_unexecuted_blocks=1 00:27:32.069 00:27:32.069 ' 00:27:32.069 05:15:10 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:27:32.069 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:32.069 --rc genhtml_branch_coverage=1 00:27:32.069 --rc genhtml_function_coverage=1 00:27:32.069 --rc genhtml_legend=1 00:27:32.069 --rc geninfo_all_blocks=1 00:27:32.069 --rc geninfo_unexecuted_blocks=1 00:27:32.069 00:27:32.069 ' 00:27:32.069 05:15:10 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:27:32.069 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:32.069 --rc genhtml_branch_coverage=1 00:27:32.069 --rc genhtml_function_coverage=1 00:27:32.069 --rc genhtml_legend=1 00:27:32.069 --rc geninfo_all_blocks=1 00:27:32.069 --rc geninfo_unexecuted_blocks=1 00:27:32.069 00:27:32.069 ' 00:27:32.069 05:15:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:27:32.069 05:15:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 00:27:32.069 05:15:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:27:32.069 05:15:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:27:32.069 05:15:10 ftl.ftl_upgrade_shutdown -- 
ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:27:32.069 05:15:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:27:32.069 05:15:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:27:32.069 05:15:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:27:32.069 05:15:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:27:32.069 05:15:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:27:32.069 05:15:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:27:32.069 05:15:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:27:32.069 05:15:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:27:32.069 05:15:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:27:32.070 05:15:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:27:32.070 05:15:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:27:32.070 05:15:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:27:32.070 05:15:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:27:32.070 05:15:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:27:32.070 05:15:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:27:32.070 05:15:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:27:32.070 05:15:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:27:32.070 05:15:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:27:32.070 05:15:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:27:32.070 05:15:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:27:32.070 05:15:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:27:32.070 05:15:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@23 -- # spdk_ini_pid= 00:27:32.070 05:15:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:32.070 05:15:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:32.070 05:15:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@17 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:27:32.070 05:15:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@19 -- # export FTL_BDEV=ftl 00:27:32.070 05:15:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@19 -- # FTL_BDEV=ftl 00:27:32.070 05:15:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@20 -- # export FTL_BASE=0000:00:11.0 00:27:32.070 05:15:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@20 -- # FTL_BASE=0000:00:11.0 00:27:32.070 05:15:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@21 -- # export FTL_BASE_SIZE=20480 00:27:32.070 05:15:10 
ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@21 -- # FTL_BASE_SIZE=20480 00:27:32.070 05:15:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@22 -- # export FTL_CACHE=0000:00:10.0 00:27:32.070 05:15:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@22 -- # FTL_CACHE=0000:00:10.0 00:27:32.070 05:15:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@23 -- # export FTL_CACHE_SIZE=5120 00:27:32.070 05:15:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@23 -- # FTL_CACHE_SIZE=5120 00:27:32.070 05:15:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@24 -- # export FTL_L2P_DRAM_LIMIT=2 00:27:32.070 05:15:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@24 -- # FTL_L2P_DRAM_LIMIT=2 00:27:32.070 05:15:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@26 -- # tcp_target_setup 00:27:32.070 05:15:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:27:32.070 05:15:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:27:32.070 05:15:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:27:32.070 05:15:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=92278 00:27:32.070 05:15:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' 00:27:32.070 05:15:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:27:32.070 05:15:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 92278 00:27:32.070 05:15:10 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@831 -- # '[' -z 92278 ']' 00:27:32.070 05:15:10 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:32.070 05:15:10 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:32.070 05:15:10 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:32.070 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:32.070 05:15:10 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:32.070 05:15:10 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:27:32.070 [2024-12-06 05:15:10.250130] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:27:32.070 [2024-12-06 05:15:10.250544] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92278 ] 00:27:32.330 [2024-12-06 05:15:10.388119] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:32.331 [2024-12-06 05:15:10.437822] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:27:32.902 05:15:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:32.902 05:15:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # return 0 00:27:32.902 05:15:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:27:32.902 05:15:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@99 -- # params=('FTL_BDEV' 'FTL_BASE' 'FTL_BASE_SIZE' 'FTL_CACHE' 'FTL_CACHE_SIZE' 'FTL_L2P_DRAM_LIMIT') 00:27:32.902 05:15:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@99 -- # local params 00:27:32.902 05:15:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:27:32.902 05:15:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z ftl ]] 00:27:32.902 05:15:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:27:32.902 05:15:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 0000:00:11.0 ]] 00:27:32.902 05:15:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:27:32.902 05:15:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 20480 ]] 00:27:32.902 05:15:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:27:32.902 05:15:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 0000:00:10.0 ]] 00:27:32.902 05:15:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:27:32.902 05:15:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 5120 ]] 00:27:32.902 05:15:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:27:32.902 05:15:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 2 ]] 00:27:32.902 05:15:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@107 -- # create_base_bdev base 0000:00:11.0 20480 00:27:32.902 05:15:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@54 -- # local name=base 00:27:32.902 05:15:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:27:32.902 05:15:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@56 -- # local size=20480 00:27:32.902 05:15:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@59 -- # local base_bdev 00:27:32.902 05:15:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b base -t PCIe -a 0000:00:11.0 00:27:33.474 05:15:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@60 -- # base_bdev=basen1 00:27:33.474 05:15:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@62 -- # local base_size 00:27:33.474 05:15:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@63 -- # get_bdev_size basen1 00:27:33.474 05:15:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1378 -- # local bdev_name=basen1 00:27:33.474 05:15:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1379 -- # local bdev_info 00:27:33.474 05:15:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1380 -- # local bs 00:27:33.474 05:15:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1381 
-- # local nb 00:27:33.474 05:15:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b basen1 00:27:33.474 05:15:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:27:33.474 { 00:27:33.474 "name": "basen1", 00:27:33.475 "aliases": [ 00:27:33.475 "87a6ca96-16f9-4f0d-9a79-ddb01acced06" 00:27:33.475 ], 00:27:33.475 "product_name": "NVMe disk", 00:27:33.475 "block_size": 4096, 00:27:33.475 "num_blocks": 1310720, 00:27:33.475 "uuid": "87a6ca96-16f9-4f0d-9a79-ddb01acced06", 00:27:33.475 "numa_id": -1, 00:27:33.475 "assigned_rate_limits": { 00:27:33.475 "rw_ios_per_sec": 0, 00:27:33.475 "rw_mbytes_per_sec": 0, 00:27:33.475 "r_mbytes_per_sec": 0, 00:27:33.475 "w_mbytes_per_sec": 0 00:27:33.475 }, 00:27:33.475 "claimed": true, 00:27:33.475 "claim_type": "read_many_write_one", 00:27:33.475 "zoned": false, 00:27:33.475 "supported_io_types": { 00:27:33.475 "read": true, 00:27:33.475 "write": true, 00:27:33.475 "unmap": true, 00:27:33.475 "flush": true, 00:27:33.475 "reset": true, 00:27:33.475 "nvme_admin": true, 00:27:33.475 "nvme_io": true, 00:27:33.475 "nvme_io_md": false, 00:27:33.475 "write_zeroes": true, 00:27:33.475 "zcopy": false, 00:27:33.475 "get_zone_info": false, 00:27:33.475 "zone_management": false, 00:27:33.475 "zone_append": false, 00:27:33.475 "compare": true, 00:27:33.475 "compare_and_write": false, 00:27:33.475 "abort": true, 00:27:33.475 "seek_hole": false, 00:27:33.475 "seek_data": false, 00:27:33.475 "copy": true, 00:27:33.475 "nvme_iov_md": false 00:27:33.475 }, 00:27:33.475 "driver_specific": { 00:27:33.475 "nvme": [ 00:27:33.475 { 00:27:33.475 "pci_address": "0000:00:11.0", 00:27:33.475 "trid": { 00:27:33.475 "trtype": "PCIe", 00:27:33.475 "traddr": "0000:00:11.0" 00:27:33.475 }, 00:27:33.475 "ctrlr_data": { 00:27:33.475 "cntlid": 0, 00:27:33.475 "vendor_id": "0x1b36", 00:27:33.475 "model_number": "QEMU NVMe Ctrl", 00:27:33.475 "serial_number": "12341", 00:27:33.475 "firmware_revision": "8.0.0", 00:27:33.475 "subnqn": "nqn.2019-08.org.qemu:12341", 00:27:33.475 "oacs": { 00:27:33.475 "security": 0, 00:27:33.475 "format": 1, 00:27:33.475 "firmware": 0, 00:27:33.475 "ns_manage": 1 00:27:33.475 }, 00:27:33.475 "multi_ctrlr": false, 00:27:33.475 "ana_reporting": false 00:27:33.475 }, 00:27:33.475 "vs": { 00:27:33.475 "nvme_version": "1.4" 00:27:33.475 }, 00:27:33.475 "ns_data": { 00:27:33.475 "id": 1, 00:27:33.475 "can_share": false 00:27:33.475 } 00:27:33.475 } 00:27:33.475 ], 00:27:33.475 "mp_policy": "active_passive" 00:27:33.475 } 00:27:33.475 } 00:27:33.475 ]' 00:27:33.475 05:15:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:27:33.475 05:15:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # bs=4096 00:27:33.475 05:15:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:27:33.475 05:15:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # nb=1310720 00:27:33.475 05:15:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # bdev_size=5120 00:27:33.475 05:15:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # echo 5120 00:27:33.475 05:15:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@63 -- # base_size=5120 00:27:33.475 05:15:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@64 -- # [[ 20480 -le 5120 ]] 00:27:33.475 05:15:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@67 -- # clear_lvols 00:27:33.475 05:15:11 ftl.ftl_upgrade_shutdown -- 
ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:27:33.475 05:15:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:27:33.737 05:15:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # stores=a1687b15-5d96-461f-9a22-45c7b42bb861 00:27:33.737 05:15:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@29 -- # for lvs in $stores 00:27:33.737 05:15:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u a1687b15-5d96-461f-9a22-45c7b42bb861 00:27:33.998 05:15:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore basen1 lvs 00:27:34.260 05:15:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@68 -- # lvs=787b3e1e-d1aa-4135-8163-4f921cff71f4 00:27:34.260 05:15:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create basen1p0 20480 -t -u 787b3e1e-d1aa-4135-8163-4f921cff71f4 00:27:34.522 05:15:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@107 -- # base_bdev=17e1f9ff-92f2-4b21-8aae-75877f35158b 00:27:34.522 05:15:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@108 -- # [[ -z 17e1f9ff-92f2-4b21-8aae-75877f35158b ]] 00:27:34.522 05:15:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@113 -- # create_nv_cache_bdev cache 0000:00:10.0 17e1f9ff-92f2-4b21-8aae-75877f35158b 5120 00:27:34.522 05:15:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@35 -- # local name=cache 00:27:34.522 05:15:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:27:34.522 05:15:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@37 -- # local base_bdev=17e1f9ff-92f2-4b21-8aae-75877f35158b 00:27:34.522 05:15:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@38 -- # local cache_size=5120 00:27:34.522 05:15:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@41 -- # get_bdev_size 17e1f9ff-92f2-4b21-8aae-75877f35158b 00:27:34.522 05:15:12 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1378 -- # local bdev_name=17e1f9ff-92f2-4b21-8aae-75877f35158b 00:27:34.522 05:15:12 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1379 -- # local bdev_info 00:27:34.522 05:15:12 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1380 -- # local bs 00:27:34.522 05:15:12 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1381 -- # local nb 00:27:34.522 05:15:12 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 17e1f9ff-92f2-4b21-8aae-75877f35158b 00:27:34.783 05:15:12 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:27:34.783 { 00:27:34.783 "name": "17e1f9ff-92f2-4b21-8aae-75877f35158b", 00:27:34.783 "aliases": [ 00:27:34.783 "lvs/basen1p0" 00:27:34.783 ], 00:27:34.783 "product_name": "Logical Volume", 00:27:34.783 "block_size": 4096, 00:27:34.783 "num_blocks": 5242880, 00:27:34.783 "uuid": "17e1f9ff-92f2-4b21-8aae-75877f35158b", 00:27:34.783 "assigned_rate_limits": { 00:27:34.783 "rw_ios_per_sec": 0, 00:27:34.783 "rw_mbytes_per_sec": 0, 00:27:34.783 "r_mbytes_per_sec": 0, 00:27:34.783 "w_mbytes_per_sec": 0 00:27:34.783 }, 00:27:34.783 "claimed": false, 00:27:34.783 "zoned": false, 00:27:34.783 "supported_io_types": { 00:27:34.783 "read": true, 00:27:34.783 "write": true, 00:27:34.783 "unmap": true, 00:27:34.783 "flush": false, 00:27:34.783 "reset": true, 00:27:34.783 "nvme_admin": false, 00:27:34.783 "nvme_io": false, 00:27:34.783 "nvme_io_md": false, 00:27:34.783 "write_zeroes": 
true, 00:27:34.783 "zcopy": false, 00:27:34.783 "get_zone_info": false, 00:27:34.783 "zone_management": false, 00:27:34.783 "zone_append": false, 00:27:34.783 "compare": false, 00:27:34.783 "compare_and_write": false, 00:27:34.784 "abort": false, 00:27:34.784 "seek_hole": true, 00:27:34.784 "seek_data": true, 00:27:34.784 "copy": false, 00:27:34.784 "nvme_iov_md": false 00:27:34.784 }, 00:27:34.784 "driver_specific": { 00:27:34.784 "lvol": { 00:27:34.784 "lvol_store_uuid": "787b3e1e-d1aa-4135-8163-4f921cff71f4", 00:27:34.784 "base_bdev": "basen1", 00:27:34.784 "thin_provision": true, 00:27:34.784 "num_allocated_clusters": 0, 00:27:34.784 "snapshot": false, 00:27:34.784 "clone": false, 00:27:34.784 "esnap_clone": false 00:27:34.784 } 00:27:34.784 } 00:27:34.784 } 00:27:34.784 ]' 00:27:34.784 05:15:12 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:27:34.784 05:15:12 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # bs=4096 00:27:34.784 05:15:12 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:27:34.784 05:15:12 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # nb=5242880 00:27:34.784 05:15:12 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # bdev_size=20480 00:27:34.784 05:15:12 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # echo 20480 00:27:34.784 05:15:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@41 -- # local base_size=1024 00:27:34.784 05:15:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@44 -- # local nvc_bdev 00:27:34.784 05:15:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b cache -t PCIe -a 0000:00:10.0 00:27:35.045 05:15:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@45 -- # nvc_bdev=cachen1 00:27:35.045 05:15:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@47 -- # [[ -z 5120 ]] 00:27:35.045 05:15:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create cachen1 -s 5120 1 00:27:35.307 05:15:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@113 -- # cache_bdev=cachen1p0 00:27:35.307 05:15:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@114 -- # [[ -z cachen1p0 ]] 00:27:35.307 05:15:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@119 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 60 bdev_ftl_create -b ftl -d 17e1f9ff-92f2-4b21-8aae-75877f35158b -c cachen1p0 --l2p_dram_limit 2 00:27:35.570 [2024-12-06 05:15:13.568554] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:35.570 [2024-12-06 05:15:13.568628] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:27:35.570 [2024-12-06 05:15:13.568644] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:27:35.570 [2024-12-06 05:15:13.568656] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:35.570 [2024-12-06 05:15:13.568773] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:35.570 [2024-12-06 05:15:13.568788] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:27:35.570 [2024-12-06 05:15:13.568798] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.061 ms 00:27:35.570 [2024-12-06 05:15:13.568814] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:35.570 [2024-12-06 05:15:13.568838] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:27:35.570 [2024-12-06 
05:15:13.569154] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:27:35.570 [2024-12-06 05:15:13.569179] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:35.570 [2024-12-06 05:15:13.569189] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:27:35.570 [2024-12-06 05:15:13.569202] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.346 ms 00:27:35.570 [2024-12-06 05:15:13.569212] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:35.570 [2024-12-06 05:15:13.569401] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl] Create new FTL, UUID 6336acd9-9725-4447-9284-3cf0abd57840 00:27:35.570 [2024-12-06 05:15:13.571500] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:35.570 [2024-12-06 05:15:13.571683] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Default-initialize superblock 00:27:35.570 [2024-12-06 05:15:13.571755] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.040 ms 00:27:35.570 [2024-12-06 05:15:13.571780] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:35.570 [2024-12-06 05:15:13.580848] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:35.570 [2024-12-06 05:15:13.581009] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:27:35.570 [2024-12-06 05:15:13.581031] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 8.980 ms 00:27:35.570 [2024-12-06 05:15:13.581040] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:35.570 [2024-12-06 05:15:13.581097] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:35.570 [2024-12-06 05:15:13.581106] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:27:35.570 [2024-12-06 05:15:13.581122] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.024 ms 00:27:35.570 [2024-12-06 05:15:13.581132] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:35.570 [2024-12-06 05:15:13.581195] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:35.570 [2024-12-06 05:15:13.581205] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:27:35.570 [2024-12-06 05:15:13.581216] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.011 ms 00:27:35.570 [2024-12-06 05:15:13.581224] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:35.570 [2024-12-06 05:15:13.581252] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:27:35.570 [2024-12-06 05:15:13.583531] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:35.570 [2024-12-06 05:15:13.583579] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:27:35.570 [2024-12-06 05:15:13.583593] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.289 ms 00:27:35.570 [2024-12-06 05:15:13.583604] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:35.570 [2024-12-06 05:15:13.583638] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:35.570 [2024-12-06 05:15:13.583650] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:27:35.570 [2024-12-06 05:15:13.583660] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:27:35.570 [2024-12-06 05:15:13.583691] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl] status: 0 00:27:35.570 [2024-12-06 05:15:13.583721] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 1 00:27:35.570 [2024-12-06 05:15:13.583881] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:27:35.570 [2024-12-06 05:15:13.583895] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:27:35.570 [2024-12-06 05:15:13.583911] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:27:35.570 [2024-12-06 05:15:13.583921] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:27:35.570 [2024-12-06 05:15:13.583939] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:27:35.570 [2024-12-06 05:15:13.583949] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:27:35.570 [2024-12-06 05:15:13.583962] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:27:35.570 [2024-12-06 05:15:13.583969] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:27:35.570 [2024-12-06 05:15:13.583979] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:27:35.570 [2024-12-06 05:15:13.583993] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:35.570 [2024-12-06 05:15:13.584003] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:27:35.570 [2024-12-06 05:15:13.584012] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.274 ms 00:27:35.570 [2024-12-06 05:15:13.584021] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:35.570 [2024-12-06 05:15:13.584106] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:35.570 [2024-12-06 05:15:13.584120] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:27:35.570 [2024-12-06 05:15:13.584128] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.068 ms 00:27:35.570 [2024-12-06 05:15:13.584138] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:35.570 [2024-12-06 05:15:13.584233] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:27:35.570 [2024-12-06 05:15:13.584252] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:27:35.570 [2024-12-06 05:15:13.584260] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:27:35.570 [2024-12-06 05:15:13.584270] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:35.570 [2024-12-06 05:15:13.584282] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:27:35.570 [2024-12-06 05:15:13.584290] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:27:35.570 [2024-12-06 05:15:13.584298] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:27:35.570 [2024-12-06 05:15:13.584307] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:27:35.570 [2024-12-06 05:15:13.584314] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:27:35.570 [2024-12-06 05:15:13.584323] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:35.570 [2024-12-06 05:15:13.584329] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:27:35.570 [2024-12-06 05:15:13.584339] ftl_layout.c: 131:dump_region: *NOTICE*: 
[FTL][ftl] offset: 14.75 MiB 00:27:35.570 [2024-12-06 05:15:13.584346] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:35.570 [2024-12-06 05:15:13.584357] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:27:35.570 [2024-12-06 05:15:13.584365] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:27:35.570 [2024-12-06 05:15:13.584375] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:35.570 [2024-12-06 05:15:13.584381] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:27:35.570 [2024-12-06 05:15:13.584390] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:27:35.570 [2024-12-06 05:15:13.584396] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:35.570 [2024-12-06 05:15:13.584406] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:27:35.570 [2024-12-06 05:15:13.584414] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:27:35.570 [2024-12-06 05:15:13.584423] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:27:35.570 [2024-12-06 05:15:13.584430] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:27:35.570 [2024-12-06 05:15:13.584439] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:27:35.570 [2024-12-06 05:15:13.584446] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:27:35.570 [2024-12-06 05:15:13.584455] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:27:35.570 [2024-12-06 05:15:13.584462] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:27:35.570 [2024-12-06 05:15:13.584471] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:27:35.570 [2024-12-06 05:15:13.584478] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:27:35.570 [2024-12-06 05:15:13.584490] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:27:35.570 [2024-12-06 05:15:13.584496] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:27:35.570 [2024-12-06 05:15:13.584506] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:27:35.570 [2024-12-06 05:15:13.584513] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:27:35.570 [2024-12-06 05:15:13.584521] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:35.570 [2024-12-06 05:15:13.584528] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:27:35.570 [2024-12-06 05:15:13.584536] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:27:35.570 [2024-12-06 05:15:13.584543] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:35.570 [2024-12-06 05:15:13.584552] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:27:35.570 [2024-12-06 05:15:13.584558] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:27:35.570 [2024-12-06 05:15:13.584566] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:35.570 [2024-12-06 05:15:13.584573] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:27:35.570 [2024-12-06 05:15:13.584581] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:27:35.571 [2024-12-06 05:15:13.584588] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:35.571 [2024-12-06 05:15:13.584597] ftl_layout.c: 775:ftl_layout_dump: 
*NOTICE*: [FTL][ftl] Base device layout: 00:27:35.571 [2024-12-06 05:15:13.584605] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:27:35.571 [2024-12-06 05:15:13.584617] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:27:35.571 [2024-12-06 05:15:13.584624] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:35.571 [2024-12-06 05:15:13.584635] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:27:35.571 [2024-12-06 05:15:13.584648] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:27:35.571 [2024-12-06 05:15:13.584657] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:27:35.571 [2024-12-06 05:15:13.584688] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:27:35.571 [2024-12-06 05:15:13.584698] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:27:35.571 [2024-12-06 05:15:13.584706] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:27:35.571 [2024-12-06 05:15:13.584721] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:27:35.571 [2024-12-06 05:15:13.584732] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:35.571 [2024-12-06 05:15:13.584744] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:27:35.571 [2024-12-06 05:15:13.584751] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:27:35.571 [2024-12-06 05:15:13.584761] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:27:35.571 [2024-12-06 05:15:13.584768] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:27:35.571 [2024-12-06 05:15:13.584777] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:27:35.571 [2024-12-06 05:15:13.584785] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:27:35.571 [2024-12-06 05:15:13.584797] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:27:35.571 [2024-12-06 05:15:13.584805] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:27:35.571 [2024-12-06 05:15:13.584815] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:27:35.571 [2024-12-06 05:15:13.584822] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:27:35.571 [2024-12-06 05:15:13.584832] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:27:35.571 [2024-12-06 05:15:13.584839] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:27:35.571 [2024-12-06 05:15:13.584848] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region 
type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:27:35.571 [2024-12-06 05:15:13.584856] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:27:35.571 [2024-12-06 05:15:13.584866] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:27:35.571 [2024-12-06 05:15:13.584876] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:35.571 [2024-12-06 05:15:13.584887] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:27:35.571 [2024-12-06 05:15:13.584895] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:27:35.571 [2024-12-06 05:15:13.584905] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:27:35.571 [2024-12-06 05:15:13.584913] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:27:35.571 [2024-12-06 05:15:13.584922] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:35.571 [2024-12-06 05:15:13.584930] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:27:35.571 [2024-12-06 05:15:13.584943] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.754 ms 00:27:35.571 [2024-12-06 05:15:13.584951] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:35.571 [2024-12-06 05:15:13.584992] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] NV cache data region needs scrubbing, this may take a while. 
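For reference, the ftl bdev being started here was assembled earlier in this transcript from two devices: a thin-provisioned lvol on the base namespace (basen1) and a split partition of the second controller acting as NV cache. A minimal sketch of that RPC sequence, assuming a running spdk_tgt on the default RPC socket and the PCIe addresses seen in this run (0000:00:11.0 base, 0000:00:10.0 cache); the clear_lvols cleanup and error handling are omitted:

RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
# get_bdev_size arithmetic from the lvol dump above: 5242880 blocks * 4096 B = 20480 MiB.
lvs=$($RPC bdev_lvol_create_lvstore basen1 lvs)              # new lvstore on basen1
base=$($RPC bdev_lvol_create basen1p0 20480 -t -u "$lvs")    # 20 GiB thin-provisioned lvol
$RPC bdev_nvme_attach_controller -b cache -t PCIe -a 0000:00:10.0   # shows up as cachen1
$RPC bdev_split_create cachen1 -s 5120 1                     # one 5 GiB slice -> cachen1p0
$RPC -t 60 bdev_ftl_create -b ftl -d "$base" -c cachen1p0 --l2p_dram_limit 2

The 60-second RPC timeout on bdev_ftl_create matters here: as the trace shows, first-time startup scrubs the whole NV cache region (about 4.1 s for 5 chunks in this run) before the FTL instance comes up.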
00:27:35.571 [2024-12-06 05:15:13.585003] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 5 chunks 00:27:39.780 [2024-12-06 05:15:17.713064] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:39.780 [2024-12-06 05:15:17.713157] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:27:39.780 [2024-12-06 05:15:17.713182] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4128.048 ms 00:27:39.780 [2024-12-06 05:15:17.713192] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:39.780 [2024-12-06 05:15:17.726794] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:39.780 [2024-12-06 05:15:17.727024] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:27:39.780 [2024-12-06 05:15:17.727051] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 13.475 ms 00:27:39.780 [2024-12-06 05:15:17.727062] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:39.780 [2024-12-06 05:15:17.727119] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:39.780 [2024-12-06 05:15:17.727128] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:27:39.780 [2024-12-06 05:15:17.727149] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.016 ms 00:27:39.780 [2024-12-06 05:15:17.727157] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:39.780 [2024-12-06 05:15:17.738472] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:39.780 [2024-12-06 05:15:17.738650] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:27:39.780 [2024-12-06 05:15:17.738693] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 11.255 ms 00:27:39.781 [2024-12-06 05:15:17.738703] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:39.781 [2024-12-06 05:15:17.738741] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:39.781 [2024-12-06 05:15:17.738749] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:27:39.781 [2024-12-06 05:15:17.738764] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:27:39.781 [2024-12-06 05:15:17.738772] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:39.781 [2024-12-06 05:15:17.739286] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:39.781 [2024-12-06 05:15:17.739313] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:27:39.781 [2024-12-06 05:15:17.739327] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.455 ms 00:27:39.781 [2024-12-06 05:15:17.739337] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:39.781 [2024-12-06 05:15:17.739389] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:39.781 [2024-12-06 05:15:17.739399] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:27:39.781 [2024-12-06 05:15:17.739411] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.023 ms 00:27:39.781 [2024-12-06 05:15:17.739427] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:39.781 [2024-12-06 05:15:17.754984] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:39.781 [2024-12-06 05:15:17.755026] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:27:39.781 [2024-12-06 05:15:17.755042] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl] duration: 15.531 ms 00:27:39.781 [2024-12-06 05:15:17.755051] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:39.781 [2024-12-06 05:15:17.763791] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:27:39.781 [2024-12-06 05:15:17.764641] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:39.781 [2024-12-06 05:15:17.764692] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:27:39.781 [2024-12-06 05:15:17.764702] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 9.491 ms 00:27:39.781 [2024-12-06 05:15:17.764711] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:39.781 [2024-12-06 05:15:17.778132] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:39.781 [2024-12-06 05:15:17.778172] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear L2P 00:27:39.781 [2024-12-06 05:15:17.778184] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 13.397 ms 00:27:39.781 [2024-12-06 05:15:17.778196] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:39.781 [2024-12-06 05:15:17.778282] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:39.781 [2024-12-06 05:15:17.778294] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:27:39.781 [2024-12-06 05:15:17.778303] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.047 ms 00:27:39.781 [2024-12-06 05:15:17.778312] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:39.781 [2024-12-06 05:15:17.781912] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:39.781 [2024-12-06 05:15:17.781948] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Save initial band info metadata 00:27:39.781 [2024-12-06 05:15:17.781958] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.582 ms 00:27:39.781 [2024-12-06 05:15:17.781972] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:39.781 [2024-12-06 05:15:17.785721] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:39.781 [2024-12-06 05:15:17.785751] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Save initial chunk info metadata 00:27:39.781 [2024-12-06 05:15:17.785760] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.709 ms 00:27:39.781 [2024-12-06 05:15:17.785768] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:39.781 [2024-12-06 05:15:17.786056] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:39.781 [2024-12-06 05:15:17.786067] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:27:39.781 [2024-12-06 05:15:17.786075] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.254 ms 00:27:39.781 [2024-12-06 05:15:17.786086] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:39.781 [2024-12-06 05:15:17.820952] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:39.781 [2024-12-06 05:15:17.820997] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Wipe P2L region 00:27:39.781 [2024-12-06 05:15:17.821009] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 34.848 ms 00:27:39.781 [2024-12-06 05:15:17.821019] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:39.781 [2024-12-06 05:15:17.825958] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 
00:27:39.781 [2024-12-06 05:15:17.825998] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim map 00:27:39.781 [2024-12-06 05:15:17.826008] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.888 ms 00:27:39.781 [2024-12-06 05:15:17.826018] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:39.781 [2024-12-06 05:15:17.829731] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:39.781 [2024-12-06 05:15:17.829765] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim log 00:27:39.781 [2024-12-06 05:15:17.829774] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.679 ms 00:27:39.781 [2024-12-06 05:15:17.829783] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:39.781 [2024-12-06 05:15:17.834579] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:39.781 [2024-12-06 05:15:17.834620] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:27:39.781 [2024-12-06 05:15:17.834630] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.763 ms 00:27:39.781 [2024-12-06 05:15:17.834642] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:39.781 [2024-12-06 05:15:17.834706] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:39.781 [2024-12-06 05:15:17.834718] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:27:39.781 [2024-12-06 05:15:17.834727] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.014 ms 00:27:39.781 [2024-12-06 05:15:17.834736] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:39.781 [2024-12-06 05:15:17.834799] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:39.781 [2024-12-06 05:15:17.834809] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:27:39.781 [2024-12-06 05:15:17.834817] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.034 ms 00:27:39.781 [2024-12-06 05:15:17.834827] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:39.781 [2024-12-06 05:15:17.835698] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 4266.723 ms, result 0 00:27:39.781 { 00:27:39.781 "name": "ftl", 00:27:39.781 "uuid": "6336acd9-9725-4447-9284-3cf0abd57840" 00:27:39.781 } 00:27:39.781 05:15:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport --trtype TCP 00:27:40.042 [2024-12-06 05:15:18.044249] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:40.042 05:15:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@122 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2018-09.io.spdk:cnode0 -a -m 1 00:27:40.303 05:15:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@123 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2018-09.io.spdk:cnode0 ftl 00:27:40.303 [2024-12-06 05:15:18.476732] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:27:40.303 05:15:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@124 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2018-09.io.spdk:cnode0 -t TCP -f ipv4 -s 4420 -a 127.0.0.1 00:27:40.565 [2024-12-06 05:15:18.713161] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:27:40.565 05:15:18 
ftl.ftl_upgrade_shutdown -- ftl/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:27:41.137 Fill FTL, iteration 1 00:27:41.137 05:15:19 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@28 -- # size=1073741824 00:27:41.137 05:15:19 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@29 -- # seek=0 00:27:41.137 05:15:19 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@30 -- # skip=0 00:27:41.137 05:15:19 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@31 -- # bs=1048576 00:27:41.137 05:15:19 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@32 -- # count=1024 00:27:41.137 05:15:19 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@33 -- # iterations=2 00:27:41.137 05:15:19 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@34 -- # qd=2 00:27:41.137 05:15:19 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@35 -- # sums=() 00:27:41.137 05:15:19 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i = 0 )) 00:27:41.137 05:15:19 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:27:41.138 05:15:19 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 1' 00:27:41.138 05:15:19 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:27:41.138 05:15:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:27:41.138 05:15:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:27:41.138 05:15:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:27:41.138 05:15:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@157 -- # [[ -z ftl ]] 00:27:41.138 05:15:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@162 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock 00:27:41.138 05:15:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@163 -- # spdk_ini_pid=92404 00:27:41.138 05:15:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@164 -- # export spdk_ini_pid 00:27:41.138 05:15:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@165 -- # waitforlisten 92404 /var/tmp/spdk.tgt.sock 00:27:41.138 05:15:19 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@831 -- # '[' -z 92404 ']' 00:27:41.138 05:15:19 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.tgt.sock 00:27:41.138 05:15:19 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:41.138 05:15:19 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock...' 00:27:41.138 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock... 00:27:41.138 05:15:19 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:41.138 05:15:19 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:27:41.138 [2024-12-06 05:15:19.153057] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:27:41.138 [2024-12-06 05:15:19.153440] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92404 ] 00:27:41.138 [2024-12-06 05:15:19.294388] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:41.138 [2024-12-06 05:15:19.345629] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:27:42.079 05:15:20 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:42.079 05:15:20 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # return 0 00:27:42.079 05:15:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@167 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock bdev_nvme_attach_controller -b ftl -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2018-09.io.spdk:cnode0 00:27:42.079 ftln1 00:27:42.079 05:15:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@171 -- # echo '{"subsystems": [' 00:27:42.079 05:15:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@172 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock save_subsystem_config -n bdev 00:27:42.337 05:15:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@173 -- # echo ']}' 00:27:42.337 05:15:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@176 -- # killprocess 92404 00:27:42.337 05:15:20 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@950 -- # '[' -z 92404 ']' 00:27:42.337 05:15:20 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # kill -0 92404 00:27:42.337 05:15:20 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@955 -- # uname 00:27:42.337 05:15:20 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:27:42.337 05:15:20 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 92404 00:27:42.337 killing process with pid 92404 00:27:42.337 05:15:20 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:27:42.337 05:15:20 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:27:42.337 05:15:20 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@968 -- # echo 'killing process with pid 92404' 00:27:42.337 05:15:20 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@969 -- # kill 92404 00:27:42.337 05:15:20 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@974 -- # wait 92404 00:27:42.906 05:15:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@177 -- # unset spdk_ini_pid 00:27:42.906 05:15:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:27:42.906 [2024-12-06 05:15:20.923887] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
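The tcp_dd helper driving this fill launches a second, single-core SPDK process as an NVMe/TCP initiator, snapshots that process's bdev configuration into ini.json, and then replays the config inside spdk_dd. A condensed sketch under the paths and NQN from this run; the real helper in ftl/common.sh uses waitforlisten/killprocess wrappers rather than the bare kill shown here:

SPDK=/home/vagrant/spdk_repo/spdk
RPC_INI="$SPDK/scripts/rpc.py -s /var/tmp/spdk.tgt.sock"

"$SPDK/build/bin/spdk_tgt" '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock &
spdk_ini_pid=$!   # the script waits for the RPC socket before proceeding

# Import the exported FTL namespace over TCP; it appears locally as bdev "ftln1".
$RPC_INI bdev_nvme_attach_controller -b ftl -t tcp -a 127.0.0.1 -s 4420 \
    -f ipv4 -n nqn.2018-09.io.spdk:cnode0

# Freeze the bdev subsystem config so spdk_dd can load it without a live initiator.
{ echo '{"subsystems": ['; $RPC_INI save_subsystem_config -n bdev; echo ']}'; } \
    > "$SPDK/test/ftl/config/ini.json"
kill "$spdk_ini_pid"
wait "$spdk_ini_pid" 2>/dev/null || true

"$SPDK/build/bin/spdk_dd" '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock \
    --json="$SPDK/test/ftl/config/ini.json" \
    --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0

On later invocations the helper short-circuits: the [[ -f .../ini.json ]] check at common.sh@153 returns early once the config file exists, which is why the spdk_tgt launch only appears once in this transcript.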
00:27:42.906 [2024-12-06 05:15:20.923995] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92437 ] 00:27:42.906 [2024-12-06 05:15:21.059024] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:42.906 [2024-12-06 05:15:21.101131] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:27:44.292  [2024-12-06T05:15:23.468Z] Copying: 189/1024 [MB] (189 MBps) [2024-12-06T05:15:24.414Z] Copying: 417/1024 [MB] (228 MBps) [2024-12-06T05:15:25.355Z] Copying: 655/1024 [MB] (238 MBps) [2024-12-06T05:15:25.927Z] Copying: 892/1024 [MB] (237 MBps) [2024-12-06T05:15:26.187Z] Copying: 1024/1024 [MB] (average 225 MBps) 00:27:47.955 00:27:47.955 05:15:26 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@41 -- # seek=1024 00:27:47.955 Calculate MD5 checksum, iteration 1 00:27:47.955 05:15:26 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 1' 00:27:47.955 05:15:26 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:27:47.955 05:15:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:27:47.955 05:15:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:27:47.955 05:15:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:27:47.955 05:15:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:27:47.955 05:15:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:27:47.955 [2024-12-06 05:15:26.098860] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:27:47.955 [2024-12-06 05:15:26.098969] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92493 ] 00:27:48.212 [2024-12-06 05:15:26.232207] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:48.212 [2024-12-06 05:15:26.275119] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:27:49.582  [2024-12-06T05:15:28.073Z] Copying: 765/1024 [MB] (765 MBps) [2024-12-06T05:15:28.334Z] Copying: 1024/1024 [MB] (average 720 MBps) 00:27:50.102 00:27:50.102 05:15:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@45 -- # skip=1024 00:27:50.102 05:15:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:27:52.012 05:15:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:27:52.012 Fill FTL, iteration 2 00:27:52.012 05:15:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=b768abbc159549b4f6b8673c2a228f5c 00:27:52.012 05:15:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:27:52.012 05:15:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:27:52.012 05:15:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 2' 00:27:52.012 05:15:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:27:52.012 05:15:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:27:52.012 05:15:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:27:52.012 05:15:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:27:52.012 05:15:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:27:52.012 05:15:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:27:52.012 [2024-12-06 05:15:30.158190] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
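Iteration 2 now repeats the same fill/readback cycle one 1 GiB window further in: seek and skip each advance by count after their respective dd passes, and the per-iteration MD5 sums are stashed for comparison later in the test. The driving loop, reconstructed from the upgrade_shutdown.sh@28-48 trace (tcp_dd as sketched above; the file path is the one used in this run):

testdir=/home/vagrant/spdk_repo/spdk/test/ftl
bs=1048576 count=1024 qd=2 iterations=2
seek=0 skip=0
sums=()
for ((i = 0; i < iterations; i++)); do
    echo "Fill FTL, iteration $((i + 1))"
    tcp_dd --if=/dev/urandom --ob=ftln1 --bs=$bs --count=$count --qd=$qd --seek=$seek
    ((seek += count))   # next fill starts where this one ended
    echo "Calculate MD5 checksum, iteration $((i + 1))"
    tcp_dd --ib=ftln1 --of="$testdir/file" --bs=$bs --count=$count --qd=$qd --skip=$skip
    ((skip += count))
    sums[i]=$(md5sum "$testdir/file" | cut -f1 '-d ')   # b768abbc... for iteration 1
done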
00:27:52.012 [2024-12-06 05:15:30.158280] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92543 ] 00:27:52.270 [2024-12-06 05:15:30.288555] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:52.270 [2024-12-06 05:15:30.328755] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:27:53.654  [2024-12-06T05:15:32.830Z] Copying: 237/1024 [MB] (237 MBps) [2024-12-06T05:15:33.774Z] Copying: 473/1024 [MB] (236 MBps) [2024-12-06T05:15:34.719Z] Copying: 724/1024 [MB] (251 MBps) [2024-12-06T05:15:34.979Z] Copying: 956/1024 [MB] (232 MBps) [2024-12-06T05:15:35.240Z] Copying: 1024/1024 [MB] (average 234 MBps) 00:27:57.008 00:27:57.008 05:15:35 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@41 -- # seek=2048 00:27:57.008 05:15:35 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 2' 00:27:57.008 Calculate MD5 checksum, iteration 2 00:27:57.008 05:15:35 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:27:57.008 05:15:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:27:57.008 05:15:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:27:57.008 05:15:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:27:57.008 05:15:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:27:57.008 05:15:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:27:57.008 [2024-12-06 05:15:35.155681] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:27:57.008 [2024-12-06 05:15:35.155792] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92596 ] 00:27:57.266 [2024-12-06 05:15:35.288826] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:57.267 [2024-12-06 05:15:35.329559] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:27:58.645  [2024-12-06T05:15:37.443Z] Copying: 611/1024 [MB] (611 MBps) [2024-12-06T05:15:38.041Z] Copying: 1024/1024 [MB] (average 640 MBps) 00:27:59.809 00:27:59.809 05:15:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@45 -- # skip=2048 00:27:59.809 05:15:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:28:02.349 05:15:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:28:02.349 05:15:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=1510113f3ae6e7dbb8996c0462948b82 00:28:02.349 05:15:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:28:02.349 05:15:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:28:02.349 05:15:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:28:02.349 [2024-12-06 05:15:40.223682] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:02.349 [2024-12-06 05:15:40.223725] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:28:02.349 [2024-12-06 05:15:40.223736] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.014 ms 00:28:02.349 [2024-12-06 05:15:40.223743] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:02.349 [2024-12-06 05:15:40.223761] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:02.349 [2024-12-06 05:15:40.223768] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:28:02.349 [2024-12-06 05:15:40.223778] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:28:02.349 [2024-12-06 05:15:40.223784] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:02.349 [2024-12-06 05:15:40.223800] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:02.349 [2024-12-06 05:15:40.223806] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:28:02.349 [2024-12-06 05:15:40.223812] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:28:02.349 [2024-12-06 05:15:40.223818] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:02.349 [2024-12-06 05:15:40.223866] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.182 ms, result 0 00:28:02.349 true 00:28:02.349 05:15:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:28:02.349 { 00:28:02.349 "name": "ftl", 00:28:02.349 "properties": [ 00:28:02.349 { 00:28:02.349 "name": "superblock_version", 00:28:02.349 "value": 5, 00:28:02.349 "read-only": true 00:28:02.349 }, 00:28:02.349 { 00:28:02.349 "name": "base_device", 00:28:02.349 "bands": [ 00:28:02.349 { 00:28:02.349 "id": 0, 00:28:02.349 "state": "FREE", 00:28:02.349 "validity": 0.0 
00:28:02.349 }, 00:28:02.349 { 00:28:02.349 "id": 1, 00:28:02.349 "state": "FREE", 00:28:02.349 "validity": 0.0 00:28:02.349 }, 00:28:02.349 { 00:28:02.349 "id": 2, 00:28:02.349 "state": "FREE", 00:28:02.349 "validity": 0.0 00:28:02.349 }, 00:28:02.349 { 00:28:02.349 "id": 3, 00:28:02.349 "state": "FREE", 00:28:02.349 "validity": 0.0 00:28:02.349 }, 00:28:02.349 { 00:28:02.349 "id": 4, 00:28:02.349 "state": "FREE", 00:28:02.349 "validity": 0.0 00:28:02.349 }, 00:28:02.349 { 00:28:02.349 "id": 5, 00:28:02.349 "state": "FREE", 00:28:02.350 "validity": 0.0 00:28:02.350 }, 00:28:02.350 { 00:28:02.350 "id": 6, 00:28:02.350 "state": "FREE", 00:28:02.350 "validity": 0.0 00:28:02.350 }, 00:28:02.350 { 00:28:02.350 "id": 7, 00:28:02.350 "state": "FREE", 00:28:02.350 "validity": 0.0 00:28:02.350 }, 00:28:02.350 { 00:28:02.350 "id": 8, 00:28:02.350 "state": "FREE", 00:28:02.350 "validity": 0.0 00:28:02.350 }, 00:28:02.350 { 00:28:02.350 "id": 9, 00:28:02.350 "state": "FREE", 00:28:02.350 "validity": 0.0 00:28:02.350 }, 00:28:02.350 { 00:28:02.350 "id": 10, 00:28:02.350 "state": "FREE", 00:28:02.350 "validity": 0.0 00:28:02.350 }, 00:28:02.350 { 00:28:02.350 "id": 11, 00:28:02.350 "state": "FREE", 00:28:02.350 "validity": 0.0 00:28:02.350 }, 00:28:02.350 { 00:28:02.350 "id": 12, 00:28:02.350 "state": "FREE", 00:28:02.350 "validity": 0.0 00:28:02.350 }, 00:28:02.350 { 00:28:02.350 "id": 13, 00:28:02.350 "state": "FREE", 00:28:02.350 "validity": 0.0 00:28:02.350 }, 00:28:02.350 { 00:28:02.350 "id": 14, 00:28:02.350 "state": "FREE", 00:28:02.350 "validity": 0.0 00:28:02.350 }, 00:28:02.350 { 00:28:02.350 "id": 15, 00:28:02.350 "state": "FREE", 00:28:02.350 "validity": 0.0 00:28:02.350 }, 00:28:02.350 { 00:28:02.350 "id": 16, 00:28:02.350 "state": "FREE", 00:28:02.350 "validity": 0.0 00:28:02.350 }, 00:28:02.350 { 00:28:02.350 "id": 17, 00:28:02.350 "state": "FREE", 00:28:02.350 "validity": 0.0 00:28:02.350 } 00:28:02.350 ], 00:28:02.350 "read-only": true 00:28:02.350 }, 00:28:02.350 { 00:28:02.350 "name": "cache_device", 00:28:02.350 "type": "bdev", 00:28:02.350 "chunks": [ 00:28:02.350 { 00:28:02.350 "id": 0, 00:28:02.350 "state": "INACTIVE", 00:28:02.350 "utilization": 0.0 00:28:02.350 }, 00:28:02.350 { 00:28:02.350 "id": 1, 00:28:02.350 "state": "CLOSED", 00:28:02.350 "utilization": 1.0 00:28:02.350 }, 00:28:02.350 { 00:28:02.350 "id": 2, 00:28:02.350 "state": "CLOSED", 00:28:02.350 "utilization": 1.0 00:28:02.350 }, 00:28:02.350 { 00:28:02.350 "id": 3, 00:28:02.350 "state": "OPEN", 00:28:02.350 "utilization": 0.001953125 00:28:02.350 }, 00:28:02.350 { 00:28:02.350 "id": 4, 00:28:02.350 "state": "OPEN", 00:28:02.350 "utilization": 0.0 00:28:02.350 } 00:28:02.350 ], 00:28:02.350 "read-only": true 00:28:02.350 }, 00:28:02.350 { 00:28:02.350 "name": "verbose_mode", 00:28:02.350 "value": true, 00:28:02.350 "unit": "", 00:28:02.350 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:28:02.350 }, 00:28:02.350 { 00:28:02.350 "name": "prep_upgrade_on_shutdown", 00:28:02.350 "value": false, 00:28:02.350 "unit": "", 00:28:02.350 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:28:02.350 } 00:28:02.350 ] 00:28:02.350 } 00:28:02.350 05:15:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p prep_upgrade_on_shutdown -v true 00:28:02.608 [2024-12-06 05:15:40.627155] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 
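Between the two property dumps the test arms the upgrade path and sanity-checks that the fills actually landed in the NV cache: with verbose_mode on, bdev_ftl_get_properties exposes per-chunk utilization, and the jq filter used at upgrade_shutdown.sh@63 counts chunks holding data (here it returns 3: two CLOSED chunks plus the partially filled OPEN one). A condensed sketch; the exact failure handling at @64 is an assumption:

RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$RPC bdev_ftl_set_property -b ftl -p prep_upgrade_on_shutdown -v true

used=$($RPC bdev_ftl_get_properties -b ftl \
    | jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length')
if [[ $used -eq 0 ]]; then
    echo "no data in NV cache after fill" >&2   # assumed error path
    exit 1
fi

With prep_upgrade_on_shutdown set to true (visible in the second property dump below), the subsequent target shutdown performs the extra persistence work the trace then records: Persist L2P, Persist NV cache metadata, Persist band/trim metadata, and Persist superblock.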
00:28:02.608 [2024-12-06 05:15:40.627293] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:28:02.608 [2024-12-06 05:15:40.627344] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:28:02.608 [2024-12-06 05:15:40.627362] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:02.608 [2024-12-06 05:15:40.627396] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:02.608 [2024-12-06 05:15:40.627413] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:28:02.608 [2024-12-06 05:15:40.627428] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:28:02.608 [2024-12-06 05:15:40.627442] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:02.608 [2024-12-06 05:15:40.627466] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:02.608 [2024-12-06 05:15:40.627482] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:28:02.608 [2024-12-06 05:15:40.627497] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:28:02.608 [2024-12-06 05:15:40.627540] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:02.608 [2024-12-06 05:15:40.627600] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.433 ms, result 0 00:28:02.608 true 00:28:02.608 05:15:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:28:02.608 05:15:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # ftl_get_properties 00:28:02.608 05:15:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:28:02.867 05:15:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # used=3 00:28:02.867 05:15:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@64 -- # [[ 3 -eq 0 ]] 00:28:02.867 05:15:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:28:02.867 [2024-12-06 05:15:41.043505] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:02.867 [2024-12-06 05:15:41.043535] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:28:02.867 [2024-12-06 05:15:41.043543] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:28:02.867 [2024-12-06 05:15:41.043549] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:02.867 [2024-12-06 05:15:41.043565] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:02.867 [2024-12-06 05:15:41.043571] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:28:02.867 [2024-12-06 05:15:41.043577] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:28:02.867 [2024-12-06 05:15:41.043583] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:02.867 [2024-12-06 05:15:41.043597] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:02.867 [2024-12-06 05:15:41.043603] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:28:02.867 [2024-12-06 05:15:41.043609] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:28:02.867 [2024-12-06 05:15:41.043614] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl] status: 0 00:28:02.867 [2024-12-06 05:15:41.043655] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.139 ms, result 0 00:28:02.867 true 00:28:02.867 05:15:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:28:03.126 { 00:28:03.126 "name": "ftl", 00:28:03.126 "properties": [ 00:28:03.126 { 00:28:03.126 "name": "superblock_version", 00:28:03.126 "value": 5, 00:28:03.126 "read-only": true 00:28:03.126 }, 00:28:03.126 { 00:28:03.126 "name": "base_device", 00:28:03.126 "bands": [ 00:28:03.126 { 00:28:03.126 "id": 0, 00:28:03.126 "state": "FREE", 00:28:03.126 "validity": 0.0 00:28:03.126 }, 00:28:03.126 { 00:28:03.126 "id": 1, 00:28:03.126 "state": "FREE", 00:28:03.126 "validity": 0.0 00:28:03.126 }, 00:28:03.126 { 00:28:03.126 "id": 2, 00:28:03.126 "state": "FREE", 00:28:03.126 "validity": 0.0 00:28:03.126 }, 00:28:03.126 { 00:28:03.126 "id": 3, 00:28:03.126 "state": "FREE", 00:28:03.126 "validity": 0.0 00:28:03.126 }, 00:28:03.126 { 00:28:03.126 "id": 4, 00:28:03.126 "state": "FREE", 00:28:03.126 "validity": 0.0 00:28:03.126 }, 00:28:03.126 { 00:28:03.126 "id": 5, 00:28:03.126 "state": "FREE", 00:28:03.126 "validity": 0.0 00:28:03.126 }, 00:28:03.126 { 00:28:03.126 "id": 6, 00:28:03.126 "state": "FREE", 00:28:03.126 "validity": 0.0 00:28:03.126 }, 00:28:03.126 { 00:28:03.126 "id": 7, 00:28:03.126 "state": "FREE", 00:28:03.126 "validity": 0.0 00:28:03.126 }, 00:28:03.126 { 00:28:03.126 "id": 8, 00:28:03.126 "state": "FREE", 00:28:03.126 "validity": 0.0 00:28:03.126 }, 00:28:03.126 { 00:28:03.126 "id": 9, 00:28:03.126 "state": "FREE", 00:28:03.126 "validity": 0.0 00:28:03.126 }, 00:28:03.126 { 00:28:03.126 "id": 10, 00:28:03.126 "state": "FREE", 00:28:03.126 "validity": 0.0 00:28:03.126 }, 00:28:03.126 { 00:28:03.126 "id": 11, 00:28:03.126 "state": "FREE", 00:28:03.126 "validity": 0.0 00:28:03.126 }, 00:28:03.126 { 00:28:03.126 "id": 12, 00:28:03.126 "state": "FREE", 00:28:03.126 "validity": 0.0 00:28:03.126 }, 00:28:03.126 { 00:28:03.126 "id": 13, 00:28:03.126 "state": "FREE", 00:28:03.126 "validity": 0.0 00:28:03.126 }, 00:28:03.126 { 00:28:03.126 "id": 14, 00:28:03.126 "state": "FREE", 00:28:03.126 "validity": 0.0 00:28:03.126 }, 00:28:03.126 { 00:28:03.126 "id": 15, 00:28:03.126 "state": "FREE", 00:28:03.126 "validity": 0.0 00:28:03.126 }, 00:28:03.126 { 00:28:03.126 "id": 16, 00:28:03.126 "state": "FREE", 00:28:03.126 "validity": 0.0 00:28:03.126 }, 00:28:03.126 { 00:28:03.126 "id": 17, 00:28:03.126 "state": "FREE", 00:28:03.126 "validity": 0.0 00:28:03.126 } 00:28:03.126 ], 00:28:03.126 "read-only": true 00:28:03.126 }, 00:28:03.126 { 00:28:03.126 "name": "cache_device", 00:28:03.126 "type": "bdev", 00:28:03.126 "chunks": [ 00:28:03.126 { 00:28:03.126 "id": 0, 00:28:03.126 "state": "INACTIVE", 00:28:03.126 "utilization": 0.0 00:28:03.126 }, 00:28:03.126 { 00:28:03.126 "id": 1, 00:28:03.126 "state": "CLOSED", 00:28:03.126 "utilization": 1.0 00:28:03.126 }, 00:28:03.126 { 00:28:03.126 "id": 2, 00:28:03.126 "state": "CLOSED", 00:28:03.126 "utilization": 1.0 00:28:03.126 }, 00:28:03.126 { 00:28:03.126 "id": 3, 00:28:03.126 "state": "OPEN", 00:28:03.126 "utilization": 0.001953125 00:28:03.126 }, 00:28:03.126 { 00:28:03.126 "id": 4, 00:28:03.126 "state": "OPEN", 00:28:03.126 "utilization": 0.0 00:28:03.126 } 00:28:03.126 ], 00:28:03.126 "read-only": true 00:28:03.126 }, 00:28:03.126 { 00:28:03.126 "name": "verbose_mode", 
00:28:03.126 "value": true, 00:28:03.126 "unit": "", 00:28:03.126 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:28:03.126 }, 00:28:03.126 { 00:28:03.126 "name": "prep_upgrade_on_shutdown", 00:28:03.126 "value": true, 00:28:03.126 "unit": "", 00:28:03.126 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:28:03.126 } 00:28:03.126 ] 00:28:03.126 } 00:28:03.126 05:15:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@74 -- # tcp_target_shutdown 00:28:03.126 05:15:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@130 -- # [[ -n 92278 ]] 00:28:03.126 05:15:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@131 -- # killprocess 92278 00:28:03.126 05:15:41 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@950 -- # '[' -z 92278 ']' 00:28:03.126 05:15:41 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # kill -0 92278 00:28:03.126 05:15:41 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@955 -- # uname 00:28:03.126 05:15:41 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:03.126 05:15:41 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 92278 00:28:03.126 05:15:41 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:28:03.126 05:15:41 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:28:03.126 05:15:41 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@968 -- # echo 'killing process with pid 92278' 00:28:03.126 killing process with pid 92278 00:28:03.126 05:15:41 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@969 -- # kill 92278 00:28:03.126 05:15:41 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@974 -- # wait 92278 00:28:03.387 [2024-12-06 05:15:41.378437] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_000 00:28:03.387 [2024-12-06 05:15:41.381955] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:03.387 [2024-12-06 05:15:41.382055] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:28:03.387 [2024-12-06 05:15:41.382102] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:28:03.387 [2024-12-06 05:15:41.382120] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:03.387 [2024-12-06 05:15:41.382151] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:28:03.387 [2024-12-06 05:15:41.382535] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:03.387 [2024-12-06 05:15:41.382610] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:28:03.387 [2024-12-06 05:15:41.382762] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.357 ms 00:28:03.387 [2024-12-06 05:15:41.382780] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:13.381 [2024-12-06 05:15:49.849347] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:13.381 [2024-12-06 05:15:49.849495] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:28:13.381 [2024-12-06 05:15:49.849551] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 8466.483 ms 00:28:13.381 [2024-12-06 05:15:49.849566] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:13.381 [2024-12-06 05:15:49.850529] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl] Action 00:28:13.381 [2024-12-06 05:15:49.850544] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:28:13.381 [2024-12-06 05:15:49.850552] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.947 ms 00:28:13.381 [2024-12-06 05:15:49.850559] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:13.381 [2024-12-06 05:15:49.851441] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:13.381 [2024-12-06 05:15:49.851461] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P trims 00:28:13.381 [2024-12-06 05:15:49.851469] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.861 ms 00:28:13.381 [2024-12-06 05:15:49.851480] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:13.381 [2024-12-06 05:15:49.852966] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:13.381 [2024-12-06 05:15:49.852996] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:28:13.381 [2024-12-06 05:15:49.853003] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.456 ms 00:28:13.381 [2024-12-06 05:15:49.853009] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:13.381 [2024-12-06 05:15:49.855436] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:13.381 [2024-12-06 05:15:49.855467] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata 00:28:13.381 [2024-12-06 05:15:49.855474] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.401 ms 00:28:13.382 [2024-12-06 05:15:49.855480] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:13.382 [2024-12-06 05:15:49.855534] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:13.382 [2024-12-06 05:15:49.855541] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:28:13.382 [2024-12-06 05:15:49.855552] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.029 ms 00:28:13.382 [2024-12-06 05:15:49.855558] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:13.382 [2024-12-06 05:15:49.856790] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:13.382 [2024-12-06 05:15:49.856817] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist band info metadata 00:28:13.382 [2024-12-06 05:15:49.856824] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.221 ms 00:28:13.382 [2024-12-06 05:15:49.856830] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:13.382 [2024-12-06 05:15:49.857905] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:13.382 [2024-12-06 05:15:49.858013] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist trim metadata 00:28:13.382 [2024-12-06 05:15:49.858025] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.051 ms 00:28:13.382 [2024-12-06 05:15:49.858031] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:13.382 [2024-12-06 05:15:49.859124] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:13.382 [2024-12-06 05:15:49.859144] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:28:13.382 [2024-12-06 05:15:49.859151] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.070 ms 00:28:13.382 [2024-12-06 05:15:49.859156] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:13.382 [2024-12-06 05:15:49.860199] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:13.382 [2024-12-06 05:15:49.860220] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:28:13.382 [2024-12-06 05:15:49.860227] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.997 ms 00:28:13.382 [2024-12-06 05:15:49.860232] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:13.382 [2024-12-06 05:15:49.860256] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:28:13.382 [2024-12-06 05:15:49.860267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:28:13.382 [2024-12-06 05:15:49.860275] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:28:13.382 [2024-12-06 05:15:49.860282] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:28:13.382 [2024-12-06 05:15:49.860288] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:28:13.382 [2024-12-06 05:15:49.860295] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:28:13.382 [2024-12-06 05:15:49.860301] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:28:13.382 [2024-12-06 05:15:49.860307] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:28:13.382 [2024-12-06 05:15:49.860313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:28:13.382 [2024-12-06 05:15:49.860319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:28:13.382 [2024-12-06 05:15:49.860325] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:28:13.382 [2024-12-06 05:15:49.860330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:28:13.382 [2024-12-06 05:15:49.860336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:28:13.382 [2024-12-06 05:15:49.860342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:28:13.382 [2024-12-06 05:15:49.860347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:28:13.382 [2024-12-06 05:15:49.860353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:28:13.382 [2024-12-06 05:15:49.860359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:28:13.382 [2024-12-06 05:15:49.860364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:28:13.382 [2024-12-06 05:15:49.860370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:28:13.382 [2024-12-06 05:15:49.860379] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:28:13.382 [2024-12-06 05:15:49.860385] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: 6336acd9-9725-4447-9284-3cf0abd57840 00:28:13.382 [2024-12-06 05:15:49.860391] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:28:13.382 [2024-12-06 05:15:49.860397] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: 
[FTL][ftl] total writes: 786752 00:28:13.382 [2024-12-06 05:15:49.860402] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 524288 00:28:13.382 [2024-12-06 05:15:49.860408] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: 1.5006 00:28:13.382 [2024-12-06 05:15:49.860414] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:28:13.382 [2024-12-06 05:15:49.860425] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:28:13.382 [2024-12-06 05:15:49.860430] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:28:13.382 [2024-12-06 05:15:49.860435] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:28:13.382 [2024-12-06 05:15:49.860440] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:28:13.382 [2024-12-06 05:15:49.860447] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:13.382 [2024-12-06 05:15:49.860453] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:28:13.382 [2024-12-06 05:15:49.860460] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.192 ms 00:28:13.382 [2024-12-06 05:15:49.860466] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:13.382 [2024-12-06 05:15:49.861816] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:13.382 [2024-12-06 05:15:49.861844] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P 00:28:13.382 [2024-12-06 05:15:49.861861] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.338 ms 00:28:13.382 [2024-12-06 05:15:49.861879] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:13.382 [2024-12-06 05:15:49.861954] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:13.382 [2024-12-06 05:15:49.861971] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:28:13.382 [2024-12-06 05:15:49.861991] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.052 ms 00:28:13.382 [2024-12-06 05:15:49.862005] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:13.382 [2024-12-06 05:15:49.866561] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:13.382 [2024-12-06 05:15:49.866684] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:28:13.382 [2024-12-06 05:15:49.866734] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:13.382 [2024-12-06 05:15:49.866757] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:13.382 [2024-12-06 05:15:49.866799] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:13.382 [2024-12-06 05:15:49.866815] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:28:13.382 [2024-12-06 05:15:49.866830] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:13.382 [2024-12-06 05:15:49.866851] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:13.382 [2024-12-06 05:15:49.866918] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:13.382 [2024-12-06 05:15:49.866939] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:28:13.382 [2024-12-06 05:15:49.866955] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:13.382 [2024-12-06 05:15:49.867003] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:13.382 [2024-12-06 05:15:49.867086] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:13.382 [2024-12-06 05:15:49.867105] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:28:13.382 [2024-12-06 05:15:49.867120] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:13.382 [2024-12-06 05:15:49.867134] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:13.382 [2024-12-06 05:15:49.875087] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:13.382 [2024-12-06 05:15:49.875118] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:28:13.382 [2024-12-06 05:15:49.875127] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:13.382 [2024-12-06 05:15:49.875138] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:13.382 [2024-12-06 05:15:49.881691] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:13.382 [2024-12-06 05:15:49.881734] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:28:13.382 [2024-12-06 05:15:49.881744] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:13.382 [2024-12-06 05:15:49.881750] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:13.382 [2024-12-06 05:15:49.881788] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:13.382 [2024-12-06 05:15:49.881796] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:28:13.382 [2024-12-06 05:15:49.881803] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:13.382 [2024-12-06 05:15:49.881809] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:13.382 [2024-12-06 05:15:49.881853] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:13.382 [2024-12-06 05:15:49.881860] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:28:13.382 [2024-12-06 05:15:49.881867] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:13.382 [2024-12-06 05:15:49.881873] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:13.382 [2024-12-06 05:15:49.881921] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:13.382 [2024-12-06 05:15:49.881930] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:28:13.382 [2024-12-06 05:15:49.881936] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:13.382 [2024-12-06 05:15:49.881943] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:13.382 [2024-12-06 05:15:49.881964] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:13.382 [2024-12-06 05:15:49.881977] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:28:13.382 [2024-12-06 05:15:49.881983] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:13.382 [2024-12-06 05:15:49.881989] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:13.382 [2024-12-06 05:15:49.882021] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:13.382 [2024-12-06 05:15:49.882028] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:28:13.382 [2024-12-06 05:15:49.882034] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:13.382 [2024-12-06 05:15:49.882040] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:13.382 
[2024-12-06 05:15:49.882078] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:13.383 [2024-12-06 05:15:49.882086] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:28:13.383 [2024-12-06 05:15:49.882092] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:13.383 [2024-12-06 05:15:49.882098] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:13.383 [2024-12-06 05:15:49.882193] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 8500.185 ms, result 0 00:28:13.383 05:15:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:28:13.383 05:15:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@75 -- # tcp_target_setup 00:28:13.383 05:15:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:28:13.383 05:15:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:28:13.383 05:15:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:28:13.383 05:15:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=92774 00:28:13.383 05:15:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:28:13.383 05:15:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 92774 00:28:13.383 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:13.383 05:15:50 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@831 -- # '[' -z 92774 ']' 00:28:13.383 05:15:50 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:13.383 05:15:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:28:13.383 05:15:50 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:13.383 05:15:50 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:13.383 05:15:50 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:13.383 05:15:50 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:28:13.383 [2024-12-06 05:15:50.885067] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
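The tcp_target_setup trace above relaunches spdk_tgt pinned to core 0 from the saved JSON config, records spdk_tgt_pid=92774, and blocks in waitforlisten until the RPC socket answers. A minimal sketch of that start-and-wait pattern, using the paths from the trace; the readiness polling loop is an assumption for illustration, not the exact waitforlisten helper from common/autotest_common.sh:

    #!/usr/bin/env bash
    # paths mirror the trace above; the polling loop is an assumed stand-in
    SPDK_BIN=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
    CONFIG=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json
    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    "$SPDK_BIN" --cpumask='[0]' --config="$CONFIG" &
    spdk_tgt_pid=$!

    # poll the UNIX domain RPC socket until the target starts answering
    until "$RPC" -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null; do
        kill -0 "$spdk_tgt_pid" 2>/dev/null || exit 1   # target died during startup
        sleep 0.5
    done

With --config pointing at tgt.json, the restarted target replays the bdev/FTL setup recorded before shutdown, which is why the trace above goes straight into "FTL startup" without any explicit reconfiguration RPCs.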
00:28:13.383 [2024-12-06 05:15:50.885189] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92774 ] 00:28:13.383 [2024-12-06 05:15:51.019808] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:13.383 [2024-12-06 05:15:51.060657] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:28:13.383 [2024-12-06 05:15:51.309941] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:28:13.383 [2024-12-06 05:15:51.309994] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:28:13.383 [2024-12-06 05:15:51.447592] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:13.383 [2024-12-06 05:15:51.447630] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:28:13.383 [2024-12-06 05:15:51.447640] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:28:13.383 [2024-12-06 05:15:51.447648] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:13.383 [2024-12-06 05:15:51.447701] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:13.383 [2024-12-06 05:15:51.447709] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:28:13.383 [2024-12-06 05:15:51.447715] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.022 ms 00:28:13.383 [2024-12-06 05:15:51.447720] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:13.383 [2024-12-06 05:15:51.447736] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:28:13.383 [2024-12-06 05:15:51.447907] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:28:13.383 [2024-12-06 05:15:51.447917] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:13.383 [2024-12-06 05:15:51.447923] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:28:13.383 [2024-12-06 05:15:51.447931] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.184 ms 00:28:13.383 [2024-12-06 05:15:51.447936] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:13.383 [2024-12-06 05:15:51.448867] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:28:13.383 [2024-12-06 05:15:51.451231] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:13.383 [2024-12-06 05:15:51.451268] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:28:13.383 [2024-12-06 05:15:51.451279] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.366 ms 00:28:13.383 [2024-12-06 05:15:51.451287] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:13.383 [2024-12-06 05:15:51.451330] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:13.383 [2024-12-06 05:15:51.451338] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:28:13.383 [2024-12-06 05:15:51.451344] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.014 ms 00:28:13.383 [2024-12-06 05:15:51.451350] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:13.383 [2024-12-06 05:15:51.455753] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:13.383 [2024-12-06 
05:15:51.455858] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:28:13.383 [2024-12-06 05:15:51.455872] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.371 ms 00:28:13.383 [2024-12-06 05:15:51.455878] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:13.383 [2024-12-06 05:15:51.455912] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:13.383 [2024-12-06 05:15:51.455921] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:28:13.383 [2024-12-06 05:15:51.455928] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.017 ms 00:28:13.383 [2024-12-06 05:15:51.455933] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:13.383 [2024-12-06 05:15:51.455971] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:13.383 [2024-12-06 05:15:51.455979] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:28:13.383 [2024-12-06 05:15:51.455986] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:28:13.383 [2024-12-06 05:15:51.455993] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:13.383 [2024-12-06 05:15:51.456012] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:28:13.383 [2024-12-06 05:15:51.457171] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:13.383 [2024-12-06 05:15:51.457194] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:28:13.383 [2024-12-06 05:15:51.457201] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.162 ms 00:28:13.383 [2024-12-06 05:15:51.457206] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:13.383 [2024-12-06 05:15:51.457230] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:13.383 [2024-12-06 05:15:51.457236] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:28:13.383 [2024-12-06 05:15:51.457242] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:28:13.383 [2024-12-06 05:15:51.457250] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:13.383 [2024-12-06 05:15:51.457266] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:28:13.383 [2024-12-06 05:15:51.457282] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x150 bytes 00:28:13.383 [2024-12-06 05:15:51.457309] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:28:13.383 [2024-12-06 05:15:51.457322] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x190 bytes 00:28:13.383 [2024-12-06 05:15:51.457400] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:28:13.383 [2024-12-06 05:15:51.457408] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:28:13.383 [2024-12-06 05:15:51.457418] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:28:13.383 [2024-12-06 05:15:51.457427] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:28:13.383 [2024-12-06 05:15:51.457436] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device 
capacity: 5120.00 MiB 00:28:13.383 [2024-12-06 05:15:51.457442] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:28:13.383 [2024-12-06 05:15:51.457448] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:28:13.383 [2024-12-06 05:15:51.457454] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:28:13.383 [2024-12-06 05:15:51.457459] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:28:13.383 [2024-12-06 05:15:51.457464] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:13.383 [2024-12-06 05:15:51.457470] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:28:13.383 [2024-12-06 05:15:51.457476] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.201 ms 00:28:13.383 [2024-12-06 05:15:51.457482] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:13.383 [2024-12-06 05:15:51.457548] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:13.383 [2024-12-06 05:15:51.457557] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:28:13.383 [2024-12-06 05:15:51.457565] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.053 ms 00:28:13.383 [2024-12-06 05:15:51.457571] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:13.383 [2024-12-06 05:15:51.457647] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:28:13.383 [2024-12-06 05:15:51.457654] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:28:13.383 [2024-12-06 05:15:51.458037] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:28:13.383 [2024-12-06 05:15:51.458048] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:13.383 [2024-12-06 05:15:51.458056] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:28:13.383 [2024-12-06 05:15:51.458062] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:28:13.383 [2024-12-06 05:15:51.458068] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:28:13.383 [2024-12-06 05:15:51.458074] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:28:13.383 [2024-12-06 05:15:51.458080] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:28:13.383 [2024-12-06 05:15:51.458086] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:13.383 [2024-12-06 05:15:51.458092] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:28:13.383 [2024-12-06 05:15:51.458098] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:28:13.383 [2024-12-06 05:15:51.458103] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:13.383 [2024-12-06 05:15:51.458109] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:28:13.383 [2024-12-06 05:15:51.458115] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:28:13.383 [2024-12-06 05:15:51.458121] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:13.384 [2024-12-06 05:15:51.458128] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:28:13.384 [2024-12-06 05:15:51.458133] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:28:13.384 [2024-12-06 05:15:51.458145] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:13.384 [2024-12-06 05:15:51.458151] 
ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:28:13.384 [2024-12-06 05:15:51.458157] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:28:13.384 [2024-12-06 05:15:51.458162] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:28:13.384 [2024-12-06 05:15:51.458168] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:28:13.384 [2024-12-06 05:15:51.458174] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:28:13.384 [2024-12-06 05:15:51.458180] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:28:13.384 [2024-12-06 05:15:51.458185] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:28:13.384 [2024-12-06 05:15:51.458191] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:28:13.384 [2024-12-06 05:15:51.458197] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:28:13.384 [2024-12-06 05:15:51.458202] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:28:13.384 [2024-12-06 05:15:51.458208] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:28:13.384 [2024-12-06 05:15:51.458213] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:28:13.384 [2024-12-06 05:15:51.458217] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:28:13.384 [2024-12-06 05:15:51.458222] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:28:13.384 [2024-12-06 05:15:51.458227] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:13.384 [2024-12-06 05:15:51.458233] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:28:13.384 [2024-12-06 05:15:51.458238] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:28:13.384 [2024-12-06 05:15:51.458242] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:13.384 [2024-12-06 05:15:51.458247] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:28:13.384 [2024-12-06 05:15:51.458253] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:28:13.384 [2024-12-06 05:15:51.458257] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:13.384 [2024-12-06 05:15:51.458263] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:28:13.384 [2024-12-06 05:15:51.458267] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:28:13.384 [2024-12-06 05:15:51.458272] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:13.384 [2024-12-06 05:15:51.458277] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:28:13.384 [2024-12-06 05:15:51.458284] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:28:13.384 [2024-12-06 05:15:51.458289] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:28:13.384 [2024-12-06 05:15:51.458294] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:13.384 [2024-12-06 05:15:51.458303] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:28:13.384 [2024-12-06 05:15:51.458314] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:28:13.384 [2024-12-06 05:15:51.458320] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:28:13.384 [2024-12-06 05:15:51.458326] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:28:13.384 [2024-12-06 05:15:51.458331] 
ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:28:13.384 [2024-12-06 05:15:51.458336] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:28:13.384 [2024-12-06 05:15:51.458342] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:28:13.384 [2024-12-06 05:15:51.458349] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:13.384 [2024-12-06 05:15:51.458357] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:28:13.384 [2024-12-06 05:15:51.458362] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:28:13.384 [2024-12-06 05:15:51.458367] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:28:13.384 [2024-12-06 05:15:51.458373] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:28:13.384 [2024-12-06 05:15:51.458378] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:28:13.384 [2024-12-06 05:15:51.458383] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:28:13.384 [2024-12-06 05:15:51.458390] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:28:13.384 [2024-12-06 05:15:51.458395] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:28:13.384 [2024-12-06 05:15:51.458401] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:28:13.384 [2024-12-06 05:15:51.458406] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:28:13.384 [2024-12-06 05:15:51.458411] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:28:13.384 [2024-12-06 05:15:51.458418] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:28:13.384 [2024-12-06 05:15:51.458423] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:28:13.384 [2024-12-06 05:15:51.458428] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:28:13.384 [2024-12-06 05:15:51.458434] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:28:13.384 [2024-12-06 05:15:51.458439] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:13.384 [2024-12-06 05:15:51.458445] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:28:13.384 [2024-12-06 05:15:51.458450] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region 
type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:28:13.384 [2024-12-06 05:15:51.458456] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:28:13.384 [2024-12-06 05:15:51.458461] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:28:13.384 [2024-12-06 05:15:51.458471] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:13.384 [2024-12-06 05:15:51.458476] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:28:13.384 [2024-12-06 05:15:51.458481] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.875 ms 00:28:13.384 [2024-12-06 05:15:51.458487] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:13.384 [2024-12-06 05:15:51.458522] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] NV cache data region needs scrubbing, this may take a while. 00:28:13.384 [2024-12-06 05:15:51.458536] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 5 chunks 00:28:16.686 [2024-12-06 05:15:54.805504] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:16.686 [2024-12-06 05:15:54.805586] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:28:16.686 [2024-12-06 05:15:54.805604] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3346.967 ms 00:28:16.686 [2024-12-06 05:15:54.805614] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:16.687 [2024-12-06 05:15:54.818753] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:16.687 [2024-12-06 05:15:54.819041] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:28:16.687 [2024-12-06 05:15:54.819064] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.971 ms 00:28:16.687 [2024-12-06 05:15:54.819074] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:16.687 [2024-12-06 05:15:54.819132] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:16.687 [2024-12-06 05:15:54.819143] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:28:16.687 [2024-12-06 05:15:54.819152] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.017 ms 00:28:16.687 [2024-12-06 05:15:54.819170] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:16.687 [2024-12-06 05:15:54.841362] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:16.687 [2024-12-06 05:15:54.841467] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:28:16.687 [2024-12-06 05:15:54.841495] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 22.130 ms 00:28:16.687 [2024-12-06 05:15:54.841513] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:16.687 [2024-12-06 05:15:54.841625] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:16.687 [2024-12-06 05:15:54.841649] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:28:16.687 [2024-12-06 05:15:54.841746] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.014 ms 00:28:16.687 [2024-12-06 05:15:54.841766] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:16.687 [2024-12-06 05:15:54.842528] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:16.687 [2024-12-06 05:15:54.842600] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:28:16.687 [2024-12-06 05:15:54.842621] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.602 ms 00:28:16.687 [2024-12-06 05:15:54.842639] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:16.687 [2024-12-06 05:15:54.842773] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:16.687 [2024-12-06 05:15:54.842796] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:28:16.687 [2024-12-06 05:15:54.842814] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.044 ms 00:28:16.687 [2024-12-06 05:15:54.842830] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:16.687 [2024-12-06 05:15:54.853854] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:16.687 [2024-12-06 05:15:54.853929] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:28:16.687 [2024-12-06 05:15:54.853945] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 10.977 ms 00:28:16.687 [2024-12-06 05:15:54.853953] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:16.687 [2024-12-06 05:15:54.857887] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 0, empty chunks = 4 00:28:16.687 [2024-12-06 05:15:54.857937] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:28:16.687 [2024-12-06 05:15:54.857956] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:16.687 [2024-12-06 05:15:54.857972] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore NV cache metadata 00:28:16.687 [2024-12-06 05:15:54.857981] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.886 ms 00:28:16.687 [2024-12-06 05:15:54.857989] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:16.687 [2024-12-06 05:15:54.862905] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:16.687 [2024-12-06 05:15:54.862946] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid map metadata 00:28:16.687 [2024-12-06 05:15:54.862964] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.862 ms 00:28:16.687 [2024-12-06 05:15:54.862972] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:16.687 [2024-12-06 05:15:54.865507] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:16.687 [2024-12-06 05:15:54.865552] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore band info metadata 00:28:16.687 [2024-12-06 05:15:54.865562] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.480 ms 00:28:16.687 [2024-12-06 05:15:54.865569] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:16.687 [2024-12-06 05:15:54.868098] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:16.687 [2024-12-06 05:15:54.868297] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore trim metadata 00:28:16.687 [2024-12-06 05:15:54.868317] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.480 ms 00:28:16.687 [2024-12-06 05:15:54.868325] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:16.687 [2024-12-06 05:15:54.868835] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:16.687 [2024-12-06 05:15:54.868858] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:28:16.687 [2024-12-06 
05:15:54.868870] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.426 ms 00:28:16.687 [2024-12-06 05:15:54.868879] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:16.687 [2024-12-06 05:15:54.892964] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:16.687 [2024-12-06 05:15:54.893030] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:28:16.687 [2024-12-06 05:15:54.893049] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 24.063 ms 00:28:16.687 [2024-12-06 05:15:54.893058] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:16.687 [2024-12-06 05:15:54.901196] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:28:16.687 [2024-12-06 05:15:54.902229] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:16.687 [2024-12-06 05:15:54.902269] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:28:16.687 [2024-12-06 05:15:54.902280] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 9.116 ms 00:28:16.687 [2024-12-06 05:15:54.902293] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:16.687 [2024-12-06 05:15:54.902375] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:16.687 [2024-12-06 05:15:54.902387] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P 00:28:16.687 [2024-12-06 05:15:54.902396] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.014 ms 00:28:16.687 [2024-12-06 05:15:54.902404] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:16.687 [2024-12-06 05:15:54.902461] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:16.687 [2024-12-06 05:15:54.902471] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:28:16.687 [2024-12-06 05:15:54.902481] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.024 ms 00:28:16.687 [2024-12-06 05:15:54.902489] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:16.687 [2024-12-06 05:15:54.902517] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:16.687 [2024-12-06 05:15:54.902526] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:28:16.687 [2024-12-06 05:15:54.902535] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:28:16.687 [2024-12-06 05:15:54.902543] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:16.687 [2024-12-06 05:15:54.902580] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:28:16.687 [2024-12-06 05:15:54.902591] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:16.687 [2024-12-06 05:15:54.902599] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:28:16.687 [2024-12-06 05:15:54.902609] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.013 ms 00:28:16.687 [2024-12-06 05:15:54.902616] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:16.687 [2024-12-06 05:15:54.907151] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:16.687 [2024-12-06 05:15:54.907205] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:28:16.687 [2024-12-06 05:15:54.907215] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.515 ms 00:28:16.688 [2024-12-06 05:15:54.907223] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl] status: 0 00:28:16.688 [2024-12-06 05:15:54.907309] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:16.688 [2024-12-06 05:15:54.907319] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:28:16.688 [2024-12-06 05:15:54.907329] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.039 ms 00:28:16.688 [2024-12-06 05:15:54.907336] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:16.688 [2024-12-06 05:15:54.908517] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 3460.429 ms, result 0 00:28:16.949 [2024-12-06 05:15:54.923566] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:16.949 [2024-12-06 05:15:54.939562] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:28:16.949 [2024-12-06 05:15:54.947701] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:28:16.949 05:15:55 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:16.949 05:15:55 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # return 0 00:28:16.949 05:15:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:28:16.949 05:15:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@95 -- # return 0 00:28:16.949 05:15:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:28:17.209 [2024-12-06 05:15:55.351978] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:17.209 [2024-12-06 05:15:55.352039] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:28:17.209 [2024-12-06 05:15:55.352055] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.009 ms 00:28:17.209 [2024-12-06 05:15:55.352064] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:17.209 [2024-12-06 05:15:55.352087] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:17.209 [2024-12-06 05:15:55.352097] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:28:17.209 [2024-12-06 05:15:55.352114] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:28:17.209 [2024-12-06 05:15:55.352122] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:17.209 [2024-12-06 05:15:55.352149] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:17.209 [2024-12-06 05:15:55.352157] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:28:17.209 [2024-12-06 05:15:55.352166] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:28:17.209 [2024-12-06 05:15:55.352173] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:17.209 [2024-12-06 05:15:55.352233] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.254 ms, result 0 00:28:17.209 true 00:28:17.209 05:15:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:28:17.470 { 00:28:17.470 "name": "ftl", 00:28:17.470 "properties": [ 00:28:17.470 { 00:28:17.470 "name": "superblock_version", 00:28:17.470 "value": 5, 00:28:17.470 "read-only": true 00:28:17.470 }, 00:28:17.470 { 
00:28:17.470 "name": "base_device", 00:28:17.470 "bands": [ 00:28:17.470 { 00:28:17.470 "id": 0, 00:28:17.470 "state": "CLOSED", 00:28:17.470 "validity": 1.0 00:28:17.470 }, 00:28:17.470 { 00:28:17.470 "id": 1, 00:28:17.470 "state": "CLOSED", 00:28:17.470 "validity": 1.0 00:28:17.470 }, 00:28:17.470 { 00:28:17.470 "id": 2, 00:28:17.470 "state": "CLOSED", 00:28:17.470 "validity": 0.007843137254901933 00:28:17.470 }, 00:28:17.470 { 00:28:17.470 "id": 3, 00:28:17.470 "state": "FREE", 00:28:17.470 "validity": 0.0 00:28:17.470 }, 00:28:17.470 { 00:28:17.470 "id": 4, 00:28:17.470 "state": "FREE", 00:28:17.470 "validity": 0.0 00:28:17.470 }, 00:28:17.470 { 00:28:17.470 "id": 5, 00:28:17.470 "state": "FREE", 00:28:17.470 "validity": 0.0 00:28:17.470 }, 00:28:17.470 { 00:28:17.470 "id": 6, 00:28:17.470 "state": "FREE", 00:28:17.470 "validity": 0.0 00:28:17.470 }, 00:28:17.470 { 00:28:17.470 "id": 7, 00:28:17.470 "state": "FREE", 00:28:17.470 "validity": 0.0 00:28:17.470 }, 00:28:17.470 { 00:28:17.470 "id": 8, 00:28:17.470 "state": "FREE", 00:28:17.470 "validity": 0.0 00:28:17.470 }, 00:28:17.470 { 00:28:17.470 "id": 9, 00:28:17.470 "state": "FREE", 00:28:17.470 "validity": 0.0 00:28:17.470 }, 00:28:17.470 { 00:28:17.470 "id": 10, 00:28:17.470 "state": "FREE", 00:28:17.470 "validity": 0.0 00:28:17.470 }, 00:28:17.470 { 00:28:17.470 "id": 11, 00:28:17.470 "state": "FREE", 00:28:17.470 "validity": 0.0 00:28:17.470 }, 00:28:17.470 { 00:28:17.470 "id": 12, 00:28:17.470 "state": "FREE", 00:28:17.470 "validity": 0.0 00:28:17.470 }, 00:28:17.470 { 00:28:17.470 "id": 13, 00:28:17.470 "state": "FREE", 00:28:17.470 "validity": 0.0 00:28:17.470 }, 00:28:17.470 { 00:28:17.470 "id": 14, 00:28:17.470 "state": "FREE", 00:28:17.470 "validity": 0.0 00:28:17.470 }, 00:28:17.470 { 00:28:17.470 "id": 15, 00:28:17.470 "state": "FREE", 00:28:17.470 "validity": 0.0 00:28:17.470 }, 00:28:17.470 { 00:28:17.470 "id": 16, 00:28:17.470 "state": "FREE", 00:28:17.470 "validity": 0.0 00:28:17.470 }, 00:28:17.470 { 00:28:17.470 "id": 17, 00:28:17.470 "state": "FREE", 00:28:17.470 "validity": 0.0 00:28:17.470 } 00:28:17.470 ], 00:28:17.470 "read-only": true 00:28:17.470 }, 00:28:17.470 { 00:28:17.470 "name": "cache_device", 00:28:17.470 "type": "bdev", 00:28:17.470 "chunks": [ 00:28:17.470 { 00:28:17.470 "id": 0, 00:28:17.470 "state": "INACTIVE", 00:28:17.470 "utilization": 0.0 00:28:17.470 }, 00:28:17.470 { 00:28:17.470 "id": 1, 00:28:17.470 "state": "OPEN", 00:28:17.470 "utilization": 0.0 00:28:17.470 }, 00:28:17.470 { 00:28:17.470 "id": 2, 00:28:17.470 "state": "OPEN", 00:28:17.470 "utilization": 0.0 00:28:17.470 }, 00:28:17.470 { 00:28:17.470 "id": 3, 00:28:17.470 "state": "FREE", 00:28:17.470 "utilization": 0.0 00:28:17.470 }, 00:28:17.470 { 00:28:17.470 "id": 4, 00:28:17.470 "state": "FREE", 00:28:17.470 "utilization": 0.0 00:28:17.470 } 00:28:17.470 ], 00:28:17.470 "read-only": true 00:28:17.470 }, 00:28:17.470 { 00:28:17.470 "name": "verbose_mode", 00:28:17.470 "value": true, 00:28:17.470 "unit": "", 00:28:17.470 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:28:17.470 }, 00:28:17.470 { 00:28:17.470 "name": "prep_upgrade_on_shutdown", 00:28:17.470 "value": false, 00:28:17.470 "unit": "", 00:28:17.470 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:28:17.470 } 00:28:17.470 ] 00:28:17.470 } 00:28:17.470 05:15:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # ftl_get_properties 00:28:17.470 05:15:55 
ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:28:17.470 05:15:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:28:17.731 05:15:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # used=0 00:28:17.731 05:15:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@83 -- # [[ 0 -ne 0 ]] 00:28:17.731 05:15:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # ftl_get_properties 00:28:17.731 05:15:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # jq '[.properties[] | select(.name == "bands") | .bands[] | select(.state == "OPENED")] | length' 00:28:17.731 05:15:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:28:17.990 Validate MD5 checksum, iteration 1 00:28:17.990 05:15:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # opened=0 00:28:17.990 05:15:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@90 -- # [[ 0 -ne 0 ]] 00:28:17.990 05:15:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@111 -- # test_validate_checksum 00:28:17.990 05:15:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:28:17.990 05:15:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:28:17.990 05:15:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:28:17.990 05:15:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:28:17.990 05:15:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:28:17.990 05:15:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:28:17.990 05:15:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:28:17.990 05:15:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:28:17.990 05:15:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:28:17.990 05:15:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:28:17.990 [2024-12-06 05:15:56.104743] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
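The two jq probes above (upgrade_shutdown.sh@82 and @89) count leftover state after the restart: cache chunks with non-zero utilization and bands reported OPENED. A minimal re-run of those checks, assuming the same target and with both filters copied verbatim from the trace; note that in the property dump above the bands array sits under the property named "base_device", so the second filter, which keys on a property named "bands", counts 0 either way:

    props=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl)

    # chunks of the cache_device still holding data
    used=$(jq '[.properties[] | select(.name == "cache_device")
                | .chunks[] | select(.utilization != 0.0)] | length' <<< "$props")

    # bands left in the OPENED state (filter as in the trace)
    opened=$(jq '[.properties[] | select(.name == "bands")
                  | .bands[] | select(.state == "OPENED")] | length' <<< "$props")

    # after prep_upgrade_on_shutdown both counts must be 0
    [[ $used -eq 0 && $opened -eq 0 ]] || exit 1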
00:28:17.990 [2024-12-06 05:15:56.105122] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92848 ] 00:28:18.248 [2024-12-06 05:15:56.242838] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:18.248 [2024-12-06 05:15:56.300033] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:28:19.637  [2024-12-06T05:15:58.813Z] Copying: 503/1024 [MB] (503 MBps) [2024-12-06T05:15:59.385Z] Copying: 1024/1024 [MB] (average 530 MBps) 00:28:21.153 00:28:21.153 05:15:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:28:21.153 05:15:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:28:23.699 05:16:01 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:28:23.699 05:16:01 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=b768abbc159549b4f6b8673c2a228f5c 00:28:23.699 05:16:01 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ b768abbc159549b4f6b8673c2a228f5c != \b\7\6\8\a\b\b\c\1\5\9\5\4\9\b\4\f\6\b\8\6\7\3\c\2\a\2\2\8\f\5\c ]] 00:28:23.699 05:16:01 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:28:23.699 05:16:01 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:28:23.699 05:16:01 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:28:23.699 Validate MD5 checksum, iteration 2 00:28:23.699 05:16:01 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:28:23.699 05:16:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:28:23.699 05:16:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:28:23.699 05:16:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:28:23.699 05:16:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:28:23.699 05:16:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:28:23.699 [2024-12-06 05:16:01.562145] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
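Each validation pass above follows the same shape: spdk_dd copies 1 GiB (1024 blocks of 1048576 bytes at queue depth 2) out of the NVMe/TCP-attached namespace ftln1 into a scratch file, md5sum hashes the file, and the script compares the digest before advancing --skip by 1024 blocks for the next slice. A condensed sketch of that loop (flags and paths copied from the trace; the expected-digest array is illustrative, since the real test records the reference sums when the data is first written):

  dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
  file=/home/vagrant/spdk_repo/spdk/test/ftl/file
  expected=(b768abbc159549b4f6b8673c2a228f5c 1510113f3ae6e7dbb8996c0462948b82)

  skip=0
  for i in 0 1; do
    echo "Validate MD5 checksum, iteration $((i + 1))"
    # Read one 1 GiB slice of ftln1 into the scratch file.
    "$dd_bin" '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock \
      --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json \
      --ib=ftln1 --of="$file" --bs=1048576 --count=1024 --qd=2 --skip=$skip
    sum=$(md5sum "$file" | cut -f1 -d' ')
    [[ $sum == "${expected[$i]}" ]] || exit 1
    skip=$((skip + 1024))
  done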
00:28:23.699 [2024-12-06 05:16:01.562374] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92904 ] 00:28:23.699 [2024-12-06 05:16:01.692679] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:23.699 [2024-12-06 05:16:01.733083] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:28:25.083  [2024-12-06T05:16:03.887Z] Copying: 628/1024 [MB] (628 MBps) [2024-12-06T05:16:08.095Z] Copying: 1024/1024 [MB] (average 628 MBps) 00:28:29.863 00:28:29.863 05:16:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:28:29.863 05:16:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:28:31.771 05:16:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:28:31.771 05:16:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=1510113f3ae6e7dbb8996c0462948b82 00:28:31.771 05:16:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 1510113f3ae6e7dbb8996c0462948b82 != \1\5\1\0\1\1\3\f\3\a\e\6\e\7\d\b\b\8\9\9\6\c\0\4\6\2\9\4\8\b\8\2 ]] 00:28:31.771 05:16:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:28:31.771 05:16:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:28:31.771 05:16:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@114 -- # tcp_target_shutdown_dirty 00:28:31.771 05:16:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@137 -- # [[ -n 92774 ]] 00:28:31.771 05:16:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@138 -- # kill -9 92774 00:28:31.771 05:16:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@139 -- # unset spdk_tgt_pid 00:28:31.771 05:16:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@115 -- # tcp_target_setup 00:28:31.771 05:16:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:28:31.771 05:16:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:28:31.771 05:16:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:28:31.771 05:16:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=92993 00:28:31.771 05:16:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:28:31.771 05:16:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:28:31.771 05:16:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 92993 00:28:31.771 05:16:09 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@831 -- # '[' -z 92993 ']' 00:28:31.771 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:31.772 05:16:09 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:31.772 05:16:09 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:31.772 05:16:09 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
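This is the crux of the test. With both slices verified, tcp_target_shutdown_dirty SIGKILLs the target (pid 92774) so the FTL shutdown path never runs and no clean-state marker is written; tcp_target_setup then relaunches spdk_tgt from the saved tgt.json and waits for its RPC socket. Everything traced below is FTL recovering from that abrupt death. A minimal equivalent of the kill-and-relaunch step (binary and config paths from the trace; the polling loop is a stand-in for the waitforlisten helper, using the standard rpc_get_methods RPC):

  # Dirty shutdown: SIGKILL gives FTL no chance to persist a clean state.
  kill -9 "$spdk_tgt_pid"
  unset spdk_tgt_pid

  # Relaunch the target from the saved configuration.
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' \
    --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json &
  spdk_tgt_pid=$!

  # Stand-in for waitforlisten: poll until the new target answers RPCs
  # on the default socket /var/tmp/spdk.sock.
  until /home/vagrant/spdk_repo/spdk/scripts/rpc.py rpc_get_methods >/dev/null 2>&1; do
    sleep 0.5
  done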
00:28:31.772 05:16:09 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:31.772 05:16:09 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:28:31.772 [2024-12-06 05:16:09.553000] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:28:31.772 [2024-12-06 05:16:09.553122] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92993 ] 00:28:31.772 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 830: 92774 Killed $spdk_tgt_bin "--cpumask=$spdk_tgt_cpumask" --config="$spdk_tgt_cnfg" 00:28:31.772 [2024-12-06 05:16:09.685039] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:31.772 [2024-12-06 05:16:09.725558] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:28:32.035 [2024-12-06 05:16:10.026210] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:28:32.035 [2024-12-06 05:16:10.026272] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:28:32.035 [2024-12-06 05:16:10.168642] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:32.035 [2024-12-06 05:16:10.168695] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:28:32.035 [2024-12-06 05:16:10.168709] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:28:32.035 [2024-12-06 05:16:10.168715] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:32.035 [2024-12-06 05:16:10.168757] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:32.035 [2024-12-06 05:16:10.168765] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:28:32.035 [2024-12-06 05:16:10.168772] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.024 ms 00:28:32.035 [2024-12-06 05:16:10.168777] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:32.035 [2024-12-06 05:16:10.168799] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:28:32.035 [2024-12-06 05:16:10.168980] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:28:32.035 [2024-12-06 05:16:10.168991] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:32.035 [2024-12-06 05:16:10.168998] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:28:32.035 [2024-12-06 05:16:10.169010] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.199 ms 00:28:32.035 [2024-12-06 05:16:10.169015] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:32.035 [2024-12-06 05:16:10.169245] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:28:32.035 [2024-12-06 05:16:10.173447] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:32.035 [2024-12-06 05:16:10.173565] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:28:32.035 [2024-12-06 05:16:10.173610] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.202 ms 00:28:32.035 [2024-12-06 05:16:10.173633] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:32.035 [2024-12-06 05:16:10.174602] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] 
Action 00:28:32.035 [2024-12-06 05:16:10.174707] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:28:32.035 [2024-12-06 05:16:10.174750] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.026 ms 00:28:32.035 [2024-12-06 05:16:10.174769] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:32.035 [2024-12-06 05:16:10.174998] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:32.035 [2024-12-06 05:16:10.175050] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:28:32.035 [2024-12-06 05:16:10.175073] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.178 ms 00:28:32.035 [2024-12-06 05:16:10.175088] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:32.035 [2024-12-06 05:16:10.175149] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:32.035 [2024-12-06 05:16:10.175167] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:28:32.035 [2024-12-06 05:16:10.175182] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.017 ms 00:28:32.035 [2024-12-06 05:16:10.175197] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:32.035 [2024-12-06 05:16:10.175229] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:32.035 [2024-12-06 05:16:10.175284] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:28:32.035 [2024-12-06 05:16:10.175302] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.008 ms 00:28:32.035 [2024-12-06 05:16:10.175317] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:32.035 [2024-12-06 05:16:10.175350] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:28:32.035 [2024-12-06 05:16:10.176135] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:32.035 [2024-12-06 05:16:10.176217] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:28:32.035 [2024-12-06 05:16:10.176257] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.792 ms 00:28:32.035 [2024-12-06 05:16:10.176274] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:32.035 [2024-12-06 05:16:10.176314] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:32.035 [2024-12-06 05:16:10.176426] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:28:32.035 [2024-12-06 05:16:10.176446] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:28:32.035 [2024-12-06 05:16:10.176467] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:32.035 [2024-12-06 05:16:10.176530] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:28:32.035 [2024-12-06 05:16:10.176560] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x150 bytes 00:28:32.035 [2024-12-06 05:16:10.176606] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:28:32.035 [2024-12-06 05:16:10.176679] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x190 bytes 00:28:32.035 [2024-12-06 05:16:10.176824] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:28:32.035 [2024-12-06 05:16:10.176854] upgrade/ftl_sb_v5.c: 
101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:28:32.035 [2024-12-06 05:16:10.176885] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:28:32.035 [2024-12-06 05:16:10.176941] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:28:32.036 [2024-12-06 05:16:10.176961] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:28:32.036 [2024-12-06 05:16:10.176968] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:28:32.036 [2024-12-06 05:16:10.176978] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:28:32.036 [2024-12-06 05:16:10.176984] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:28:32.036 [2024-12-06 05:16:10.176990] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:28:32.036 [2024-12-06 05:16:10.176997] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:32.036 [2024-12-06 05:16:10.177004] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:28:32.036 [2024-12-06 05:16:10.177010] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.470 ms 00:28:32.036 [2024-12-06 05:16:10.177018] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:32.036 [2024-12-06 05:16:10.177102] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:32.036 [2024-12-06 05:16:10.177111] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:28:32.036 [2024-12-06 05:16:10.177118] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.053 ms 00:28:32.036 [2024-12-06 05:16:10.177128] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:32.036 [2024-12-06 05:16:10.177204] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:28:32.036 [2024-12-06 05:16:10.177213] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:28:32.036 [2024-12-06 05:16:10.177220] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:28:32.036 [2024-12-06 05:16:10.177227] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:32.036 [2024-12-06 05:16:10.177234] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:28:32.036 [2024-12-06 05:16:10.177239] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:28:32.036 [2024-12-06 05:16:10.177245] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:28:32.036 [2024-12-06 05:16:10.177251] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:28:32.036 [2024-12-06 05:16:10.177256] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:28:32.036 [2024-12-06 05:16:10.177261] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:32.036 [2024-12-06 05:16:10.177266] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:28:32.036 [2024-12-06 05:16:10.177274] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:28:32.036 [2024-12-06 05:16:10.177279] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:32.036 [2024-12-06 05:16:10.177285] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:28:32.036 [2024-12-06 05:16:10.177290] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 
00:28:32.036 [2024-12-06 05:16:10.177301] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:32.036 [2024-12-06 05:16:10.177307] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:28:32.036 [2024-12-06 05:16:10.177313] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:28:32.036 [2024-12-06 05:16:10.177318] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:32.036 [2024-12-06 05:16:10.177324] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:28:32.036 [2024-12-06 05:16:10.177329] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:28:32.036 [2024-12-06 05:16:10.177335] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:28:32.036 [2024-12-06 05:16:10.177340] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:28:32.036 [2024-12-06 05:16:10.177345] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:28:32.036 [2024-12-06 05:16:10.177350] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:28:32.036 [2024-12-06 05:16:10.177355] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:28:32.036 [2024-12-06 05:16:10.177360] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:28:32.036 [2024-12-06 05:16:10.177366] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:28:32.036 [2024-12-06 05:16:10.177371] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:28:32.036 [2024-12-06 05:16:10.177376] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:28:32.036 [2024-12-06 05:16:10.177381] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:28:32.036 [2024-12-06 05:16:10.177388] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:28:32.036 [2024-12-06 05:16:10.177393] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:28:32.036 [2024-12-06 05:16:10.177398] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:32.036 [2024-12-06 05:16:10.177404] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:28:32.036 [2024-12-06 05:16:10.177408] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:28:32.036 [2024-12-06 05:16:10.177413] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:32.036 [2024-12-06 05:16:10.177418] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:28:32.036 [2024-12-06 05:16:10.177424] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:28:32.036 [2024-12-06 05:16:10.177429] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:32.036 [2024-12-06 05:16:10.177434] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:28:32.036 [2024-12-06 05:16:10.177439] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:28:32.036 [2024-12-06 05:16:10.177444] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:32.036 [2024-12-06 05:16:10.177451] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:28:32.036 [2024-12-06 05:16:10.177458] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:28:32.036 [2024-12-06 05:16:10.177465] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:28:32.036 [2024-12-06 05:16:10.177471] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 
0.12 MiB 00:28:32.036 [2024-12-06 05:16:10.177479] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:28:32.036 [2024-12-06 05:16:10.177484] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:28:32.036 [2024-12-06 05:16:10.177489] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:28:32.036 [2024-12-06 05:16:10.177495] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:28:32.036 [2024-12-06 05:16:10.177500] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:28:32.036 [2024-12-06 05:16:10.177506] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:28:32.036 [2024-12-06 05:16:10.177513] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:28:32.036 [2024-12-06 05:16:10.177523] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:32.036 [2024-12-06 05:16:10.177530] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:28:32.036 [2024-12-06 05:16:10.177535] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:28:32.036 [2024-12-06 05:16:10.177541] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:28:32.036 [2024-12-06 05:16:10.177546] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:28:32.036 [2024-12-06 05:16:10.177552] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:28:32.036 [2024-12-06 05:16:10.177557] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:28:32.036 [2024-12-06 05:16:10.177563] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:28:32.036 [2024-12-06 05:16:10.177568] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:28:32.036 [2024-12-06 05:16:10.177576] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:28:32.036 [2024-12-06 05:16:10.177583] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:28:32.036 [2024-12-06 05:16:10.177589] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:28:32.036 [2024-12-06 05:16:10.177594] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:28:32.036 [2024-12-06 05:16:10.177600] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:28:32.036 [2024-12-06 05:16:10.177605] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:28:32.036 [2024-12-06 05:16:10.177611] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata 
layout - base dev: 00:28:32.036 [2024-12-06 05:16:10.177618] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:32.036 [2024-12-06 05:16:10.177625] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:28:32.036 [2024-12-06 05:16:10.177631] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:28:32.036 [2024-12-06 05:16:10.177642] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:28:32.036 [2024-12-06 05:16:10.177649] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:28:32.036 [2024-12-06 05:16:10.177656] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:32.036 [2024-12-06 05:16:10.177835] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:28:32.036 [2024-12-06 05:16:10.177863] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.504 ms 00:28:32.036 [2024-12-06 05:16:10.177880] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:32.036 [2024-12-06 05:16:10.186381] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:32.036 [2024-12-06 05:16:10.186471] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:28:32.036 [2024-12-06 05:16:10.186514] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 8.440 ms 00:28:32.036 [2024-12-06 05:16:10.186532] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:32.036 [2024-12-06 05:16:10.186572] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:32.036 [2024-12-06 05:16:10.186594] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:28:32.037 [2024-12-06 05:16:10.186610] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.010 ms 00:28:32.037 [2024-12-06 05:16:10.186625] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:32.037 [2024-12-06 05:16:10.204119] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:32.037 [2024-12-06 05:16:10.204243] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:28:32.037 [2024-12-06 05:16:10.204287] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 17.431 ms 00:28:32.037 [2024-12-06 05:16:10.204306] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:32.037 [2024-12-06 05:16:10.204354] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:32.037 [2024-12-06 05:16:10.204372] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:28:32.037 [2024-12-06 05:16:10.204388] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:28:32.037 [2024-12-06 05:16:10.204404] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:32.037 [2024-12-06 05:16:10.204503] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:32.037 [2024-12-06 05:16:10.204542] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:28:32.037 [2024-12-06 05:16:10.204559] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.037 ms 00:28:32.037 [2024-12-06 05:16:10.204577] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl] status: 0 00:28:32.037 [2024-12-06 05:16:10.204629] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:32.037 [2024-12-06 05:16:10.204698] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:28:32.037 [2024-12-06 05:16:10.204719] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.021 ms 00:28:32.037 [2024-12-06 05:16:10.204735] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:32.037 [2024-12-06 05:16:10.212160] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:32.037 [2024-12-06 05:16:10.212292] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:28:32.037 [2024-12-06 05:16:10.212356] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.395 ms 00:28:32.037 [2024-12-06 05:16:10.212371] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:32.037 [2024-12-06 05:16:10.212488] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:32.037 [2024-12-06 05:16:10.212504] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize recovery 00:28:32.037 [2024-12-06 05:16:10.212515] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:28:32.037 [2024-12-06 05:16:10.212526] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:32.037 [2024-12-06 05:16:10.218057] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:32.037 [2024-12-06 05:16:10.218104] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover band state 00:28:32.037 [2024-12-06 05:16:10.218117] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 5.507 ms 00:28:32.037 [2024-12-06 05:16:10.218128] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:32.037 [2024-12-06 05:16:10.219287] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:32.037 [2024-12-06 05:16:10.219383] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:28:32.037 [2024-12-06 05:16:10.219396] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.199 ms 00:28:32.037 [2024-12-06 05:16:10.219403] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:32.037 [2024-12-06 05:16:10.236252] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:32.037 [2024-12-06 05:16:10.236295] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:28:32.037 [2024-12-06 05:16:10.236308] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 16.826 ms 00:28:32.037 [2024-12-06 05:16:10.236317] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:32.037 [2024-12-06 05:16:10.236435] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=0 found seq_id=8 00:28:32.037 [2024-12-06 05:16:10.236524] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=1 found seq_id=9 00:28:32.037 [2024-12-06 05:16:10.236614] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=2 found seq_id=12 00:28:32.037 [2024-12-06 05:16:10.236719] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=3 found seq_id=0 00:28:32.037 [2024-12-06 05:16:10.236726] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:32.037 [2024-12-06 05:16:10.236733] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Preprocess P2L checkpoints 00:28:32.037 [2024-12-06 
05:16:10.236741] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.372 ms 00:28:32.037 [2024-12-06 05:16:10.236748] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:32.037 [2024-12-06 05:16:10.236779] mngt/ftl_mngt_recovery.c: 650:ftl_mngt_recovery_open_bands_p2l: *NOTICE*: [FTL][ftl] No more open bands to recover from P2L 00:28:32.037 [2024-12-06 05:16:10.236792] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:32.037 [2024-12-06 05:16:10.236798] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover open bands P2L 00:28:32.037 [2024-12-06 05:16:10.236804] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.013 ms 00:28:32.037 [2024-12-06 05:16:10.236811] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:32.037 [2024-12-06 05:16:10.239540] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:32.037 [2024-12-06 05:16:10.239570] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover chunk state 00:28:32.037 [2024-12-06 05:16:10.239579] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.710 ms 00:28:32.037 [2024-12-06 05:16:10.239585] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:32.037 [2024-12-06 05:16:10.240219] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:32.037 [2024-12-06 05:16:10.240250] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover max seq ID 00:28:32.037 [2024-12-06 05:16:10.240258] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.012 ms 00:28:32.037 [2024-12-06 05:16:10.240265] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:32.037 [2024-12-06 05:16:10.240309] ftl_nv_cache.c:2274:recover_open_chunk_prepare: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 262144, seq id 14 00:28:32.037 [2024-12-06 05:16:10.240468] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:32.037 [2024-12-06 05:16:10.240477] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, prepare 00:28:32.037 [2024-12-06 05:16:10.240484] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.160 ms 00:28:32.037 [2024-12-06 05:16:10.240490] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:32.609 [2024-12-06 05:16:10.771790] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:32.609 [2024-12-06 05:16:10.771856] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, read vss 00:28:32.609 [2024-12-06 05:16:10.771869] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 531.012 ms 00:28:32.609 [2024-12-06 05:16:10.771877] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:32.609 [2024-12-06 05:16:10.773423] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:32.609 [2024-12-06 05:16:10.773452] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, persist P2L map 00:28:32.609 [2024-12-06 05:16:10.773462] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.139 ms 00:28:32.609 [2024-12-06 05:16:10.773468] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:32.609 [2024-12-06 05:16:10.773905] ftl_nv_cache.c:2323:recover_open_chunk_close_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 262144, seq id 14 00:28:32.609 [2024-12-06 05:16:10.773925] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:32.609 [2024-12-06 05:16:10.773933] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, close chunk 00:28:32.609 [2024-12-06 05:16:10.773940] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.429 ms 00:28:32.609 [2024-12-06 05:16:10.773954] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:32.609 [2024-12-06 05:16:10.773983] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:32.609 [2024-12-06 05:16:10.774006] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, cleanup 00:28:32.609 [2024-12-06 05:16:10.774013] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:28:32.609 [2024-12-06 05:16:10.774024] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:32.609 [2024-12-06 05:16:10.774056] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Recover open chunk', duration = 533.745 ms, result 0 00:28:32.609 [2024-12-06 05:16:10.774095] ftl_nv_cache.c:2274:recover_open_chunk_prepare: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 524288, seq id 15 00:28:32.609 [2024-12-06 05:16:10.774175] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:32.609 [2024-12-06 05:16:10.774184] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, prepare 00:28:32.609 [2024-12-06 05:16:10.774191] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.082 ms 00:28:32.609 [2024-12-06 05:16:10.774198] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:33.554 [2024-12-06 05:16:11.582512] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:33.554 [2024-12-06 05:16:11.582591] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, read vss 00:28:33.554 [2024-12-06 05:16:11.582607] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 807.884 ms 00:28:33.554 [2024-12-06 05:16:11.582615] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:33.554 [2024-12-06 05:16:11.584408] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:33.554 [2024-12-06 05:16:11.584445] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, persist P2L map 00:28:33.554 [2024-12-06 05:16:11.584456] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.385 ms 00:28:33.554 [2024-12-06 05:16:11.584463] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:33.554 [2024-12-06 05:16:11.584989] ftl_nv_cache.c:2323:recover_open_chunk_close_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 524288, seq id 15 00:28:33.554 [2024-12-06 05:16:11.585013] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:33.554 [2024-12-06 05:16:11.585023] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, close chunk 00:28:33.554 [2024-12-06 05:16:11.585033] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.523 ms 00:28:33.554 [2024-12-06 05:16:11.585041] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:33.554 [2024-12-06 05:16:11.585071] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:33.554 [2024-12-06 05:16:11.585079] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, cleanup 00:28:33.554 [2024-12-06 05:16:11.585088] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:28:33.554 [2024-12-06 05:16:11.585097] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:33.554 [2024-12-06 
05:16:11.585132] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Recover open chunk', duration = 811.035 ms, result 0 00:28:33.554 [2024-12-06 05:16:11.585179] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 2, empty chunks = 2 00:28:33.554 [2024-12-06 05:16:11.585197] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:28:33.554 [2024-12-06 05:16:11.585208] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:33.554 [2024-12-06 05:16:11.585217] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover open chunks P2L 00:28:33.554 [2024-12-06 05:16:11.585225] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1344.911 ms 00:28:33.554 [2024-12-06 05:16:11.585237] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:33.554 [2024-12-06 05:16:11.585267] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:33.554 [2024-12-06 05:16:11.585276] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize recovery 00:28:33.554 [2024-12-06 05:16:11.585287] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:28:33.554 [2024-12-06 05:16:11.585295] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:33.554 [2024-12-06 05:16:11.594242] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:28:33.554 [2024-12-06 05:16:11.594461] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:33.554 [2024-12-06 05:16:11.594495] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:28:33.554 [2024-12-06 05:16:11.594568] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 9.151 ms 00:28:33.554 [2024-12-06 05:16:11.594591] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:33.554 [2024-12-06 05:16:11.595314] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:33.554 [2024-12-06 05:16:11.595412] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P from shared memory 00:28:33.554 [2024-12-06 05:16:11.595469] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.623 ms 00:28:33.554 [2024-12-06 05:16:11.595494] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:33.554 [2024-12-06 05:16:11.597766] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:33.554 [2024-12-06 05:16:11.597864] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid maps counters 00:28:33.554 [2024-12-06 05:16:11.597917] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.236 ms 00:28:33.554 [2024-12-06 05:16:11.597939] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:33.554 [2024-12-06 05:16:11.597998] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:33.554 [2024-12-06 05:16:11.598021] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Complete trim transaction 00:28:33.554 [2024-12-06 05:16:11.598041] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:28:33.554 [2024-12-06 05:16:11.598060] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:33.554 [2024-12-06 05:16:11.598179] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:33.554 [2024-12-06 05:16:11.598216] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:28:33.554 
[2024-12-06 05:16:11.598238] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.020 ms 00:28:33.554 [2024-12-06 05:16:11.598297] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:33.554 [2024-12-06 05:16:11.598338] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:33.554 [2024-12-06 05:16:11.598396] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:28:33.554 [2024-12-06 05:16:11.598420] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.010 ms 00:28:33.554 [2024-12-06 05:16:11.598835] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:33.555 [2024-12-06 05:16:11.598935] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:28:33.555 [2024-12-06 05:16:11.599046] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:33.555 [2024-12-06 05:16:11.599078] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:28:33.555 [2024-12-06 05:16:11.599100] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.112 ms 00:28:33.555 [2024-12-06 05:16:11.599120] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:33.555 [2024-12-06 05:16:11.599205] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:33.555 [2024-12-06 05:16:11.599220] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:28:33.555 [2024-12-06 05:16:11.599229] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.040 ms 00:28:33.555 [2024-12-06 05:16:11.599237] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:33.555 [2024-12-06 05:16:11.600267] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 1431.169 ms, result 0 00:28:33.555 [2024-12-06 05:16:11.615207] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:33.555 [2024-12-06 05:16:11.631213] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:28:33.555 [2024-12-06 05:16:11.639325] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:28:34.156 Validate MD5 checksum, iteration 1 00:28:34.156 05:16:12 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:34.156 05:16:12 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # return 0 00:28:34.156 05:16:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:28:34.156 05:16:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@95 -- # return 0 00:28:34.156 05:16:12 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@116 -- # test_validate_checksum 00:28:34.156 05:16:12 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:28:34.156 05:16:12 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:28:34.156 05:16:12 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:28:34.156 05:16:12 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:28:34.156 05:16:12 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:28:34.156 05:16:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:28:34.156 05:16:12 
ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:28:34.156 05:16:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:28:34.156 05:16:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:28:34.156 05:16:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:28:34.156 [2024-12-06 05:16:12.148448] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:28:34.156 [2024-12-06 05:16:12.148647] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93022 ] 00:28:34.156 [2024-12-06 05:16:12.280503] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:34.156 [2024-12-06 05:16:12.313640] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:28:35.544  [2024-12-06T05:16:14.717Z] Copying: 540/1024 [MB] (540 MBps) [2024-12-06T05:16:15.288Z] Copying: 1024/1024 [MB] (average 579 MBps) 00:28:37.056 00:28:37.056 05:16:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:28:37.056 05:16:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:28:38.962 05:16:17 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:28:38.962 05:16:17 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=b768abbc159549b4f6b8673c2a228f5c 00:28:38.962 05:16:17 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ b768abbc159549b4f6b8673c2a228f5c != \b\7\6\8\a\b\b\c\1\5\9\5\4\9\b\4\f\6\b\8\6\7\3\c\2\a\2\2\8\f\5\c ]] 00:28:38.962 05:16:17 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:28:38.962 05:16:17 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:28:38.962 05:16:17 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:28:38.962 Validate MD5 checksum, iteration 2 00:28:38.962 05:16:17 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:28:38.962 05:16:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:28:38.962 05:16:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:28:38.962 05:16:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:28:38.962 05:16:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:28:38.962 05:16:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:28:38.962 [2024-12-06 05:16:17.138209] Starting SPDK v24.09.1-pre git 
sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:28:38.962 [2024-12-06 05:16:17.138316] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93082 ] 00:28:39.221 [2024-12-06 05:16:17.271413] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:39.221 [2024-12-06 05:16:17.310905] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:28:40.605  [2024-12-06T05:16:19.518Z] Copying: 648/1024 [MB] (648 MBps) [2024-12-06T05:16:19.784Z] Copying: 1024/1024 [MB] (average 620 MBps) 00:28:41.552 00:28:41.552 05:16:19 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:28:41.552 05:16:19 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:28:44.095 05:16:21 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:28:44.095 05:16:21 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=1510113f3ae6e7dbb8996c0462948b82 00:28:44.095 05:16:21 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 1510113f3ae6e7dbb8996c0462948b82 != \1\5\1\0\1\1\3\f\3\a\e\6\e\7\d\b\b\8\9\9\6\c\0\4\6\2\9\4\8\b\8\2 ]] 00:28:44.095 05:16:21 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:28:44.095 05:16:21 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:28:44.095 05:16:21 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@118 -- # trap - SIGINT SIGTERM EXIT 00:28:44.095 05:16:21 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@119 -- # cleanup 00:28:44.095 05:16:21 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@11 -- # trap - SIGINT SIGTERM EXIT 00:28:44.095 05:16:21 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file 00:28:44.095 05:16:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@13 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file.md5 00:28:44.095 05:16:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@14 -- # tcp_cleanup 00:28:44.095 05:16:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@193 -- # tcp_target_cleanup 00:28:44.095 05:16:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@144 -- # tcp_target_shutdown 00:28:44.095 05:16:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@130 -- # [[ -n 92993 ]] 00:28:44.095 05:16:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@131 -- # killprocess 92993 00:28:44.095 05:16:22 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@950 -- # '[' -z 92993 ']' 00:28:44.095 05:16:22 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # kill -0 92993 00:28:44.095 05:16:22 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@955 -- # uname 00:28:44.095 05:16:22 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:44.095 05:16:22 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 92993 00:28:44.095 killing process with pid 92993 00:28:44.095 05:16:22 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:28:44.095 05:16:22 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:28:44.095 05:16:22 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@968 -- # echo 'killing process with pid 92993' 00:28:44.095 05:16:22 ftl.ftl_upgrade_shutdown -- 
common/autotest_common.sh@969 -- # kill 92993 00:28:44.095 05:16:22 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@974 -- # wait 92993 00:28:44.095 [2024-12-06 05:16:22.148067] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_000 00:28:44.095 [2024-12-06 05:16:22.151987] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:44.095 [2024-12-06 05:16:22.152020] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:28:44.095 [2024-12-06 05:16:22.152030] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:28:44.095 [2024-12-06 05:16:22.152036] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:44.095 [2024-12-06 05:16:22.152053] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:28:44.095 [2024-12-06 05:16:22.152425] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:44.095 [2024-12-06 05:16:22.152442] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:28:44.095 [2024-12-06 05:16:22.152449] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.362 ms 00:28:44.095 [2024-12-06 05:16:22.152455] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:44.095 [2024-12-06 05:16:22.152636] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:44.095 [2024-12-06 05:16:22.152643] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:28:44.095 [2024-12-06 05:16:22.152649] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.163 ms 00:28:44.095 [2024-12-06 05:16:22.152655] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:44.095 [2024-12-06 05:16:22.153887] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:44.096 [2024-12-06 05:16:22.153976] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:28:44.096 [2024-12-06 05:16:22.154032] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.087 ms 00:28:44.096 [2024-12-06 05:16:22.154049] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:44.096 [2024-12-06 05:16:22.155022] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:44.096 [2024-12-06 05:16:22.155102] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P trims 00:28:44.096 [2024-12-06 05:16:22.155149] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.888 ms 00:28:44.096 [2024-12-06 05:16:22.155167] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:44.096 [2024-12-06 05:16:22.156522] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:44.096 [2024-12-06 05:16:22.156611] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:28:44.096 [2024-12-06 05:16:22.156685] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.316 ms 00:28:44.096 [2024-12-06 05:16:22.156731] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:44.096 [2024-12-06 05:16:22.157972] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:44.096 [2024-12-06 05:16:22.158062] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata 00:28:44.096 [2024-12-06 05:16:22.158103] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.195 ms 00:28:44.096 [2024-12-06 05:16:22.158120] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl] status: 0 00:28:44.096 [2024-12-06 05:16:22.158200] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:44.096 [2024-12-06 05:16:22.158219] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:28:44.096 [2024-12-06 05:16:22.158235] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.045 ms 00:28:44.096 [2024-12-06 05:16:22.158282] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:44.096 [2024-12-06 05:16:22.159469] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:44.096 [2024-12-06 05:16:22.159498] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist band info metadata 00:28:44.096 [2024-12-06 05:16:22.159504] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.170 ms 00:28:44.096 [2024-12-06 05:16:22.159509] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:44.096 [2024-12-06 05:16:22.160570] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:44.096 [2024-12-06 05:16:22.160657] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist trim metadata 00:28:44.096 [2024-12-06 05:16:22.160681] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.035 ms 00:28:44.096 [2024-12-06 05:16:22.160686] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:44.096 [2024-12-06 05:16:22.161772] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:44.096 [2024-12-06 05:16:22.161795] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:28:44.096 [2024-12-06 05:16:22.161801] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.055 ms 00:28:44.096 [2024-12-06 05:16:22.161806] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:44.096 [2024-12-06 05:16:22.162806] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:44.096 [2024-12-06 05:16:22.162833] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:28:44.096 [2024-12-06 05:16:22.162839] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.954 ms 00:28:44.096 [2024-12-06 05:16:22.162845] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:44.096 [2024-12-06 05:16:22.162869] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:28:44.096 [2024-12-06 05:16:22.162881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:28:44.096 [2024-12-06 05:16:22.162893] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:28:44.096 [2024-12-06 05:16:22.162899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:28:44.096 [2024-12-06 05:16:22.162906] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:28:44.096 [2024-12-06 05:16:22.162912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:28:44.096 [2024-12-06 05:16:22.162918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:28:44.096 [2024-12-06 05:16:22.162923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:28:44.096 [2024-12-06 05:16:22.162929] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:28:44.096 
[2024-12-06 05:16:22.162935] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:28:44.096 [2024-12-06 05:16:22.162941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:28:44.096 [2024-12-06 05:16:22.162947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:28:44.096 [2024-12-06 05:16:22.162953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:28:44.096 [2024-12-06 05:16:22.162958] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:28:44.096 [2024-12-06 05:16:22.162964] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:28:44.096 [2024-12-06 05:16:22.162970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:28:44.096 [2024-12-06 05:16:22.162976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:28:44.096 [2024-12-06 05:16:22.162982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:28:44.096 [2024-12-06 05:16:22.162987] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:28:44.096 [2024-12-06 05:16:22.162994] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:28:44.096 [2024-12-06 05:16:22.162999] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: 6336acd9-9725-4447-9284-3cf0abd57840 00:28:44.096 [2024-12-06 05:16:22.163006] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:28:44.096 [2024-12-06 05:16:22.163011] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total writes: 320 00:28:44.096 [2024-12-06 05:16:22.163021] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 0 00:28:44.096 [2024-12-06 05:16:22.163027] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: inf 00:28:44.096 [2024-12-06 05:16:22.163032] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:28:44.096 [2024-12-06 05:16:22.163037] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:28:44.096 [2024-12-06 05:16:22.163044] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:28:44.096 [2024-12-06 05:16:22.163048] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:28:44.096 [2024-12-06 05:16:22.163053] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:28:44.096 [2024-12-06 05:16:22.163060] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:44.096 [2024-12-06 05:16:22.163069] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:28:44.096 [2024-12-06 05:16:22.163075] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.191 ms 00:28:44.096 [2024-12-06 05:16:22.163085] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:44.096 [2024-12-06 05:16:22.164450] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:44.096 [2024-12-06 05:16:22.164526] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P 00:28:44.096 [2024-12-06 05:16:22.164563] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.352 ms 00:28:44.096 [2024-12-06 05:16:22.164581] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 
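Note on the dump above: ftl_dev_dump_bands prints one fixed-format record per band ("Band N: <valid> / <total> wr_cnt: <n> state: <state>"), so the table can be pulled out of a saved log mechanically. A minimal sketch, assuming the console output was captured to a file (ftl.log is a placeholder name, not an artifact of this run):

  # Summarize FTL band validity from a captured autotest log.
  # Matches records like: "Band 3: 2048 / 261120 wr_cnt: 1 state: closed"
  grep -o 'Band [0-9]*: [0-9]* / [0-9]* wr_cnt: [0-9]* state: [a-z]*' ftl.log |
    awk '{ sub(":", "", $2); printf "band=%s valid=%s/%s wr_cnt=%s state=%s\n", $2, $3, $5, $7, $9 }'

In this run only bands 1-3 hold data (bands 1 and 2 fully valid, band 3 at 2048 of 261120 blocks), and with user writes at 0 against 320 total writes the write amplification factor is reported as "WAF: inf".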
00:28:44.096 [2024-12-06 05:16:22.164710] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:44.096 [2024-12-06 05:16:22.164790] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:28:44.096 [2024-12-06 05:16:22.164905] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.063 ms 00:28:44.096 [2024-12-06 05:16:22.164922] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:44.096 [2024-12-06 05:16:22.169465] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:44.096 [2024-12-06 05:16:22.169557] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:28:44.096 [2024-12-06 05:16:22.169597] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:44.096 [2024-12-06 05:16:22.169615] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:44.096 [2024-12-06 05:16:22.169647] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:44.096 [2024-12-06 05:16:22.169687] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:28:44.096 [2024-12-06 05:16:22.169728] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:44.096 [2024-12-06 05:16:22.169745] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:44.096 [2024-12-06 05:16:22.169805] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:44.096 [2024-12-06 05:16:22.169882] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:28:44.096 [2024-12-06 05:16:22.169901] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:44.096 [2024-12-06 05:16:22.169915] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:44.096 [2024-12-06 05:16:22.169963] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:44.096 [2024-12-06 05:16:22.169981] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:28:44.096 [2024-12-06 05:16:22.169996] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:44.096 [2024-12-06 05:16:22.170013] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:44.096 [2024-12-06 05:16:22.177633] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:44.096 [2024-12-06 05:16:22.178103] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:28:44.096 [2024-12-06 05:16:22.178144] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:44.096 [2024-12-06 05:16:22.178161] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:44.096 [2024-12-06 05:16:22.184464] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:44.096 [2024-12-06 05:16:22.184495] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:28:44.096 [2024-12-06 05:16:22.184507] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:44.096 [2024-12-06 05:16:22.184514] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:44.096 [2024-12-06 05:16:22.184546] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:44.096 [2024-12-06 05:16:22.184554] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:28:44.096 [2024-12-06 05:16:22.184560] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:44.096 [2024-12-06 05:16:22.184566] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:44.096 [2024-12-06 05:16:22.184606] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:44.096 [2024-12-06 05:16:22.184613] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:28:44.097 [2024-12-06 05:16:22.184623] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:44.097 [2024-12-06 05:16:22.184628] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:44.097 [2024-12-06 05:16:22.184695] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:44.097 [2024-12-06 05:16:22.184703] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:28:44.097 [2024-12-06 05:16:22.184710] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:44.097 [2024-12-06 05:16:22.184715] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:44.097 [2024-12-06 05:16:22.184739] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:44.097 [2024-12-06 05:16:22.184749] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:28:44.097 [2024-12-06 05:16:22.184755] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:44.097 [2024-12-06 05:16:22.184761] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:44.097 [2024-12-06 05:16:22.184793] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:44.097 [2024-12-06 05:16:22.184800] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:28:44.097 [2024-12-06 05:16:22.184805] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:44.097 [2024-12-06 05:16:22.184811] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:44.097 [2024-12-06 05:16:22.184846] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:44.097 [2024-12-06 05:16:22.184853] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:28:44.097 [2024-12-06 05:16:22.184860] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:44.097 [2024-12-06 05:16:22.184865] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:44.097 [2024-12-06 05:16:22.184968] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 32.961 ms, result 0 00:28:44.356 05:16:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:28:44.356 05:16:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@145 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:28:44.356 05:16:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@194 -- # tcp_initiator_cleanup 00:28:44.356 05:16:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@188 -- # tcp_initiator_shutdown 00:28:44.356 05:16:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@181 -- # [[ -n '' ]] 00:28:44.356 05:16:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@189 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:28:44.356 05:16:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@15 -- # remove_shm 00:28:44.356 Remove shared memory files 00:28:44.356 05:16:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@204 -- # echo Remove shared memory files 00:28:44.356 05:16:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@205 -- # rm -f rm -f 00:28:44.356 05:16:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@206 -- # rm -f rm -f 00:28:44.356 05:16:22 
ftl.ftl_upgrade_shutdown -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid92774 00:28:44.356 05:16:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:28:44.356 05:16:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@209 -- # rm -f rm -f 00:28:44.356 ************************************ 00:28:44.356 END TEST ftl_upgrade_shutdown 00:28:44.356 ************************************ 00:28:44.356 00:28:44.356 real 1m12.382s 00:28:44.356 user 1m37.676s 00:28:44.356 sys 0m19.577s 00:28:44.356 05:16:22 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1126 -- # xtrace_disable 00:28:44.356 05:16:22 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:28:44.356 05:16:22 ftl -- ftl/ftl.sh@80 -- # [[ 1 -eq 1 ]] 00:28:44.356 05:16:22 ftl -- ftl/ftl.sh@81 -- # run_test ftl_restore_fast /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -f -c 0000:00:10.0 0000:00:11.0 00:28:44.356 05:16:22 ftl -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']' 00:28:44.356 05:16:22 ftl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:28:44.356 05:16:22 ftl -- common/autotest_common.sh@10 -- # set +x 00:28:44.356 ************************************ 00:28:44.356 START TEST ftl_restore_fast 00:28:44.356 ************************************ 00:28:44.356 05:16:22 ftl.ftl_restore_fast -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -f -c 0000:00:10.0 0000:00:11.0 00:28:44.356 * Looking for test storage... 00:28:44.356 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:28:44.356 05:16:22 ftl.ftl_restore_fast -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:28:44.356 05:16:22 ftl.ftl_restore_fast -- common/autotest_common.sh@1681 -- # lcov --version 00:28:44.356 05:16:22 ftl.ftl_restore_fast -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:28:44.356 05:16:22 ftl.ftl_restore_fast -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:28:44.356 05:16:22 ftl.ftl_restore_fast -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:44.356 05:16:22 ftl.ftl_restore_fast -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:44.356 05:16:22 ftl.ftl_restore_fast -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:44.356 05:16:22 ftl.ftl_restore_fast -- scripts/common.sh@336 -- # IFS=.-: 00:28:44.356 05:16:22 ftl.ftl_restore_fast -- scripts/common.sh@336 -- # read -ra ver1 00:28:44.356 05:16:22 ftl.ftl_restore_fast -- scripts/common.sh@337 -- # IFS=.-: 00:28:44.356 05:16:22 ftl.ftl_restore_fast -- scripts/common.sh@337 -- # read -ra ver2 00:28:44.356 05:16:22 ftl.ftl_restore_fast -- scripts/common.sh@338 -- # local 'op=<' 00:28:44.356 05:16:22 ftl.ftl_restore_fast -- scripts/common.sh@340 -- # ver1_l=2 00:28:44.356 05:16:22 ftl.ftl_restore_fast -- scripts/common.sh@341 -- # ver2_l=1 00:28:44.356 05:16:22 ftl.ftl_restore_fast -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:44.356 05:16:22 ftl.ftl_restore_fast -- scripts/common.sh@344 -- # case "$op" in 00:28:44.356 05:16:22 ftl.ftl_restore_fast -- scripts/common.sh@345 -- # : 1 00:28:44.356 05:16:22 ftl.ftl_restore_fast -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:44.356 05:16:22 ftl.ftl_restore_fast -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:44.356 05:16:22 ftl.ftl_restore_fast -- scripts/common.sh@365 -- # decimal 1 00:28:44.356 05:16:22 ftl.ftl_restore_fast -- scripts/common.sh@353 -- # local d=1 00:28:44.356 05:16:22 ftl.ftl_restore_fast -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:44.356 05:16:22 ftl.ftl_restore_fast -- scripts/common.sh@355 -- # echo 1 00:28:44.356 05:16:22 ftl.ftl_restore_fast -- scripts/common.sh@365 -- # ver1[v]=1 00:28:44.356 05:16:22 ftl.ftl_restore_fast -- scripts/common.sh@366 -- # decimal 2 00:28:44.356 05:16:22 ftl.ftl_restore_fast -- scripts/common.sh@353 -- # local d=2 00:28:44.356 05:16:22 ftl.ftl_restore_fast -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:44.356 05:16:22 ftl.ftl_restore_fast -- scripts/common.sh@355 -- # echo 2 00:28:44.356 05:16:22 ftl.ftl_restore_fast -- scripts/common.sh@366 -- # ver2[v]=2 00:28:44.356 05:16:22 ftl.ftl_restore_fast -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:44.356 05:16:22 ftl.ftl_restore_fast -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:44.356 05:16:22 ftl.ftl_restore_fast -- scripts/common.sh@368 -- # return 0 00:28:44.356 05:16:22 ftl.ftl_restore_fast -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:44.356 05:16:22 ftl.ftl_restore_fast -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:28:44.356 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:44.356 --rc genhtml_branch_coverage=1 00:28:44.356 --rc genhtml_function_coverage=1 00:28:44.356 --rc genhtml_legend=1 00:28:44.356 --rc geninfo_all_blocks=1 00:28:44.356 --rc geninfo_unexecuted_blocks=1 00:28:44.356 00:28:44.356 ' 00:28:44.356 05:16:22 ftl.ftl_restore_fast -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:28:44.356 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:44.356 --rc genhtml_branch_coverage=1 00:28:44.356 --rc genhtml_function_coverage=1 00:28:44.356 --rc genhtml_legend=1 00:28:44.356 --rc geninfo_all_blocks=1 00:28:44.356 --rc geninfo_unexecuted_blocks=1 00:28:44.356 00:28:44.356 ' 00:28:44.356 05:16:22 ftl.ftl_restore_fast -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:28:44.356 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:44.356 --rc genhtml_branch_coverage=1 00:28:44.356 --rc genhtml_function_coverage=1 00:28:44.356 --rc genhtml_legend=1 00:28:44.356 --rc geninfo_all_blocks=1 00:28:44.356 --rc geninfo_unexecuted_blocks=1 00:28:44.356 00:28:44.356 ' 00:28:44.356 05:16:22 ftl.ftl_restore_fast -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:28:44.356 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:44.356 --rc genhtml_branch_coverage=1 00:28:44.356 --rc genhtml_function_coverage=1 00:28:44.356 --rc genhtml_legend=1 00:28:44.357 --rc geninfo_all_blocks=1 00:28:44.357 --rc geninfo_unexecuted_blocks=1 00:28:44.357 00:28:44.357 ' 00:28:44.357 05:16:22 ftl.ftl_restore_fast -- ftl/restore.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:28:44.357 05:16:22 ftl.ftl_restore_fast -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh 00:28:44.357 05:16:22 ftl.ftl_restore_fast -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:28:44.357 05:16:22 ftl.ftl_restore_fast -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:28:44.357 05:16:22 ftl.ftl_restore_fast -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
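Note: the scripts/common.sh trace above is the stock dotted-version comparison used to pick lcov flags: "lt 1.15 2" splits both version strings on '.', '-' and ':' and compares field by field, and because the installed lcov reports a version below 2 the legacy "--rc lcov_branch_coverage=..." option set is exported. A compact restatement of that idiom (an illustrative sketch, not the script itself):

  # True (exit 0) when dotted version $1 sorts strictly below $2,
  # using the same split-and-compare approach as cmp_versions above.
  version_lt() {
      local IFS=.-:
      local -a a=($1) b=($2)
      local i
      for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
          (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
          (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
      done
      return 1  # equal versions are not "less than"
  }
  version_lt 1.15 2 && echo "lcov older than 2"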
00:28:44.357 05:16:22 ftl.ftl_restore_fast -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:28:44.357 05:16:22 ftl.ftl_restore_fast -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:28:44.357 05:16:22 ftl.ftl_restore_fast -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:28:44.357 05:16:22 ftl.ftl_restore_fast -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:28:44.357 05:16:22 ftl.ftl_restore_fast -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:28:44.357 05:16:22 ftl.ftl_restore_fast -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:28:44.357 05:16:22 ftl.ftl_restore_fast -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:28:44.357 05:16:22 ftl.ftl_restore_fast -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:28:44.357 05:16:22 ftl.ftl_restore_fast -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:28:44.357 05:16:22 ftl.ftl_restore_fast -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:28:44.357 05:16:22 ftl.ftl_restore_fast -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:28:44.357 05:16:22 ftl.ftl_restore_fast -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:28:44.357 05:16:22 ftl.ftl_restore_fast -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:28:44.357 05:16:22 ftl.ftl_restore_fast -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:28:44.357 05:16:22 ftl.ftl_restore_fast -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:28:44.357 05:16:22 ftl.ftl_restore_fast -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:28:44.357 05:16:22 ftl.ftl_restore_fast -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:28:44.357 05:16:22 ftl.ftl_restore_fast -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:28:44.357 05:16:22 ftl.ftl_restore_fast -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:28:44.357 05:16:22 ftl.ftl_restore_fast -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:28:44.357 05:16:22 ftl.ftl_restore_fast -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:28:44.357 05:16:22 ftl.ftl_restore_fast -- ftl/common.sh@23 -- # spdk_ini_pid= 00:28:44.357 05:16:22 ftl.ftl_restore_fast -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:28:44.357 05:16:22 ftl.ftl_restore_fast -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:28:44.616 05:16:22 ftl.ftl_restore_fast -- ftl/restore.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:28:44.616 05:16:22 ftl.ftl_restore_fast -- ftl/restore.sh@13 -- # mktemp -d 00:28:44.616 05:16:22 ftl.ftl_restore_fast -- ftl/restore.sh@13 -- # mount_dir=/tmp/tmp.EIlYqDjOzp 00:28:44.616 05:16:22 ftl.ftl_restore_fast -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:28:44.616 05:16:22 ftl.ftl_restore_fast -- ftl/restore.sh@16 -- # case $opt in 00:28:44.616 05:16:22 ftl.ftl_restore_fast -- ftl/restore.sh@19 -- # fast_shutdown=1 00:28:44.616 05:16:22 ftl.ftl_restore_fast -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:28:44.616 05:16:22 ftl.ftl_restore_fast -- ftl/restore.sh@16 -- # case $opt in 00:28:44.616 05:16:22 ftl.ftl_restore_fast -- ftl/restore.sh@18 -- # nv_cache=0000:00:10.0 00:28:44.616 05:16:22 ftl.ftl_restore_fast 
-- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:28:44.616 05:16:22 ftl.ftl_restore_fast -- ftl/restore.sh@23 -- # shift 3 00:28:44.616 05:16:22 ftl.ftl_restore_fast -- ftl/restore.sh@24 -- # device=0000:00:11.0 00:28:44.616 05:16:22 ftl.ftl_restore_fast -- ftl/restore.sh@25 -- # timeout=240 00:28:44.616 05:16:22 ftl.ftl_restore_fast -- ftl/restore.sh@36 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:28:44.616 05:16:22 ftl.ftl_restore_fast -- ftl/restore.sh@39 -- # svcpid=93218 00:28:44.616 05:16:22 ftl.ftl_restore_fast -- ftl/restore.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:28:44.616 05:16:22 ftl.ftl_restore_fast -- ftl/restore.sh@41 -- # waitforlisten 93218 00:28:44.616 05:16:22 ftl.ftl_restore_fast -- common/autotest_common.sh@831 -- # '[' -z 93218 ']' 00:28:44.616 05:16:22 ftl.ftl_restore_fast -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:44.616 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:44.616 05:16:22 ftl.ftl_restore_fast -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:44.616 05:16:22 ftl.ftl_restore_fast -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:44.616 05:16:22 ftl.ftl_restore_fast -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:44.616 05:16:22 ftl.ftl_restore_fast -- common/autotest_common.sh@10 -- # set +x 00:28:44.616 [2024-12-06 05:16:22.666535] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:28:44.616 [2024-12-06 05:16:22.666652] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93218 ] 00:28:44.616 [2024-12-06 05:16:22.800223] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:44.616 [2024-12-06 05:16:22.837304] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:28:45.552 05:16:23 ftl.ftl_restore_fast -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:45.552 05:16:23 ftl.ftl_restore_fast -- common/autotest_common.sh@864 -- # return 0 00:28:45.552 05:16:23 ftl.ftl_restore_fast -- ftl/restore.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:28:45.552 05:16:23 ftl.ftl_restore_fast -- ftl/common.sh@54 -- # local name=nvme0 00:28:45.552 05:16:23 ftl.ftl_restore_fast -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:28:45.552 05:16:23 ftl.ftl_restore_fast -- ftl/common.sh@56 -- # local size=103424 00:28:45.552 05:16:23 ftl.ftl_restore_fast -- ftl/common.sh@59 -- # local base_bdev 00:28:45.552 05:16:23 ftl.ftl_restore_fast -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:28:45.552 05:16:23 ftl.ftl_restore_fast -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:28:45.552 05:16:23 ftl.ftl_restore_fast -- ftl/common.sh@62 -- # local base_size 00:28:45.552 05:16:23 ftl.ftl_restore_fast -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:28:45.552 05:16:23 ftl.ftl_restore_fast -- common/autotest_common.sh@1378 -- # local bdev_name=nvme0n1 00:28:45.552 05:16:23 ftl.ftl_restore_fast -- common/autotest_common.sh@1379 -- # local bdev_info 00:28:45.552 05:16:23 ftl.ftl_restore_fast -- common/autotest_common.sh@1380 -- # local bs 00:28:45.552 05:16:23 ftl.ftl_restore_fast -- 
common/autotest_common.sh@1381 -- # local nb 00:28:45.552 05:16:23 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:28:45.811 05:16:23 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:28:45.811 { 00:28:45.811 "name": "nvme0n1", 00:28:45.811 "aliases": [ 00:28:45.811 "84f9d508-d7aa-48e2-b3ca-5f190ca49656" 00:28:45.811 ], 00:28:45.812 "product_name": "NVMe disk", 00:28:45.812 "block_size": 4096, 00:28:45.812 "num_blocks": 1310720, 00:28:45.812 "uuid": "84f9d508-d7aa-48e2-b3ca-5f190ca49656", 00:28:45.812 "numa_id": -1, 00:28:45.812 "assigned_rate_limits": { 00:28:45.812 "rw_ios_per_sec": 0, 00:28:45.812 "rw_mbytes_per_sec": 0, 00:28:45.812 "r_mbytes_per_sec": 0, 00:28:45.812 "w_mbytes_per_sec": 0 00:28:45.812 }, 00:28:45.812 "claimed": true, 00:28:45.812 "claim_type": "read_many_write_one", 00:28:45.812 "zoned": false, 00:28:45.812 "supported_io_types": { 00:28:45.812 "read": true, 00:28:45.812 "write": true, 00:28:45.812 "unmap": true, 00:28:45.812 "flush": true, 00:28:45.812 "reset": true, 00:28:45.812 "nvme_admin": true, 00:28:45.812 "nvme_io": true, 00:28:45.812 "nvme_io_md": false, 00:28:45.812 "write_zeroes": true, 00:28:45.812 "zcopy": false, 00:28:45.812 "get_zone_info": false, 00:28:45.812 "zone_management": false, 00:28:45.812 "zone_append": false, 00:28:45.812 "compare": true, 00:28:45.812 "compare_and_write": false, 00:28:45.812 "abort": true, 00:28:45.812 "seek_hole": false, 00:28:45.812 "seek_data": false, 00:28:45.812 "copy": true, 00:28:45.812 "nvme_iov_md": false 00:28:45.812 }, 00:28:45.812 "driver_specific": { 00:28:45.812 "nvme": [ 00:28:45.812 { 00:28:45.812 "pci_address": "0000:00:11.0", 00:28:45.812 "trid": { 00:28:45.812 "trtype": "PCIe", 00:28:45.812 "traddr": "0000:00:11.0" 00:28:45.812 }, 00:28:45.812 "ctrlr_data": { 00:28:45.812 "cntlid": 0, 00:28:45.812 "vendor_id": "0x1b36", 00:28:45.812 "model_number": "QEMU NVMe Ctrl", 00:28:45.812 "serial_number": "12341", 00:28:45.812 "firmware_revision": "8.0.0", 00:28:45.812 "subnqn": "nqn.2019-08.org.qemu:12341", 00:28:45.812 "oacs": { 00:28:45.812 "security": 0, 00:28:45.812 "format": 1, 00:28:45.812 "firmware": 0, 00:28:45.812 "ns_manage": 1 00:28:45.812 }, 00:28:45.812 "multi_ctrlr": false, 00:28:45.812 "ana_reporting": false 00:28:45.812 }, 00:28:45.812 "vs": { 00:28:45.812 "nvme_version": "1.4" 00:28:45.812 }, 00:28:45.812 "ns_data": { 00:28:45.812 "id": 1, 00:28:45.812 "can_share": false 00:28:45.812 } 00:28:45.812 } 00:28:45.812 ], 00:28:45.812 "mp_policy": "active_passive" 00:28:45.812 } 00:28:45.812 } 00:28:45.812 ]' 00:28:45.812 05:16:23 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:28:45.812 05:16:24 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # bs=4096 00:28:45.812 05:16:24 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:28:45.812 05:16:24 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # nb=1310720 00:28:45.812 05:16:24 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # bdev_size=5120 00:28:45.812 05:16:24 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # echo 5120 00:28:46.070 05:16:24 ftl.ftl_restore_fast -- ftl/common.sh@63 -- # base_size=5120 00:28:46.070 05:16:24 ftl.ftl_restore_fast -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:28:46.070 05:16:24 ftl.ftl_restore_fast -- ftl/common.sh@67 -- # clear_lvols 00:28:46.070 05:16:24 ftl.ftl_restore_fast -- ftl/common.sh@28 -- # jq -r 
'.[] | .uuid' 00:28:46.070 05:16:24 ftl.ftl_restore_fast -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:28:46.070 05:16:24 ftl.ftl_restore_fast -- ftl/common.sh@28 -- # stores=787b3e1e-d1aa-4135-8163-4f921cff71f4 00:28:46.070 05:16:24 ftl.ftl_restore_fast -- ftl/common.sh@29 -- # for lvs in $stores 00:28:46.070 05:16:24 ftl.ftl_restore_fast -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 787b3e1e-d1aa-4135-8163-4f921cff71f4 00:28:46.329 05:16:24 ftl.ftl_restore_fast -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:28:46.587 05:16:24 ftl.ftl_restore_fast -- ftl/common.sh@68 -- # lvs=b7d5758f-b336-4965-b704-a5bdcc8afb58 00:28:46.587 05:16:24 ftl.ftl_restore_fast -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u b7d5758f-b336-4965-b704-a5bdcc8afb58 00:28:46.587 05:16:24 ftl.ftl_restore_fast -- ftl/restore.sh@43 -- # split_bdev=2a1d8302-d667-4ae0-9fb1-25d68389ba03 00:28:46.587 05:16:24 ftl.ftl_restore_fast -- ftl/restore.sh@44 -- # '[' -n 0000:00:10.0 ']' 00:28:46.587 05:16:24 ftl.ftl_restore_fast -- ftl/restore.sh@45 -- # create_nv_cache_bdev nvc0 0000:00:10.0 2a1d8302-d667-4ae0-9fb1-25d68389ba03 00:28:46.587 05:16:24 ftl.ftl_restore_fast -- ftl/common.sh@35 -- # local name=nvc0 00:28:46.587 05:16:24 ftl.ftl_restore_fast -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:28:46.587 05:16:24 ftl.ftl_restore_fast -- ftl/common.sh@37 -- # local base_bdev=2a1d8302-d667-4ae0-9fb1-25d68389ba03 00:28:46.587 05:16:24 ftl.ftl_restore_fast -- ftl/common.sh@38 -- # local cache_size= 00:28:46.587 05:16:24 ftl.ftl_restore_fast -- ftl/common.sh@41 -- # get_bdev_size 2a1d8302-d667-4ae0-9fb1-25d68389ba03 00:28:46.587 05:16:24 ftl.ftl_restore_fast -- common/autotest_common.sh@1378 -- # local bdev_name=2a1d8302-d667-4ae0-9fb1-25d68389ba03 00:28:46.587 05:16:24 ftl.ftl_restore_fast -- common/autotest_common.sh@1379 -- # local bdev_info 00:28:46.587 05:16:24 ftl.ftl_restore_fast -- common/autotest_common.sh@1380 -- # local bs 00:28:46.587 05:16:24 ftl.ftl_restore_fast -- common/autotest_common.sh@1381 -- # local nb 00:28:46.587 05:16:24 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 2a1d8302-d667-4ae0-9fb1-25d68389ba03 00:28:46.846 05:16:25 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:28:46.846 { 00:28:46.846 "name": "2a1d8302-d667-4ae0-9fb1-25d68389ba03", 00:28:46.846 "aliases": [ 00:28:46.846 "lvs/nvme0n1p0" 00:28:46.846 ], 00:28:46.846 "product_name": "Logical Volume", 00:28:46.846 "block_size": 4096, 00:28:46.846 "num_blocks": 26476544, 00:28:46.846 "uuid": "2a1d8302-d667-4ae0-9fb1-25d68389ba03", 00:28:46.846 "assigned_rate_limits": { 00:28:46.846 "rw_ios_per_sec": 0, 00:28:46.846 "rw_mbytes_per_sec": 0, 00:28:46.846 "r_mbytes_per_sec": 0, 00:28:46.846 "w_mbytes_per_sec": 0 00:28:46.846 }, 00:28:46.846 "claimed": false, 00:28:46.846 "zoned": false, 00:28:46.846 "supported_io_types": { 00:28:46.846 "read": true, 00:28:46.846 "write": true, 00:28:46.846 "unmap": true, 00:28:46.846 "flush": false, 00:28:46.846 "reset": true, 00:28:46.846 "nvme_admin": false, 00:28:46.846 "nvme_io": false, 00:28:46.846 "nvme_io_md": false, 00:28:46.846 "write_zeroes": true, 00:28:46.846 "zcopy": false, 00:28:46.846 "get_zone_info": false, 00:28:46.846 "zone_management": false, 00:28:46.846 "zone_append": 
false, 00:28:46.846 "compare": false, 00:28:46.846 "compare_and_write": false, 00:28:46.846 "abort": false, 00:28:46.846 "seek_hole": true, 00:28:46.846 "seek_data": true, 00:28:46.846 "copy": false, 00:28:46.846 "nvme_iov_md": false 00:28:46.846 }, 00:28:46.846 "driver_specific": { 00:28:46.846 "lvol": { 00:28:46.846 "lvol_store_uuid": "b7d5758f-b336-4965-b704-a5bdcc8afb58", 00:28:46.846 "base_bdev": "nvme0n1", 00:28:46.846 "thin_provision": true, 00:28:46.846 "num_allocated_clusters": 0, 00:28:46.846 "snapshot": false, 00:28:46.846 "clone": false, 00:28:46.846 "esnap_clone": false 00:28:46.846 } 00:28:46.846 } 00:28:46.846 } 00:28:46.846 ]' 00:28:46.846 05:16:25 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:28:46.846 05:16:25 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # bs=4096 00:28:46.846 05:16:25 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:28:47.103 05:16:25 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # nb=26476544 00:28:47.103 05:16:25 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:28:47.103 05:16:25 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # echo 103424 00:28:47.103 05:16:25 ftl.ftl_restore_fast -- ftl/common.sh@41 -- # local base_size=5171 00:28:47.103 05:16:25 ftl.ftl_restore_fast -- ftl/common.sh@44 -- # local nvc_bdev 00:28:47.103 05:16:25 ftl.ftl_restore_fast -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:28:47.103 05:16:25 ftl.ftl_restore_fast -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:28:47.103 05:16:25 ftl.ftl_restore_fast -- ftl/common.sh@47 -- # [[ -z '' ]] 00:28:47.103 05:16:25 ftl.ftl_restore_fast -- ftl/common.sh@48 -- # get_bdev_size 2a1d8302-d667-4ae0-9fb1-25d68389ba03 00:28:47.103 05:16:25 ftl.ftl_restore_fast -- common/autotest_common.sh@1378 -- # local bdev_name=2a1d8302-d667-4ae0-9fb1-25d68389ba03 00:28:47.103 05:16:25 ftl.ftl_restore_fast -- common/autotest_common.sh@1379 -- # local bdev_info 00:28:47.103 05:16:25 ftl.ftl_restore_fast -- common/autotest_common.sh@1380 -- # local bs 00:28:47.103 05:16:25 ftl.ftl_restore_fast -- common/autotest_common.sh@1381 -- # local nb 00:28:47.103 05:16:25 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 2a1d8302-d667-4ae0-9fb1-25d68389ba03 00:28:47.359 05:16:25 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:28:47.359 { 00:28:47.359 "name": "2a1d8302-d667-4ae0-9fb1-25d68389ba03", 00:28:47.359 "aliases": [ 00:28:47.359 "lvs/nvme0n1p0" 00:28:47.359 ], 00:28:47.359 "product_name": "Logical Volume", 00:28:47.359 "block_size": 4096, 00:28:47.359 "num_blocks": 26476544, 00:28:47.359 "uuid": "2a1d8302-d667-4ae0-9fb1-25d68389ba03", 00:28:47.359 "assigned_rate_limits": { 00:28:47.359 "rw_ios_per_sec": 0, 00:28:47.359 "rw_mbytes_per_sec": 0, 00:28:47.359 "r_mbytes_per_sec": 0, 00:28:47.359 "w_mbytes_per_sec": 0 00:28:47.359 }, 00:28:47.359 "claimed": false, 00:28:47.359 "zoned": false, 00:28:47.359 "supported_io_types": { 00:28:47.359 "read": true, 00:28:47.359 "write": true, 00:28:47.359 "unmap": true, 00:28:47.359 "flush": false, 00:28:47.359 "reset": true, 00:28:47.359 "nvme_admin": false, 00:28:47.359 "nvme_io": false, 00:28:47.359 "nvme_io_md": false, 00:28:47.359 "write_zeroes": true, 00:28:47.359 "zcopy": false, 00:28:47.359 "get_zone_info": false, 00:28:47.359 "zone_management": false, 
00:28:47.359 "zone_append": false, 00:28:47.359 "compare": false, 00:28:47.359 "compare_and_write": false, 00:28:47.359 "abort": false, 00:28:47.359 "seek_hole": true, 00:28:47.360 "seek_data": true, 00:28:47.360 "copy": false, 00:28:47.360 "nvme_iov_md": false 00:28:47.360 }, 00:28:47.360 "driver_specific": { 00:28:47.360 "lvol": { 00:28:47.360 "lvol_store_uuid": "b7d5758f-b336-4965-b704-a5bdcc8afb58", 00:28:47.360 "base_bdev": "nvme0n1", 00:28:47.360 "thin_provision": true, 00:28:47.360 "num_allocated_clusters": 0, 00:28:47.360 "snapshot": false, 00:28:47.360 "clone": false, 00:28:47.360 "esnap_clone": false 00:28:47.360 } 00:28:47.360 } 00:28:47.360 } 00:28:47.360 ]' 00:28:47.360 05:16:25 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:28:47.360 05:16:25 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # bs=4096 00:28:47.360 05:16:25 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:28:47.360 05:16:25 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # nb=26476544 00:28:47.360 05:16:25 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:28:47.360 05:16:25 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # echo 103424 00:28:47.360 05:16:25 ftl.ftl_restore_fast -- ftl/common.sh@48 -- # cache_size=5171 00:28:47.360 05:16:25 ftl.ftl_restore_fast -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:28:47.617 05:16:25 ftl.ftl_restore_fast -- ftl/restore.sh@45 -- # nvc_bdev=nvc0n1p0 00:28:47.617 05:16:25 ftl.ftl_restore_fast -- ftl/restore.sh@48 -- # get_bdev_size 2a1d8302-d667-4ae0-9fb1-25d68389ba03 00:28:47.617 05:16:25 ftl.ftl_restore_fast -- common/autotest_common.sh@1378 -- # local bdev_name=2a1d8302-d667-4ae0-9fb1-25d68389ba03 00:28:47.617 05:16:25 ftl.ftl_restore_fast -- common/autotest_common.sh@1379 -- # local bdev_info 00:28:47.617 05:16:25 ftl.ftl_restore_fast -- common/autotest_common.sh@1380 -- # local bs 00:28:47.617 05:16:25 ftl.ftl_restore_fast -- common/autotest_common.sh@1381 -- # local nb 00:28:47.617 05:16:25 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 2a1d8302-d667-4ae0-9fb1-25d68389ba03 00:28:47.874 05:16:25 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:28:47.874 { 00:28:47.874 "name": "2a1d8302-d667-4ae0-9fb1-25d68389ba03", 00:28:47.874 "aliases": [ 00:28:47.874 "lvs/nvme0n1p0" 00:28:47.874 ], 00:28:47.874 "product_name": "Logical Volume", 00:28:47.874 "block_size": 4096, 00:28:47.874 "num_blocks": 26476544, 00:28:47.874 "uuid": "2a1d8302-d667-4ae0-9fb1-25d68389ba03", 00:28:47.874 "assigned_rate_limits": { 00:28:47.874 "rw_ios_per_sec": 0, 00:28:47.874 "rw_mbytes_per_sec": 0, 00:28:47.874 "r_mbytes_per_sec": 0, 00:28:47.874 "w_mbytes_per_sec": 0 00:28:47.874 }, 00:28:47.874 "claimed": false, 00:28:47.874 "zoned": false, 00:28:47.874 "supported_io_types": { 00:28:47.874 "read": true, 00:28:47.874 "write": true, 00:28:47.874 "unmap": true, 00:28:47.874 "flush": false, 00:28:47.874 "reset": true, 00:28:47.874 "nvme_admin": false, 00:28:47.874 "nvme_io": false, 00:28:47.874 "nvme_io_md": false, 00:28:47.874 "write_zeroes": true, 00:28:47.874 "zcopy": false, 00:28:47.874 "get_zone_info": false, 00:28:47.874 "zone_management": false, 00:28:47.874 "zone_append": false, 00:28:47.874 "compare": false, 00:28:47.874 "compare_and_write": false, 00:28:47.874 "abort": false, 00:28:47.874 "seek_hole": 
true, 00:28:47.874 "seek_data": true, 00:28:47.874 "copy": false, 00:28:47.875 "nvme_iov_md": false 00:28:47.875 }, 00:28:47.875 "driver_specific": { 00:28:47.875 "lvol": { 00:28:47.875 "lvol_store_uuid": "b7d5758f-b336-4965-b704-a5bdcc8afb58", 00:28:47.875 "base_bdev": "nvme0n1", 00:28:47.875 "thin_provision": true, 00:28:47.875 "num_allocated_clusters": 0, 00:28:47.875 "snapshot": false, 00:28:47.875 "clone": false, 00:28:47.875 "esnap_clone": false 00:28:47.875 } 00:28:47.875 } 00:28:47.875 } 00:28:47.875 ]' 00:28:47.875 05:16:25 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:28:47.875 05:16:25 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # bs=4096 00:28:47.875 05:16:25 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:28:47.875 05:16:25 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # nb=26476544 00:28:47.875 05:16:25 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:28:47.875 05:16:25 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # echo 103424 00:28:47.875 05:16:25 ftl.ftl_restore_fast -- ftl/restore.sh@48 -- # l2p_dram_size_mb=10 00:28:47.875 05:16:25 ftl.ftl_restore_fast -- ftl/restore.sh@49 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d 2a1d8302-d667-4ae0-9fb1-25d68389ba03 --l2p_dram_limit 10' 00:28:47.875 05:16:25 ftl.ftl_restore_fast -- ftl/restore.sh@51 -- # '[' -n '' ']' 00:28:47.875 05:16:25 ftl.ftl_restore_fast -- ftl/restore.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:28:47.875 05:16:25 ftl.ftl_restore_fast -- ftl/restore.sh@52 -- # ftl_construct_args+=' -c nvc0n1p0' 00:28:47.875 05:16:25 ftl.ftl_restore_fast -- ftl/restore.sh@54 -- # '[' 1 -eq 1 ']' 00:28:47.875 05:16:25 ftl.ftl_restore_fast -- ftl/restore.sh@55 -- # ftl_construct_args+=' --fast-shutdown' 00:28:47.875 05:16:25 ftl.ftl_restore_fast -- ftl/restore.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 2a1d8302-d667-4ae0-9fb1-25d68389ba03 --l2p_dram_limit 10 -c nvc0n1p0 --fast-shutdown 00:28:48.136 [2024-12-06 05:16:26.185790] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:48.136 [2024-12-06 05:16:26.185918] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:28:48.136 [2024-12-06 05:16:26.185934] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:28:48.136 [2024-12-06 05:16:26.185943] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:48.136 [2024-12-06 05:16:26.185994] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:48.136 [2024-12-06 05:16:26.186005] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:28:48.136 [2024-12-06 05:16:26.186011] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:28:48.136 [2024-12-06 05:16:26.186020] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:48.136 [2024-12-06 05:16:26.186042] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:28:48.136 [2024-12-06 05:16:26.186241] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:28:48.136 [2024-12-06 05:16:26.186253] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:48.136 [2024-12-06 05:16:26.186262] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:28:48.136 [2024-12-06 05:16:26.186270] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.217 ms 00:28:48.136 [2024-12-06 05:16:26.186278] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:48.136 [2024-12-06 05:16:26.186328] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 7b4b71fc-81b6-42ef-bf87-4b87e3b52047 00:28:48.136 [2024-12-06 05:16:26.187296] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:48.136 [2024-12-06 05:16:26.187318] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:28:48.136 [2024-12-06 05:16:26.187333] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.024 ms 00:28:48.136 [2024-12-06 05:16:26.187340] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:48.136 [2024-12-06 05:16:26.192165] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:48.136 [2024-12-06 05:16:26.192193] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:28:48.136 [2024-12-06 05:16:26.192202] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.785 ms 00:28:48.136 [2024-12-06 05:16:26.192208] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:48.136 [2024-12-06 05:16:26.192267] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:48.136 [2024-12-06 05:16:26.192274] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:28:48.136 [2024-12-06 05:16:26.192282] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:28:48.136 [2024-12-06 05:16:26.192289] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:48.136 [2024-12-06 05:16:26.192322] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:48.136 [2024-12-06 05:16:26.192329] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:28:48.136 [2024-12-06 05:16:26.192341] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:28:48.136 [2024-12-06 05:16:26.192347] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:48.136 [2024-12-06 05:16:26.192364] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:28:48.136 [2024-12-06 05:16:26.193653] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:48.136 [2024-12-06 05:16:26.193689] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:28:48.136 [2024-12-06 05:16:26.193698] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.294 ms 00:28:48.136 [2024-12-06 05:16:26.193706] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:48.136 [2024-12-06 05:16:26.193731] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:48.136 [2024-12-06 05:16:26.193740] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:28:48.136 [2024-12-06 05:16:26.193746] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:28:48.136 [2024-12-06 05:16:26.193755] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:48.136 [2024-12-06 05:16:26.193774] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:28:48.136 [2024-12-06 05:16:26.193883] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:28:48.136 [2024-12-06 05:16:26.193893] upgrade/ftl_sb_v5.c: 
101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:28:48.136 [2024-12-06 05:16:26.193904] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:28:48.136 [2024-12-06 05:16:26.193912] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:28:48.136 [2024-12-06 05:16:26.193921] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:28:48.136 [2024-12-06 05:16:26.193927] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:28:48.136 [2024-12-06 05:16:26.193936] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:28:48.136 [2024-12-06 05:16:26.193942] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:28:48.136 [2024-12-06 05:16:26.193950] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:28:48.136 [2024-12-06 05:16:26.193959] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:48.136 [2024-12-06 05:16:26.193966] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:28:48.136 [2024-12-06 05:16:26.193973] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.186 ms 00:28:48.136 [2024-12-06 05:16:26.193980] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:48.136 [2024-12-06 05:16:26.194047] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:48.136 [2024-12-06 05:16:26.194056] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:28:48.136 [2024-12-06 05:16:26.194063] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:28:48.136 [2024-12-06 05:16:26.194070] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:48.136 [2024-12-06 05:16:26.194142] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:28:48.136 [2024-12-06 05:16:26.194155] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:28:48.136 [2024-12-06 05:16:26.194161] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:28:48.136 [2024-12-06 05:16:26.194168] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:48.136 [2024-12-06 05:16:26.194174] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:28:48.136 [2024-12-06 05:16:26.194182] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:28:48.136 [2024-12-06 05:16:26.194187] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:28:48.136 [2024-12-06 05:16:26.194194] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:28:48.136 [2024-12-06 05:16:26.194199] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:28:48.136 [2024-12-06 05:16:26.194206] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:28:48.136 [2024-12-06 05:16:26.194210] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:28:48.136 [2024-12-06 05:16:26.194219] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:28:48.136 [2024-12-06 05:16:26.194225] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:28:48.136 [2024-12-06 05:16:26.194233] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:28:48.136 [2024-12-06 05:16:26.194239] ftl_layout.c: 131:dump_region: *NOTICE*: 
[FTL][ftl0] offset: 113.88 MiB 00:28:48.136 [2024-12-06 05:16:26.194245] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:48.136 [2024-12-06 05:16:26.194250] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:28:48.136 [2024-12-06 05:16:26.194258] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:28:48.136 [2024-12-06 05:16:26.194264] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:48.136 [2024-12-06 05:16:26.194270] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:28:48.136 [2024-12-06 05:16:26.194277] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:28:48.136 [2024-12-06 05:16:26.194283] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:48.136 [2024-12-06 05:16:26.194288] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:28:48.136 [2024-12-06 05:16:26.194296] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:28:48.136 [2024-12-06 05:16:26.194302] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:48.136 [2024-12-06 05:16:26.194309] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:28:48.136 [2024-12-06 05:16:26.194315] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:28:48.136 [2024-12-06 05:16:26.194324] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:48.136 [2024-12-06 05:16:26.194331] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:28:48.136 [2024-12-06 05:16:26.194340] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:28:48.136 [2024-12-06 05:16:26.194345] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:48.136 [2024-12-06 05:16:26.194353] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:28:48.136 [2024-12-06 05:16:26.194358] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:28:48.136 [2024-12-06 05:16:26.194365] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:28:48.136 [2024-12-06 05:16:26.194371] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:28:48.136 [2024-12-06 05:16:26.194379] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:28:48.136 [2024-12-06 05:16:26.194384] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:28:48.136 [2024-12-06 05:16:26.194393] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:28:48.136 [2024-12-06 05:16:26.194399] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:28:48.137 [2024-12-06 05:16:26.194406] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:48.137 [2024-12-06 05:16:26.194412] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:28:48.137 [2024-12-06 05:16:26.194420] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:28:48.137 [2024-12-06 05:16:26.194425] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:48.137 [2024-12-06 05:16:26.194432] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:28:48.137 [2024-12-06 05:16:26.194442] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:28:48.137 [2024-12-06 05:16:26.194454] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:28:48.137 [2024-12-06 
05:16:26.194461] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:48.137 [2024-12-06 05:16:26.194469] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:28:48.137 [2024-12-06 05:16:26.194475] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:28:48.137 [2024-12-06 05:16:26.194482] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:28:48.137 [2024-12-06 05:16:26.194488] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:28:48.137 [2024-12-06 05:16:26.194495] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:28:48.137 [2024-12-06 05:16:26.194502] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:28:48.137 [2024-12-06 05:16:26.194512] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:28:48.137 [2024-12-06 05:16:26.194519] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:48.137 [2024-12-06 05:16:26.194528] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:28:48.137 [2024-12-06 05:16:26.194535] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:28:48.137 [2024-12-06 05:16:26.194543] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:28:48.137 [2024-12-06 05:16:26.194549] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:28:48.137 [2024-12-06 05:16:26.194557] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:28:48.137 [2024-12-06 05:16:26.194564] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:28:48.137 [2024-12-06 05:16:26.194573] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:28:48.137 [2024-12-06 05:16:26.194579] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:28:48.137 [2024-12-06 05:16:26.194587] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:28:48.137 [2024-12-06 05:16:26.194593] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:28:48.137 [2024-12-06 05:16:26.194601] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:28:48.137 [2024-12-06 05:16:26.194607] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:28:48.137 [2024-12-06 05:16:26.194615] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:28:48.137 [2024-12-06 05:16:26.194621] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:28:48.137 [2024-12-06 
05:16:26.194628] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:28:48.137 [2024-12-06 05:16:26.194637] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:48.137 [2024-12-06 05:16:26.194646] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:28:48.137 [2024-12-06 05:16:26.194652] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:28:48.137 [2024-12-06 05:16:26.194659] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:28:48.137 [2024-12-06 05:16:26.194678] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:28:48.137 [2024-12-06 05:16:26.194687] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:48.137 [2024-12-06 05:16:26.194693] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:28:48.137 [2024-12-06 05:16:26.194702] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.595 ms 00:28:48.137 [2024-12-06 05:16:26.194708] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:48.137 [2024-12-06 05:16:26.194737] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:28:48.137 [2024-12-06 05:16:26.194745] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:28:52.342 [2024-12-06 05:16:29.861145] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:52.342 [2024-12-06 05:16:29.861459] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:28:52.342 [2024-12-06 05:16:29.861546] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3666.384 ms 00:28:52.342 [2024-12-06 05:16:29.861572] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:52.342 [2024-12-06 05:16:29.875515] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:52.342 [2024-12-06 05:16:29.875754] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:28:52.342 [2024-12-06 05:16:29.875914] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.753 ms 00:28:52.342 [2024-12-06 05:16:29.875945] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:52.342 [2024-12-06 05:16:29.876073] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:52.342 [2024-12-06 05:16:29.876097] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:28:52.342 [2024-12-06 05:16:29.876188] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.073 ms 00:28:52.342 [2024-12-06 05:16:29.876212] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:52.342 [2024-12-06 05:16:29.887966] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:52.342 [2024-12-06 05:16:29.888163] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:28:52.342 [2024-12-06 05:16:29.888236] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.665 ms 00:28:52.342 [2024-12-06 05:16:29.888261] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:28:52.342 [2024-12-06 05:16:29.888312] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:52.342 [2024-12-06 05:16:29.888333] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:28:52.342 [2024-12-06 05:16:29.888361] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:28:52.342 [2024-12-06 05:16:29.888382] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:52.342 [2024-12-06 05:16:29.888958] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:52.342 [2024-12-06 05:16:29.889163] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:28:52.342 [2024-12-06 05:16:29.889236] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.506 ms 00:28:52.343 [2024-12-06 05:16:29.889260] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:52.343 [2024-12-06 05:16:29.889409] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:52.343 [2024-12-06 05:16:29.889517] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:28:52.343 [2024-12-06 05:16:29.889548] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.103 ms 00:28:52.343 [2024-12-06 05:16:29.889571] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:52.343 [2024-12-06 05:16:29.916124] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:52.343 [2024-12-06 05:16:29.916346] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:28:52.343 [2024-12-06 05:16:29.916606] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.211 ms 00:28:52.343 [2024-12-06 05:16:29.916654] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:52.343 [2024-12-06 05:16:29.927561] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:28:52.343 [2024-12-06 05:16:29.931543] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:52.343 [2024-12-06 05:16:29.931720] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:28:52.343 [2024-12-06 05:16:29.931779] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.710 ms 00:28:52.343 [2024-12-06 05:16:29.931806] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:52.343 [2024-12-06 05:16:30.025983] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:52.343 [2024-12-06 05:16:30.026208] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:28:52.343 [2024-12-06 05:16:30.026271] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 94.129 ms 00:28:52.343 [2024-12-06 05:16:30.026302] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:52.343 [2024-12-06 05:16:30.026523] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:52.343 [2024-12-06 05:16:30.026688] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:28:52.343 [2024-12-06 05:16:30.026764] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.161 ms 00:28:52.343 [2024-12-06 05:16:30.026792] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:52.343 [2024-12-06 05:16:30.033177] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:52.343 [2024-12-06 05:16:30.033355] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 
00:28:52.343 [2024-12-06 05:16:30.033376] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.342 ms 00:28:52.343 [2024-12-06 05:16:30.033388] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:52.343 [2024-12-06 05:16:30.038920] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:52.343 [2024-12-06 05:16:30.038976] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:28:52.343 [2024-12-06 05:16:30.038992] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.370 ms 00:28:52.343 [2024-12-06 05:16:30.039002] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:52.343 [2024-12-06 05:16:30.039348] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:52.343 [2024-12-06 05:16:30.039363] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:28:52.343 [2024-12-06 05:16:30.039381] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.298 ms 00:28:52.343 [2024-12-06 05:16:30.039394] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:52.343 [2024-12-06 05:16:30.085057] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:52.343 [2024-12-06 05:16:30.085123] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:28:52.343 [2024-12-06 05:16:30.085135] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 45.640 ms 00:28:52.343 [2024-12-06 05:16:30.085147] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:52.343 [2024-12-06 05:16:30.092718] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:52.343 [2024-12-06 05:16:30.092772] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:28:52.343 [2024-12-06 05:16:30.092784] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.489 ms 00:28:52.343 [2024-12-06 05:16:30.092796] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:52.343 [2024-12-06 05:16:30.099003] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:52.343 [2024-12-06 05:16:30.099060] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:28:52.343 [2024-12-06 05:16:30.099071] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.158 ms 00:28:52.343 [2024-12-06 05:16:30.099081] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:52.343 [2024-12-06 05:16:30.105703] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:52.343 [2024-12-06 05:16:30.105758] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:28:52.343 [2024-12-06 05:16:30.105768] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.574 ms 00:28:52.343 [2024-12-06 05:16:30.105782] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:52.343 [2024-12-06 05:16:30.105846] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:52.343 [2024-12-06 05:16:30.105874] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:28:52.343 [2024-12-06 05:16:30.105884] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:28:52.343 [2024-12-06 05:16:30.105896] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:52.343 [2024-12-06 05:16:30.105972] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:52.343 [2024-12-06 05:16:30.105986] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:28:52.343 [2024-12-06 05:16:30.105995] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:28:52.343 [2024-12-06 05:16:30.106005] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:52.343 [2024-12-06 05:16:30.107179] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3920.865 ms, result 0 00:28:52.343 { 00:28:52.343 "name": "ftl0", 00:28:52.343 "uuid": "7b4b71fc-81b6-42ef-bf87-4b87e3b52047" 00:28:52.343 } 00:28:52.343 05:16:30 ftl.ftl_restore_fast -- ftl/restore.sh@61 -- # echo '{"subsystems": [' 00:28:52.343 05:16:30 ftl.ftl_restore_fast -- ftl/restore.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:28:52.343 05:16:30 ftl.ftl_restore_fast -- ftl/restore.sh@63 -- # echo ']}' 00:28:52.343 05:16:30 ftl.ftl_restore_fast -- ftl/restore.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:28:52.343 [2024-12-06 05:16:30.548749] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:52.343 [2024-12-06 05:16:30.548984] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:28:52.343 [2024-12-06 05:16:30.549013] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:28:52.343 [2024-12-06 05:16:30.549023] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:52.343 [2024-12-06 05:16:30.549059] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:28:52.343 [2024-12-06 05:16:30.549855] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:52.343 [2024-12-06 05:16:30.549905] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:28:52.343 [2024-12-06 05:16:30.549918] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.775 ms 00:28:52.343 [2024-12-06 05:16:30.549939] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:52.343 [2024-12-06 05:16:30.550229] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:52.343 [2024-12-06 05:16:30.550266] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:28:52.343 [2024-12-06 05:16:30.550276] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.261 ms 00:28:52.343 [2024-12-06 05:16:30.550289] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:52.343 [2024-12-06 05:16:30.553537] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:52.343 [2024-12-06 05:16:30.553569] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:28:52.343 [2024-12-06 05:16:30.553579] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.232 ms 00:28:52.343 [2024-12-06 05:16:30.553591] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:52.343 [2024-12-06 05:16:30.559896] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:52.343 [2024-12-06 05:16:30.560088] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:28:52.343 [2024-12-06 05:16:30.560109] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.285 ms 00:28:52.343 [2024-12-06 05:16:30.560120] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:52.343 [2024-12-06 05:16:30.563328] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:52.343 
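The restore.sh steps traced above (script lines 61-65) capture the live bdev configuration before the FTL device is torn down: the save_subsystem_config -n bdev RPC dumps only the bdev subsystem, and the two echo calls wrap that dump into a complete {"subsystems": [...]} document that spdk_dd can consume later via --json. A minimal sketch of that step, assuming the combined output is redirected into the ftl.json path used further down in this log:

```
# Hedged sketch of restore.sh lines 61-65 above: wrap the RPC dump of the
# bdev subsystem into a standalone JSON config. The redirection target is
# an assumption, inferred from the --json= path passed to spdk_dd below.
FTL_JSON=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
{
	echo '{"subsystems": ['
	/home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev
	echo ']}'
} > "$FTL_JSON"
# restore.sh line 65 then detaches the device so it can later be
# re-attached from the saved config:
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0
```

The bdev_ftl_unload call is what drives the 'FTL shutdown' management trace being logged around this point.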
[2024-12-06 05:16:30.563505] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:28:52.343 [2024-12-06 05:16:30.563524] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.087 ms 00:28:52.343 [2024-12-06 05:16:30.563535] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:52.343 [2024-12-06 05:16:30.569820] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:52.343 [2024-12-06 05:16:30.570014] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:28:52.343 [2024-12-06 05:16:30.570035] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.241 ms 00:28:52.343 [2024-12-06 05:16:30.570046] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:52.343 [2024-12-06 05:16:30.570293] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:52.343 [2024-12-06 05:16:30.570310] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:28:52.343 [2024-12-06 05:16:30.570320] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.104 ms 00:28:52.343 [2024-12-06 05:16:30.570331] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:52.606 [2024-12-06 05:16:30.573587] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:52.606 [2024-12-06 05:16:30.573808] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:28:52.606 [2024-12-06 05:16:30.573827] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.232 ms 00:28:52.606 [2024-12-06 05:16:30.573837] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:52.606 [2024-12-06 05:16:30.576490] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:52.606 [2024-12-06 05:16:30.576548] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:28:52.606 [2024-12-06 05:16:30.576559] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.607 ms 00:28:52.606 [2024-12-06 05:16:30.576569] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:52.606 [2024-12-06 05:16:30.578983] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:52.606 [2024-12-06 05:16:30.579038] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:28:52.606 [2024-12-06 05:16:30.579049] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.368 ms 00:28:52.606 [2024-12-06 05:16:30.579062] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:52.606 [2024-12-06 05:16:30.581368] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:52.606 [2024-12-06 05:16:30.581422] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:28:52.606 [2024-12-06 05:16:30.581440] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.232 ms 00:28:52.606 [2024-12-06 05:16:30.581450] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:52.606 [2024-12-06 05:16:30.581495] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:28:52.606 [2024-12-06 05:16:30.581513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:28:52.606 [2024-12-06 05:16:30.581524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:28:52.606 [2024-12-06 05:16:30.581535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:28:52.606 [2024-12-06 05:16:30.581543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:28:52.606 [2024-12-06 05:16:30.581556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:28:52.606 [2024-12-06 05:16:30.581563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:28:52.606 [2024-12-06 05:16:30.581573] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:28:52.606 [2024-12-06 05:16:30.581582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:28:52.606 [2024-12-06 05:16:30.581593] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:28:52.606 [2024-12-06 05:16:30.581601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:28:52.606 [2024-12-06 05:16:30.581612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:28:52.606 [2024-12-06 05:16:30.581620] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:28:52.606 [2024-12-06 05:16:30.581646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:28:52.606 [2024-12-06 05:16:30.581654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:28:52.606 [2024-12-06 05:16:30.581690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:28:52.606 [2024-12-06 05:16:30.581700] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:28:52.606 [2024-12-06 05:16:30.581710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:28:52.606 [2024-12-06 05:16:30.581718] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:28:52.606 [2024-12-06 05:16:30.581730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:28:52.606 [2024-12-06 05:16:30.581738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:28:52.606 [2024-12-06 05:16:30.581750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:28:52.606 [2024-12-06 05:16:30.581759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:28:52.606 [2024-12-06 05:16:30.581769] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:28:52.606 [2024-12-06 05:16:30.581776] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:28:52.606 [2024-12-06 05:16:30.581787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:28:52.606 [2024-12-06 05:16:30.581795] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:28:52.606 [2024-12-06 05:16:30.581806] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:28:52.606 [2024-12-06 05:16:30.581813] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:28:52.606 [2024-12-06 05:16:30.581824] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:28:52.606 [2024-12-06 05:16:30.581833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:28:52.606 [2024-12-06 05:16:30.581845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:28:52.606 [2024-12-06 05:16:30.581853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:28:52.606 [2024-12-06 05:16:30.581865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:28:52.606 [2024-12-06 05:16:30.581873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:28:52.606 [2024-12-06 05:16:30.581883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:28:52.606 [2024-12-06 05:16:30.581892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:28:52.606 [2024-12-06 05:16:30.581905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:28:52.606 [2024-12-06 05:16:30.581912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:28:52.606 [2024-12-06 05:16:30.581921] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:28:52.606 [2024-12-06 05:16:30.581930] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:28:52.606 [2024-12-06 05:16:30.581940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:28:52.606 [2024-12-06 05:16:30.581947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:28:52.606 [2024-12-06 05:16:30.581957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:28:52.606 [2024-12-06 05:16:30.581964] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:28:52.606 [2024-12-06 05:16:30.581975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:28:52.606 [2024-12-06 05:16:30.581983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:28:52.606 [2024-12-06 05:16:30.581994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:28:52.606 [2024-12-06 05:16:30.582002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:28:52.606 [2024-12-06 05:16:30.582011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:28:52.606 [2024-12-06 05:16:30.582019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:28:52.606 [2024-12-06 05:16:30.582029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:28:52.606 [2024-12-06 05:16:30.582036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:28:52.606 [2024-12-06 
05:16:30.582060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:28:52.607 [2024-12-06 05:16:30.582067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:28:52.607 [2024-12-06 05:16:30.582077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:28:52.607 [2024-12-06 05:16:30.582084] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:28:52.607 [2024-12-06 05:16:30.582093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:28:52.607 [2024-12-06 05:16:30.582103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:28:52.607 [2024-12-06 05:16:30.582112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:28:52.607 [2024-12-06 05:16:30.582119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:28:52.607 [2024-12-06 05:16:30.582129] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:28:52.607 [2024-12-06 05:16:30.582138] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:28:52.607 [2024-12-06 05:16:30.582148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:28:52.607 [2024-12-06 05:16:30.582156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:28:52.607 [2024-12-06 05:16:30.582167] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:28:52.607 [2024-12-06 05:16:30.582174] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:28:52.607 [2024-12-06 05:16:30.582186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:28:52.607 [2024-12-06 05:16:30.582193] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:28:52.607 [2024-12-06 05:16:30.582207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:28:52.607 [2024-12-06 05:16:30.582214] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:28:52.607 [2024-12-06 05:16:30.582225] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:28:52.607 [2024-12-06 05:16:30.582233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:28:52.607 [2024-12-06 05:16:30.582242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:28:52.607 [2024-12-06 05:16:30.582249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:28:52.607 [2024-12-06 05:16:30.582258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:28:52.607 [2024-12-06 05:16:30.582268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:28:52.607 [2024-12-06 05:16:30.582278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 
00:28:52.607 [2024-12-06 05:16:30.582285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:28:52.607 [2024-12-06 05:16:30.582295] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:28:52.607 [2024-12-06 05:16:30.582302] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:28:52.607 [2024-12-06 05:16:30.582311] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:28:52.607 [2024-12-06 05:16:30.582320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:28:52.607 [2024-12-06 05:16:30.582330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:28:52.607 [2024-12-06 05:16:30.582337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:28:52.607 [2024-12-06 05:16:30.582348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:28:52.607 [2024-12-06 05:16:30.582355] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:28:52.607 [2024-12-06 05:16:30.582365] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:28:52.607 [2024-12-06 05:16:30.582374] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:28:52.607 [2024-12-06 05:16:30.582384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:28:52.607 [2024-12-06 05:16:30.582390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:28:52.607 [2024-12-06 05:16:30.582401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:28:52.607 [2024-12-06 05:16:30.582408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:28:52.607 [2024-12-06 05:16:30.582418] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:28:52.607 [2024-12-06 05:16:30.582428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:28:52.607 [2024-12-06 05:16:30.582439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:28:52.607 [2024-12-06 05:16:30.582446] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:28:52.607 [2024-12-06 05:16:30.582456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:28:52.607 [2024-12-06 05:16:30.582466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:28:52.607 [2024-12-06 05:16:30.582477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:28:52.607 [2024-12-06 05:16:30.582484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:28:52.607 [2024-12-06 05:16:30.582504] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:28:52.607 [2024-12-06 05:16:30.582514] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 7b4b71fc-81b6-42ef-bf87-4b87e3b52047 00:28:52.607 
[2024-12-06 05:16:30.582526] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:28:52.607 [2024-12-06 05:16:30.582533] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:28:52.607 [2024-12-06 05:16:30.582543] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:28:52.607 [2024-12-06 05:16:30.582551] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:28:52.607 [2024-12-06 05:16:30.582561] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:28:52.607 [2024-12-06 05:16:30.582568] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:28:52.607 [2024-12-06 05:16:30.582580] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:28:52.607 [2024-12-06 05:16:30.582587] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:28:52.607 [2024-12-06 05:16:30.582595] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:28:52.607 [2024-12-06 05:16:30.582602] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:52.607 [2024-12-06 05:16:30.582612] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:28:52.607 [2024-12-06 05:16:30.582624] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.108 ms 00:28:52.607 [2024-12-06 05:16:30.582634] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:52.607 [2024-12-06 05:16:30.584946] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:52.607 [2024-12-06 05:16:30.584990] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:28:52.607 [2024-12-06 05:16:30.585001] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.291 ms 00:28:52.607 [2024-12-06 05:16:30.585013] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:52.607 [2024-12-06 05:16:30.585136] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:52.607 [2024-12-06 05:16:30.585150] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:28:52.607 [2024-12-06 05:16:30.585159] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.098 ms 00:28:52.607 [2024-12-06 05:16:30.585168] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:52.607 [2024-12-06 05:16:30.593457] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:52.607 [2024-12-06 05:16:30.593514] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:28:52.607 [2024-12-06 05:16:30.593533] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:52.607 [2024-12-06 05:16:30.593544] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:52.607 [2024-12-06 05:16:30.593617] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:52.607 [2024-12-06 05:16:30.593628] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:28:52.607 [2024-12-06 05:16:30.593656] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:52.607 [2024-12-06 05:16:30.593693] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:52.607 [2024-12-06 05:16:30.593778] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:52.607 [2024-12-06 05:16:30.593795] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:28:52.607 [2024-12-06 05:16:30.593803] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:52.607 [2024-12-06 05:16:30.593814] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:52.607 [2024-12-06 05:16:30.593835] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:52.607 [2024-12-06 05:16:30.593848] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:28:52.607 [2024-12-06 05:16:30.593859] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:52.607 [2024-12-06 05:16:30.593871] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:52.607 [2024-12-06 05:16:30.606792] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:52.607 [2024-12-06 05:16:30.606846] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:28:52.607 [2024-12-06 05:16:30.606857] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:52.607 [2024-12-06 05:16:30.606868] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:52.607 [2024-12-06 05:16:30.617337] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:52.607 [2024-12-06 05:16:30.617393] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:28:52.607 [2024-12-06 05:16:30.617410] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:52.607 [2024-12-06 05:16:30.617424] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:52.607 [2024-12-06 05:16:30.617497] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:52.607 [2024-12-06 05:16:30.617513] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:28:52.607 [2024-12-06 05:16:30.617520] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:52.607 [2024-12-06 05:16:30.617531] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:52.608 [2024-12-06 05:16:30.617583] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:52.608 [2024-12-06 05:16:30.617595] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:28:52.608 [2024-12-06 05:16:30.617604] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:52.608 [2024-12-06 05:16:30.617616] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:52.608 [2024-12-06 05:16:30.617727] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:52.608 [2024-12-06 05:16:30.617740] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:28:52.608 [2024-12-06 05:16:30.617749] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:52.608 [2024-12-06 05:16:30.617759] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:52.608 [2024-12-06 05:16:30.617793] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:52.608 [2024-12-06 05:16:30.617805] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:28:52.608 [2024-12-06 05:16:30.617814] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:52.608 [2024-12-06 05:16:30.617826] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:52.608 [2024-12-06 05:16:30.617867] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:52.608 [2024-12-06 05:16:30.617884] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open 
cache bdev 00:28:52.608 [2024-12-06 05:16:30.617895] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:52.608 [2024-12-06 05:16:30.617905] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:52.608 [2024-12-06 05:16:30.617952] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:52.608 [2024-12-06 05:16:30.617972] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:28:52.608 [2024-12-06 05:16:30.617981] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:52.608 [2024-12-06 05:16:30.617993] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:52.608 [2024-12-06 05:16:30.618133] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 69.350 ms, result 0 00:28:52.608 true 00:28:52.608 05:16:30 ftl.ftl_restore_fast -- ftl/restore.sh@66 -- # killprocess 93218 00:28:52.608 05:16:30 ftl.ftl_restore_fast -- common/autotest_common.sh@950 -- # '[' -z 93218 ']' 00:28:52.608 05:16:30 ftl.ftl_restore_fast -- common/autotest_common.sh@954 -- # kill -0 93218 00:28:52.608 05:16:30 ftl.ftl_restore_fast -- common/autotest_common.sh@955 -- # uname 00:28:52.608 05:16:30 ftl.ftl_restore_fast -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:52.608 05:16:30 ftl.ftl_restore_fast -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 93218 00:28:52.608 killing process with pid 93218 00:28:52.608 05:16:30 ftl.ftl_restore_fast -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:28:52.608 05:16:30 ftl.ftl_restore_fast -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:28:52.608 05:16:30 ftl.ftl_restore_fast -- common/autotest_common.sh@968 -- # echo 'killing process with pid 93218' 00:28:52.608 05:16:30 ftl.ftl_restore_fast -- common/autotest_common.sh@969 -- # kill 93218 00:28:52.608 05:16:30 ftl.ftl_restore_fast -- common/autotest_common.sh@974 -- # wait 93218 00:28:56.815 05:16:34 ftl.ftl_restore_fast -- ftl/restore.sh@69 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile bs=4K count=256K 00:29:01.007 262144+0 records in 00:29:01.007 262144+0 records out 00:29:01.007 1073741824 bytes (1.1 GB, 1.0 GiB) copied, 3.78352 s, 284 MB/s 00:29:01.007 05:16:38 ftl.ftl_restore_fast -- ftl/restore.sh@70 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:29:02.388 05:16:40 ftl.ftl_restore_fast -- ftl/restore.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:29:02.649 [2024-12-06 05:16:40.654328] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
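Three shell steps sit between the two FTL sessions here: killprocess 93218 tears down the first SPDK app, dd builds a 1 GiB random test file, and spdk_dd replays that file into ftl0 using the JSON config saved earlier. The killprocess xtrace above follows the usual autotest_common.sh pattern; a hedged reconstruction (names taken from the log, the exact control flow is assumed):

```
# Approximate shape of the killprocess helper traced above. The log shows
# the pid check, the kill -0 probe, the ps comm= lookup (reactor_0) and a
# comparison against 'sudo'; everything else here is an assumption.
killprocess() {
	local pid=$1
	[ -n "$pid" ] || return 1
	kill -0 "$pid" || return 1                        # probe: still alive?
	process_name=$(ps --no-headers -o comm= "$pid")   # 'reactor_0' in this run
	# the real helper special-cases process_name = sudo before killing
	echo "killing process with pid $pid"
	kill "$pid"
	wait "$pid"                                       # reap the child
}
```

The dd figures also check out: 256Ki blocks of 4 KiB are 262144 x 4096 = 1073741824 bytes (1 GiB), and 1073741824 B / 3.78352 s is about 283.8 MB/s, which dd rounds to the reported 284 MB/s. The md5sum taken here is presumably the reference checksum the restore test compares against after the data has round-tripped through ftl0:

```
# The three commands as they appear in the trace above:
dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile bs=4K count=256K
md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile     # reference checksum
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd \
	--if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile \
	--ob=ftl0 \
	--json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
```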
00:29:02.649 [2024-12-06 05:16:40.654460] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93422 ] 00:29:02.649 [2024-12-06 05:16:40.786680] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:02.649 [2024-12-06 05:16:40.830949] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:29:02.912 [2024-12-06 05:16:40.946142] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:29:02.912 [2024-12-06 05:16:40.946235] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:29:02.912 [2024-12-06 05:16:41.106768] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:02.912 [2024-12-06 05:16:41.106996] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:29:02.912 [2024-12-06 05:16:41.107027] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:29:02.912 [2024-12-06 05:16:41.107037] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:02.912 [2024-12-06 05:16:41.107110] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:02.912 [2024-12-06 05:16:41.107122] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:29:02.912 [2024-12-06 05:16:41.107131] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms 00:29:02.912 [2024-12-06 05:16:41.107146] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:02.912 [2024-12-06 05:16:41.107174] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:29:02.912 [2024-12-06 05:16:41.107467] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:29:02.912 [2024-12-06 05:16:41.107485] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:02.912 [2024-12-06 05:16:41.107500] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:29:02.912 [2024-12-06 05:16:41.107513] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.322 ms 00:29:02.912 [2024-12-06 05:16:41.107521] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:02.912 [2024-12-06 05:16:41.109195] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:29:02.912 [2024-12-06 05:16:41.112843] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:02.912 [2024-12-06 05:16:41.112900] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:29:02.912 [2024-12-06 05:16:41.112912] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.650 ms 00:29:02.912 [2024-12-06 05:16:41.112921] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:02.912 [2024-12-06 05:16:41.113001] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:02.912 [2024-12-06 05:16:41.113018] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:29:02.912 [2024-12-06 05:16:41.113030] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:29:02.912 [2024-12-06 05:16:41.113038] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:02.912 [2024-12-06 05:16:41.121152] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
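Every management step in this second 'FTL startup' is reported through the same four trace_step notices from mngt/ftl_mngt.c (Action, then name, duration and status); status 0 means the step succeeded, and a failing step would produce Rollback entries like those in the shutdown trace earlier. When hunting for where startup time goes (above, 'Load super block' already accounts for 3.650 ms), a throwaway filter such as the following ranks steps by duration; a sketch only, and it assumes one notice per line as SPDK normally emits them, not the wrapped form of this capture:

```
# Rank FTL management steps by duration, slowest first. Assumes ftl.log
# holds one trace_step notice per line in the format seen above.
awk -F'name: |duration: ' \
	'/428:trace_step/ {n=$2} /430:trace_step/ {print $2 "\t" n}' ftl.log |
	sort -rn | head
```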
00:29:02.912 [2024-12-06 05:16:41.121341] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:29:02.912 [2024-12-06 05:16:41.121361] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.072 ms 00:29:02.912 [2024-12-06 05:16:41.121378] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:02.912 [2024-12-06 05:16:41.121483] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:02.912 [2024-12-06 05:16:41.121494] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:29:02.912 [2024-12-06 05:16:41.121503] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.070 ms 00:29:02.912 [2024-12-06 05:16:41.121512] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:02.912 [2024-12-06 05:16:41.121573] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:02.912 [2024-12-06 05:16:41.121583] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:29:02.912 [2024-12-06 05:16:41.121592] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:29:02.912 [2024-12-06 05:16:41.121600] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:02.912 [2024-12-06 05:16:41.121641] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:29:02.912 [2024-12-06 05:16:41.123595] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:02.912 [2024-12-06 05:16:41.123639] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:29:02.912 [2024-12-06 05:16:41.123649] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.959 ms 00:29:02.912 [2024-12-06 05:16:41.123657] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:02.912 [2024-12-06 05:16:41.123729] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:02.912 [2024-12-06 05:16:41.123738] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:29:02.912 [2024-12-06 05:16:41.123747] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:29:02.912 [2024-12-06 05:16:41.123756] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:02.912 [2024-12-06 05:16:41.123785] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:29:02.912 [2024-12-06 05:16:41.123812] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:29:02.912 [2024-12-06 05:16:41.123858] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:29:02.913 [2024-12-06 05:16:41.123877] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:29:02.913 [2024-12-06 05:16:41.123984] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:29:02.913 [2024-12-06 05:16:41.123997] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:29:02.913 [2024-12-06 05:16:41.124009] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:29:02.913 [2024-12-06 05:16:41.124019] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:29:02.913 [2024-12-06 05:16:41.124033] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:29:02.913 [2024-12-06 05:16:41.124042] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:29:02.913 [2024-12-06 05:16:41.124050] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:29:02.913 [2024-12-06 05:16:41.124058] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:29:02.913 [2024-12-06 05:16:41.124066] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:29:02.913 [2024-12-06 05:16:41.124074] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:02.913 [2024-12-06 05:16:41.124082] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:29:02.913 [2024-12-06 05:16:41.124090] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.292 ms 00:29:02.913 [2024-12-06 05:16:41.124101] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:02.913 [2024-12-06 05:16:41.124189] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:02.913 [2024-12-06 05:16:41.124199] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:29:02.913 [2024-12-06 05:16:41.124209] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:29:02.913 [2024-12-06 05:16:41.124217] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:02.913 [2024-12-06 05:16:41.124316] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:29:02.913 [2024-12-06 05:16:41.124338] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:29:02.913 [2024-12-06 05:16:41.124351] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:29:02.913 [2024-12-06 05:16:41.124360] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:02.913 [2024-12-06 05:16:41.124370] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:29:02.913 [2024-12-06 05:16:41.124378] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:29:02.913 [2024-12-06 05:16:41.124386] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:29:02.913 [2024-12-06 05:16:41.124394] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:29:02.913 [2024-12-06 05:16:41.124402] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:29:02.913 [2024-12-06 05:16:41.124410] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:29:02.913 [2024-12-06 05:16:41.124418] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:29:02.913 [2024-12-06 05:16:41.124426] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:29:02.913 [2024-12-06 05:16:41.124436] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:29:02.913 [2024-12-06 05:16:41.124444] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:29:02.913 [2024-12-06 05:16:41.124451] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:29:02.913 [2024-12-06 05:16:41.124460] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:02.913 [2024-12-06 05:16:41.124470] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:29:02.913 [2024-12-06 05:16:41.124479] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:29:02.913 [2024-12-06 05:16:41.124487] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:02.913 [2024-12-06 05:16:41.124495] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:29:02.913 [2024-12-06 05:16:41.124502] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:29:02.913 [2024-12-06 05:16:41.124510] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:02.913 [2024-12-06 05:16:41.124518] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:29:02.913 [2024-12-06 05:16:41.124526] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:29:02.913 [2024-12-06 05:16:41.124533] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:02.913 [2024-12-06 05:16:41.124542] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:29:02.913 [2024-12-06 05:16:41.124549] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:29:02.913 [2024-12-06 05:16:41.124557] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:02.913 [2024-12-06 05:16:41.124571] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:29:02.913 [2024-12-06 05:16:41.124579] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:29:02.913 [2024-12-06 05:16:41.124588] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:02.913 [2024-12-06 05:16:41.124596] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:29:02.913 [2024-12-06 05:16:41.124603] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:29:02.913 [2024-12-06 05:16:41.124611] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:29:02.913 [2024-12-06 05:16:41.124618] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:29:02.913 [2024-12-06 05:16:41.124624] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:29:02.913 [2024-12-06 05:16:41.124632] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:29:02.913 [2024-12-06 05:16:41.124640] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:29:02.913 [2024-12-06 05:16:41.124647] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:29:02.913 [2024-12-06 05:16:41.124654] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:02.913 [2024-12-06 05:16:41.124660] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:29:02.913 [2024-12-06 05:16:41.125157] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:29:02.913 [2024-12-06 05:16:41.125176] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:02.913 [2024-12-06 05:16:41.125184] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:29:02.913 [2024-12-06 05:16:41.125204] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:29:02.913 [2024-12-06 05:16:41.125213] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:29:02.913 [2024-12-06 05:16:41.125225] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:02.913 [2024-12-06 05:16:41.125233] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:29:02.913 [2024-12-06 05:16:41.125242] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:29:02.913 [2024-12-06 05:16:41.125249] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:29:02.913 
[2024-12-06 05:16:41.125256] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:29:02.913 [2024-12-06 05:16:41.125263] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:29:02.913 [2024-12-06 05:16:41.125271] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:29:02.913 [2024-12-06 05:16:41.125281] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:29:02.913 [2024-12-06 05:16:41.125293] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:02.913 [2024-12-06 05:16:41.125302] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:29:02.913 [2024-12-06 05:16:41.125310] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:29:02.913 [2024-12-06 05:16:41.125318] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:29:02.913 [2024-12-06 05:16:41.125325] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:29:02.913 [2024-12-06 05:16:41.125332] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:29:02.913 [2024-12-06 05:16:41.125342] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:29:02.913 [2024-12-06 05:16:41.125349] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:29:02.913 [2024-12-06 05:16:41.125357] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:29:02.913 [2024-12-06 05:16:41.125364] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:29:02.913 [2024-12-06 05:16:41.125378] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:29:02.913 [2024-12-06 05:16:41.125385] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:29:02.913 [2024-12-06 05:16:41.125393] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:29:02.913 [2024-12-06 05:16:41.125400] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:29:02.913 [2024-12-06 05:16:41.125407] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:29:02.913 [2024-12-06 05:16:41.125415] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:29:02.913 [2024-12-06 05:16:41.125425] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:02.913 [2024-12-06 05:16:41.125438] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:29:02.913 [2024-12-06 05:16:41.125446] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:29:02.913 [2024-12-06 05:16:41.125453] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:29:02.913 [2024-12-06 05:16:41.125460] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:29:02.913 [2024-12-06 05:16:41.125472] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:02.913 [2024-12-06 05:16:41.125482] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:29:02.913 [2024-12-06 05:16:41.125492] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.223 ms 00:29:02.913 [2024-12-06 05:16:41.125500] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:03.176 [2024-12-06 05:16:41.149595] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:03.176 [2024-12-06 05:16:41.149698] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:29:03.176 [2024-12-06 05:16:41.149720] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.003 ms 00:29:03.176 [2024-12-06 05:16:41.149731] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:03.176 [2024-12-06 05:16:41.149839] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:03.176 [2024-12-06 05:16:41.149853] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:29:03.176 [2024-12-06 05:16:41.149863] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.071 ms 00:29:03.176 [2024-12-06 05:16:41.149874] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:03.176 [2024-12-06 05:16:41.161416] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:03.176 [2024-12-06 05:16:41.161469] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:29:03.176 [2024-12-06 05:16:41.161480] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.470 ms 00:29:03.176 [2024-12-06 05:16:41.161490] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:03.176 [2024-12-06 05:16:41.161530] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:03.176 [2024-12-06 05:16:41.161540] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:29:03.176 [2024-12-06 05:16:41.161549] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:29:03.176 [2024-12-06 05:16:41.161557] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:03.176 [2024-12-06 05:16:41.162156] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:03.176 [2024-12-06 05:16:41.162185] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:29:03.176 [2024-12-06 05:16:41.162197] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.531 ms 00:29:03.176 [2024-12-06 05:16:41.162206] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:03.176 [2024-12-06 05:16:41.162350] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:03.176 [2024-12-06 05:16:41.162361] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:29:03.176 [2024-12-06 05:16:41.162375] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.123 ms 00:29:03.176 [2024-12-06 05:16:41.162384] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:03.176 [2024-12-06 05:16:41.169305] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:03.176 [2024-12-06 05:16:41.169352] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:29:03.176 [2024-12-06 05:16:41.169369] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.897 ms 00:29:03.176 [2024-12-06 05:16:41.169377] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:03.176 [2024-12-06 05:16:41.173298] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:29:03.176 [2024-12-06 05:16:41.173356] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:29:03.176 [2024-12-06 05:16:41.173369] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:03.176 [2024-12-06 05:16:41.173378] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:29:03.176 [2024-12-06 05:16:41.173387] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.896 ms 00:29:03.176 [2024-12-06 05:16:41.173395] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:03.176 [2024-12-06 05:16:41.189048] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:03.176 [2024-12-06 05:16:41.189100] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:29:03.176 [2024-12-06 05:16:41.189113] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.602 ms 00:29:03.176 [2024-12-06 05:16:41.189125] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:03.176 [2024-12-06 05:16:41.191951] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:03.176 [2024-12-06 05:16:41.192127] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:29:03.176 [2024-12-06 05:16:41.192145] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.769 ms 00:29:03.176 [2024-12-06 05:16:41.192154] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:03.176 [2024-12-06 05:16:41.194909] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:03.176 [2024-12-06 05:16:41.194956] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:29:03.176 [2024-12-06 05:16:41.194966] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.712 ms 00:29:03.176 [2024-12-06 05:16:41.194974] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:03.176 [2024-12-06 05:16:41.195320] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:03.176 [2024-12-06 05:16:41.195333] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:29:03.176 [2024-12-06 05:16:41.195342] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.265 ms 00:29:03.176 [2024-12-06 05:16:41.195350] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:03.176 [2024-12-06 05:16:41.220564] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:03.176 [2024-12-06 05:16:41.220820] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:29:03.176 [2024-12-06 05:16:41.221048] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
25.196 ms 00:29:03.176 [2024-12-06 05:16:41.221088] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:03.176 [2024-12-06 05:16:41.229362] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:29:03.176 [2024-12-06 05:16:41.232539] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:03.176 [2024-12-06 05:16:41.232695] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:29:03.176 [2024-12-06 05:16:41.232764] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.393 ms 00:29:03.176 [2024-12-06 05:16:41.232795] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:03.176 [2024-12-06 05:16:41.232884] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:03.176 [2024-12-06 05:16:41.232911] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:29:03.176 [2024-12-06 05:16:41.232933] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:29:03.176 [2024-12-06 05:16:41.232952] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:03.176 [2024-12-06 05:16:41.233035] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:03.176 [2024-12-06 05:16:41.233124] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:29:03.176 [2024-12-06 05:16:41.233138] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:29:03.176 [2024-12-06 05:16:41.233146] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:03.176 [2024-12-06 05:16:41.233185] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:03.176 [2024-12-06 05:16:41.233201] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:29:03.176 [2024-12-06 05:16:41.233210] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:29:03.177 [2024-12-06 05:16:41.233218] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:03.177 [2024-12-06 05:16:41.233258] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:29:03.177 [2024-12-06 05:16:41.233272] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:03.177 [2024-12-06 05:16:41.233281] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:29:03.177 [2024-12-06 05:16:41.233290] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:29:03.177 [2024-12-06 05:16:41.233298] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:03.177 [2024-12-06 05:16:41.238463] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:03.177 [2024-12-06 05:16:41.238519] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:29:03.177 [2024-12-06 05:16:41.238531] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.141 ms 00:29:03.177 [2024-12-06 05:16:41.238539] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:03.177 [2024-12-06 05:16:41.238624] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:03.177 [2024-12-06 05:16:41.238635] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:29:03.177 [2024-12-06 05:16:41.238644] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:29:03.177 [2024-12-06 05:16:41.238656] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:03.177 
[2024-12-06 05:16:41.239974] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 132.724 ms, result 0 00:29:04.122  [2024-12-06T05:16:43.300Z] Copying: 11/1024 [MB] (11 MBps) [2024-12-06T05:16:44.686Z] Copying: 21/1024 [MB] (10 MBps) [2024-12-06T05:16:45.253Z] Copying: 32/1024 [MB] (10 MBps) [2024-12-06T05:16:46.624Z] Copying: 50/1024 [MB] (17 MBps) [2024-12-06T05:16:47.559Z] Copying: 78/1024 [MB] (28 MBps) [2024-12-06T05:16:48.495Z] Copying: 107/1024 [MB] (28 MBps) [2024-12-06T05:16:49.433Z] Copying: 134/1024 [MB] (26 MBps) [2024-12-06T05:16:50.375Z] Copying: 161/1024 [MB] (26 MBps) [2024-12-06T05:16:51.314Z] Copying: 171/1024 [MB] (10 MBps) [2024-12-06T05:16:52.687Z] Copying: 181/1024 [MB] (10 MBps) [2024-12-06T05:16:53.254Z] Copying: 206/1024 [MB] (24 MBps) [2024-12-06T05:16:54.631Z] Copying: 231/1024 [MB] (24 MBps) [2024-12-06T05:16:55.628Z] Copying: 254/1024 [MB] (23 MBps) [2024-12-06T05:16:56.566Z] Copying: 278/1024 [MB] (23 MBps) [2024-12-06T05:16:57.501Z] Copying: 301/1024 [MB] (23 MBps) [2024-12-06T05:16:58.436Z] Copying: 322/1024 [MB] (21 MBps) [2024-12-06T05:16:59.371Z] Copying: 345/1024 [MB] (22 MBps) [2024-12-06T05:17:00.306Z] Copying: 366/1024 [MB] (21 MBps) [2024-12-06T05:17:01.683Z] Copying: 386/1024 [MB] (19 MBps) [2024-12-06T05:17:02.618Z] Copying: 406/1024 [MB] (19 MBps) [2024-12-06T05:17:03.553Z] Copying: 423/1024 [MB] (17 MBps) [2024-12-06T05:17:04.488Z] Copying: 445/1024 [MB] (22 MBps) [2024-12-06T05:17:05.422Z] Copying: 467/1024 [MB] (21 MBps) [2024-12-06T05:17:06.354Z] Copying: 488/1024 [MB] (21 MBps) [2024-12-06T05:17:07.286Z] Copying: 513/1024 [MB] (24 MBps) [2024-12-06T05:17:08.658Z] Copying: 534/1024 [MB] (21 MBps) [2024-12-06T05:17:09.591Z] Copying: 553/1024 [MB] (18 MBps) [2024-12-06T05:17:10.545Z] Copying: 568/1024 [MB] (15 MBps) [2024-12-06T05:17:11.488Z] Copying: 582/1024 [MB] (13 MBps) [2024-12-06T05:17:12.428Z] Copying: 594/1024 [MB] (12 MBps) [2024-12-06T05:17:13.367Z] Copying: 608/1024 [MB] (13 MBps) [2024-12-06T05:17:14.311Z] Copying: 637/1024 [MB] (29 MBps) [2024-12-06T05:17:15.687Z] Copying: 662876/1048576 [kB] (10212 kBps) [2024-12-06T05:17:16.252Z] Copying: 663/1024 [MB] (16 MBps) [2024-12-06T05:17:17.626Z] Copying: 684/1024 [MB] (20 MBps) [2024-12-06T05:17:18.561Z] Copying: 705/1024 [MB] (21 MBps) [2024-12-06T05:17:19.495Z] Copying: 725/1024 [MB] (20 MBps) [2024-12-06T05:17:20.432Z] Copying: 746/1024 [MB] (20 MBps) [2024-12-06T05:17:21.371Z] Copying: 766/1024 [MB] (19 MBps) [2024-12-06T05:17:22.305Z] Copying: 778/1024 [MB] (12 MBps) [2024-12-06T05:17:23.690Z] Copying: 802/1024 [MB] (23 MBps) [2024-12-06T05:17:24.255Z] Copying: 828/1024 [MB] (25 MBps) [2024-12-06T05:17:25.628Z] Copying: 854/1024 [MB] (26 MBps) [2024-12-06T05:17:26.561Z] Copying: 882/1024 [MB] (28 MBps) [2024-12-06T05:17:27.564Z] Copying: 910/1024 [MB] (27 MBps) [2024-12-06T05:17:28.499Z] Copying: 959/1024 [MB] (48 MBps) [2024-12-06T05:17:29.437Z] Copying: 996/1024 [MB] (36 MBps) [2024-12-06T05:17:29.437Z] Copying: 1022/1024 [MB] (26 MBps) [2024-12-06T05:17:29.437Z] Copying: 1024/1024 [MB] (average 21 MBps)[2024-12-06 05:17:29.340103] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:51.205 [2024-12-06 05:17:29.340227] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:29:51.205 [2024-12-06 05:17:29.340300] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:29:51.205 [2024-12-06 05:17:29.340324] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:29:51.205 [2024-12-06 05:17:29.340389] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:29:51.205 [2024-12-06 05:17:29.340971] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:51.205 [2024-12-06 05:17:29.341081] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:29:51.205 [2024-12-06 05:17:29.341137] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.534 ms 00:29:51.205 [2024-12-06 05:17:29.341159] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:51.205 [2024-12-06 05:17:29.343214] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:51.205 [2024-12-06 05:17:29.343326] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:29:51.205 [2024-12-06 05:17:29.343383] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.009 ms 00:29:51.205 [2024-12-06 05:17:29.343405] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:51.205 [2024-12-06 05:17:29.343451] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:51.205 [2024-12-06 05:17:29.343475] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Fast persist NV cache metadata 00:29:51.205 [2024-12-06 05:17:29.343494] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:29:51.205 [2024-12-06 05:17:29.343512] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:51.205 [2024-12-06 05:17:29.343568] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:51.205 [2024-12-06 05:17:29.343588] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL SHM clean state 00:29:51.205 [2024-12-06 05:17:29.343662] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:29:51.205 [2024-12-06 05:17:29.343699] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:51.205 [2024-12-06 05:17:29.343731] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:29:51.205 [2024-12-06 05:17:29.343756] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:29:51.205 [2024-12-06 05:17:29.343787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:29:51.205 [2024-12-06 05:17:29.343816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:29:51.206 [2024-12-06 05:17:29.343888] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:29:51.206 [2024-12-06 05:17:29.343918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:29:51.206 [2024-12-06 05:17:29.343946] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:29:51.206 [2024-12-06 05:17:29.343974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:29:51.206 [2024-12-06 05:17:29.344001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:29:51.206 [2024-12-06 05:17:29.344062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:29:51.206 [2024-12-06 05:17:29.344091] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:29:51.206 [2024-12-06 
05:17:29.344154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:29:51.206 [2024-12-06 05:17:29.344184] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:29:51.206 [2024-12-06 05:17:29.344238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:29:51.206 [2024-12-06 05:17:29.344268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:29:51.206 [2024-12-06 05:17:29.344296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:29:51.206 [2024-12-06 05:17:29.344348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:29:51.206 [2024-12-06 05:17:29.344379] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:29:51.206 [2024-12-06 05:17:29.344406] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:29:51.206 [2024-12-06 05:17:29.344456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:29:51.206 [2024-12-06 05:17:29.344517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:29:51.206 [2024-12-06 05:17:29.344548] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:29:51.206 [2024-12-06 05:17:29.344601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:29:51.206 [2024-12-06 05:17:29.344630] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:29:51.206 [2024-12-06 05:17:29.344657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:29:51.206 [2024-12-06 05:17:29.344706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:29:51.206 [2024-12-06 05:17:29.344735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:29:51.206 [2024-12-06 05:17:29.344794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:29:51.206 [2024-12-06 05:17:29.344822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:29:51.206 [2024-12-06 05:17:29.344857] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:29:51.206 [2024-12-06 05:17:29.344891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:29:51.206 [2024-12-06 05:17:29.344942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:29:51.206 [2024-12-06 05:17:29.344974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:29:51.206 [2024-12-06 05:17:29.345002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:29:51.206 [2024-12-06 05:17:29.345030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:29:51.206 [2024-12-06 05:17:29.345057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 
00:29:51.206 [2024-12-06 05:17:29.345199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:29:51.206 [2024-12-06 05:17:29.345208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:29:51.206 [2024-12-06 05:17:29.345216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:29:51.206 [2024-12-06 05:17:29.345223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:29:51.206 [2024-12-06 05:17:29.345232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:29:51.206 [2024-12-06 05:17:29.345239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:29:51.206 [2024-12-06 05:17:29.345247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:29:51.206 [2024-12-06 05:17:29.345255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:29:51.206 [2024-12-06 05:17:29.345262] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:29:51.206 [2024-12-06 05:17:29.345270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:29:51.206 [2024-12-06 05:17:29.345277] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:29:51.206 [2024-12-06 05:17:29.345284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:29:51.206 [2024-12-06 05:17:29.345292] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:29:51.206 [2024-12-06 05:17:29.345299] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:29:51.206 [2024-12-06 05:17:29.345307] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:29:51.206 [2024-12-06 05:17:29.345314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:29:51.206 [2024-12-06 05:17:29.345321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:29:51.206 [2024-12-06 05:17:29.345328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:29:51.206 [2024-12-06 05:17:29.345336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:29:51.206 [2024-12-06 05:17:29.345342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:29:51.206 [2024-12-06 05:17:29.345350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:29:51.206 [2024-12-06 05:17:29.345357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:29:51.206 [2024-12-06 05:17:29.345364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:29:51.206 [2024-12-06 05:17:29.345371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:29:51.206 [2024-12-06 05:17:29.345379] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 
wr_cnt: 0 state: free 00:29:51.206 [2024-12-06 05:17:29.345386] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:29:51.207 [2024-12-06 05:17:29.345394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:29:51.207 [2024-12-06 05:17:29.345401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:29:51.207 [2024-12-06 05:17:29.345408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:29:51.207 [2024-12-06 05:17:29.345415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:29:51.207 [2024-12-06 05:17:29.345424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:29:51.207 [2024-12-06 05:17:29.345431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:29:51.207 [2024-12-06 05:17:29.345439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:29:51.207 [2024-12-06 05:17:29.345446] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:29:51.207 [2024-12-06 05:17:29.345453] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:29:51.207 [2024-12-06 05:17:29.345461] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:29:51.207 [2024-12-06 05:17:29.345468] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:29:51.207 [2024-12-06 05:17:29.345475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:29:51.207 [2024-12-06 05:17:29.345483] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:29:51.207 [2024-12-06 05:17:29.345492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:29:51.207 [2024-12-06 05:17:29.345499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:29:51.207 [2024-12-06 05:17:29.345507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:29:51.207 [2024-12-06 05:17:29.345534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:29:51.207 [2024-12-06 05:17:29.345541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:29:51.207 [2024-12-06 05:17:29.345549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:29:51.207 [2024-12-06 05:17:29.345556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:29:51.207 [2024-12-06 05:17:29.345564] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:29:51.207 [2024-12-06 05:17:29.345572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:29:51.207 [2024-12-06 05:17:29.345579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:29:51.207 [2024-12-06 05:17:29.345586] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 85: 0 / 261120 wr_cnt: 0 state: free 00:29:51.207 [2024-12-06 05:17:29.345594] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:29:51.207 [2024-12-06 05:17:29.345601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:29:51.207 [2024-12-06 05:17:29.345608] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:29:51.207 [2024-12-06 05:17:29.345616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:29:51.207 [2024-12-06 05:17:29.345623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:29:51.207 [2024-12-06 05:17:29.345630] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:29:51.207 [2024-12-06 05:17:29.345638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:29:51.207 [2024-12-06 05:17:29.345644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:29:51.207 [2024-12-06 05:17:29.345652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:29:51.207 [2024-12-06 05:17:29.345660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:29:51.207 [2024-12-06 05:17:29.345812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:29:51.207 [2024-12-06 05:17:29.345842] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:29:51.207 [2024-12-06 05:17:29.345869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:29:51.207 [2024-12-06 05:17:29.345954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:29:51.207 [2024-12-06 05:17:29.345982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:29:51.207 [2024-12-06 05:17:29.346018] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:29:51.207 [2024-12-06 05:17:29.346062] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 7b4b71fc-81b6-42ef-bf87-4b87e3b52047 00:29:51.207 [2024-12-06 05:17:29.346126] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:29:51.207 [2024-12-06 05:17:29.346148] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 32 00:29:51.207 [2024-12-06 05:17:29.346188] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:29:51.207 [2024-12-06 05:17:29.346199] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:29:51.207 [2024-12-06 05:17:29.346208] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:29:51.207 [2024-12-06 05:17:29.346216] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:29:51.207 [2024-12-06 05:17:29.346223] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:29:51.207 [2024-12-06 05:17:29.346230] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:29:51.207 [2024-12-06 05:17:29.346236] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:29:51.207 [2024-12-06 05:17:29.346244] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:29:51.207 [2024-12-06 05:17:29.346251] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:29:51.207 [2024-12-06 05:17:29.346260] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.513 ms 00:29:51.207 [2024-12-06 05:17:29.346267] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:51.207 [2024-12-06 05:17:29.347922] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:51.207 [2024-12-06 05:17:29.347957] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:29:51.207 [2024-12-06 05:17:29.347967] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.631 ms 00:29:51.207 [2024-12-06 05:17:29.347975] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:51.207 [2024-12-06 05:17:29.348068] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:51.207 [2024-12-06 05:17:29.348077] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:29:51.208 [2024-12-06 05:17:29.348085] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.075 ms 00:29:51.208 [2024-12-06 05:17:29.348098] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:51.208 [2024-12-06 05:17:29.353284] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:51.208 [2024-12-06 05:17:29.353394] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:29:51.208 [2024-12-06 05:17:29.353441] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:51.208 [2024-12-06 05:17:29.353464] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:51.208 [2024-12-06 05:17:29.353535] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:51.208 [2024-12-06 05:17:29.353558] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:29:51.208 [2024-12-06 05:17:29.353577] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:51.208 [2024-12-06 05:17:29.353601] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:51.208 [2024-12-06 05:17:29.353659] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:51.208 [2024-12-06 05:17:29.353697] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:29:51.208 [2024-12-06 05:17:29.353717] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:51.208 [2024-12-06 05:17:29.353773] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:51.208 [2024-12-06 05:17:29.353803] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:51.208 [2024-12-06 05:17:29.353823] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:29:51.208 [2024-12-06 05:17:29.353843] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:51.208 [2024-12-06 05:17:29.353890] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:51.208 [2024-12-06 05:17:29.364111] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:51.208 [2024-12-06 05:17:29.364256] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:29:51.208 [2024-12-06 05:17:29.364307] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:51.208 [2024-12-06 05:17:29.364337] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:51.208 [2024-12-06 
05:17:29.372407] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:51.208 [2024-12-06 05:17:29.372544] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:29:51.208 [2024-12-06 05:17:29.372594] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:51.208 [2024-12-06 05:17:29.372625] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:51.208 [2024-12-06 05:17:29.372696] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:51.208 [2024-12-06 05:17:29.372719] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:29:51.208 [2024-12-06 05:17:29.372738] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:51.208 [2024-12-06 05:17:29.372761] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:51.208 [2024-12-06 05:17:29.372796] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:51.208 [2024-12-06 05:17:29.372816] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:29:51.208 [2024-12-06 05:17:29.372876] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:51.208 [2024-12-06 05:17:29.372898] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:51.208 [2024-12-06 05:17:29.372974] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:51.208 [2024-12-06 05:17:29.373000] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:29:51.208 [2024-12-06 05:17:29.373020] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:51.208 [2024-12-06 05:17:29.373080] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:51.208 [2024-12-06 05:17:29.373125] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:51.208 [2024-12-06 05:17:29.373154] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:29:51.208 [2024-12-06 05:17:29.373179] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:51.208 [2024-12-06 05:17:29.373197] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:51.208 [2024-12-06 05:17:29.373247] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:51.208 [2024-12-06 05:17:29.373335] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:29:51.208 [2024-12-06 05:17:29.373355] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:51.208 [2024-12-06 05:17:29.373373] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:51.208 [2024-12-06 05:17:29.373425] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:51.208 [2024-12-06 05:17:29.373563] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:29:51.208 [2024-12-06 05:17:29.373599] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:51.208 [2024-12-06 05:17:29.373617] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:51.208 [2024-12-06 05:17:29.373781] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL fast shutdown', duration = 33.644 ms, result 0 00:29:52.149 00:29:52.149 00:29:52.149 05:17:30 ftl.ftl_restore_fast -- ftl/restore.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile 
--json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --count=262144 00:29:52.149 [2024-12-06 05:17:30.143484] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:29:52.149 [2024-12-06 05:17:30.143653] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93929 ] 00:29:52.149 [2024-12-06 05:17:30.282095] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:52.149 [2024-12-06 05:17:30.332016] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:29:52.411 [2024-12-06 05:17:30.443797] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:29:52.411 [2024-12-06 05:17:30.443876] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:29:52.411 [2024-12-06 05:17:30.604583] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:52.411 [2024-12-06 05:17:30.604650] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:29:52.411 [2024-12-06 05:17:30.604686] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:29:52.411 [2024-12-06 05:17:30.604696] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:52.411 [2024-12-06 05:17:30.604761] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:52.411 [2024-12-06 05:17:30.604773] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:29:52.411 [2024-12-06 05:17:30.604782] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:29:52.411 [2024-12-06 05:17:30.604815] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:52.411 [2024-12-06 05:17:30.604838] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:29:52.411 [2024-12-06 05:17:30.605238] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:29:52.411 [2024-12-06 05:17:30.605281] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:52.411 [2024-12-06 05:17:30.605290] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:29:52.411 [2024-12-06 05:17:30.605304] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.449 ms 00:29:52.411 [2024-12-06 05:17:30.605312] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:52.411 [2024-12-06 05:17:30.605649] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 1, shm_clean 1 00:29:52.411 [2024-12-06 05:17:30.605694] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:52.411 [2024-12-06 05:17:30.605703] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:29:52.411 [2024-12-06 05:17:30.605714] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.046 ms 00:29:52.411 [2024-12-06 05:17:30.605722] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:52.411 [2024-12-06 05:17:30.605819] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:52.411 [2024-12-06 05:17:30.605834] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:29:52.411 [2024-12-06 05:17:30.605846] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:29:52.411 
[2024-12-06 05:17:30.605855] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:52.411 [2024-12-06 05:17:30.606109] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:52.411 [2024-12-06 05:17:30.606121] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:29:52.411 [2024-12-06 05:17:30.606130] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.217 ms 00:29:52.411 [2024-12-06 05:17:30.606143] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:52.411 [2024-12-06 05:17:30.606228] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:52.411 [2024-12-06 05:17:30.606242] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:29:52.411 [2024-12-06 05:17:30.606250] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:29:52.411 [2024-12-06 05:17:30.606261] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:52.411 [2024-12-06 05:17:30.606283] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:52.411 [2024-12-06 05:17:30.606293] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:29:52.411 [2024-12-06 05:17:30.606301] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:29:52.411 [2024-12-06 05:17:30.606309] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:52.411 [2024-12-06 05:17:30.606334] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:29:52.411 [2024-12-06 05:17:30.608486] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:52.411 [2024-12-06 05:17:30.608747] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:29:52.411 [2024-12-06 05:17:30.608773] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.156 ms 00:29:52.411 [2024-12-06 05:17:30.608781] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:52.411 [2024-12-06 05:17:30.608829] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:52.411 [2024-12-06 05:17:30.608839] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:29:52.411 [2024-12-06 05:17:30.608851] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:29:52.412 [2024-12-06 05:17:30.608859] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:52.412 [2024-12-06 05:17:30.608918] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:29:52.412 [2024-12-06 05:17:30.608946] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:29:52.412 [2024-12-06 05:17:30.608993] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:29:52.412 [2024-12-06 05:17:30.609010] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:29:52.412 [2024-12-06 05:17:30.609118] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:29:52.412 [2024-12-06 05:17:30.609130] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:29:52.412 [2024-12-06 05:17:30.609148] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 
00:29:52.412 [2024-12-06 05:17:30.609163] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:29:52.412 [2024-12-06 05:17:30.609174] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:29:52.412 [2024-12-06 05:17:30.609187] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:29:52.412 [2024-12-06 05:17:30.609197] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:29:52.412 [2024-12-06 05:17:30.609205] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:29:52.412 [2024-12-06 05:17:30.609213] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:29:52.412 [2024-12-06 05:17:30.609225] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:52.412 [2024-12-06 05:17:30.609235] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:29:52.412 [2024-12-06 05:17:30.609244] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.309 ms 00:29:52.412 [2024-12-06 05:17:30.609252] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:52.412 [2024-12-06 05:17:30.609339] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:52.412 [2024-12-06 05:17:30.609349] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:29:52.412 [2024-12-06 05:17:30.609358] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:29:52.412 [2024-12-06 05:17:30.609368] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:52.412 [2024-12-06 05:17:30.609470] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:29:52.412 [2024-12-06 05:17:30.609483] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:29:52.412 [2024-12-06 05:17:30.609492] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:29:52.412 [2024-12-06 05:17:30.609503] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:52.412 [2024-12-06 05:17:30.609544] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:29:52.412 [2024-12-06 05:17:30.609552] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:29:52.412 [2024-12-06 05:17:30.609561] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:29:52.412 [2024-12-06 05:17:30.609569] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:29:52.412 [2024-12-06 05:17:30.609578] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:29:52.412 [2024-12-06 05:17:30.609585] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:29:52.412 [2024-12-06 05:17:30.609593] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:29:52.412 [2024-12-06 05:17:30.609601] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:29:52.412 [2024-12-06 05:17:30.609609] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:29:52.412 [2024-12-06 05:17:30.609618] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:29:52.412 [2024-12-06 05:17:30.609626] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:29:52.412 [2024-12-06 05:17:30.609636] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:52.412 [2024-12-06 05:17:30.609644] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region 
nvc_md_mirror 00:29:52.412 [2024-12-06 05:17:30.609652] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:29:52.412 [2024-12-06 05:17:30.609660] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:52.412 [2024-12-06 05:17:30.609695] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:29:52.412 [2024-12-06 05:17:30.609704] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:29:52.412 [2024-12-06 05:17:30.609712] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:52.412 [2024-12-06 05:17:30.609719] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:29:52.412 [2024-12-06 05:17:30.609726] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:29:52.412 [2024-12-06 05:17:30.609733] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:52.412 [2024-12-06 05:17:30.609740] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:29:52.412 [2024-12-06 05:17:30.609747] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:29:52.412 [2024-12-06 05:17:30.609754] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:52.412 [2024-12-06 05:17:30.609761] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:29:52.412 [2024-12-06 05:17:30.609768] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:29:52.412 [2024-12-06 05:17:30.609775] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:52.412 [2024-12-06 05:17:30.609783] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:29:52.412 [2024-12-06 05:17:30.609790] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:29:52.412 [2024-12-06 05:17:30.609797] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:29:52.412 [2024-12-06 05:17:30.609804] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:29:52.412 [2024-12-06 05:17:30.609815] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:29:52.412 [2024-12-06 05:17:30.609822] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:29:52.412 [2024-12-06 05:17:30.609829] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:29:52.412 [2024-12-06 05:17:30.609836] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:29:52.412 [2024-12-06 05:17:30.609842] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:52.412 [2024-12-06 05:17:30.609849] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:29:52.412 [2024-12-06 05:17:30.609856] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:29:52.412 [2024-12-06 05:17:30.609864] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:52.412 [2024-12-06 05:17:30.609872] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:29:52.412 [2024-12-06 05:17:30.609881] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:29:52.412 [2024-12-06 05:17:30.609889] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:29:52.412 [2024-12-06 05:17:30.609896] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:52.412 [2024-12-06 05:17:30.609906] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:29:52.412 [2024-12-06 05:17:30.609913] ftl_layout.c: 
131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:29:52.412 [2024-12-06 05:17:30.609920] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:29:52.412 [2024-12-06 05:17:30.609928] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:29:52.412 [2024-12-06 05:17:30.609938] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:29:52.412 [2024-12-06 05:17:30.609945] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:29:52.412 [2024-12-06 05:17:30.609953] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:29:52.412 [2024-12-06 05:17:30.609965] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:52.412 [2024-12-06 05:17:30.609974] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:29:52.413 [2024-12-06 05:17:30.609982] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:29:52.413 [2024-12-06 05:17:30.609989] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:29:52.413 [2024-12-06 05:17:30.609996] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:29:52.413 [2024-12-06 05:17:30.610004] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:29:52.413 [2024-12-06 05:17:30.610011] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:29:52.413 [2024-12-06 05:17:30.610018] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:29:52.413 [2024-12-06 05:17:30.610026] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:29:52.413 [2024-12-06 05:17:30.610033] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:29:52.413 [2024-12-06 05:17:30.610040] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:29:52.413 [2024-12-06 05:17:30.610048] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:29:52.413 [2024-12-06 05:17:30.610061] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:29:52.413 [2024-12-06 05:17:30.610077] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:29:52.413 [2024-12-06 05:17:30.610085] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:29:52.413 [2024-12-06 05:17:30.610092] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:29:52.413 [2024-12-06 05:17:30.610100] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region 
type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:52.413 [2024-12-06 05:17:30.610108] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:29:52.413 [2024-12-06 05:17:30.610117] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:29:52.413 [2024-12-06 05:17:30.610125] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:29:52.413 [2024-12-06 05:17:30.610132] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:29:52.413 [2024-12-06 05:17:30.610140] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:52.413 [2024-12-06 05:17:30.610147] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:29:52.413 [2024-12-06 05:17:30.610155] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.738 ms 00:29:52.413 [2024-12-06 05:17:30.610163] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:52.413 [2024-12-06 05:17:30.630925] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:52.413 [2024-12-06 05:17:30.631136] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:29:52.413 [2024-12-06 05:17:30.631164] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.712 ms 00:29:52.413 [2024-12-06 05:17:30.631173] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:52.413 [2024-12-06 05:17:30.631270] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:52.413 [2024-12-06 05:17:30.631281] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:29:52.413 [2024-12-06 05:17:30.631291] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:29:52.413 [2024-12-06 05:17:30.631298] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:52.676 [2024-12-06 05:17:30.644129] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:52.676 [2024-12-06 05:17:30.644185] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:29:52.676 [2024-12-06 05:17:30.644203] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.762 ms 00:29:52.676 [2024-12-06 05:17:30.644213] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:52.676 [2024-12-06 05:17:30.644259] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:52.676 [2024-12-06 05:17:30.644271] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:29:52.676 [2024-12-06 05:17:30.644282] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:29:52.676 [2024-12-06 05:17:30.644292] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:52.676 [2024-12-06 05:17:30.644405] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:52.676 [2024-12-06 05:17:30.644419] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:29:52.676 [2024-12-06 05:17:30.644430] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:29:52.676 [2024-12-06 05:17:30.644443] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:52.676 [2024-12-06 05:17:30.644592] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:29:52.676 [2024-12-06 05:17:30.644608] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:29:52.676 [2024-12-06 05:17:30.644618] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.131 ms 00:29:52.676 [2024-12-06 05:17:30.644631] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:52.676 [2024-12-06 05:17:30.651588] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:52.676 [2024-12-06 05:17:30.651635] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:29:52.676 [2024-12-06 05:17:30.651646] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.933 ms 00:29:52.676 [2024-12-06 05:17:30.651654] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:52.676 [2024-12-06 05:17:30.651794] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:29:52.676 [2024-12-06 05:17:30.651810] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:29:52.676 [2024-12-06 05:17:30.651819] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:52.676 [2024-12-06 05:17:30.651828] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:29:52.676 [2024-12-06 05:17:30.651837] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:29:52.676 [2024-12-06 05:17:30.651844] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:52.676 [2024-12-06 05:17:30.664139] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:52.676 [2024-12-06 05:17:30.664188] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:29:52.676 [2024-12-06 05:17:30.664199] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.277 ms 00:29:52.676 [2024-12-06 05:17:30.664207] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:52.676 [2024-12-06 05:17:30.664340] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:52.676 [2024-12-06 05:17:30.664356] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:29:52.676 [2024-12-06 05:17:30.664364] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.104 ms 00:29:52.676 [2024-12-06 05:17:30.664372] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:52.676 [2024-12-06 05:17:30.664422] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:52.676 [2024-12-06 05:17:30.664436] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:29:52.676 [2024-12-06 05:17:30.664444] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.001 ms 00:29:52.676 [2024-12-06 05:17:30.664455] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:52.676 [2024-12-06 05:17:30.664797] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:52.676 [2024-12-06 05:17:30.664811] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:29:52.676 [2024-12-06 05:17:30.664819] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.304 ms 00:29:52.676 [2024-12-06 05:17:30.664832] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:52.676 [2024-12-06 05:17:30.664855] mngt/ftl_mngt_p2l.c: 169:ftl_mngt_p2l_restore_ckpt: *NOTICE*: [FTL][ftl0] SHM: skipping p2l ckpt restore 
00:29:52.676 [2024-12-06 05:17:30.664865] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:52.676 [2024-12-06 05:17:30.664873] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:29:52.676 [2024-12-06 05:17:30.664882] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:29:52.676 [2024-12-06 05:17:30.664893] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:52.676 [2024-12-06 05:17:30.674349] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:29:52.676 [2024-12-06 05:17:30.674660] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:52.676 [2024-12-06 05:17:30.674703] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:29:52.676 [2024-12-06 05:17:30.674714] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.748 ms 00:29:52.676 [2024-12-06 05:17:30.674722] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:52.676 [2024-12-06 05:17:30.677141] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:52.676 [2024-12-06 05:17:30.677182] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:29:52.676 [2024-12-06 05:17:30.677192] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.394 ms 00:29:52.676 [2024-12-06 05:17:30.677200] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:52.676 [2024-12-06 05:17:30.677301] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:52.676 [2024-12-06 05:17:30.677312] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:29:52.676 [2024-12-06 05:17:30.677321] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:29:52.676 [2024-12-06 05:17:30.677329] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:52.676 [2024-12-06 05:17:30.677359] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:52.676 [2024-12-06 05:17:30.677372] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:29:52.676 [2024-12-06 05:17:30.677384] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:29:52.676 [2024-12-06 05:17:30.677391] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:52.676 [2024-12-06 05:17:30.677430] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:29:52.676 [2024-12-06 05:17:30.677444] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:52.676 [2024-12-06 05:17:30.677458] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:29:52.677 [2024-12-06 05:17:30.677470] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:29:52.677 [2024-12-06 05:17:30.677477] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:52.677 [2024-12-06 05:17:30.684067] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:52.677 [2024-12-06 05:17:30.684245] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:29:52.677 [2024-12-06 05:17:30.684316] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.568 ms 00:29:52.677 [2024-12-06 05:17:30.684340] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:52.677 [2024-12-06 05:17:30.684438] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:52.677 [2024-12-06 
05:17:30.684464] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:29:52.677 [2024-12-06 05:17:30.684490] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:29:52.677 [2024-12-06 05:17:30.684513] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:52.677 [2024-12-06 05:17:30.686400] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 81.358 ms, result 0 00:29:54.066  [2024-12-06T05:17:32.872Z] Copying: 15/1024 [MB] (15 MBps) [2024-12-06T05:17:34.261Z] Copying: 28/1024 [MB] (13 MBps) [2024-12-06T05:17:35.206Z] Copying: 39/1024 [MB] (10 MBps) [2024-12-06T05:17:36.154Z] Copying: 50/1024 [MB] (10 MBps) [2024-12-06T05:17:37.097Z] Copying: 62/1024 [MB] (12 MBps) [2024-12-06T05:17:38.044Z] Copying: 83/1024 [MB] (20 MBps) [2024-12-06T05:17:38.989Z] Copying: 98/1024 [MB] (14 MBps) [2024-12-06T05:17:39.935Z] Copying: 112/1024 [MB] (13 MBps) [2024-12-06T05:17:40.882Z] Copying: 127/1024 [MB] (15 MBps) [2024-12-06T05:17:42.273Z] Copying: 139/1024 [MB] (11 MBps) [2024-12-06T05:17:43.221Z] Copying: 156/1024 [MB] (16 MBps) [2024-12-06T05:17:44.166Z] Copying: 170/1024 [MB] (13 MBps) [2024-12-06T05:17:45.113Z] Copying: 185/1024 [MB] (15 MBps) [2024-12-06T05:17:46.056Z] Copying: 202/1024 [MB] (17 MBps) [2024-12-06T05:17:47.000Z] Copying: 221/1024 [MB] (19 MBps) [2024-12-06T05:17:47.945Z] Copying: 233/1024 [MB] (11 MBps) [2024-12-06T05:17:48.889Z] Copying: 248/1024 [MB] (14 MBps) [2024-12-06T05:17:50.290Z] Copying: 265/1024 [MB] (17 MBps) [2024-12-06T05:17:51.230Z] Copying: 282/1024 [MB] (17 MBps) [2024-12-06T05:17:52.176Z] Copying: 299/1024 [MB] (16 MBps) [2024-12-06T05:17:53.122Z] Copying: 317/1024 [MB] (18 MBps) [2024-12-06T05:17:54.067Z] Copying: 336/1024 [MB] (18 MBps) [2024-12-06T05:17:55.013Z] Copying: 352/1024 [MB] (15 MBps) [2024-12-06T05:17:55.968Z] Copying: 366/1024 [MB] (14 MBps) [2024-12-06T05:17:56.909Z] Copying: 377/1024 [MB] (11 MBps) [2024-12-06T05:17:58.303Z] Copying: 388/1024 [MB] (11 MBps) [2024-12-06T05:17:58.941Z] Copying: 400/1024 [MB] (11 MBps) [2024-12-06T05:17:59.886Z] Copying: 425/1024 [MB] (24 MBps) [2024-12-06T05:18:01.275Z] Copying: 438/1024 [MB] (12 MBps) [2024-12-06T05:18:02.218Z] Copying: 449/1024 [MB] (11 MBps) [2024-12-06T05:18:03.166Z] Copying: 463/1024 [MB] (13 MBps) [2024-12-06T05:18:04.109Z] Copying: 476/1024 [MB] (12 MBps) [2024-12-06T05:18:05.050Z] Copying: 498/1024 [MB] (22 MBps) [2024-12-06T05:18:05.989Z] Copying: 521/1024 [MB] (23 MBps) [2024-12-06T05:18:06.931Z] Copying: 544/1024 [MB] (22 MBps) [2024-12-06T05:18:07.877Z] Copying: 563/1024 [MB] (19 MBps) [2024-12-06T05:18:09.267Z] Copying: 578/1024 [MB] (14 MBps) [2024-12-06T05:18:10.212Z] Copying: 598/1024 [MB] (20 MBps) [2024-12-06T05:18:11.159Z] Copying: 616/1024 [MB] (18 MBps) [2024-12-06T05:18:12.100Z] Copying: 640/1024 [MB] (23 MBps) [2024-12-06T05:18:13.046Z] Copying: 658/1024 [MB] (17 MBps) [2024-12-06T05:18:13.986Z] Copying: 673/1024 [MB] (14 MBps) [2024-12-06T05:18:14.929Z] Copying: 691/1024 [MB] (17 MBps) [2024-12-06T05:18:15.874Z] Copying: 706/1024 [MB] (15 MBps) [2024-12-06T05:18:17.262Z] Copying: 723/1024 [MB] (16 MBps) [2024-12-06T05:18:18.208Z] Copying: 745/1024 [MB] (22 MBps) [2024-12-06T05:18:19.152Z] Copying: 763/1024 [MB] (18 MBps) [2024-12-06T05:18:20.098Z] Copying: 782/1024 [MB] (19 MBps) [2024-12-06T05:18:21.045Z] Copying: 805/1024 [MB] (22 MBps) [2024-12-06T05:18:21.988Z] Copying: 831/1024 [MB] (25 MBps) [2024-12-06T05:18:22.930Z] Copying: 847/1024 
[MB] (15 MBps) [2024-12-06T05:18:23.873Z] Copying: 862/1024 [MB] (15 MBps) [2024-12-06T05:18:25.261Z] Copying: 882/1024 [MB] (19 MBps) [2024-12-06T05:18:26.203Z] Copying: 893/1024 [MB] (10 MBps) [2024-12-06T05:18:27.145Z] Copying: 914/1024 [MB] (21 MBps) [2024-12-06T05:18:28.090Z] Copying: 925/1024 [MB] (11 MBps) [2024-12-06T05:18:29.034Z] Copying: 942/1024 [MB] (17 MBps) [2024-12-06T05:18:29.978Z] Copying: 961/1024 [MB] (18 MBps) [2024-12-06T05:18:31.000Z] Copying: 979/1024 [MB] (17 MBps) [2024-12-06T05:18:31.967Z] Copying: 998/1024 [MB] (19 MBps) [2024-12-06T05:18:32.540Z] Copying: 1009/1024 [MB] (11 MBps) [2024-12-06T05:18:33.114Z] Copying: 1024/1024 [MB] (average 16 MBps)[2024-12-06 05:18:32.866170] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:54.882 [2024-12-06 05:18:32.866252] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:30:54.882 [2024-12-06 05:18:32.866269] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:30:54.882 [2024-12-06 05:18:32.866279] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:54.882 [2024-12-06 05:18:32.866308] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:30:54.882 [2024-12-06 05:18:32.867121] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:54.882 [2024-12-06 05:18:32.867156] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:30:54.882 [2024-12-06 05:18:32.867173] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.796 ms 00:30:54.882 [2024-12-06 05:18:32.867183] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:54.882 [2024-12-06 05:18:32.867450] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:54.882 [2024-12-06 05:18:32.867479] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:30:54.882 [2024-12-06 05:18:32.867491] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.235 ms 00:30:54.882 [2024-12-06 05:18:32.867504] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:54.882 [2024-12-06 05:18:32.867541] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:54.882 [2024-12-06 05:18:32.867551] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Fast persist NV cache metadata 00:30:54.882 [2024-12-06 05:18:32.867567] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:30:54.882 [2024-12-06 05:18:32.867575] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:54.882 [2024-12-06 05:18:32.867637] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:54.882 [2024-12-06 05:18:32.867652] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL SHM clean state 00:30:54.882 [2024-12-06 05:18:32.867661] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:30:54.882 [2024-12-06 05:18:32.867687] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:54.882 [2024-12-06 05:18:32.867703] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:30:54.882 [2024-12-06 05:18:32.867716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:30:54.882 [2024-12-06 05:18:32.867727] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:30:54.882 [2024-12-06 05:18:32.867740] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:30:54.882 [2024-12-06 05:18:32.867752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:30:54.882 [2024-12-06 05:18:32.867760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:30:54.882 [2024-12-06 05:18:32.867768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:30:54.882 [2024-12-06 05:18:32.867778] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:30:54.882 [2024-12-06 05:18:32.867787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:30:54.882 [2024-12-06 05:18:32.867795] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:30:54.882 [2024-12-06 05:18:32.867803] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:30:54.882 [2024-12-06 05:18:32.867811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:30:54.882 [2024-12-06 05:18:32.867823] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:30:54.882 [2024-12-06 05:18:32.867835] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:30:54.882 [2024-12-06 05:18:32.867846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:30:54.882 [2024-12-06 05:18:32.867861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:30:54.882 [2024-12-06 05:18:32.867871] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:30:54.882 [2024-12-06 05:18:32.867883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:30:54.882 [2024-12-06 05:18:32.867897] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:30:54.882 [2024-12-06 05:18:32.867906] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:30:54.882 [2024-12-06 05:18:32.867914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:30:54.882 [2024-12-06 05:18:32.867922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:30:54.882 [2024-12-06 05:18:32.867930] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:30:54.882 [2024-12-06 05:18:32.867938] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:30:54.882 [2024-12-06 05:18:32.867946] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:30:54.882 [2024-12-06 05:18:32.867954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:30:54.882 [2024-12-06 05:18:32.867962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:30:54.882 [2024-12-06 05:18:32.867970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:30:54.882 [2024-12-06 05:18:32.867978] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:30:54.882 [2024-12-06 05:18:32.867986] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:30:54.882 [2024-12-06 05:18:32.867994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:30:54.882 [2024-12-06 05:18:32.868005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:30:54.882 [2024-12-06 05:18:32.868019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:30:54.882 [2024-12-06 05:18:32.868026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:30:54.882 [2024-12-06 05:18:32.868037] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:30:54.882 [2024-12-06 05:18:32.868050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:30:54.882 [2024-12-06 05:18:32.868061] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:30:54.883 [2024-12-06 05:18:32.868068] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:30:54.883 [2024-12-06 05:18:32.868076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:30:54.883 [2024-12-06 05:18:32.868084] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:30:54.883 [2024-12-06 05:18:32.868092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:30:54.883 [2024-12-06 05:18:32.868100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:30:54.883 [2024-12-06 05:18:32.868108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:30:54.883 [2024-12-06 05:18:32.868116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:30:54.883 [2024-12-06 05:18:32.868127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:30:54.883 [2024-12-06 05:18:32.868138] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:30:54.883 [2024-12-06 05:18:32.868145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:30:54.883 [2024-12-06 05:18:32.868153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:30:54.883 [2024-12-06 05:18:32.868161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:30:54.883 [2024-12-06 05:18:32.868168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:30:54.883 [2024-12-06 05:18:32.868177] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:30:54.883 [2024-12-06 05:18:32.868184] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:30:54.883 [2024-12-06 05:18:32.868191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:30:54.883 
[2024-12-06 05:18:32.868200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:30:54.883 [2024-12-06 05:18:32.868208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:30:54.883 [2024-12-06 05:18:32.868225] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:30:54.883 [2024-12-06 05:18:32.868232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:30:54.883 [2024-12-06 05:18:32.868240] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:30:54.883 [2024-12-06 05:18:32.868248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:30:54.883 [2024-12-06 05:18:32.868257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:30:54.883 [2024-12-06 05:18:32.868265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:30:54.883 [2024-12-06 05:18:32.868274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:30:54.883 [2024-12-06 05:18:32.868282] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:30:54.883 [2024-12-06 05:18:32.868289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:30:54.883 [2024-12-06 05:18:32.868297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:30:54.883 [2024-12-06 05:18:32.868304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:30:54.883 [2024-12-06 05:18:32.868312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:30:54.883 [2024-12-06 05:18:32.868325] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:30:54.883 [2024-12-06 05:18:32.868336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:30:54.883 [2024-12-06 05:18:32.868348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:30:54.883 [2024-12-06 05:18:32.868362] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:30:54.883 [2024-12-06 05:18:32.868371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:30:54.883 [2024-12-06 05:18:32.868379] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:30:54.883 [2024-12-06 05:18:32.868388] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:30:54.883 [2024-12-06 05:18:32.868397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:30:54.883 [2024-12-06 05:18:32.868405] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:30:54.883 [2024-12-06 05:18:32.868414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:30:54.883 [2024-12-06 05:18:32.868422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 
state: free 00:30:54.883 [2024-12-06 05:18:32.868430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:30:54.883 [2024-12-06 05:18:32.868439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:30:54.883 [2024-12-06 05:18:32.868447] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:30:54.883 [2024-12-06 05:18:32.868455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:30:54.883 [2024-12-06 05:18:32.868464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:30:54.883 [2024-12-06 05:18:32.868472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:30:54.883 [2024-12-06 05:18:32.868480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:30:54.883 [2024-12-06 05:18:32.868489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:30:54.883 [2024-12-06 05:18:32.868497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:30:54.883 [2024-12-06 05:18:32.868505] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:30:54.883 [2024-12-06 05:18:32.868513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:30:54.883 [2024-12-06 05:18:32.868520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:30:54.883 [2024-12-06 05:18:32.868530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:30:54.883 [2024-12-06 05:18:32.868537] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:30:54.883 [2024-12-06 05:18:32.868545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:30:54.883 [2024-12-06 05:18:32.868553] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:30:54.883 [2024-12-06 05:18:32.868578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:30:54.883 [2024-12-06 05:18:32.868587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:30:54.883 [2024-12-06 05:18:32.868595] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:30:54.883 [2024-12-06 05:18:32.868603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:30:54.883 [2024-12-06 05:18:32.868611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:30:54.883 [2024-12-06 05:18:32.868620] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:30:54.883 [2024-12-06 05:18:32.868628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:30:54.883 [2024-12-06 05:18:32.868645] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:30:54.883 [2024-12-06 05:18:32.868653] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 7b4b71fc-81b6-42ef-bf87-4b87e3b52047 
00:30:54.883 [2024-12-06 05:18:32.868682] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:30:54.883 [2024-12-06 05:18:32.868691] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 32 00:30:54.883 [2024-12-06 05:18:32.868699] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:30:54.883 [2024-12-06 05:18:32.868709] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:30:54.883 [2024-12-06 05:18:32.868717] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:30:54.883 [2024-12-06 05:18:32.868730] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:30:54.883 [2024-12-06 05:18:32.868742] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:30:54.884 [2024-12-06 05:18:32.868750] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:30:54.884 [2024-12-06 05:18:32.868758] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:30:54.884 [2024-12-06 05:18:32.868769] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:54.884 [2024-12-06 05:18:32.868780] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:30:54.884 [2024-12-06 05:18:32.868790] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.067 ms 00:30:54.884 [2024-12-06 05:18:32.868797] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:54.884 [2024-12-06 05:18:32.871180] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:54.884 [2024-12-06 05:18:32.871216] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:30:54.884 [2024-12-06 05:18:32.871226] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.363 ms 00:30:54.884 [2024-12-06 05:18:32.871235] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:54.884 [2024-12-06 05:18:32.871361] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:54.884 [2024-12-06 05:18:32.871370] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:30:54.884 [2024-12-06 05:18:32.871379] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.101 ms 00:30:54.884 [2024-12-06 05:18:32.871387] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:54.884 [2024-12-06 05:18:32.878071] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:54.884 [2024-12-06 05:18:32.878106] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:30:54.884 [2024-12-06 05:18:32.878117] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:54.884 [2024-12-06 05:18:32.878126] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:54.884 [2024-12-06 05:18:32.878189] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:54.884 [2024-12-06 05:18:32.878198] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:30:54.884 [2024-12-06 05:18:32.878207] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:54.884 [2024-12-06 05:18:32.878215] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:54.884 [2024-12-06 05:18:32.878280] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:54.884 [2024-12-06 05:18:32.878297] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:30:54.884 [2024-12-06 05:18:32.878305] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:54.884 [2024-12-06 05:18:32.878314] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:54.884 [2024-12-06 05:18:32.878332] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:54.884 [2024-12-06 05:18:32.878341] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:30:54.884 [2024-12-06 05:18:32.878350] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:54.884 [2024-12-06 05:18:32.878358] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:54.884 [2024-12-06 05:18:32.892247] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:54.884 [2024-12-06 05:18:32.892301] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:30:54.884 [2024-12-06 05:18:32.892312] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:54.884 [2024-12-06 05:18:32.892326] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:54.884 [2024-12-06 05:18:32.904241] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:54.884 [2024-12-06 05:18:32.904298] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:30:54.884 [2024-12-06 05:18:32.904311] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:54.884 [2024-12-06 05:18:32.904320] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:54.884 [2024-12-06 05:18:32.904386] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:54.884 [2024-12-06 05:18:32.904396] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:30:54.884 [2024-12-06 05:18:32.904405] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:54.884 [2024-12-06 05:18:32.904420] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:54.884 [2024-12-06 05:18:32.904457] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:54.884 [2024-12-06 05:18:32.904467] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:30:54.884 [2024-12-06 05:18:32.904476] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:54.884 [2024-12-06 05:18:32.904484] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:54.884 [2024-12-06 05:18:32.904539] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:54.884 [2024-12-06 05:18:32.904553] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:30:54.884 [2024-12-06 05:18:32.904562] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:54.884 [2024-12-06 05:18:32.904570] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:54.884 [2024-12-06 05:18:32.904596] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:54.884 [2024-12-06 05:18:32.904606] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:30:54.884 [2024-12-06 05:18:32.904614] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:54.884 [2024-12-06 05:18:32.904622] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:54.884 [2024-12-06 05:18:32.904688] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:54.884 [2024-12-06 05:18:32.904700] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Open cache bdev 00:30:54.884 [2024-12-06 05:18:32.904712] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:54.884 [2024-12-06 05:18:32.904720] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:54.884 [2024-12-06 05:18:32.904771] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:54.884 [2024-12-06 05:18:32.904781] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:30:54.884 [2024-12-06 05:18:32.904791] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:54.884 [2024-12-06 05:18:32.904819] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:54.884 [2024-12-06 05:18:32.904961] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL fast shutdown', duration = 38.752 ms, result 0 00:30:55.145 00:30:55.145 00:30:55.145 05:18:33 ftl.ftl_restore_fast -- ftl/restore.sh@76 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:30:57.691 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:30:57.691 05:18:35 ftl.ftl_restore_fast -- ftl/restore.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --seek=131072 00:30:57.692 [2024-12-06 05:18:35.411573] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:30:57.692 [2024-12-06 05:18:35.411823] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94577 ] 00:30:57.692 [2024-12-06 05:18:35.545050] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:57.692 [2024-12-06 05:18:35.591358] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:30:57.692 [2024-12-06 05:18:35.705351] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:30:57.692 [2024-12-06 05:18:35.705468] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:30:57.692 [2024-12-06 05:18:35.865361] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:57.692 [2024-12-06 05:18:35.865599] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:30:57.692 [2024-12-06 05:18:35.865629] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:30:57.692 [2024-12-06 05:18:35.865639] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:57.692 [2024-12-06 05:18:35.865730] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:57.692 [2024-12-06 05:18:35.865743] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:30:57.692 [2024-12-06 05:18:35.865752] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:30:57.692 [2024-12-06 05:18:35.865768] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:57.692 [2024-12-06 05:18:35.865792] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:30:57.692 [2024-12-06 05:18:35.866054] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:30:57.692 [2024-12-06 05:18:35.866070] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:30:57.692 [2024-12-06 05:18:35.866079] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:30:57.692 [2024-12-06 05:18:35.866095] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.285 ms 00:30:57.692 [2024-12-06 05:18:35.866103] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:57.692 [2024-12-06 05:18:35.866402] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 1, shm_clean 1 00:30:57.692 [2024-12-06 05:18:35.866428] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:57.692 [2024-12-06 05:18:35.866439] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:30:57.692 [2024-12-06 05:18:35.866450] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:30:57.692 [2024-12-06 05:18:35.866459] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:57.692 [2024-12-06 05:18:35.866512] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:57.692 [2024-12-06 05:18:35.866526] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:30:57.692 [2024-12-06 05:18:35.866536] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:30:57.692 [2024-12-06 05:18:35.866543] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:57.692 [2024-12-06 05:18:35.866817] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:57.692 [2024-12-06 05:18:35.866829] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:30:57.692 [2024-12-06 05:18:35.866837] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.234 ms 00:30:57.692 [2024-12-06 05:18:35.866845] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:57.692 [2024-12-06 05:18:35.866924] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:57.692 [2024-12-06 05:18:35.866943] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:30:57.692 [2024-12-06 05:18:35.866952] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:30:57.692 [2024-12-06 05:18:35.866960] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:57.692 [2024-12-06 05:18:35.866985] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:57.692 [2024-12-06 05:18:35.866994] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:30:57.692 [2024-12-06 05:18:35.867009] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:30:57.692 [2024-12-06 05:18:35.867016] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:57.692 [2024-12-06 05:18:35.867036] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:30:57.692 [2024-12-06 05:18:35.869134] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:57.692 [2024-12-06 05:18:35.869174] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:30:57.692 [2024-12-06 05:18:35.869187] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.101 ms 00:30:57.692 [2024-12-06 05:18:35.869196] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:57.692 [2024-12-06 05:18:35.869231] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:57.692 [2024-12-06 05:18:35.869240] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 
00:30:57.692 [2024-12-06 05:18:35.869249] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:30:57.692 [2024-12-06 05:18:35.869257] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:57.692 [2024-12-06 05:18:35.869307] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:30:57.692 [2024-12-06 05:18:35.869334] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:30:57.692 [2024-12-06 05:18:35.869378] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:30:57.692 [2024-12-06 05:18:35.869408] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:30:57.692 [2024-12-06 05:18:35.869515] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:30:57.692 [2024-12-06 05:18:35.869527] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:30:57.692 [2024-12-06 05:18:35.869538] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:30:57.692 [2024-12-06 05:18:35.869551] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:30:57.692 [2024-12-06 05:18:35.869561] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:30:57.692 [2024-12-06 05:18:35.869572] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:30:57.692 [2024-12-06 05:18:35.869582] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:30:57.692 [2024-12-06 05:18:35.869591] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:30:57.692 [2024-12-06 05:18:35.869599] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:30:57.692 [2024-12-06 05:18:35.869607] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:57.692 [2024-12-06 05:18:35.869614] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:30:57.692 [2024-12-06 05:18:35.869623] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.304 ms 00:30:57.692 [2024-12-06 05:18:35.869630] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:57.692 [2024-12-06 05:18:35.869734] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:57.692 [2024-12-06 05:18:35.869748] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:30:57.692 [2024-12-06 05:18:35.869755] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.089 ms 00:30:57.692 [2024-12-06 05:18:35.869765] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:57.692 [2024-12-06 05:18:35.869871] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:30:57.692 [2024-12-06 05:18:35.869882] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:30:57.692 [2024-12-06 05:18:35.869890] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:30:57.692 [2024-12-06 05:18:35.869901] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:57.692 [2024-12-06 05:18:35.869908] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:30:57.692 [2024-12-06 05:18:35.869916] ftl_layout.c: 
131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:30:57.692 [2024-12-06 05:18:35.869923] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:30:57.692 [2024-12-06 05:18:35.869930] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:30:57.692 [2024-12-06 05:18:35.869937] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:30:57.692 [2024-12-06 05:18:35.869944] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:30:57.692 [2024-12-06 05:18:35.869951] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:30:57.692 [2024-12-06 05:18:35.869960] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:30:57.692 [2024-12-06 05:18:35.869966] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:30:57.692 [2024-12-06 05:18:35.869973] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:30:57.692 [2024-12-06 05:18:35.869980] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:30:57.692 [2024-12-06 05:18:35.869987] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:57.692 [2024-12-06 05:18:35.869994] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:30:57.692 [2024-12-06 05:18:35.870010] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:30:57.692 [2024-12-06 05:18:35.870016] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:57.692 [2024-12-06 05:18:35.870026] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:30:57.692 [2024-12-06 05:18:35.870033] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:30:57.692 [2024-12-06 05:18:35.870039] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:57.692 [2024-12-06 05:18:35.870045] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:30:57.692 [2024-12-06 05:18:35.870052] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:30:57.692 [2024-12-06 05:18:35.870059] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:57.692 [2024-12-06 05:18:35.870066] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:30:57.692 [2024-12-06 05:18:35.870072] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:30:57.692 [2024-12-06 05:18:35.870080] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:57.692 [2024-12-06 05:18:35.870087] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:30:57.692 [2024-12-06 05:18:35.870094] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:30:57.692 [2024-12-06 05:18:35.870100] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:57.692 [2024-12-06 05:18:35.870107] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:30:57.692 [2024-12-06 05:18:35.870114] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:30:57.693 [2024-12-06 05:18:35.870121] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:30:57.693 [2024-12-06 05:18:35.870128] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:30:57.693 [2024-12-06 05:18:35.870139] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:30:57.693 [2024-12-06 05:18:35.870145] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:30:57.693 
[2024-12-06 05:18:35.870151] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:30:57.693 [2024-12-06 05:18:35.870158] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:30:57.693 [2024-12-06 05:18:35.870164] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:57.693 [2024-12-06 05:18:35.870171] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:30:57.693 [2024-12-06 05:18:35.870177] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:30:57.693 [2024-12-06 05:18:35.870184] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:57.693 [2024-12-06 05:18:35.870193] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:30:57.693 [2024-12-06 05:18:35.870206] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:30:57.693 [2024-12-06 05:18:35.870214] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:30:57.693 [2024-12-06 05:18:35.870221] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:57.693 [2024-12-06 05:18:35.870229] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:30:57.693 [2024-12-06 05:18:35.870235] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:30:57.693 [2024-12-06 05:18:35.870242] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:30:57.693 [2024-12-06 05:18:35.870248] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:30:57.693 [2024-12-06 05:18:35.870258] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:30:57.693 [2024-12-06 05:18:35.870265] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:30:57.693 [2024-12-06 05:18:35.870273] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:30:57.693 [2024-12-06 05:18:35.870285] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:30:57.693 [2024-12-06 05:18:35.870293] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:30:57.693 [2024-12-06 05:18:35.870300] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:30:57.693 [2024-12-06 05:18:35.870308] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:30:57.693 [2024-12-06 05:18:35.870315] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:30:57.693 [2024-12-06 05:18:35.870322] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:30:57.693 [2024-12-06 05:18:35.870329] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:30:57.693 [2024-12-06 05:18:35.870337] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:30:57.693 [2024-12-06 05:18:35.870344] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:30:57.693 [2024-12-06 05:18:35.870352] 
upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:30:57.693 [2024-12-06 05:18:35.870359] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:30:57.693 [2024-12-06 05:18:35.870366] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:30:57.693 [2024-12-06 05:18:35.870379] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:30:57.693 [2024-12-06 05:18:35.870387] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:30:57.693 [2024-12-06 05:18:35.870395] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:30:57.693 [2024-12-06 05:18:35.870402] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:30:57.693 [2024-12-06 05:18:35.870410] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:30:57.693 [2024-12-06 05:18:35.870421] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:30:57.693 [2024-12-06 05:18:35.870429] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:30:57.693 [2024-12-06 05:18:35.870436] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:30:57.693 [2024-12-06 05:18:35.870443] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:30:57.693 [2024-12-06 05:18:35.870452] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:57.693 [2024-12-06 05:18:35.870460] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:30:57.693 [2024-12-06 05:18:35.870469] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.650 ms 00:30:57.693 [2024-12-06 05:18:35.870476] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:57.693 [2024-12-06 05:18:35.888934] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:57.693 [2024-12-06 05:18:35.889146] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:30:57.693 [2024-12-06 05:18:35.889178] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.414 ms 00:30:57.693 [2024-12-06 05:18:35.889189] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:57.693 [2024-12-06 05:18:35.889305] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:57.693 [2024-12-06 05:18:35.889317] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:30:57.693 [2024-12-06 05:18:35.889328] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.077 ms 00:30:57.693 [2024-12-06 05:18:35.889338] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:57.693 [2024-12-06 05:18:35.901297] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:57.693 [2024-12-06 05:18:35.901345] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:30:57.693 [2024-12-06 05:18:35.901364] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.865 ms 00:30:57.693 [2024-12-06 05:18:35.901372] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:57.693 [2024-12-06 05:18:35.901418] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:57.693 [2024-12-06 05:18:35.901427] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:30:57.693 [2024-12-06 05:18:35.901440] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:30:57.693 [2024-12-06 05:18:35.901447] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:57.693 [2024-12-06 05:18:35.901541] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:57.693 [2024-12-06 05:18:35.901552] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:30:57.693 [2024-12-06 05:18:35.901565] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.045 ms 00:30:57.693 [2024-12-06 05:18:35.901576] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:57.693 [2024-12-06 05:18:35.901722] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:57.693 [2024-12-06 05:18:35.901733] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:30:57.693 [2024-12-06 05:18:35.901742] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.132 ms 00:30:57.693 [2024-12-06 05:18:35.901753] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:57.693 [2024-12-06 05:18:35.908485] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:57.693 [2024-12-06 05:18:35.908648] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:30:57.693 [2024-12-06 05:18:35.908686] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.710 ms 00:30:57.693 [2024-12-06 05:18:35.908695] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:57.693 [2024-12-06 05:18:35.908824] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:30:57.693 [2024-12-06 05:18:35.908841] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:30:57.693 [2024-12-06 05:18:35.908851] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:57.693 [2024-12-06 05:18:35.908859] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:30:57.693 [2024-12-06 05:18:35.908867] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:30:57.693 [2024-12-06 05:18:35.908874] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:57.954 [2024-12-06 05:18:35.921193] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:57.954 [2024-12-06 05:18:35.921236] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:30:57.954 [2024-12-06 05:18:35.921247] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.303 ms 00:30:57.954 [2024-12-06 05:18:35.921254] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:57.955 [2024-12-06 05:18:35.921388] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:57.955 [2024-12-06 05:18:35.921423] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info 
metadata 00:30:57.955 [2024-12-06 05:18:35.921433] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.108 ms 00:30:57.955 [2024-12-06 05:18:35.921441] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:57.955 [2024-12-06 05:18:35.921490] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:57.955 [2024-12-06 05:18:35.921504] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:30:57.955 [2024-12-06 05:18:35.921512] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:30:57.955 [2024-12-06 05:18:35.921526] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:57.955 [2024-12-06 05:18:35.921865] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:57.955 [2024-12-06 05:18:35.921884] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:30:57.955 [2024-12-06 05:18:35.921893] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.298 ms 00:30:57.955 [2024-12-06 05:18:35.921901] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:57.955 [2024-12-06 05:18:35.921919] mngt/ftl_mngt_p2l.c: 169:ftl_mngt_p2l_restore_ckpt: *NOTICE*: [FTL][ftl0] SHM: skipping p2l ckpt restore 00:30:57.955 [2024-12-06 05:18:35.921930] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:57.955 [2024-12-06 05:18:35.921938] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:30:57.955 [2024-12-06 05:18:35.921946] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:30:57.955 [2024-12-06 05:18:35.921957] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:57.955 [2024-12-06 05:18:35.931274] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:30:57.955 [2024-12-06 05:18:35.931462] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:57.955 [2024-12-06 05:18:35.931474] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:30:57.955 [2024-12-06 05:18:35.931484] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.488 ms 00:30:57.955 [2024-12-06 05:18:35.931492] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:57.955 [2024-12-06 05:18:35.933975] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:57.955 [2024-12-06 05:18:35.934013] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:30:57.955 [2024-12-06 05:18:35.934023] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.461 ms 00:30:57.955 [2024-12-06 05:18:35.934031] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:57.955 [2024-12-06 05:18:35.934125] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:57.955 [2024-12-06 05:18:35.934139] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:30:57.955 [2024-12-06 05:18:35.934148] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:30:57.955 [2024-12-06 05:18:35.934161] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:57.955 [2024-12-06 05:18:35.934192] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:57.955 [2024-12-06 05:18:35.934201] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:30:57.955 [2024-12-06 05:18:35.934209] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 0.009 ms 00:30:57.955 [2024-12-06 05:18:35.934217] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:57.955 [2024-12-06 05:18:35.934248] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:30:57.955 [2024-12-06 05:18:35.934258] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:57.955 [2024-12-06 05:18:35.934268] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:30:57.955 [2024-12-06 05:18:35.934276] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:30:57.955 [2024-12-06 05:18:35.934282] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:57.955 [2024-12-06 05:18:35.940022] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:57.955 [2024-12-06 05:18:35.940076] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:30:57.955 [2024-12-06 05:18:35.940087] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.718 ms 00:30:57.955 [2024-12-06 05:18:35.940096] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:57.955 [2024-12-06 05:18:35.940187] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:57.955 [2024-12-06 05:18:35.940198] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:30:57.955 [2024-12-06 05:18:35.940206] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:30:57.955 [2024-12-06 05:18:35.940214] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:57.955 [2024-12-06 05:18:35.941436] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 75.609 ms, result 0 00:30:58.894  [2024-12-06T05:18:38.074Z] Copying: 19/1024 [MB] (19 MBps) [carriage-return progress updates condensed; first and final samples retained] [2024-12-06T05:19:42.384Z] Copying: 1024/1024 [MB] (average 15 MBps)
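The spdk_dd progress records above are cumulative (MB copied out of the total, plus an instantaneous rate), so the end-to-end figure is best derived from the first and last samples rather than by averaging the per-interval rates. A minimal sketch of that calculation in Python; the regular expression is an assumption matched to the line format seen in this log, not a documented spdk_dd output contract:

import re
from datetime import datetime

# Format observed in this log: "[2024-12-06T05:18:38.074Z] Copying: 19/1024 [MB] (19 MBps)"
PROGRESS = re.compile(r"\[(?P<ts>\d{4}-\d{2}-\d{2}T[\d:.]+)Z\] Copying: (?P<done>\d+)/\d+ \[MB\]")

def average_mbps(samples):
    """MB/s between the first and last cumulative progress samples."""
    points = []
    for line in samples:
        m = PROGRESS.search(line)
        if m:
            points.append((datetime.fromisoformat(m.group("ts")), int(m.group("done"))))
    (t0, d0), (t1, d1) = points[0], points[-1]
    return (d1 - d0) / (t1 - t0).total_seconds()

print(round(average_mbps([
    "[2024-12-06T05:18:38.074Z] Copying: 19/1024 [MB] (19 MBps)",
    "[2024-12-06T05:19:42.384Z] Copying: 1024/1024 [MB] (average 15 MBps)",
]), 1))  # ~15.6, in line with the "average 15 MBps" reported above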
[2024-12-06 05:19:42.357804] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:04.152 [2024-12-06 05:19:42.357884] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:32:04.152 [2024-12-06 05:19:42.357902] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:32:04.152 [2024-12-06 05:19:42.357910] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:04.152 [2024-12-06 05:19:42.360262] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:32:04.152 [2024-12-06 05:19:42.364127] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:04.152 [2024-12-06 05:19:42.364177] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:32:04.152 [2024-12-06 05:19:42.364190] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.588 ms 00:32:04.152 [2024-12-06 05:19:42.364199] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:04.152 [2024-12-06 05:19:42.378447] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:04.152 [2024-12-06 05:19:42.378522] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 
name: Stop core poller 00:32:04.152 [2024-12-06 05:19:42.378540] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.759 ms 00:32:04.152 [2024-12-06 05:19:42.378551] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:04.152 [2024-12-06 05:19:42.378603] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:04.152 [2024-12-06 05:19:42.378622] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Fast persist NV cache metadata 00:32:04.152 [2024-12-06 05:19:42.378633] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:32:04.152 [2024-12-06 05:19:42.378643] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:04.152 [2024-12-06 05:19:42.378733] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:04.152 [2024-12-06 05:19:42.378748] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL SHM clean state 00:32:04.153 [2024-12-06 05:19:42.378761] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.024 ms 00:32:04.153 [2024-12-06 05:19:42.378771] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:04.153 [2024-12-06 05:19:42.378789] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:32:04.153 [2024-12-06 05:19:42.378804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 129280 / 261120 wr_cnt: 1 state: open 00:32:04.153 [2024-12-06 05:19:42.378839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:32:04.153 [2024-12-06 05:19:42.378850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:32:04.153 [2024-12-06 05:19:42.378860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:32:04.153 [2024-12-06 05:19:42.378870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:32:04.153 [2024-12-06 05:19:42.378881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:32:04.153 [2024-12-06 05:19:42.378891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:32:04.153 [2024-12-06 05:19:42.378900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:32:04.153 [2024-12-06 05:19:42.378914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:32:04.153 [2024-12-06 05:19:42.378924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:32:04.153 [2024-12-06 05:19:42.378934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:32:04.153 [2024-12-06 05:19:42.378944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:32:04.153 [2024-12-06 05:19:42.378954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:32:04.153 [2024-12-06 05:19:42.378964] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:32:04.153 [2024-12-06 05:19:42.378974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:32:04.153 [2024-12-06 05:19:42.378984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 
state: free 00:32:04.153 [2024-12-06 05:19:42.378993] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:32:04.153 [2024-12-06 05:19:42.379004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:32:04.153 [2024-12-06 05:19:42.379015] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:32:04.153 [2024-12-06 05:19:42.379024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:32:04.153 [2024-12-06 05:19:42.379033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:32:04.153 [2024-12-06 05:19:42.379042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:32:04.153 [2024-12-06 05:19:42.379052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:32:04.153 [2024-12-06 05:19:42.379062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:32:04.153 [2024-12-06 05:19:42.379072] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:32:04.153 [2024-12-06 05:19:42.379083] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:32:04.153 [2024-12-06 05:19:42.379093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:32:04.153 [2024-12-06 05:19:42.379103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:32:04.153 [2024-12-06 05:19:42.379112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:32:04.153 [2024-12-06 05:19:42.379122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:32:04.153 [2024-12-06 05:19:42.379131] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:32:04.153 [2024-12-06 05:19:42.379141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:32:04.153 [2024-12-06 05:19:42.379150] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:32:04.153 [2024-12-06 05:19:42.379160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:32:04.153 [2024-12-06 05:19:42.379170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:32:04.153 [2024-12-06 05:19:42.379179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:32:04.153 [2024-12-06 05:19:42.379189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:32:04.153 [2024-12-06 05:19:42.379198] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:32:04.153 [2024-12-06 05:19:42.379207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:32:04.153 [2024-12-06 05:19:42.379216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:32:04.153 [2024-12-06 05:19:42.379226] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 
0 / 261120 wr_cnt: 0 state: free 00:32:04.153 [2024-12-06 05:19:42.379237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:32:04.153 [2024-12-06 05:19:42.379247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:32:04.153 [2024-12-06 05:19:42.379256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:32:04.153 [2024-12-06 05:19:42.379265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:32:04.153 [2024-12-06 05:19:42.379275] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:32:04.153 [2024-12-06 05:19:42.379284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:32:04.153 [2024-12-06 05:19:42.379293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:32:04.153 [2024-12-06 05:19:42.379302] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:32:04.153 [2024-12-06 05:19:42.379311] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:32:04.153 [2024-12-06 05:19:42.379321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:32:04.153 [2024-12-06 05:19:42.379331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:32:04.153 [2024-12-06 05:19:42.379340] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:32:04.153 [2024-12-06 05:19:42.379350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:32:04.153 [2024-12-06 05:19:42.379359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:32:04.153 [2024-12-06 05:19:42.379371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:32:04.153 [2024-12-06 05:19:42.379380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:32:04.153 [2024-12-06 05:19:42.379390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:32:04.153 [2024-12-06 05:19:42.379400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:32:04.153 [2024-12-06 05:19:42.379409] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:32:04.153 [2024-12-06 05:19:42.379419] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:32:04.153 [2024-12-06 05:19:42.379428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:32:04.153 [2024-12-06 05:19:42.379438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:32:04.153 [2024-12-06 05:19:42.379447] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:32:04.153 [2024-12-06 05:19:42.379457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:32:04.154 [2024-12-06 05:19:42.379467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:32:04.154 [2024-12-06 05:19:42.379476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:32:04.154 [2024-12-06 05:19:42.379486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:32:04.154 [2024-12-06 05:19:42.379495] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:32:04.154 [2024-12-06 05:19:42.379505] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:32:04.154 [2024-12-06 05:19:42.379515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:32:04.154 [2024-12-06 05:19:42.379525] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:32:04.154 [2024-12-06 05:19:42.379535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:32:04.154 [2024-12-06 05:19:42.379544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:32:04.154 [2024-12-06 05:19:42.379555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:32:04.154 [2024-12-06 05:19:42.379565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:32:04.154 [2024-12-06 05:19:42.379574] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:32:04.154 [2024-12-06 05:19:42.379585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:32:04.154 [2024-12-06 05:19:42.379595] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:32:04.154 [2024-12-06 05:19:42.379604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:32:04.154 [2024-12-06 05:19:42.379614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:32:04.154 [2024-12-06 05:19:42.379624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:32:04.154 [2024-12-06 05:19:42.379633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:32:04.154 [2024-12-06 05:19:42.379642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:32:04.154 [2024-12-06 05:19:42.379652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:32:04.154 [2024-12-06 05:19:42.379661] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:32:04.154 [2024-12-06 05:19:42.379686] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:32:04.154 [2024-12-06 05:19:42.379696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:32:04.154 [2024-12-06 05:19:42.379706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:32:04.154 [2024-12-06 05:19:42.379715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:32:04.154 [2024-12-06 05:19:42.379725] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:32:04.154 [2024-12-06 05:19:42.379735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:32:04.154 [2024-12-06 05:19:42.379745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:32:04.154 [2024-12-06 05:19:42.379765] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:32:04.154 [2024-12-06 05:19:42.379775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:32:04.154 [2024-12-06 05:19:42.379784] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:32:04.154 [2024-12-06 05:19:42.379794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:32:04.154 [2024-12-06 05:19:42.379805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:32:04.154 [2024-12-06 05:19:42.379815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:32:04.154 [2024-12-06 05:19:42.379825] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:32:04.154 [2024-12-06 05:19:42.379845] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:32:04.154 [2024-12-06 05:19:42.379861] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 7b4b71fc-81b6-42ef-bf87-4b87e3b52047 00:32:04.154 [2024-12-06 05:19:42.379871] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 129280 00:32:04.154 [2024-12-06 05:19:42.379880] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 129312 00:32:04.154 [2024-12-06 05:19:42.379890] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 129280 00:32:04.154 [2024-12-06 05:19:42.379900] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0002 00:32:04.154 [2024-12-06 05:19:42.379909] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:32:04.154 [2024-12-06 05:19:42.379922] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:32:04.154 [2024-12-06 05:19:42.379932] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:32:04.154 [2024-12-06 05:19:42.379941] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:32:04.154 [2024-12-06 05:19:42.379950] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:32:04.154 [2024-12-06 05:19:42.379961] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:04.154 [2024-12-06 05:19:42.379971] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:32:04.154 [2024-12-06 05:19:42.379987] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.172 ms 00:32:04.154 [2024-12-06 05:19:42.380001] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:04.417 [2024-12-06 05:19:42.382462] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:04.417 [2024-12-06 05:19:42.382652] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:32:04.417 [2024-12-06 05:19:42.382694] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.441 ms 00:32:04.417 [2024-12-06 05:19:42.382713] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
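The counters in the stats dump above make the WAF line easy to sanity-check. Assuming WAF here is simply total media writes divided by user-issued writes (the dumped numbers are consistent with that reading), the arithmetic in Python:

# Counters taken from the ftl_dev_dump_stats output above.
total_writes = 129312  # all writes the FTL issued to the media
user_writes = 129280   # writes requested by the user I/O path

waf = total_writes / user_writes  # write amplification factor
print(f"WAF: {waf:.4f}")  # WAF: 1.0002, matching the dumped value

Only 32 of the 129312 media writes were FTL-internal here, which suggests essentially no relocation or garbage-collection traffic during this run.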
00:32:04.417 [2024-12-06 05:19:42.382847] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:04.417 [2024-12-06 05:19:42.382867] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:32:04.417 [2024-12-06 05:19:42.382882] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.104 ms 00:32:04.417 [2024-12-06 05:19:42.382892] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:04.417 [2024-12-06 05:19:42.389826] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:04.417 [2024-12-06 05:19:42.389982] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:32:04.417 [2024-12-06 05:19:42.390008] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:04.417 [2024-12-06 05:19:42.390016] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:04.417 [2024-12-06 05:19:42.390088] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:04.417 [2024-12-06 05:19:42.390100] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:32:04.417 [2024-12-06 05:19:42.390108] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:04.417 [2024-12-06 05:19:42.390116] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:04.417 [2024-12-06 05:19:42.390171] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:04.417 [2024-12-06 05:19:42.390182] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:32:04.417 [2024-12-06 05:19:42.390191] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:04.417 [2024-12-06 05:19:42.390201] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:04.417 [2024-12-06 05:19:42.390222] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:04.417 [2024-12-06 05:19:42.390231] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:32:04.417 [2024-12-06 05:19:42.390240] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:04.417 [2024-12-06 05:19:42.390247] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:04.417 [2024-12-06 05:19:42.403703] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:04.418 [2024-12-06 05:19:42.403751] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:32:04.418 [2024-12-06 05:19:42.403766] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:04.418 [2024-12-06 05:19:42.403779] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:04.418 [2024-12-06 05:19:42.414856] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:04.418 [2024-12-06 05:19:42.414908] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:32:04.418 [2024-12-06 05:19:42.414920] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:04.418 [2024-12-06 05:19:42.414928] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:04.418 [2024-12-06 05:19:42.414976] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:04.418 [2024-12-06 05:19:42.414986] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:32:04.418 [2024-12-06 05:19:42.415004] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:04.418 [2024-12-06 
05:19:42.415013] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:04.418 [2024-12-06 05:19:42.415052] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:04.418 [2024-12-06 05:19:42.415062] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:32:04.418 [2024-12-06 05:19:42.415070] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:04.418 [2024-12-06 05:19:42.415078] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:04.418 [2024-12-06 05:19:42.415132] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:04.418 [2024-12-06 05:19:42.415142] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:32:04.418 [2024-12-06 05:19:42.415150] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:04.418 [2024-12-06 05:19:42.415162] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:04.418 [2024-12-06 05:19:42.415190] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:04.418 [2024-12-06 05:19:42.415200] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:32:04.418 [2024-12-06 05:19:42.415208] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:04.418 [2024-12-06 05:19:42.415215] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:04.418 [2024-12-06 05:19:42.415257] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:04.418 [2024-12-06 05:19:42.415267] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:32:04.418 [2024-12-06 05:19:42.415279] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:04.418 [2024-12-06 05:19:42.415287] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:04.418 [2024-12-06 05:19:42.415337] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:04.418 [2024-12-06 05:19:42.415348] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:32:04.418 [2024-12-06 05:19:42.415357] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:04.418 [2024-12-06 05:19:42.415366] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:04.418 [2024-12-06 05:19:42.415503] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL fast shutdown', duration = 59.799 ms, result 0 00:32:05.363 00:32:05.363 00:32:05.363 05:19:43 ftl.ftl_restore_fast -- ftl/restore.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --skip=131072 --count=262144 00:32:05.363 [2024-12-06 05:19:43.316178] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
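This spdk_dd invocation drives the restore verification read: --ib names the FTL bdev as input and --skip/--count select the window to read back. Assuming dd-style semantics (both counts in input blocks) and the 4 KiB block size SPDK FTL commonly exposes, neither of which is stated in the log itself, the numbers line up with the 1024 MB transfer the progress output below reports:

# Hypothetical check of the --skip/--count arguments above, assuming a
# 4096-byte FTL bdev block size and dd-style block-count semantics.
BLOCK_SIZE = 4096

skip_blocks, count_blocks = 131072, 262144  # from the spdk_dd command line

print(f"skip:  {skip_blocks * BLOCK_SIZE // 2**20} MiB")   # 512 MiB skipped
print(f"count: {count_blocks * BLOCK_SIZE // 2**20} MiB")  # 1024 MiB read back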
00:32:05.363 [2024-12-06 05:19:43.316537] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95262 ] 00:32:05.363 [2024-12-06 05:19:43.454269] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:05.363 [2024-12-06 05:19:43.504371] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:32:05.627 [2024-12-06 05:19:43.619002] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:32:05.627 [2024-12-06 05:19:43.619094] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:32:05.627 [2024-12-06 05:19:43.780280] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:05.627 [2024-12-06 05:19:43.780345] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:32:05.627 [2024-12-06 05:19:43.780363] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:32:05.627 [2024-12-06 05:19:43.780372] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:05.627 [2024-12-06 05:19:43.780431] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:05.627 [2024-12-06 05:19:43.780442] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:32:05.627 [2024-12-06 05:19:43.780452] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:32:05.627 [2024-12-06 05:19:43.780465] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:05.627 [2024-12-06 05:19:43.780486] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:32:05.627 [2024-12-06 05:19:43.780789] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:32:05.627 [2024-12-06 05:19:43.780808] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:05.627 [2024-12-06 05:19:43.780816] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:32:05.627 [2024-12-06 05:19:43.780829] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.327 ms 00:32:05.627 [2024-12-06 05:19:43.780837] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:05.627 [2024-12-06 05:19:43.781110] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 1, shm_clean 1 00:32:05.627 [2024-12-06 05:19:43.781135] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:05.627 [2024-12-06 05:19:43.781144] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:32:05.627 [2024-12-06 05:19:43.781154] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:32:05.627 [2024-12-06 05:19:43.781162] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:05.627 [2024-12-06 05:19:43.781218] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:05.627 [2024-12-06 05:19:43.781231] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:32:05.627 [2024-12-06 05:19:43.781245] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:32:05.627 [2024-12-06 05:19:43.781252] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:05.627 [2024-12-06 05:19:43.781566] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:32:05.627 [2024-12-06 05:19:43.781579] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:32:05.627 [2024-12-06 05:19:43.781588] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.223 ms 00:32:05.627 [2024-12-06 05:19:43.781599] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:05.627 [2024-12-06 05:19:43.781697] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:05.627 [2024-12-06 05:19:43.781711] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:32:05.627 [2024-12-06 05:19:43.781720] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.085 ms 00:32:05.627 [2024-12-06 05:19:43.781731] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:05.627 [2024-12-06 05:19:43.781755] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:05.627 [2024-12-06 05:19:43.781770] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:32:05.627 [2024-12-06 05:19:43.781778] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:32:05.627 [2024-12-06 05:19:43.781786] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:05.627 [2024-12-06 05:19:43.781806] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:32:05.627 [2024-12-06 05:19:43.783879] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:05.627 [2024-12-06 05:19:43.783919] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:32:05.627 [2024-12-06 05:19:43.783930] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.077 ms 00:32:05.627 [2024-12-06 05:19:43.783938] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:05.627 [2024-12-06 05:19:43.783973] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:05.627 [2024-12-06 05:19:43.783992] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:32:05.627 [2024-12-06 05:19:43.784001] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:32:05.627 [2024-12-06 05:19:43.784009] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:05.627 [2024-12-06 05:19:43.784042] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:32:05.627 [2024-12-06 05:19:43.784064] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:32:05.627 [2024-12-06 05:19:43.784112] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:32:05.627 [2024-12-06 05:19:43.784130] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:32:05.627 [2024-12-06 05:19:43.784234] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:32:05.627 [2024-12-06 05:19:43.784246] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:32:05.627 [2024-12-06 05:19:43.784257] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:32:05.627 [2024-12-06 05:19:43.784269] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:32:05.627 [2024-12-06 05:19:43.784278] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:32:05.627 [2024-12-06 05:19:43.784286] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:32:05.627 [2024-12-06 05:19:43.784296] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:32:05.627 [2024-12-06 05:19:43.784304] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:32:05.627 [2024-12-06 05:19:43.784314] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:32:05.627 [2024-12-06 05:19:43.784322] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:05.627 [2024-12-06 05:19:43.784330] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:32:05.627 [2024-12-06 05:19:43.784338] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.282 ms 00:32:05.627 [2024-12-06 05:19:43.784345] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:05.627 [2024-12-06 05:19:43.784443] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:05.627 [2024-12-06 05:19:43.784452] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:32:05.627 [2024-12-06 05:19:43.784460] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:32:05.627 [2024-12-06 05:19:43.784470] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:05.627 [2024-12-06 05:19:43.784574] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:32:05.627 [2024-12-06 05:19:43.784585] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:32:05.627 [2024-12-06 05:19:43.784597] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:32:05.628 [2024-12-06 05:19:43.784606] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:05.628 [2024-12-06 05:19:43.784614] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:32:05.628 [2024-12-06 05:19:43.784621] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:32:05.628 [2024-12-06 05:19:43.784630] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:32:05.628 [2024-12-06 05:19:43.784637] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:32:05.628 [2024-12-06 05:19:43.784644] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:32:05.628 [2024-12-06 05:19:43.784651] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:32:05.628 [2024-12-06 05:19:43.784658] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:32:05.628 [2024-12-06 05:19:43.784682] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:32:05.628 [2024-12-06 05:19:43.784689] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:32:05.628 [2024-12-06 05:19:43.784696] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:32:05.628 [2024-12-06 05:19:43.784703] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:32:05.628 [2024-12-06 05:19:43.784710] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:05.628 [2024-12-06 05:19:43.784717] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:32:05.628 [2024-12-06 05:19:43.784724] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:32:05.628 [2024-12-06 05:19:43.784731] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:05.628 [2024-12-06 05:19:43.784738] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:32:05.628 [2024-12-06 05:19:43.784744] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:32:05.628 [2024-12-06 05:19:43.784751] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:32:05.628 [2024-12-06 05:19:43.784761] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:32:05.628 [2024-12-06 05:19:43.784768] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:32:05.628 [2024-12-06 05:19:43.784775] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:32:05.628 [2024-12-06 05:19:43.784782] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:32:05.628 [2024-12-06 05:19:43.784789] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:32:05.628 [2024-12-06 05:19:43.784796] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:32:05.628 [2024-12-06 05:19:43.784802] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:32:05.628 [2024-12-06 05:19:43.784809] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:32:05.628 [2024-12-06 05:19:43.784816] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:32:05.628 [2024-12-06 05:19:43.784822] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:32:05.628 [2024-12-06 05:19:43.784830] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:32:05.628 [2024-12-06 05:19:43.784836] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:32:05.628 [2024-12-06 05:19:43.784843] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:32:05.628 [2024-12-06 05:19:43.784849] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:32:05.628 [2024-12-06 05:19:43.784856] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:32:05.628 [2024-12-06 05:19:43.784863] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:32:05.628 [2024-12-06 05:19:43.784871] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:32:05.628 [2024-12-06 05:19:43.784879] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:05.628 [2024-12-06 05:19:43.784886] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:32:05.628 [2024-12-06 05:19:43.784893] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:32:05.628 [2024-12-06 05:19:43.784900] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:05.628 [2024-12-06 05:19:43.784908] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:32:05.628 [2024-12-06 05:19:43.784922] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:32:05.628 [2024-12-06 05:19:43.784930] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:32:05.628 [2024-12-06 05:19:43.784937] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:05.628 [2024-12-06 05:19:43.784945] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:32:05.628 [2024-12-06 05:19:43.784952] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:32:05.628 [2024-12-06 05:19:43.784959] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:32:05.628 
[2024-12-06 05:19:43.784966] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:32:05.628 [2024-12-06 05:19:43.784972] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:32:05.628 [2024-12-06 05:19:43.784979] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:32:05.628 [2024-12-06 05:19:43.784987] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:32:05.628 [2024-12-06 05:19:43.785001] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:32:05.628 [2024-12-06 05:19:43.785009] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:32:05.628 [2024-12-06 05:19:43.785016] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:32:05.628 [2024-12-06 05:19:43.785023] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:32:05.628 [2024-12-06 05:19:43.785030] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:32:05.628 [2024-12-06 05:19:43.785038] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:32:05.628 [2024-12-06 05:19:43.785044] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:32:05.628 [2024-12-06 05:19:43.785052] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:32:05.628 [2024-12-06 05:19:43.785059] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:32:05.628 [2024-12-06 05:19:43.785066] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:32:05.628 [2024-12-06 05:19:43.785073] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:32:05.628 [2024-12-06 05:19:43.785080] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:32:05.628 [2024-12-06 05:19:43.785092] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:32:05.628 [2024-12-06 05:19:43.785100] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:32:05.628 [2024-12-06 05:19:43.785107] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:32:05.628 [2024-12-06 05:19:43.785114] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:32:05.628 [2024-12-06 05:19:43.785125] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:32:05.628 [2024-12-06 05:19:43.785135] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:32:05.628 [2024-12-06 05:19:43.785143] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:32:05.628 [2024-12-06 05:19:43.785150] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:32:05.629 [2024-12-06 05:19:43.785157] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:32:05.629 [2024-12-06 05:19:43.785165] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:05.629 [2024-12-06 05:19:43.785173] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:32:05.629 [2024-12-06 05:19:43.785181] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.659 ms 00:32:05.629 [2024-12-06 05:19:43.785188] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:05.629 [2024-12-06 05:19:43.803855] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:05.629 [2024-12-06 05:19:43.804048] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:32:05.629 [2024-12-06 05:19:43.804126] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.624 ms 00:32:05.629 [2024-12-06 05:19:43.804150] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:05.629 [2024-12-06 05:19:43.804256] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:05.629 [2024-12-06 05:19:43.804280] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:32:05.629 [2024-12-06 05:19:43.804300] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 00:32:05.629 [2024-12-06 05:19:43.804327] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:05.629 [2024-12-06 05:19:43.817209] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:05.629 [2024-12-06 05:19:43.817426] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:32:05.629 [2024-12-06 05:19:43.817514] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.805 ms 00:32:05.629 [2024-12-06 05:19:43.817553] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:05.629 [2024-12-06 05:19:43.817623] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:05.629 [2024-12-06 05:19:43.817655] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:32:05.629 [2024-12-06 05:19:43.817713] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:32:05.629 [2024-12-06 05:19:43.817741] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:05.629 [2024-12-06 05:19:43.817863] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:05.629 [2024-12-06 05:19:43.818040] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:32:05.629 [2024-12-06 05:19:43.818070] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.049 ms 00:32:05.629 [2024-12-06 05:19:43.818095] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:05.629 [2024-12-06 05:19:43.818247] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:05.629 [2024-12-06 05:19:43.818270] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:32:05.629 [2024-12-06 05:19:43.818289] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.112 ms 00:32:05.629 [2024-12-06 05:19:43.818308] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:05.629 [2024-12-06 05:19:43.825067] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:05.629 [2024-12-06 05:19:43.825211] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:32:05.629 [2024-12-06 05:19:43.825271] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.728 ms 00:32:05.629 [2024-12-06 05:19:43.825312] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:05.629 [2024-12-06 05:19:43.825451] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:32:05.629 [2024-12-06 05:19:43.825490] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:32:05.629 [2024-12-06 05:19:43.825520] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:05.629 [2024-12-06 05:19:43.825540] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:32:05.629 [2024-12-06 05:19:43.825565] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.094 ms 00:32:05.629 [2024-12-06 05:19:43.825626] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:05.629 [2024-12-06 05:19:43.837955] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:05.629 [2024-12-06 05:19:43.838099] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:32:05.629 [2024-12-06 05:19:43.838166] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.296 ms 00:32:05.629 [2024-12-06 05:19:43.838189] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:05.629 [2024-12-06 05:19:43.838330] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:05.629 [2024-12-06 05:19:43.838353] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:32:05.629 [2024-12-06 05:19:43.838377] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.100 ms 00:32:05.629 [2024-12-06 05:19:43.838396] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:05.629 [2024-12-06 05:19:43.838461] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:05.629 [2024-12-06 05:19:43.838498] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:32:05.629 [2024-12-06 05:19:43.838518] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.001 ms 00:32:05.629 [2024-12-06 05:19:43.838585] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:05.629 [2024-12-06 05:19:43.838940] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:05.629 [2024-12-06 05:19:43.838981] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:32:05.629 [2024-12-06 05:19:43.839053] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.297 ms 00:32:05.629 [2024-12-06 05:19:43.839075] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:05.629 [2024-12-06 05:19:43.839109] mngt/ftl_mngt_p2l.c: 169:ftl_mngt_p2l_restore_ckpt: *NOTICE*: [FTL][ftl0] SHM: skipping p2l ckpt restore 00:32:05.629 [2024-12-06 05:19:43.839143] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:05.629 [2024-12-06 05:19:43.839161] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Restore P2L checkpoints 00:32:05.629 [2024-12-06 05:19:43.839234] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:32:05.629 [2024-12-06 05:19:43.839260] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:05.629 [2024-12-06 05:19:43.848947] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:32:05.629 [2024-12-06 05:19:43.849208] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:05.629 [2024-12-06 05:19:43.849230] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:32:05.629 [2024-12-06 05:19:43.849240] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.915 ms 00:32:05.629 [2024-12-06 05:19:43.849248] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:05.629 [2024-12-06 05:19:43.851814] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:05.629 [2024-12-06 05:19:43.851856] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:32:05.629 [2024-12-06 05:19:43.851866] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.538 ms 00:32:05.629 [2024-12-06 05:19:43.851873] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:05.629 [2024-12-06 05:19:43.851958] mngt/ftl_mngt_band.c: 414:ftl_mngt_finalize_init_bands: *NOTICE*: [FTL][ftl0] SHM: band open P2L map df_id 0x2400000 00:32:05.629 [2024-12-06 05:19:43.852539] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:05.629 [2024-12-06 05:19:43.852562] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:32:05.629 [2024-12-06 05:19:43.852571] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.607 ms 00:32:05.629 [2024-12-06 05:19:43.852579] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:05.629 [2024-12-06 05:19:43.852617] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:05.629 [2024-12-06 05:19:43.852626] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:32:05.629 [2024-12-06 05:19:43.852634] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:32:05.629 [2024-12-06 05:19:43.852641] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:05.629 [2024-12-06 05:19:43.852688] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:32:05.629 [2024-12-06 05:19:43.852706] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:05.629 [2024-12-06 05:19:43.852714] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:32:05.629 [2024-12-06 05:19:43.852723] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:32:05.629 [2024-12-06 05:19:43.852734] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:05.892 [2024-12-06 05:19:43.858852] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:05.892 [2024-12-06 05:19:43.858906] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:32:05.892 [2024-12-06 05:19:43.858923] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.100 ms 00:32:05.892 [2024-12-06 05:19:43.858931] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:05.892 [2024-12-06 05:19:43.859019] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:05.892 [2024-12-06 05:19:43.859029] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:32:05.892 [2024-12-06 05:19:43.859038] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:32:05.892 [2024-12-06 05:19:43.859046] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:05.892 [2024-12-06 05:19:43.860207] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 79.518 ms, result 0 00:32:06.835
[2024-12-06T05:20:54.282Z] Copying: 1024/1024 [MB] (average 14 MBps)[2024-12-06 05:20:54.218052] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:16.050 [2024-12-06 05:20:54.218178] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:33:16.050 [2024-12-06 05:20:54.218198] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:33:16.050 [2024-12-06 05:20:54.218210] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:16.050 [2024-12-06 05:20:54.218239] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:33:16.050 [2024-12-06 05:20:54.219262] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:16.050 [2024-12-06 05:20:54.219298] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:33:16.050 [2024-12-06 05:20:54.219312] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.002 ms 00:33:16.050 [2024-12-06 05:20:54.219324] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:16.050 [2024-12-06 05:20:54.219627] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:16.050 [2024-12-06 05:20:54.219644] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:33:16.050 [2024-12-06 05:20:54.219656] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.269 ms 00:33:16.050 [2024-12-06 05:20:54.219690] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:16.050 [2024-12-06 05:20:54.219733] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:16.050 [2024-12-06 05:20:54.219744] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Fast persist NV cache metadata 00:33:16.050 [2024-12-06 05:20:54.219755] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:33:16.050 [2024-12-06 05:20:54.219765] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:16.051 [2024-12-06 05:20:54.219839] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:16.051 [2024-12-06 05:20:54.219851] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL SHM clean state 00:33:16.051 [2024-12-06 05:20:54.219863] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:33:16.051 [2024-12-06 05:20:54.219876]
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:16.051 [2024-12-06 05:20:54.219894] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:33:16.051 [2024-12-06 05:20:54.219911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 131072 / 261120 wr_cnt: 1 state: open 00:33:16.051 [2024-12-06 05:20:54.219923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:33:16.051 [2024-12-06 05:20:54.219933] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:33:16.051 [2024-12-06 05:20:54.219943] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:33:16.051 [2024-12-06 05:20:54.219953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:33:16.051 [2024-12-06 05:20:54.219962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:33:16.051 [2024-12-06 05:20:54.219972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:33:16.051 [2024-12-06 05:20:54.219982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:33:16.051 [2024-12-06 05:20:54.219991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:33:16.051 [2024-12-06 05:20:54.220001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:33:16.051 [2024-12-06 05:20:54.220010] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:33:16.051 [2024-12-06 05:20:54.220020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:33:16.051 [2024-12-06 05:20:54.220032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:33:16.051 [2024-12-06 05:20:54.220042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:33:16.051 [2024-12-06 05:20:54.220052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:33:16.051 [2024-12-06 05:20:54.220062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:33:16.051 [2024-12-06 05:20:54.220072] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:33:16.051 [2024-12-06 05:20:54.220082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:33:16.051 [2024-12-06 05:20:54.220092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:33:16.051 [2024-12-06 05:20:54.220101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:33:16.051 [2024-12-06 05:20:54.220111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:33:16.051 [2024-12-06 05:20:54.220121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:33:16.051 [2024-12-06 05:20:54.220131] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:33:16.051 [2024-12-06 05:20:54.220140] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:33:16.051 [2024-12-06 05:20:54.220150] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:33:16.051 [2024-12-06 05:20:54.220160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:33:16.051 [2024-12-06 05:20:54.220169] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:33:16.051 [2024-12-06 05:20:54.220179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:33:16.051 [2024-12-06 05:20:54.220189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:33:16.051 [2024-12-06 05:20:54.220198] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:33:16.051 [2024-12-06 05:20:54.220208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:33:16.051 [2024-12-06 05:20:54.220217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:33:16.051 [2024-12-06 05:20:54.220229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:33:16.051 [2024-12-06 05:20:54.220239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:33:16.051 [2024-12-06 05:20:54.220249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:33:16.051 [2024-12-06 05:20:54.220259] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:33:16.051 [2024-12-06 05:20:54.220268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:33:16.051 [2024-12-06 05:20:54.220278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:33:16.051 [2024-12-06 05:20:54.220287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:33:16.051 [2024-12-06 05:20:54.220296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:33:16.051 [2024-12-06 05:20:54.220305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:33:16.051 [2024-12-06 05:20:54.220315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:33:16.051 [2024-12-06 05:20:54.220325] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:33:16.051 [2024-12-06 05:20:54.220334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:33:16.051 [2024-12-06 05:20:54.220345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:33:16.051 [2024-12-06 05:20:54.220355] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:33:16.051 [2024-12-06 05:20:54.220365] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:33:16.051 [2024-12-06 05:20:54.220376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:33:16.051 [2024-12-06 05:20:54.220385] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:33:16.051 [2024-12-06 05:20:54.220395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:33:16.051 [2024-12-06 05:20:54.220405] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:33:16.051 [2024-12-06 05:20:54.220414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:33:16.051 [2024-12-06 05:20:54.220424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:33:16.051 [2024-12-06 05:20:54.220434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:33:16.051 [2024-12-06 05:20:54.220443] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:33:16.051 [2024-12-06 05:20:54.220453] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:33:16.051 [2024-12-06 05:20:54.220462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:33:16.051 [2024-12-06 05:20:54.220472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:33:16.051 [2024-12-06 05:20:54.220481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:33:16.051 [2024-12-06 05:20:54.220491] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:33:16.051 [2024-12-06 05:20:54.220500] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:33:16.051 [2024-12-06 05:20:54.220510] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:33:16.051 [2024-12-06 05:20:54.220519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:33:16.051 [2024-12-06 05:20:54.220528] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:33:16.051 [2024-12-06 05:20:54.220538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:33:16.051 [2024-12-06 05:20:54.220549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:33:16.051 [2024-12-06 05:20:54.220558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:33:16.051 [2024-12-06 05:20:54.220568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:33:16.051 [2024-12-06 05:20:54.220577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:33:16.051 [2024-12-06 05:20:54.220587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:33:16.051 [2024-12-06 05:20:54.220596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:33:16.051 [2024-12-06 05:20:54.220605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:33:16.051 [2024-12-06 05:20:54.220615] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:33:16.051 [2024-12-06 
05:20:54.220624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:33:16.051 [2024-12-06 05:20:54.220634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:33:16.051 [2024-12-06 05:20:54.220643] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:33:16.051 [2024-12-06 05:20:54.220659] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:33:16.051 [2024-12-06 05:20:54.220688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:33:16.051 [2024-12-06 05:20:54.220698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:33:16.051 [2024-12-06 05:20:54.220708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:33:16.051 [2024-12-06 05:20:54.220717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:33:16.052 [2024-12-06 05:20:54.220727] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:33:16.052 [2024-12-06 05:20:54.220738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:33:16.052 [2024-12-06 05:20:54.220748] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:33:16.052 [2024-12-06 05:20:54.220758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:33:16.052 [2024-12-06 05:20:54.220768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:33:16.052 [2024-12-06 05:20:54.220777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:33:16.052 [2024-12-06 05:20:54.220787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:33:16.052 [2024-12-06 05:20:54.220797] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:33:16.052 [2024-12-06 05:20:54.220808] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:33:16.052 [2024-12-06 05:20:54.220817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:33:16.052 [2024-12-06 05:20:54.220827] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:33:16.052 [2024-12-06 05:20:54.220837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:33:16.052 [2024-12-06 05:20:54.220857] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:33:16.052 [2024-12-06 05:20:54.220867] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:33:16.052 [2024-12-06 05:20:54.220877] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:33:16.052 [2024-12-06 05:20:54.220887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:33:16.052 [2024-12-06 05:20:54.220897] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 
00:33:16.052 [2024-12-06 05:20:54.220907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:33:16.052 [2024-12-06 05:20:54.220918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:33:16.052 [2024-12-06 05:20:54.220938] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:33:16.052 [2024-12-06 05:20:54.220953] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 7b4b71fc-81b6-42ef-bf87-4b87e3b52047 00:33:16.052 [2024-12-06 05:20:54.220964] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 131072 00:33:16.052 [2024-12-06 05:20:54.220974] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 1824 00:33:16.052 [2024-12-06 05:20:54.220983] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 1792 00:33:16.052 [2024-12-06 05:20:54.220994] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0179 00:33:16.052 [2024-12-06 05:20:54.221005] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:33:16.052 [2024-12-06 05:20:54.221015] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:33:16.052 [2024-12-06 05:20:54.221031] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:33:16.052 [2024-12-06 05:20:54.221040] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:33:16.052 [2024-12-06 05:20:54.221048] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:33:16.052 [2024-12-06 05:20:54.221057] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:16.052 [2024-12-06 05:20:54.221073] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:33:16.052 [2024-12-06 05:20:54.221083] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.165 ms 00:33:16.052 [2024-12-06 05:20:54.221093] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:16.052 [2024-12-06 05:20:54.223875] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:16.052 [2024-12-06 05:20:54.223918] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:33:16.052 [2024-12-06 05:20:54.223932] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.761 ms 00:33:16.052 [2024-12-06 05:20:54.223950] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:16.052 [2024-12-06 05:20:54.224092] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:16.052 [2024-12-06 05:20:54.224103] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:33:16.052 [2024-12-06 05:20:54.224114] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.112 ms 00:33:16.052 [2024-12-06 05:20:54.224123] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:16.052 [2024-12-06 05:20:54.231736] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:16.052 [2024-12-06 05:20:54.231918] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:33:16.052 [2024-12-06 05:20:54.231989] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:16.052 [2024-12-06 05:20:54.232021] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:16.052 [2024-12-06 05:20:54.232113] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:16.052 [2024-12-06 05:20:54.232142] mngt/ftl_mngt.c: 428:trace_step: 
*NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:33:16.052 [2024-12-06 05:20:54.232210] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:16.052 [2024-12-06 05:20:54.232233] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:16.052 [2024-12-06 05:20:54.232317] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:16.052 [2024-12-06 05:20:54.232345] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:33:16.052 [2024-12-06 05:20:54.232365] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:16.052 [2024-12-06 05:20:54.232587] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:16.052 [2024-12-06 05:20:54.232657] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:16.052 [2024-12-06 05:20:54.232734] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:33:16.052 [2024-12-06 05:20:54.232757] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:16.052 [2024-12-06 05:20:54.232777] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:16.052 [2024-12-06 05:20:54.246613] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:16.052 [2024-12-06 05:20:54.246655] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:33:16.052 [2024-12-06 05:20:54.246702] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:16.052 [2024-12-06 05:20:54.246710] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:16.052 [2024-12-06 05:20:54.257775] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:16.052 [2024-12-06 05:20:54.257814] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:33:16.052 [2024-12-06 05:20:54.257825] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:16.052 [2024-12-06 05:20:54.257833] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:16.052 [2024-12-06 05:20:54.257878] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:16.052 [2024-12-06 05:20:54.257887] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:33:16.052 [2024-12-06 05:20:54.257895] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:16.052 [2024-12-06 05:20:54.257903] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:16.052 [2024-12-06 05:20:54.257952] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:16.052 [2024-12-06 05:20:54.257965] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:33:16.052 [2024-12-06 05:20:54.257977] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:16.052 [2024-12-06 05:20:54.257985] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:16.052 [2024-12-06 05:20:54.258041] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:16.052 [2024-12-06 05:20:54.258051] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:33:16.052 [2024-12-06 05:20:54.258059] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:16.052 [2024-12-06 05:20:54.258067] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:16.052 [2024-12-06 05:20:54.258093] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] 
Rollback 00:33:16.052 [2024-12-06 05:20:54.258111] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:33:16.052 [2024-12-06 05:20:54.258120] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:16.052 [2024-12-06 05:20:54.258127] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:16.052 [2024-12-06 05:20:54.258170] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:16.052 [2024-12-06 05:20:54.258179] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:33:16.052 [2024-12-06 05:20:54.258188] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:16.052 [2024-12-06 05:20:54.258195] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:16.052 [2024-12-06 05:20:54.258244] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:16.052 [2024-12-06 05:20:54.258254] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:33:16.052 [2024-12-06 05:20:54.258263] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:16.052 [2024-12-06 05:20:54.258270] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:16.052 [2024-12-06 05:20:54.258391] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL fast shutdown', duration = 40.321 ms, result 0 00:33:16.314 00:33:16.314 00:33:16.314 05:20:54 ftl.ftl_restore_fast -- ftl/restore.sh@82 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:33:18.225 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:33:18.225 05:20:56 ftl.ftl_restore_fast -- ftl/restore.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:33:18.225 05:20:56 ftl.ftl_restore_fast -- ftl/restore.sh@85 -- # restore_kill 00:33:18.225 05:20:56 ftl.ftl_restore_fast -- ftl/restore.sh@28 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:33:18.487 05:20:56 ftl.ftl_restore_fast -- ftl/restore.sh@29 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:33:18.487 05:20:56 ftl.ftl_restore_fast -- ftl/restore.sh@30 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:33:18.487 05:20:56 ftl.ftl_restore_fast -- ftl/restore.sh@32 -- # killprocess 93218 00:33:18.487 Process with pid 93218 is not found 00:33:18.487 Remove shared memory files 00:33:18.487 05:20:56 ftl.ftl_restore_fast -- common/autotest_common.sh@950 -- # '[' -z 93218 ']' 00:33:18.487 05:20:56 ftl.ftl_restore_fast -- common/autotest_common.sh@954 -- # kill -0 93218 00:33:18.487 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (93218) - No such process 00:33:18.487 05:20:56 ftl.ftl_restore_fast -- common/autotest_common.sh@977 -- # echo 'Process with pid 93218 is not found' 00:33:18.487 05:20:56 ftl.ftl_restore_fast -- ftl/restore.sh@33 -- # remove_shm 00:33:18.487 05:20:56 ftl.ftl_restore_fast -- ftl/common.sh@204 -- # echo Remove shared memory files 00:33:18.487 05:20:56 ftl.ftl_restore_fast -- ftl/common.sh@205 -- # rm -f rm -f 00:33:18.487 05:20:56 ftl.ftl_restore_fast -- ftl/common.sh@206 -- # rm -f rm -f /dev/hugepages/ftl_7b4b71fc-81b6-42ef-bf87-4b87e3b52047_band_md /dev/hugepages/ftl_7b4b71fc-81b6-42ef-bf87-4b87e3b52047_l2p_l1 /dev/hugepages/ftl_7b4b71fc-81b6-42ef-bf87-4b87e3b52047_l2p_l2 /dev/hugepages/ftl_7b4b71fc-81b6-42ef-bf87-4b87e3b52047_l2p_l2_ctx /dev/hugepages/ftl_7b4b71fc-81b6-42ef-bf87-4b87e3b52047_nvc_md /dev/hugepages/ftl_7b4b71fc-81b6-42ef-bf87-4b87e3b52047_p2l_pool 
/dev/hugepages/ftl_7b4b71fc-81b6-42ef-bf87-4b87e3b52047_sb /dev/hugepages/ftl_7b4b71fc-81b6-42ef-bf87-4b87e3b52047_sb_shm /dev/hugepages/ftl_7b4b71fc-81b6-42ef-bf87-4b87e3b52047_trim_bitmap /dev/hugepages/ftl_7b4b71fc-81b6-42ef-bf87-4b87e3b52047_trim_log /dev/hugepages/ftl_7b4b71fc-81b6-42ef-bf87-4b87e3b52047_trim_md /dev/hugepages/ftl_7b4b71fc-81b6-42ef-bf87-4b87e3b52047_vmap 00:33:18.487 05:20:56 ftl.ftl_restore_fast -- ftl/common.sh@207 -- # rm -f rm -f 00:33:18.487 05:20:56 ftl.ftl_restore_fast -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:33:18.487 05:20:56 ftl.ftl_restore_fast -- ftl/common.sh@209 -- # rm -f rm -f 00:33:18.487 ************************************ 00:33:18.487 END TEST ftl_restore_fast 00:33:18.487 ************************************ 00:33:18.487 00:33:18.487 real 4m34.143s 00:33:18.487 user 4m22.567s 00:33:18.487 sys 0m11.209s 00:33:18.487 05:20:56 ftl.ftl_restore_fast -- common/autotest_common.sh@1126 -- # xtrace_disable 00:33:18.487 05:20:56 ftl.ftl_restore_fast -- common/autotest_common.sh@10 -- # set +x 00:33:18.487 Process with pid 83841 is not found 00:33:18.487 05:20:56 ftl -- ftl/ftl.sh@1 -- # at_ftl_exit 00:33:18.487 05:20:56 ftl -- ftl/ftl.sh@14 -- # killprocess 83841 00:33:18.487 05:20:56 ftl -- common/autotest_common.sh@950 -- # '[' -z 83841 ']' 00:33:18.487 05:20:56 ftl -- common/autotest_common.sh@954 -- # kill -0 83841 00:33:18.487 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (83841) - No such process 00:33:18.487 05:20:56 ftl -- common/autotest_common.sh@977 -- # echo 'Process with pid 83841 is not found' 00:33:18.487 05:20:56 ftl -- ftl/ftl.sh@17 -- # [[ -n 0000:00:11.0 ]] 00:33:18.487 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:18.487 05:20:56 ftl -- ftl/ftl.sh@19 -- # spdk_tgt_pid=96030 00:33:18.487 05:20:56 ftl -- ftl/ftl.sh@20 -- # waitforlisten 96030 00:33:18.487 05:20:56 ftl -- common/autotest_common.sh@831 -- # '[' -z 96030 ']' 00:33:18.487 05:20:56 ftl -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:18.487 05:20:56 ftl -- common/autotest_common.sh@836 -- # local max_retries=100 00:33:18.487 05:20:56 ftl -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:18.487 05:20:56 ftl -- ftl/ftl.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:33:18.487 05:20:56 ftl -- common/autotest_common.sh@840 -- # xtrace_disable 00:33:18.487 05:20:56 ftl -- common/autotest_common.sh@10 -- # set +x 00:33:18.487 [2024-12-06 05:20:56.685092] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:33:18.487 [2024-12-06 05:20:56.685457] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid96030 ] 00:33:18.749 [2024-12-06 05:20:56.824989] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:18.749 [2024-12-06 05:20:56.858488] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:33:19.695 05:20:57 ftl -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:33:19.695 05:20:57 ftl -- common/autotest_common.sh@864 -- # return 0 00:33:19.695 05:20:57 ftl -- ftl/ftl.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:33:19.695 nvme0n1 00:33:19.695 05:20:57 ftl -- ftl/ftl.sh@22 -- # clear_lvols 00:33:19.695 05:20:57 ftl -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:33:19.695 05:20:57 ftl -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:33:19.957 05:20:58 ftl -- ftl/common.sh@28 -- # stores=b7d5758f-b336-4965-b704-a5bdcc8afb58 00:33:19.957 05:20:58 ftl -- ftl/common.sh@29 -- # for lvs in $stores 00:33:19.957 05:20:58 ftl -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u b7d5758f-b336-4965-b704-a5bdcc8afb58 00:33:20.219 05:20:58 ftl -- ftl/ftl.sh@23 -- # killprocess 96030 00:33:20.219 05:20:58 ftl -- common/autotest_common.sh@950 -- # '[' -z 96030 ']' 00:33:20.219 05:20:58 ftl -- common/autotest_common.sh@954 -- # kill -0 96030 00:33:20.219 05:20:58 ftl -- common/autotest_common.sh@955 -- # uname 00:33:20.219 05:20:58 ftl -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:33:20.219 05:20:58 ftl -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 96030 00:33:20.219 killing process with pid 96030 00:33:20.219 05:20:58 ftl -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:33:20.219 05:20:58 ftl -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:33:20.219 05:20:58 ftl -- common/autotest_common.sh@968 -- # echo 'killing process with pid 96030' 00:33:20.219 05:20:58 ftl -- common/autotest_common.sh@969 -- # kill 96030 00:33:20.219 05:20:58 ftl -- common/autotest_common.sh@974 -- # wait 96030 00:33:20.480 05:20:58 ftl -- ftl/ftl.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:33:20.480 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:33:20.744 Waiting for block devices as requested 00:33:20.744 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:33:20.744 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:33:20.744 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:33:21.005 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:33:26.300 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:33:26.300 Remove shared memory files 00:33:26.300 05:21:04 ftl -- ftl/ftl.sh@28 -- # remove_shm 00:33:26.300 05:21:04 ftl -- ftl/common.sh@204 -- # echo Remove shared memory files 00:33:26.300 05:21:04 ftl -- ftl/common.sh@205 -- # rm -f rm -f 00:33:26.300 05:21:04 ftl -- ftl/common.sh@206 -- # rm -f rm -f 00:33:26.300 05:21:04 ftl -- ftl/common.sh@207 -- # rm -f rm -f 00:33:26.301 05:21:04 ftl -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:33:26.301 05:21:04 ftl -- ftl/common.sh@209 -- # rm -f rm -f 00:33:26.301 
************************************ 00:33:26.301 END TEST ftl 00:33:26.301 ************************************ 00:33:26.301 00:33:26.301 real 18m20.995s 00:33:26.301 user 20m9.366s 00:33:26.301 sys 1m30.980s 00:33:26.301 05:21:04 ftl -- common/autotest_common.sh@1126 -- # xtrace_disable 00:33:26.301 05:21:04 ftl -- common/autotest_common.sh@10 -- # set +x 00:33:26.301 05:21:04 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:33:26.301 05:21:04 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:33:26.301 05:21:04 -- spdk/autotest.sh@351 -- # '[' 0 -eq 1 ']' 00:33:26.301 05:21:04 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:33:26.301 05:21:04 -- spdk/autotest.sh@362 -- # [[ 0 -eq 1 ]] 00:33:26.301 05:21:04 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:33:26.301 05:21:04 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:33:26.301 05:21:04 -- spdk/autotest.sh@374 -- # [[ '' -eq 1 ]] 00:33:26.301 05:21:04 -- spdk/autotest.sh@381 -- # trap - SIGINT SIGTERM EXIT 00:33:26.301 05:21:04 -- spdk/autotest.sh@383 -- # timing_enter post_cleanup 00:33:26.301 05:21:04 -- common/autotest_common.sh@724 -- # xtrace_disable 00:33:26.301 05:21:04 -- common/autotest_common.sh@10 -- # set +x 00:33:26.301 05:21:04 -- spdk/autotest.sh@384 -- # autotest_cleanup 00:33:26.301 05:21:04 -- common/autotest_common.sh@1392 -- # local autotest_es=0 00:33:26.301 05:21:04 -- common/autotest_common.sh@1393 -- # xtrace_disable 00:33:26.301 05:21:04 -- common/autotest_common.sh@10 -- # set +x 00:33:27.684 INFO: APP EXITING 00:33:27.684 INFO: killing all VMs 00:33:27.684 INFO: killing vhost app 00:33:27.684 INFO: EXIT DONE 00:33:27.945 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:33:28.206 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:33:28.206 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:33:28.206 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:33:28.206 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:33:28.778 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:33:29.039 Cleaning 00:33:29.039 Removing: /var/run/dpdk/spdk0/config 00:33:29.039 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:33:29.039 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:33:29.039 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:33:29.039 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:33:29.039 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:33:29.039 Removing: /var/run/dpdk/spdk0/hugepage_info 00:33:29.039 Removing: /var/run/dpdk/spdk0 00:33:29.039 Removing: /var/run/dpdk/spdk_pid69319 00:33:29.039 Removing: /var/run/dpdk/spdk_pid69482 00:33:29.039 Removing: /var/run/dpdk/spdk_pid69678 00:33:29.039 Removing: /var/run/dpdk/spdk_pid69760 00:33:29.039 Removing: /var/run/dpdk/spdk_pid69789 00:33:29.039 Removing: /var/run/dpdk/spdk_pid69900 00:33:29.039 Removing: /var/run/dpdk/spdk_pid69918 00:33:29.039 Removing: /var/run/dpdk/spdk_pid70095 00:33:29.039 Removing: /var/run/dpdk/spdk_pid70169 00:33:29.039 Removing: /var/run/dpdk/spdk_pid70248 00:33:29.039 Removing: /var/run/dpdk/spdk_pid70348 00:33:29.039 Removing: /var/run/dpdk/spdk_pid70423 00:33:29.039 Removing: /var/run/dpdk/spdk_pid70463 00:33:29.039 Removing: /var/run/dpdk/spdk_pid70499 00:33:29.039 Removing: /var/run/dpdk/spdk_pid70569 00:33:29.039 Removing: /var/run/dpdk/spdk_pid70669 00:33:29.039 Removing: /var/run/dpdk/spdk_pid71090 00:33:29.302 Removing: /var/run/dpdk/spdk_pid71132 
00:33:29.302 Removing: /var/run/dpdk/spdk_pid71184 00:33:29.302 Removing: /var/run/dpdk/spdk_pid71194 00:33:29.302 Removing: /var/run/dpdk/spdk_pid71258 00:33:29.302 Removing: /var/run/dpdk/spdk_pid71274 00:33:29.302 Removing: /var/run/dpdk/spdk_pid71332 00:33:29.302 Removing: /var/run/dpdk/spdk_pid71348 00:33:29.302 Removing: /var/run/dpdk/spdk_pid71390 00:33:29.302 Removing: /var/run/dpdk/spdk_pid71408 00:33:29.302 Removing: /var/run/dpdk/spdk_pid71450 00:33:29.302 Removing: /var/run/dpdk/spdk_pid71468 00:33:29.302 Removing: /var/run/dpdk/spdk_pid71595 00:33:29.302 Removing: /var/run/dpdk/spdk_pid71626 00:33:29.302 Removing: /var/run/dpdk/spdk_pid71715 00:33:29.302 Removing: /var/run/dpdk/spdk_pid71876 00:33:29.302 Removing: /var/run/dpdk/spdk_pid71938 00:33:29.302 Removing: /var/run/dpdk/spdk_pid71969 00:33:29.302 Removing: /var/run/dpdk/spdk_pid72391 00:33:29.302 Removing: /var/run/dpdk/spdk_pid72484 00:33:29.302 Removing: /var/run/dpdk/spdk_pid72588 00:33:29.302 Removing: /var/run/dpdk/spdk_pid72620 00:33:29.302 Removing: /var/run/dpdk/spdk_pid72651 00:33:29.302 Removing: /var/run/dpdk/spdk_pid72724 00:33:29.302 Removing: /var/run/dpdk/spdk_pid73331 00:33:29.302 Removing: /var/run/dpdk/spdk_pid73362 00:33:29.302 Removing: /var/run/dpdk/spdk_pid73818 00:33:29.302 Removing: /var/run/dpdk/spdk_pid73905 00:33:29.302 Removing: /var/run/dpdk/spdk_pid74014 00:33:29.302 Removing: /var/run/dpdk/spdk_pid74056 00:33:29.302 Removing: /var/run/dpdk/spdk_pid74076 00:33:29.302 Removing: /var/run/dpdk/spdk_pid74096 00:33:29.302 Removing: /var/run/dpdk/spdk_pid75912 00:33:29.302 Removing: /var/run/dpdk/spdk_pid76033 00:33:29.302 Removing: /var/run/dpdk/spdk_pid76037 00:33:29.302 Removing: /var/run/dpdk/spdk_pid76049 00:33:29.302 Removing: /var/run/dpdk/spdk_pid76089 00:33:29.302 Removing: /var/run/dpdk/spdk_pid76093 00:33:29.302 Removing: /var/run/dpdk/spdk_pid76105 00:33:29.302 Removing: /var/run/dpdk/spdk_pid76144 00:33:29.302 Removing: /var/run/dpdk/spdk_pid76148 00:33:29.302 Removing: /var/run/dpdk/spdk_pid76160 00:33:29.302 Removing: /var/run/dpdk/spdk_pid76199 00:33:29.302 Removing: /var/run/dpdk/spdk_pid76203 00:33:29.302 Removing: /var/run/dpdk/spdk_pid76215 00:33:29.302 Removing: /var/run/dpdk/spdk_pid77581 00:33:29.303 Removing: /var/run/dpdk/spdk_pid77667 00:33:29.303 Removing: /var/run/dpdk/spdk_pid79065 00:33:29.303 Removing: /var/run/dpdk/spdk_pid80431 00:33:29.303 Removing: /var/run/dpdk/spdk_pid80492 00:33:29.303 Removing: /var/run/dpdk/spdk_pid80557 00:33:29.303 Removing: /var/run/dpdk/spdk_pid80611 00:33:29.303 Removing: /var/run/dpdk/spdk_pid80688 00:33:29.303 Removing: /var/run/dpdk/spdk_pid80752 00:33:29.303 Removing: /var/run/dpdk/spdk_pid80894 00:33:29.303 Removing: /var/run/dpdk/spdk_pid81237 00:33:29.303 Removing: /var/run/dpdk/spdk_pid81268 00:33:29.303 Removing: /var/run/dpdk/spdk_pid81701 00:33:29.303 Removing: /var/run/dpdk/spdk_pid81880 00:33:29.303 Removing: /var/run/dpdk/spdk_pid81969 00:33:29.303 Removing: /var/run/dpdk/spdk_pid82083 00:33:29.303 Removing: /var/run/dpdk/spdk_pid82115 00:33:29.303 Removing: /var/run/dpdk/spdk_pid82146 00:33:29.303 Removing: /var/run/dpdk/spdk_pid82430 00:33:29.303 Removing: /var/run/dpdk/spdk_pid82475 00:33:29.303 Removing: /var/run/dpdk/spdk_pid82531 00:33:29.303 Removing: /var/run/dpdk/spdk_pid82905 00:33:29.303 Removing: /var/run/dpdk/spdk_pid83044 00:33:29.303 Removing: /var/run/dpdk/spdk_pid83841 00:33:29.303 Removing: /var/run/dpdk/spdk_pid83959 00:33:29.303 Removing: /var/run/dpdk/spdk_pid84117 00:33:29.303 Removing: 
/var/run/dpdk/spdk_pid84209 00:33:29.303 Removing: /var/run/dpdk/spdk_pid84501 00:33:29.303 Removing: /var/run/dpdk/spdk_pid84776 00:33:29.303 Removing: /var/run/dpdk/spdk_pid85122 00:33:29.303 Removing: /var/run/dpdk/spdk_pid85293 00:33:29.303 Removing: /var/run/dpdk/spdk_pid85522 00:33:29.303 Removing: /var/run/dpdk/spdk_pid85559 00:33:29.303 Removing: /var/run/dpdk/spdk_pid85783 00:33:29.303 Removing: /var/run/dpdk/spdk_pid85805 00:33:29.303 Removing: /var/run/dpdk/spdk_pid85851 00:33:29.303 Removing: /var/run/dpdk/spdk_pid86153 00:33:29.303 Removing: /var/run/dpdk/spdk_pid86353 00:33:29.303 Removing: /var/run/dpdk/spdk_pid87108 00:33:29.303 Removing: /var/run/dpdk/spdk_pid88015 00:33:29.303 Removing: /var/run/dpdk/spdk_pid88621 00:33:29.303 Removing: /var/run/dpdk/spdk_pid89408 00:33:29.303 Removing: /var/run/dpdk/spdk_pid89532 00:33:29.303 Removing: /var/run/dpdk/spdk_pid89612 00:33:29.303 Removing: /var/run/dpdk/spdk_pid90137 00:33:29.303 Removing: /var/run/dpdk/spdk_pid90189 00:33:29.303 Removing: /var/run/dpdk/spdk_pid91023 00:33:29.303 Removing: /var/run/dpdk/spdk_pid91464 00:33:29.303 Removing: /var/run/dpdk/spdk_pid92278 00:33:29.303 Removing: /var/run/dpdk/spdk_pid92404 00:33:29.303 Removing: /var/run/dpdk/spdk_pid92437 00:33:29.303 Removing: /var/run/dpdk/spdk_pid92493 00:33:29.303 Removing: /var/run/dpdk/spdk_pid92543 00:33:29.303 Removing: /var/run/dpdk/spdk_pid92596 00:33:29.303 Removing: /var/run/dpdk/spdk_pid92774 00:33:29.303 Removing: /var/run/dpdk/spdk_pid92848 00:33:29.303 Removing: /var/run/dpdk/spdk_pid92904 00:33:29.303 Removing: /var/run/dpdk/spdk_pid92993 00:33:29.564 Removing: /var/run/dpdk/spdk_pid93022 00:33:29.565 Removing: /var/run/dpdk/spdk_pid93082 00:33:29.565 Removing: /var/run/dpdk/spdk_pid93218 00:33:29.565 Removing: /var/run/dpdk/spdk_pid93422 00:33:29.565 Removing: /var/run/dpdk/spdk_pid93929 00:33:29.565 Removing: /var/run/dpdk/spdk_pid94577 00:33:29.565 Removing: /var/run/dpdk/spdk_pid95262 00:33:29.565 Removing: /var/run/dpdk/spdk_pid96030 00:33:29.565 Clean 00:33:29.565 05:21:07 -- common/autotest_common.sh@1451 -- # return 0 00:33:29.565 05:21:07 -- spdk/autotest.sh@385 -- # timing_exit post_cleanup 00:33:29.565 05:21:07 -- common/autotest_common.sh@730 -- # xtrace_disable 00:33:29.565 05:21:07 -- common/autotest_common.sh@10 -- # set +x 00:33:29.565 05:21:07 -- spdk/autotest.sh@387 -- # timing_exit autotest 00:33:29.565 05:21:07 -- common/autotest_common.sh@730 -- # xtrace_disable 00:33:29.565 05:21:07 -- common/autotest_common.sh@10 -- # set +x 00:33:29.565 05:21:07 -- spdk/autotest.sh@388 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:33:29.565 05:21:07 -- spdk/autotest.sh@390 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:33:29.565 05:21:07 -- spdk/autotest.sh@390 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:33:29.565 05:21:07 -- spdk/autotest.sh@392 -- # [[ y == y ]] 00:33:29.565 05:21:07 -- spdk/autotest.sh@394 -- # hostname 00:33:29.565 05:21:07 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:33:29.826 geninfo: WARNING: invalid characters removed from testname! 
00:33:56.500 05:21:31 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:33:57.445 05:21:35 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:33:59.990 05:21:37 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:34:01.369 05:21:39 -- spdk/autotest.sh@401 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:34:03.923 05:21:42 -- spdk/autotest.sh@402 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:34:07.229 05:21:44 -- spdk/autotest.sh@403 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:34:09.144 05:21:47 -- spdk/autotest.sh@404 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:34:09.144 05:21:47 -- common/autotest_common.sh@1680 -- $ [[ y == y ]] 00:34:09.144 05:21:47 -- common/autotest_common.sh@1681 -- $ awk '{print $NF}' 00:34:09.144 05:21:47 -- common/autotest_common.sh@1681 -- $ lcov --version 00:34:09.144 05:21:47 -- common/autotest_common.sh@1681 -- $ lt 1.15 2 00:34:09.144 05:21:47 -- scripts/common.sh@373 -- $ cmp_versions 1.15 '<' 2 00:34:09.144 05:21:47 -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:34:09.144 05:21:47 -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:34:09.144 05:21:47 -- scripts/common.sh@336 -- $ IFS=.-: 00:34:09.144 05:21:47 -- scripts/common.sh@336 -- $ read -ra ver1 00:34:09.144 05:21:47 -- scripts/common.sh@337 -- $ IFS=.-: 00:34:09.144 05:21:47 -- scripts/common.sh@337 -- $ read -ra ver2 00:34:09.144 05:21:47 -- scripts/common.sh@338 -- $ local 'op=<' 00:34:09.144 05:21:47 -- scripts/common.sh@340 -- $ ver1_l=2 00:34:09.144 05:21:47 -- scripts/common.sh@341 -- $ ver2_l=1 00:34:09.144 05:21:47 -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 
v 00:34:09.144 05:21:47 -- scripts/common.sh@344 -- $ case "$op" in 00:34:09.144 05:21:47 -- scripts/common.sh@345 -- $ : 1 00:34:09.144 05:21:47 -- scripts/common.sh@364 -- $ (( v = 0 )) 00:34:09.144 05:21:47 -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:34:09.144 05:21:47 -- scripts/common.sh@365 -- $ decimal 1 00:34:09.144 05:21:47 -- scripts/common.sh@353 -- $ local d=1 00:34:09.144 05:21:47 -- scripts/common.sh@354 -- $ [[ 1 =~ ^[0-9]+$ ]] 00:34:09.144 05:21:47 -- scripts/common.sh@355 -- $ echo 1 00:34:09.144 05:21:47 -- scripts/common.sh@365 -- $ ver1[v]=1 00:34:09.144 05:21:47 -- scripts/common.sh@366 -- $ decimal 2 00:34:09.404 05:21:47 -- scripts/common.sh@353 -- $ local d=2 00:34:09.404 05:21:47 -- scripts/common.sh@354 -- $ [[ 2 =~ ^[0-9]+$ ]] 00:34:09.404 05:21:47 -- scripts/common.sh@355 -- $ echo 2 00:34:09.404 05:21:47 -- scripts/common.sh@366 -- $ ver2[v]=2 00:34:09.404 05:21:47 -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:34:09.404 05:21:47 -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] )) 00:34:09.404 05:21:47 -- scripts/common.sh@368 -- $ return 0 00:34:09.404 05:21:47 -- common/autotest_common.sh@1682 -- $ lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:09.404 05:21:47 -- common/autotest_common.sh@1694 -- $ export 'LCOV_OPTS= 00:34:09.404 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:09.404 --rc genhtml_branch_coverage=1 00:34:09.404 --rc genhtml_function_coverage=1 00:34:09.404 --rc genhtml_legend=1 00:34:09.404 --rc geninfo_all_blocks=1 00:34:09.404 --rc geninfo_unexecuted_blocks=1 00:34:09.404 00:34:09.404 ' 00:34:09.404 05:21:47 -- common/autotest_common.sh@1694 -- $ LCOV_OPTS=' 00:34:09.404 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:09.404 --rc genhtml_branch_coverage=1 00:34:09.404 --rc genhtml_function_coverage=1 00:34:09.404 --rc genhtml_legend=1 00:34:09.404 --rc geninfo_all_blocks=1 00:34:09.404 --rc geninfo_unexecuted_blocks=1 00:34:09.404 00:34:09.404 ' 00:34:09.404 05:21:47 -- common/autotest_common.sh@1695 -- $ export 'LCOV=lcov 00:34:09.404 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:09.404 --rc genhtml_branch_coverage=1 00:34:09.404 --rc genhtml_function_coverage=1 00:34:09.404 --rc genhtml_legend=1 00:34:09.404 --rc geninfo_all_blocks=1 00:34:09.404 --rc geninfo_unexecuted_blocks=1 00:34:09.404 00:34:09.404 ' 00:34:09.404 05:21:47 -- common/autotest_common.sh@1695 -- $ LCOV='lcov 00:34:09.404 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:09.404 --rc genhtml_branch_coverage=1 00:34:09.404 --rc genhtml_function_coverage=1 00:34:09.404 --rc genhtml_legend=1 00:34:09.404 --rc geninfo_all_blocks=1 00:34:09.404 --rc geninfo_unexecuted_blocks=1 00:34:09.404 00:34:09.404 ' 00:34:09.404 05:21:47 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:34:09.404 05:21:47 -- scripts/common.sh@15 -- $ shopt -s extglob 00:34:09.404 05:21:47 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:34:09.404 05:21:47 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:09.404 05:21:47 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:09.404 05:21:47 -- paths/export.sh@2 -- $ 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:09.404 05:21:47 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:09.404 05:21:47 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:09.404 05:21:47 -- paths/export.sh@5 -- $ export PATH 00:34:09.404 05:21:47 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:09.404 05:21:47 -- common/autobuild_common.sh@478 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:34:09.404 05:21:47 -- common/autobuild_common.sh@479 -- $ date +%s 00:34:09.404 05:21:47 -- common/autobuild_common.sh@479 -- $ mktemp -dt spdk_1733462507.XXXXXX 00:34:09.404 05:21:47 -- common/autobuild_common.sh@479 -- $ SPDK_WORKSPACE=/tmp/spdk_1733462507.0s9pJW 00:34:09.404 05:21:47 -- common/autobuild_common.sh@481 -- $ [[ -n '' ]] 00:34:09.404 05:21:47 -- common/autobuild_common.sh@485 -- $ '[' -n v22.11.4 ']' 00:34:09.404 05:21:47 -- common/autobuild_common.sh@486 -- $ dirname /home/vagrant/spdk_repo/dpdk/build 00:34:09.404 05:21:47 -- common/autobuild_common.sh@486 -- $ scanbuild_exclude=' --exclude /home/vagrant/spdk_repo/dpdk' 00:34:09.404 05:21:47 -- common/autobuild_common.sh@492 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:34:09.404 05:21:47 -- common/autobuild_common.sh@494 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/dpdk --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:34:09.404 05:21:47 -- common/autobuild_common.sh@495 -- $ get_config_params 00:34:09.404 05:21:47 -- common/autotest_common.sh@407 -- $ xtrace_disable 00:34:09.404 05:21:47 -- common/autotest_common.sh@10 -- $ set +x 00:34:09.404 05:21:47 -- common/autobuild_common.sh@495 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-dpdk=/home/vagrant/spdk_repo/dpdk/build --with-xnvme' 00:34:09.404 05:21:47 -- common/autobuild_common.sh@497 -- $ start_monitor_resources 00:34:09.404 05:21:47 -- pm/common@17 -- $ local monitor 00:34:09.404 05:21:47 -- 
00:34:09.404 05:21:47 -- common/autobuild_common.sh@478 -- $ out=/home/vagrant/spdk_repo/spdk/../output
00:34:09.404 05:21:47 -- common/autobuild_common.sh@479 -- $ date +%s
00:34:09.404 05:21:47 -- common/autobuild_common.sh@479 -- $ mktemp -dt spdk_1733462507.XXXXXX
00:34:09.404 05:21:47 -- common/autobuild_common.sh@479 -- $ SPDK_WORKSPACE=/tmp/spdk_1733462507.0s9pJW
00:34:09.404 05:21:47 -- common/autobuild_common.sh@481 -- $ [[ -n '' ]]
00:34:09.404 05:21:47 -- common/autobuild_common.sh@485 -- $ '[' -n v22.11.4 ']'
00:34:09.404 05:21:47 -- common/autobuild_common.sh@486 -- $ dirname /home/vagrant/spdk_repo/dpdk/build
00:34:09.404 05:21:47 -- common/autobuild_common.sh@486 -- $ scanbuild_exclude=' --exclude /home/vagrant/spdk_repo/dpdk'
00:34:09.404 05:21:47 -- common/autobuild_common.sh@492 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp'
00:34:09.404 05:21:47 -- common/autobuild_common.sh@494 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/dpdk --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs'
00:34:09.404 05:21:47 -- common/autobuild_common.sh@495 -- $ get_config_params
00:34:09.404 05:21:47 -- common/autotest_common.sh@407 -- $ xtrace_disable
00:34:09.404 05:21:47 -- common/autotest_common.sh@10 -- $ set +x
00:34:09.404 05:21:47 -- common/autobuild_common.sh@495 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-dpdk=/home/vagrant/spdk_repo/dpdk/build --with-xnvme'
00:34:09.404 05:21:47 -- common/autobuild_common.sh@497 -- $ start_monitor_resources
00:34:09.404 05:21:47 -- pm/common@17 -- $ local monitor
00:34:09.404 05:21:47 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:34:09.404 05:21:47 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:34:09.404 05:21:47 -- pm/common@25 -- $ sleep 1
00:34:09.404 05:21:47 -- pm/common@21 -- $ date +%s
00:34:09.404 05:21:47 -- pm/common@21 -- $ date +%s
00:34:09.404 05:21:47 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1733462507
00:34:09.404 05:21:47 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1733462507
00:34:09.404 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1733462507_collect-vmstat.pm.log
00:34:09.404 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1733462507_collect-cpu-load.pm.log
00:34:10.360 05:21:48 -- common/autobuild_common.sh@498 -- $ trap stop_monitor_resources EXIT
00:34:10.360 05:21:48 -- spdk/autopackage.sh@10 -- $ [[ 0 -eq 1 ]]
00:34:10.360 05:21:48 -- spdk/autopackage.sh@14 -- $ timing_finish
00:34:10.360 05:21:48 -- common/autotest_common.sh@736 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:34:10.360 05:21:48 -- common/autotest_common.sh@737 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]]
00:34:10.360 05:21:48 -- common/autotest_common.sh@740 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt
00:34:10.360 05:21:48 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources
00:34:10.360 05:21:48 -- pm/common@29 -- $ signal_monitor_resources TERM
00:34:10.360 05:21:48 -- pm/common@40 -- $ local monitor pid pids signal=TERM
00:34:10.360 05:21:48 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:34:10.360 05:21:48 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]]
00:34:10.360 05:21:48 -- pm/common@44 -- $ pid=97707
00:34:10.360 05:21:48 -- pm/common@50 -- $ kill -TERM 97707
00:34:10.360 05:21:48 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:34:10.360 05:21:48 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]]
00:34:10.360 05:21:48 -- pm/common@44 -- $ pid=97708
00:34:10.360 05:21:48 -- pm/common@50 -- $ kill -TERM 97708
00:34:10.360 + [[ -n 5768 ]]
00:34:10.360 + sudo kill 5768
00:34:10.371 [Pipeline] }
00:34:10.387 [Pipeline] // timeout
00:34:10.394 [Pipeline] }
00:34:10.408 [Pipeline] // stage
00:34:10.414 [Pipeline] }
00:34:10.428 [Pipeline] // catchError
00:34:10.438 [Pipeline] stage
00:34:10.440 [Pipeline] { (Stop VM)
00:34:10.454 [Pipeline] sh
00:34:10.741 + vagrant halt
00:34:13.287 ==> default: Halting domain...
00:34:18.586 [Pipeline] sh
00:34:18.867 + vagrant destroy -f
00:34:21.406 ==> default: Removing domain...
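
The stop_monitor_resources sequence traced earlier in this block (pm/common@29 through @50) is a standard pidfile teardown: each collector wrote its PID under the power/ output directory at startup, and the EXIT trap sends TERM to whatever those files name (PIDs 97707 and 97708 here). A sketch of the pattern follows; the MONITOR_RESOURCES list and pidfile layout are inferred from the log, and the liveness check is an added safeguard, not the verbatim pm/common code:

    # Sketch of pidfile-based monitor teardown, assuming the monitor names
    # and power/ directory visible in the log above.
    MONITOR_RESOURCES=(collect-cpu-load collect-vmstat)
    power_dir=/home/vagrant/spdk_repo/spdk/../output/power

    signal_monitor_resources() {
        local signal=$1 monitor pid
        for monitor in "${MONITOR_RESOURCES[@]}"; do
            local pidfile=$power_dir/$monitor.pid
            [[ -e $pidfile ]] || continue      # collector never started
            pid=$(<"$pidfile")
            # Signal only a still-running process; ignore stale pidfiles.
            kill -0 "$pid" 2>/dev/null && kill "-$signal" "$pid"
            rm -f "$pidfile"
        done
    }

    signal_monitor_resources TERM   # as run by the EXIT trap above
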
00:34:21.682 [Pipeline] sh
00:34:21.969 + mv output /var/jenkins/workspace/nvme-vg-autotest/output
00:34:21.980 [Pipeline] }
00:34:21.996 [Pipeline] // stage
00:34:22.003 [Pipeline] }
00:34:22.018 [Pipeline] // dir
00:34:22.025 [Pipeline] }
00:34:22.042 [Pipeline] // wrap
00:34:22.050 [Pipeline] }
00:34:22.063 [Pipeline] // catchError
00:34:22.073 [Pipeline] stage
00:34:22.076 [Pipeline] { (Epilogue)
00:34:22.092 [Pipeline] sh
00:34:22.381 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:34:27.670 [Pipeline] catchError
00:34:27.672 [Pipeline] {
00:34:27.684 [Pipeline] sh
00:34:27.969 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:34:27.969 Artifacts sizes are good
00:34:28.007 [Pipeline] }
00:34:28.021 [Pipeline] // catchError
00:34:28.030 [Pipeline] archiveArtifacts
00:34:28.036 Archiving artifacts
00:34:28.160 [Pipeline] cleanWs
00:34:28.182 [WS-CLEANUP] Deleting project workspace...
00:34:28.182 [WS-CLEANUP] Deferred wipeout is used...
00:34:28.207 [WS-CLEANUP] done
00:34:28.210 [Pipeline] }
00:34:28.228 [Pipeline] // stage
00:34:28.234 [Pipeline] }
00:34:28.250 [Pipeline] // node
00:34:28.257 [Pipeline] End of Pipeline
00:34:28.299 Finished: SUCCESS