00:00:00.001 Started by upstream project "autotest-spdk-master-vs-dpdk-v22.11" build number 2435
00:00:00.001 originally caused by:
00:00:00.001 Started by upstream project "nightly-trigger" build number 3700
00:00:00.001 originally caused by:
00:00:00.001 Started by timer
00:00:00.108 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy
00:00:00.109 The recommended git tool is: git
00:00:00.109 using credential 00000000-0000-0000-0000-000000000002
00:00:00.111 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.167 Fetching changes from the remote Git repository
00:00:00.173 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.228 Using shallow fetch with depth 1
00:00:00.228 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.228 > git --version # timeout=10
00:00:00.258 > git --version # 'git version 2.39.2'
00:00:00.258 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.289 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.289 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:06.596 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:06.610 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:06.624 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD)
00:00:06.624 > git config core.sparsecheckout # timeout=10
00:00:06.636 > git read-tree -mu HEAD # timeout=10
00:00:06.654 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5
00:00:06.677 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag"
00:00:06.677 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10
00:00:06.770 [Pipeline] Start of Pipeline
00:00:06.784 [Pipeline] library
00:00:06.786 Loading library shm_lib@master
00:00:06.786 Library shm_lib@master is cached. Copying from home.
00:00:06.803 [Pipeline] node
00:00:06.816 Running on VM-host-SM38 in /var/jenkins/workspace/nvme-vg-autotest
00:00:06.818 [Pipeline] {
00:00:06.830 [Pipeline] catchError
00:00:06.832 [Pipeline] {
00:00:06.845 [Pipeline] wrap
00:00:06.854 [Pipeline] {
00:00:06.862 [Pipeline] stage
00:00:06.864 [Pipeline] { (Prologue)
00:00:06.883 [Pipeline] echo
00:00:06.884 Node: VM-host-SM38
00:00:06.891 [Pipeline] cleanWs
00:00:06.903 [WS-CLEANUP] Deleting project workspace...
00:00:06.903 [WS-CLEANUP] Deferred wipeout is used...
00:00:06.911 [WS-CLEANUP] done
00:00:07.127 [Pipeline] setCustomBuildProperty
00:00:07.227 [Pipeline] httpRequest
00:00:07.987 [Pipeline] echo
00:00:07.989 Sorcerer 10.211.164.20 is alive
00:00:07.998 [Pipeline] retry
00:00:08.000 [Pipeline] {
00:00:08.013 [Pipeline] httpRequest
00:00:08.018 HttpMethod: GET
00:00:08.018 URL: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:08.019 Sending request to url: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:08.020 Response Code: HTTP/1.1 200 OK
00:00:08.021 Success: Status code 200 is in the accepted range: 200,404
00:00:08.022 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:09.029 [Pipeline] }
00:00:09.040 [Pipeline] // retry
00:00:09.046 [Pipeline] sh
00:00:09.330 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:09.343 [Pipeline] httpRequest
00:00:09.654 [Pipeline] echo
00:00:09.656 Sorcerer 10.211.164.20 is alive
00:00:09.667 [Pipeline] retry
00:00:09.669 [Pipeline] {
00:00:09.682 [Pipeline] httpRequest
00:00:09.687 HttpMethod: GET
00:00:09.688 URL: http://10.211.164.20/packages/spdk_8d3947977640da882a3cdcc21a7575115b7e7787.tar.gz
00:00:09.688 Sending request to url: http://10.211.164.20/packages/spdk_8d3947977640da882a3cdcc21a7575115b7e7787.tar.gz
00:00:09.701 Response Code: HTTP/1.1 200 OK
00:00:09.701 Success: Status code 200 is in the accepted range: 200,404
00:00:09.702 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/spdk_8d3947977640da882a3cdcc21a7575115b7e7787.tar.gz
00:01:17.638 [Pipeline] }
00:01:17.663 [Pipeline] // retry
00:01:17.673 [Pipeline] sh
00:01:17.967 + tar --no-same-owner -xf spdk_8d3947977640da882a3cdcc21a7575115b7e7787.tar.gz
00:01:20.516 [Pipeline] sh
00:01:20.829 + git -C spdk log --oneline -n5
00:01:20.829 8d3947977 spdk_dd: simplify `io_uring_peek_cqe` return code processing
00:01:20.829 77ee034c7 bdev/nvme: Add lock to unprotected operations around attach controller
00:01:20.829 48454bb28 bdev/nvme: Add lock to unprotected operations around detach controller
00:01:20.829 4b59d7893 bdev/nvme: Use nbdev always for local nvme_bdev pointer variables
00:01:20.829 e56f1618f lib/ftl: Add explicit support for write unit sizes of base device
00:01:20.851 [Pipeline] withCredentials
00:01:20.862 > git --version # timeout=10
00:01:20.878 > git --version # 'git version 2.39.2'
00:01:20.895 Masking supported pattern matches of $GIT_PASSWORD or $GIT_ASKPASS
00:01:20.897 [Pipeline] {
00:01:20.906 [Pipeline] retry
00:01:20.908 [Pipeline] {
00:01:20.920 [Pipeline] sh
00:01:21.202 + git ls-remote http://dpdk.org/git/dpdk-stable v22.11.4
00:01:21.216 [Pipeline] }
00:01:21.238 [Pipeline] // retry
00:01:21.245 [Pipeline] }
00:01:21.262 [Pipeline] // withCredentials
00:01:21.273 [Pipeline] httpRequest
00:01:21.662 [Pipeline] echo
00:01:21.664 Sorcerer 10.211.164.20 is alive
00:01:21.675 [Pipeline] retry
00:01:21.677 [Pipeline] {
00:01:21.691 [Pipeline] httpRequest
00:01:21.697 HttpMethod: GET
00:01:21.697 URL: http://10.211.164.20/packages/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz
00:01:21.698 Sending request to url: http://10.211.164.20/packages/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz
00:01:21.708 Response Code: HTTP/1.1 200 OK
00:01:21.708 Success: Status code 200 is in the accepted range: 200,404
00:01:21.709 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz
00:01:41.732 [Pipeline] }
00:01:41.748 [Pipeline] // retry
00:01:41.755 [Pipeline] sh
00:01:42.040 + tar --no-same-owner -xf dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz
00:01:43.431 [Pipeline] sh
00:01:43.709 + git -C dpdk log --oneline -n5
00:01:43.709 caf0f5d395 version: 22.11.4
00:01:43.709 7d6f1cc05f Revert "net/iavf: fix abnormal disable HW interrupt"
00:01:43.709 dc9c799c7d vhost: fix missing spinlock unlock
00:01:43.709 4307659a90 net/mlx5: fix LACP redirection in Rx domain
00:01:43.709 6ef77f2a5e net/gve: fix RX buffer size alignment
00:01:43.728 [Pipeline] writeFile
00:01:43.744 [Pipeline] sh
00:01:44.023 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh
00:01:44.033 [Pipeline] sh
00:01:44.310 + cat autorun-spdk.conf
00:01:44.310 SPDK_RUN_FUNCTIONAL_TEST=1
00:01:44.310 SPDK_TEST_NVME=1
00:01:44.310 SPDK_TEST_FTL=1
00:01:44.310 SPDK_TEST_ISAL=1
00:01:44.310 SPDK_RUN_ASAN=1
00:01:44.310 SPDK_RUN_UBSAN=1
00:01:44.310 SPDK_TEST_XNVME=1
00:01:44.310 SPDK_TEST_NVME_FDP=1
00:01:44.310 SPDK_TEST_NATIVE_DPDK=v22.11.4
00:01:44.310 SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build
00:01:44.310 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:01:44.316 RUN_NIGHTLY=1
00:01:44.318 [Pipeline] }
00:01:44.332 [Pipeline] // stage
00:01:44.348 [Pipeline] stage
00:01:44.350 [Pipeline] { (Run VM)
00:01:44.362 [Pipeline] sh
00:01:44.641 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh
00:01:44.641 + echo 'Start stage prepare_nvme.sh'
00:01:44.641 Start stage prepare_nvme.sh
00:01:44.641 + [[ -n 4 ]]
00:01:44.641 + disk_prefix=ex4
00:01:44.641 + [[ -n /var/jenkins/workspace/nvme-vg-autotest ]]
00:01:44.641 + [[ -e /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf ]]
00:01:44.641 + source /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf
00:01:44.641 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:44.641 ++ SPDK_TEST_NVME=1
00:01:44.641 ++ SPDK_TEST_FTL=1
00:01:44.641 ++ SPDK_TEST_ISAL=1
00:01:44.641 ++ SPDK_RUN_ASAN=1
00:01:44.641 ++ SPDK_RUN_UBSAN=1
00:01:44.641 ++ SPDK_TEST_XNVME=1
00:01:44.641 ++ SPDK_TEST_NVME_FDP=1
00:01:44.641 ++ SPDK_TEST_NATIVE_DPDK=v22.11.4
00:01:44.641 ++ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build
00:01:44.641 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:01:44.641 ++ RUN_NIGHTLY=1
00:01:44.641 + cd /var/jenkins/workspace/nvme-vg-autotest
00:01:44.641 + nvme_files=()
00:01:44.641 + declare -A nvme_files
00:01:44.641 + backend_dir=/var/lib/libvirt/images/backends
00:01:44.641 + nvme_files['nvme.img']=5G
00:01:44.641 + nvme_files['nvme-cmb.img']=5G
00:01:44.641 + nvme_files['nvme-multi0.img']=4G
00:01:44.641 + nvme_files['nvme-multi1.img']=4G
00:01:44.641 + nvme_files['nvme-multi2.img']=4G
00:01:44.641 + nvme_files['nvme-openstack.img']=8G
00:01:44.641 + nvme_files['nvme-zns.img']=5G
00:01:44.641 + (( SPDK_TEST_NVME_PMR == 1 ))
00:01:44.641 + (( SPDK_TEST_FTL == 1 ))
00:01:44.641 + nvme_files["nvme-ftl.img"]=6G
00:01:44.641 + (( SPDK_TEST_NVME_FDP == 1 ))
00:01:44.641 + nvme_files["nvme-fdp.img"]=1G
00:01:44.641 + [[ ! -d /var/lib/libvirt/images/backends ]]
00:01:44.641 + for nvme in "${!nvme_files[@]}"
00:01:44.641 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-multi2.img -s 4G
00:01:44.641 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc
00:01:44.641 + for nvme in "${!nvme_files[@]}"
00:01:44.641 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-ftl.img -s 6G
00:01:44.641 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-ftl.img', fmt=raw size=6442450944 preallocation=falloc
00:01:44.641 + for nvme in "${!nvme_files[@]}"
00:01:44.641 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-cmb.img -s 5G
00:01:44.641 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc
00:01:44.641 + for nvme in "${!nvme_files[@]}"
00:01:44.641 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-openstack.img -s 8G
00:01:44.641 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc
00:01:44.641 + for nvme in "${!nvme_files[@]}"
00:01:44.641 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-zns.img -s 5G
00:01:44.641 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc
00:01:44.641 + for nvme in "${!nvme_files[@]}"
00:01:44.641 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-multi1.img -s 4G
00:01:44.641 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc
00:01:44.641 + for nvme in "${!nvme_files[@]}"
00:01:44.641 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-multi0.img -s 4G
00:01:44.900 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc
00:01:44.900 + for nvme in "${!nvme_files[@]}"
00:01:44.900 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-fdp.img -s 1G
00:01:44.900 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-fdp.img', fmt=raw size=1073741824 preallocation=falloc
00:01:44.900 + for nvme in "${!nvme_files[@]}"
00:01:44.900 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme.img -s 5G
00:01:44.900 Formatting '/var/lib/libvirt/images/backends/ex4-nvme.img', fmt=raw size=5368709120 preallocation=falloc
00:01:44.900 ++ sudo grep -rl ex4-nvme.img /etc/libvirt/qemu
00:01:44.900 + echo 'End stage prepare_nvme.sh'
00:01:44.900 End stage prepare_nvme.sh
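Note on the stage above: the "Formatting ... fmt=raw ... preallocation=falloc" lines are qemu-img output, one raw, falloc-preallocated backing file per emulated NVMe drive, with sizes taken from the nvme_files associative array. A minimal sketch of what each create_nvme_img.sh call boils down to, assuming the script wraps qemu-img (its internals are not shown in this log):

    # Hypothetical equivalent of: create_nvme_img.sh -n <file> -s <size>
    # fmt=raw and preallocation=falloc match the Formatting lines above.
    backend_dir=/var/lib/libvirt/images/backends
    sudo qemu-img create -f raw -o preallocation=falloc "$backend_dir/ex4-nvme.img" 5G

The array keys for nvme-ftl.img and nvme-fdp.img are only added when SPDK_TEST_FTL and SPDK_TEST_NVME_FDP are set, which is why those two extra images appear in this run.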
00:01:44.912 [Pipeline] sh
00:01:45.192 + DISTRO=fedora39
00:01:45.192 + CPUS=10
00:01:45.192 + RAM=12288
00:01:45.192 + jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh
00:01:45.192 Setup: -n 10 -s 12288 -x -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex4-nvme-ftl.img,nvme,,,,,true -b /var/lib/libvirt/images/backends/ex4-nvme.img -b /var/lib/libvirt/images/backends/ex4-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex4-nvme-multi1.img:/var/lib/libvirt/images/backends/ex4-nvme-multi2.img -b /var/lib/libvirt/images/backends/ex4-nvme-fdp.img,nvme,,,,,,on -H -a -v -f fedora39
00:01:45.192
00:01:45.192 DIR=/var/jenkins/workspace/nvme-vg-autotest/spdk/scripts/vagrant
00:01:45.192 SPDK_DIR=/var/jenkins/workspace/nvme-vg-autotest/spdk
00:01:45.192 VAGRANT_TARGET=/var/jenkins/workspace/nvme-vg-autotest
00:01:45.192 HELP=0
00:01:45.192 DRY_RUN=0
00:01:45.192 NVME_FILE=/var/lib/libvirt/images/backends/ex4-nvme-ftl.img,/var/lib/libvirt/images/backends/ex4-nvme.img,/var/lib/libvirt/images/backends/ex4-nvme-multi0.img,/var/lib/libvirt/images/backends/ex4-nvme-fdp.img,
00:01:45.192 NVME_DISKS_TYPE=nvme,nvme,nvme,nvme,
00:01:45.192 NVME_AUTO_CREATE=0
00:01:45.192 NVME_DISKS_NAMESPACES=,,/var/lib/libvirt/images/backends/ex4-nvme-multi1.img:/var/lib/libvirt/images/backends/ex4-nvme-multi2.img,,
00:01:45.192 NVME_CMB=,,,,
00:01:45.192 NVME_PMR=,,,,
00:01:45.192 NVME_ZNS=,,,,
00:01:45.192 NVME_MS=true,,,,
00:01:45.192 NVME_FDP=,,,on,
00:01:45.192 SPDK_VAGRANT_DISTRO=fedora39
00:01:45.192 SPDK_VAGRANT_VMCPU=10
00:01:45.192 SPDK_VAGRANT_VMRAM=12288
00:01:45.192 SPDK_VAGRANT_PROVIDER=libvirt
00:01:45.192 SPDK_VAGRANT_HTTP_PROXY=
00:01:45.192 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64
00:01:45.192 SPDK_OPENSTACK_NETWORK=0
00:01:45.192 VAGRANT_PACKAGE_BOX=0
00:01:45.192 VAGRANTFILE=/var/jenkins/workspace/nvme-vg-autotest/spdk/scripts/vagrant/Vagrantfile
00:01:45.192 FORCE_DISTRO=true
00:01:45.192 VAGRANT_BOX_VERSION=
00:01:45.192 EXTRA_VAGRANTFILES=
00:01:45.192 NIC_MODEL=e1000
00:01:45.192
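Each NVME_* variable in the dump above carries one comma-terminated field per emulated drive (four drives here): drive 0 gets metadata (NVME_MS=true,,,,), drive 2 gets the two extra namespace images, and drive 3 gets FDP (NVME_FDP=,,,on,). A small sketch of splitting such per-drive fields in bash; the variable names come from the log, but whether the Vagrant scripts consume them exactly this way is an assumption:

    # Assumed parsing pattern: field i describes drive i; an empty field means "feature off".
    NVME_DISKS_TYPE=nvme,nvme,nvme,nvme,
    NVME_FDP=,,,on,
    IFS=',' read -ra disk_types <<< "$NVME_DISKS_TYPE"
    IFS=',' read -ra fdp_flags <<< "$NVME_FDP"
    for i in "${!disk_types[@]}"; do
        echo "drive $i: type=${disk_types[i]:-none} fdp=${fdp_flags[i]:-off}"
    done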
00:01:45.192 mkdir: created directory '/var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt'
00:01:45.192 /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt /var/jenkins/workspace/nvme-vg-autotest
00:01:47.754 Bringing machine 'default' up with 'libvirt' provider...
00:01:48.013 ==> default: Creating image (snapshot of base box volume).
00:01:48.273 ==> default: Creating domain with the following settings...
00:01:48.273 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1733424725_51b90b8b4537f28ba6ec
00:01:48.273 ==> default: -- Domain type: kvm
00:01:48.273 ==> default: -- Cpus: 10
00:01:48.273 ==> default: -- Feature: acpi
00:01:48.273 ==> default: -- Feature: apic
00:01:48.273 ==> default: -- Feature: pae
00:01:48.273 ==> default: -- Memory: 12288M
00:01:48.273 ==> default: -- Memory Backing: hugepages:
00:01:48.273 ==> default: -- Management MAC:
00:01:48.273 ==> default: -- Loader:
00:01:48.273 ==> default: -- Nvram:
00:01:48.273 ==> default: -- Base box: spdk/fedora39
00:01:48.273 ==> default: -- Storage pool: default
00:01:48.273 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1733424725_51b90b8b4537f28ba6ec.img (20G)
00:01:48.273 ==> default: -- Volume Cache: default
00:01:48.273 ==> default: -- Kernel:
00:01:48.273 ==> default: -- Initrd:
00:01:48.273 ==> default: -- Graphics Type: vnc
00:01:48.273 ==> default: -- Graphics Port: -1
00:01:48.273 ==> default: -- Graphics IP: 127.0.0.1
00:01:48.273 ==> default: -- Graphics Password: Not defined
00:01:48.273 ==> default: -- Video Type: cirrus
00:01:48.273 ==> default: -- Video VRAM: 9216
00:01:48.273 ==> default: -- Sound Type:
00:01:48.273 ==> default: -- Keymap: en-us
00:01:48.273 ==> default: -- TPM Path:
00:01:48.273 ==> default: -- INPUT: type=mouse, bus=ps2
00:01:48.273 ==> default: -- Command line args:
00:01:48.273 ==> default: -> value=-device,
00:01:48.273 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10,
00:01:48.273 ==> default: -> value=-drive,
00:01:48.273 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex4-nvme-ftl.img,if=none,id=nvme-0-drive0,
00:01:48.273 ==> default: -> value=-device,
00:01:48.273 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,ms=64,
00:01:48.273 ==> default: -> value=-device,
00:01:48.273 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11,
00:01:48.273 ==> default: -> value=-drive,
00:01:48.273 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex4-nvme.img,if=none,id=nvme-1-drive0,
00:01:48.273 ==> default: -> value=-device,
00:01:48.273 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:01:48.273 ==> default: -> value=-device,
00:01:48.273 ==> default: -> value=nvme,id=nvme-2,serial=12342,addr=0x12,
00:01:48.273 ==> default: -> value=-drive,
00:01:48.273 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex4-nvme-multi0.img,if=none,id=nvme-2-drive0,
00:01:48.273 ==> default: -> value=-device,
00:01:48.273 ==> default: -> value=nvme-ns,drive=nvme-2-drive0,bus=nvme-2,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:01:48.273 ==> default: -> value=-drive,
00:01:48.273 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex4-nvme-multi1.img,if=none,id=nvme-2-drive1,
00:01:48.273 ==> default: -> value=-device,
00:01:48.273 ==> default: -> value=nvme-ns,drive=nvme-2-drive1,bus=nvme-2,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:01:48.273 ==> default: -> value=-drive,
00:01:48.273 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex4-nvme-multi2.img,if=none,id=nvme-2-drive2,
00:01:48.273 ==> default: -> value=-device,
00:01:48.273 ==> default: -> value=nvme-ns,drive=nvme-2-drive2,bus=nvme-2,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:01:48.273 ==> default: -> value=-device,
00:01:48.273 ==> default: -> value=nvme-subsys,id=fdp-subsys3,fdp=on,fdp.runs=96M,fdp.nrg=2,fdp.nruh=8,
00:01:48.273 ==> default: -> value=-device,
00:01:48.273 ==> default: -> value=nvme,id=nvme-3,serial=12343,addr=0x13,subsys=fdp-subsys3,
00:01:48.273 ==> default: -> value=-drive,
00:01:48.273 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex4-nvme-fdp.img,if=none,id=nvme-3-drive0,
00:01:48.273 ==> default: -> value=-device,
00:01:48.273 ==> default: -> value=nvme-ns,drive=nvme-3-drive0,bus=nvme-3,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
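Two patterns are visible in the argument list above: the first three NVMe controllers are standalone -device nvme entries with one or more nvme-ns namespaces attached (controller 0 also carries ms=64 metadata for the FTL tests, controller 2 carries three namespaces), while the fourth controller is bound to an explicit nvme-subsys device so Flexible Data Placement can be enabled at the subsystem level (fdp=on, with fdp.runs, fdp.nrg and fdp.nruh sizing the reclaim units, reclaim groups and reclaim unit handles). Reduced to a direct QEMU invocation, the FDP part of the topology looks roughly like this; paths and values are taken from the log, and the surrounding machine options are omitted:

    # Sketch of the FDP controller wiring from the args above.
    qemu-system-x86_64 \
        -device nvme-subsys,id=fdp-subsys3,fdp=on,fdp.runs=96M,fdp.nrg=2,fdp.nruh=8 \
        -device nvme,id=nvme-3,serial=12343,addr=0x13,subsys=fdp-subsys3 \
        -drive format=raw,file=/var/lib/libvirt/images/backends/ex4-nvme-fdp.img,if=none,id=nvme-3-drive0 \
        -device nvme-ns,drive=nvme-3-drive0,bus=nvme-3,nsid=1,logical_block_size=4096,physical_block_size=4096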
00:01:48.532 ==> default: Creating shared folders metadata...
00:01:48.532 ==> default: Starting domain.
00:01:50.453 ==> default: Waiting for domain to get an IP address...
00:02:08.606 ==> default: Waiting for SSH to become available...
00:02:08.606 ==> default: Configuring and enabling network interfaces...
00:02:11.974 default: SSH address: 192.168.121.117:22
00:02:11.974 default: SSH username: vagrant
00:02:11.974 default: SSH auth method: private key
00:02:13.891 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk
00:02:22.031 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/dpdk/ => /home/vagrant/spdk_repo/dpdk
00:02:28.612 ==> default: Mounting SSHFS shared folder...
00:02:30.003 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt/output => /home/vagrant/spdk_repo/output
00:02:30.003 ==> default: Checking Mount..
00:02:31.384 ==> default: Folder Successfully Mounted!
00:02:31.384
00:02:31.384 SUCCESS!
00:02:31.384
00:02:31.384 cd to /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt and type "vagrant ssh" to use.
00:02:31.384 Use vagrant "suspend" and vagrant "resume" to stop and start.
00:02:31.384 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt" to destroy all trace of vm.
00:02:31.384
00:02:31.394 [Pipeline] }
00:02:31.409 [Pipeline] // stage
00:02:31.418 [Pipeline] dir
00:02:31.418 Running in /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt
00:02:31.420 [Pipeline] {
00:02:31.432 [Pipeline] catchError
00:02:31.434 [Pipeline] {
00:02:31.445 [Pipeline] sh
00:02:31.729 + vagrant ssh-config --host vagrant
00:02:31.729 + sed -ne '/^Host/,$p'
00:02:31.729 + tee ssh_conf
00:02:34.279 Host vagrant
00:02:34.279 HostName 192.168.121.117
00:02:34.279 User vagrant
00:02:34.279 Port 22
00:02:34.279 UserKnownHostsFile /dev/null
00:02:34.279 StrictHostKeyChecking no
00:02:34.279 PasswordAuthentication no
00:02:34.279 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39
00:02:34.279 IdentitiesOnly yes
00:02:34.279 LogLevel FATAL
00:02:34.279 ForwardAgent yes
00:02:34.279 ForwardX11 yes
00:02:34.279
00:02:34.295 [Pipeline] withEnv
00:02:34.297 [Pipeline] {
00:02:34.310 [Pipeline] sh
00:02:34.614 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant '#!/bin/bash
00:02:34.614 source /etc/os-release
00:02:34.614 [[ -e /image.version ]] && img=$(< /image.version)
00:02:34.614 # Minimal, systemd-like check.
00:02:34.614 if [[ -e /.dockerenv ]]; then
00:02:34.614 # Clear garbage from the node'\''s name:
00:02:34.614 # agt-er_autotest_547-896 -> autotest_547-896
00:02:34.614 # $HOSTNAME is the actual container id
00:02:34.614 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_}
00:02:34.614 if grep -q "/etc/hostname" /proc/self/mountinfo; then
00:02:34.614 # We can assume this is a mount from a host where container is running,
00:02:34.614 # so fetch its hostname to easily identify the target swarm worker.
00:02:34.614 container="$(< /etc/hostname) ($agent)"
00:02:34.614 else
00:02:34.614 # Fallback
00:02:34.614 container=$agent
00:02:34.614 fi
00:02:34.614 fi
00:02:34.614 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}"
00:02:34.614 '
00:02:34.894 [Pipeline] }
00:02:34.911 [Pipeline] // withEnv
00:02:34.920 [Pipeline] setCustomBuildProperty
00:02:34.933 [Pipeline] stage
00:02:34.935 [Pipeline] { (Tests)
00:02:34.950 [Pipeline] sh
00:02:35.233 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./
00:02:35.509 [Pipeline] sh
00:02:35.792 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./
00:02:36.065 [Pipeline] timeout
00:02:36.065 Timeout set to expire in 50 min
00:02:36.067 [Pipeline] {
00:02:36.080 [Pipeline] sh
00:02:36.362 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'git -C spdk_repo/spdk reset --hard'
00:02:36.934 HEAD is now at 8d3947977 spdk_dd: simplify `io_uring_peek_cqe` return code processing
00:02:36.947 [Pipeline] sh
00:02:37.233 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'sudo chown vagrant:vagrant spdk_repo'
00:02:37.512 [Pipeline] sh
00:02:37.794 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo
00:02:38.069 [Pipeline] sh
00:02:38.352 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'JOB_BASE_NAME=nvme-vg-autotest ./autoruner.sh spdk_repo'
00:02:38.613 ++ readlink -f spdk_repo
00:02:38.613 + DIR_ROOT=/home/vagrant/spdk_repo
00:02:38.613 + [[ -n /home/vagrant/spdk_repo ]]
00:02:38.613 + DIR_SPDK=/home/vagrant/spdk_repo/spdk
00:02:38.613 + DIR_OUTPUT=/home/vagrant/spdk_repo/output
00:02:38.613 + [[ -d /home/vagrant/spdk_repo/spdk ]]
00:02:38.613 + [[ ! -d /home/vagrant/spdk_repo/output ]]
00:02:38.613 + [[ -d /home/vagrant/spdk_repo/output ]]
00:02:38.613 + [[ nvme-vg-autotest == pkgdep-* ]]
00:02:38.613 + cd /home/vagrant/spdk_repo
00:02:38.613 + source /etc/os-release
00:02:38.613 ++ NAME='Fedora Linux'
00:02:38.613 ++ VERSION='39 (Cloud Edition)'
00:02:38.613 ++ ID=fedora
00:02:38.613 ++ VERSION_ID=39
00:02:38.613 ++ VERSION_CODENAME=
00:02:38.613 ++ PLATFORM_ID=platform:f39
00:02:38.613 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)'
00:02:38.613 ++ ANSI_COLOR='0;38;2;60;110;180'
00:02:38.613 ++ LOGO=fedora-logo-icon
00:02:38.613 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39
00:02:38.613 ++ HOME_URL=https://fedoraproject.org/
00:02:38.613 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/
00:02:38.613 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:02:38.613 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:02:38.613 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:02:38.613 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39
00:02:38.613 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:02:38.613 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39
00:02:38.613 ++ SUPPORT_END=2024-11-12
00:02:38.613 ++ VARIANT='Cloud Edition'
00:02:38.613 ++ VARIANT_ID=cloud
00:02:38.613 + uname -a
00:02:38.613 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux
00:02:38.613 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status
00:02:38.872 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:02:39.132 Hugepages
00:02:39.132 node hugesize free / total
00:02:39.132 node0 1048576kB 0 / 0
00:02:39.132 node0 2048kB 0 / 0
00:02:39.132
00:02:39.132 Type BDF Vendor Device NUMA Driver Device Block devices
00:02:39.132 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda
00:02:39.132 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1
00:02:39.132 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1
00:02:39.132 NVMe 0000:00:12.0 1b36 0010 unknown nvme nvme2 nvme2n1 nvme2n2 nvme2n3
00:02:39.132 NVMe 0000:00:13.0 1b36 0010 unknown nvme nvme3 nvme3n1
00:02:39.132 + rm -f /tmp/spdk-ld-path
00:02:39.393 + source autorun-spdk.conf
00:02:39.393 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:02:39.393 ++ SPDK_TEST_NVME=1
00:02:39.393 ++ SPDK_TEST_FTL=1
00:02:39.393 ++ SPDK_TEST_ISAL=1
00:02:39.393 ++ SPDK_RUN_ASAN=1
00:02:39.393 ++ SPDK_RUN_UBSAN=1
00:02:39.393 ++ SPDK_TEST_XNVME=1
00:02:39.393 ++ SPDK_TEST_NVME_FDP=1
00:02:39.393 ++ SPDK_TEST_NATIVE_DPDK=v22.11.4
00:02:39.393 ++ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build
00:02:39.393 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:02:39.393 ++ RUN_NIGHTLY=1
00:02:39.393 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:02:39.393 + [[ -n '' ]]
00:02:39.393 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk
00:02:39.393 + for M in /var/spdk/build-*-manifest.txt
00:02:39.393 + [[ -f /var/spdk/build-kernel-manifest.txt ]]
00:02:39.393 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/
00:02:39.393 + for M in /var/spdk/build-*-manifest.txt
00:02:39.393 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:02:39.393 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/
00:02:39.393 + for M in /var/spdk/build-*-manifest.txt
00:02:39.393 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:02:39.393 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/
00:02:39.393 ++ uname
00:02:39.393 + [[ Linux == \L\i\n\u\x ]]
00:02:39.393 + sudo dmesg -T
00:02:39.393 + sudo dmesg --clear
00:02:39.393 + dmesg_pid=5759
00:02:39.393 + [[ Fedora Linux == FreeBSD ]]
00:02:39.393 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:02:39.393 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:02:39.393 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:02:39.393 + [[ -x /usr/src/fio-static/fio ]]
00:02:39.393 + sudo dmesg -Tw
00:02:39.393 + export FIO_BIN=/usr/src/fio-static/fio
00:02:39.393 + FIO_BIN=/usr/src/fio-static/fio
00:02:39.393 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]]
00:02:39.393 + [[ ! -v VFIO_QEMU_BIN ]]
00:02:39.393 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:02:39.393 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:02:39.393 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:02:39.393 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:02:39.393 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:02:39.393 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:02:39.393 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf
00:02:39.393 18:52:56 -- common/autotest_common.sh@1710 -- $ [[ n == y ]]
00:02:39.393 18:52:56 -- spdk/autorun.sh@20 -- $ source /home/vagrant/spdk_repo/autorun-spdk.conf
00:02:39.393 18:52:56 -- spdk_repo/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1
00:02:39.393 18:52:56 -- spdk_repo/autorun-spdk.conf@2 -- $ SPDK_TEST_NVME=1
00:02:39.393 18:52:56 -- spdk_repo/autorun-spdk.conf@3 -- $ SPDK_TEST_FTL=1
00:02:39.393 18:52:56 -- spdk_repo/autorun-spdk.conf@4 -- $ SPDK_TEST_ISAL=1
00:02:39.393 18:52:56 -- spdk_repo/autorun-spdk.conf@5 -- $ SPDK_RUN_ASAN=1
00:02:39.393 18:52:56 -- spdk_repo/autorun-spdk.conf@6 -- $ SPDK_RUN_UBSAN=1
00:02:39.393 18:52:56 -- spdk_repo/autorun-spdk.conf@7 -- $ SPDK_TEST_XNVME=1
00:02:39.393 18:52:56 -- spdk_repo/autorun-spdk.conf@8 -- $ SPDK_TEST_NVME_FDP=1
00:02:39.393 18:52:56 -- spdk_repo/autorun-spdk.conf@9 -- $ SPDK_TEST_NATIVE_DPDK=v22.11.4
00:02:39.393 18:52:56 -- spdk_repo/autorun-spdk.conf@10 -- $ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build
00:02:39.393 18:52:56 -- spdk_repo/autorun-spdk.conf@11 -- $ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:02:39.393 18:52:56 -- spdk_repo/autorun-spdk.conf@12 -- $ RUN_NIGHTLY=1
00:02:39.393 18:52:56 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT
00:02:39.393 18:52:56 -- spdk/autorun.sh@25 -- $ /home/vagrant/spdk_repo/spdk/autobuild.sh /home/vagrant/spdk_repo/autorun-spdk.conf
00:02:39.393 18:52:56 -- common/autotest_common.sh@1710 -- $ [[ n == y ]]
00:02:39.393 18:52:56 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:02:39.393 18:52:56 -- scripts/common.sh@15 -- $ shopt -s extglob
00:02:39.393 18:52:56 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]]
00:02:39.393 18:52:56 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:02:39.393 18:52:56 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:02:39.654 18:52:56 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:02:39.654 18:52:56 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:02:39.654 18:52:56 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:02:39.654 18:52:56 -- paths/export.sh@5 -- $ export PATH
00:02:39.654 18:52:56 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:02:39.654 18:52:56 -- common/autobuild_common.sh@492 -- $ out=/home/vagrant/spdk_repo/spdk/../output
00:02:39.654 18:52:56 -- common/autobuild_common.sh@493 -- $ date +%s
00:02:39.654 18:52:56 -- common/autobuild_common.sh@493 -- $ mktemp -dt spdk_1733424776.XXXXXX
00:02:39.654 18:52:56 -- common/autobuild_common.sh@493 -- $ SPDK_WORKSPACE=/tmp/spdk_1733424776.f9w2LR
00:02:39.654 18:52:56 -- common/autobuild_common.sh@495 -- $ [[ -n '' ]]
00:02:39.654 18:52:56 -- common/autobuild_common.sh@499 -- $ '[' -n v22.11.4 ']'
00:02:39.654 18:52:56 -- common/autobuild_common.sh@500 -- $ dirname /home/vagrant/spdk_repo/dpdk/build
00:02:39.654 18:52:56 -- common/autobuild_common.sh@500 -- $ scanbuild_exclude=' --exclude /home/vagrant/spdk_repo/dpdk'
00:02:39.654 18:52:56 -- common/autobuild_common.sh@506 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp'
00:02:39.654 18:52:56 -- common/autobuild_common.sh@508 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/dpdk --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs'
00:02:39.654 18:52:56 -- common/autobuild_common.sh@509 -- $ get_config_params
00:02:39.654 18:52:56 -- common/autotest_common.sh@409 -- $ xtrace_disable
00:02:39.654 18:52:56 -- common/autotest_common.sh@10 -- $ set +x
00:02:39.654 18:52:56 -- common/autobuild_common.sh@509 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-dpdk=/home/vagrant/spdk_repo/dpdk/build --with-xnvme'
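get_config_params above expands to the ./configure options for this job; note --with-dpdk pointing at the externally built DPDK tree from SPDK_RUN_EXTERNAL_DPDK. As a sketch, the configure step that autobuild later runs with these parameters amounts to the following (the call site itself is not shown in this excerpt):

    cd /home/vagrant/spdk_repo/spdk
    ./configure --enable-debug --enable-werror --with-rdma --with-idxd \
        --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests \
        --enable-ubsan --enable-asan --enable-coverage --with-ublk \
        --with-dpdk=/home/vagrant/spdk_repo/dpdk/build --with-xnvme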
00:02:39.654 18:52:56 -- common/autobuild_common.sh@511 -- $ start_monitor_resources
00:02:39.654 18:52:56 -- pm/common@17 -- $ local monitor
00:02:39.654 18:52:56 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:02:39.654 18:52:56 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:02:39.654 18:52:56 -- pm/common@25 -- $ sleep 1
00:02:39.654 18:52:56 -- pm/common@21 -- $ date +%s
00:02:39.654 18:52:56 -- pm/common@21 -- $ date +%s
00:02:39.654 18:52:56 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1733424776
00:02:39.654 18:52:56 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1733424776
00:02:39.654 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1733424776_collect-vmstat.pm.log
00:02:39.654 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1733424776_collect-cpu-load.pm.log
00:02:40.596 18:52:57 -- common/autobuild_common.sh@512 -- $ trap stop_monitor_resources EXIT
00:02:40.596 18:52:57 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
00:02:40.596 18:52:57 -- spdk/autobuild.sh@12 -- $ umask 022
00:02:40.596 18:52:57 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk
00:02:40.596 18:52:57 -- spdk/autobuild.sh@16 -- $ date -u
00:02:40.596 Thu Dec 5 06:52:57 PM UTC 2024
00:02:40.596 18:52:57 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:02:40.596 v25.01-pre-296-g8d3947977
00:02:40.596 18:52:58 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']'
00:02:40.596 18:52:58 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan'
00:02:40.596 18:52:58 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:02:40.596 18:52:58 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:02:40.596 18:52:58 -- common/autotest_common.sh@10 -- $ set +x
00:02:40.596 ************************************
00:02:40.596 START TEST asan
00:02:40.596 ************************************
00:02:40.596 using asan
00:02:40.596 18:52:58 asan -- common/autotest_common.sh@1129 -- $ echo 'using asan'
00:02:40.596
00:02:40.596 real 0m0.000s
00:02:40.596 user 0m0.000s
00:02:40.596 sys 0m0.000s
00:02:40.596 18:52:58 asan -- common/autotest_common.sh@1130 -- $ xtrace_disable
00:02:40.596 18:52:58 asan -- common/autotest_common.sh@10 -- $ set +x
00:02:40.596 ************************************
00:02:40.596 END TEST asan
00:02:40.596 ************************************
00:02:40.596 18:52:58 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
00:02:40.596 18:52:58 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
00:02:40.596 18:52:58 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:02:40.596 18:52:58 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:02:40.596 18:52:58 -- common/autotest_common.sh@10 -- $ set +x
00:02:40.596 ************************************
00:02:40.596 START TEST ubsan
00:02:40.596 ************************************
00:02:40.596 using ubsan
00:02:40.596 18:52:58 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan'
00:02:40.596
00:02:40.596 real 0m0.000s
00:02:40.596 user 0m0.000s
00:02:40.596 sys 0m0.000s
00:02:40.596 ************************************
00:02:40.596 END TEST ubsan 18:52:58 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable
00:02:40.596 18:52:58 ubsan -- common/autotest_common.sh@10 -- $ set +x
00:02:40.596 ************************************
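The START TEST/END TEST banners and the real/user/sys figures come from the run_test helper, which brackets a command with banners and bash's time builtin. A simplified sketch of the pattern (the real helper in autotest_common.sh also manages xtrace state and exit-code bookkeeping, omitted here):

    # Simplified run_test in the style of the banners above.
    run_test() {
        local name=$1; shift
        echo "************************************"
        echo "START TEST $name"
        echo "************************************"
        time "$@"
        echo "************************************"
        echo "END TEST $name"
        echo "************************************"
    }
    run_test asan echo 'using asan'   # as invoked by autobuild.sh above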
00:02:40.596 18:52:58 -- spdk/autobuild.sh@27 -- $ '[' -n v22.11.4 ']'
00:02:40.596 18:52:58 -- spdk/autobuild.sh@28 -- $ build_native_dpdk
00:02:40.596 18:52:58 -- common/autobuild_common.sh@449 -- $ run_test build_native_dpdk _build_native_dpdk
00:02:40.596 18:52:58 -- common/autotest_common.sh@1105 -- $ '[' 2 -le 1 ']'
00:02:40.596 18:52:58 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:02:40.596 18:52:58 -- common/autotest_common.sh@10 -- $ set +x
00:02:40.858 ************************************
00:02:40.858 START TEST build_native_dpdk
00:02:40.858 ************************************
00:02:40.858 18:52:58 build_native_dpdk -- common/autotest_common.sh@1129 -- $ _build_native_dpdk
00:02:40.858 18:52:58 build_native_dpdk -- common/autobuild_common.sh@48 -- $ local external_dpdk_dir
00:02:40.858 18:52:58 build_native_dpdk -- common/autobuild_common.sh@49 -- $ local external_dpdk_base_dir
00:02:40.858 18:52:58 build_native_dpdk -- common/autobuild_common.sh@50 -- $ local compiler_version
00:02:40.858 18:52:58 build_native_dpdk -- common/autobuild_common.sh@51 -- $ local compiler
00:02:40.858 18:52:58 build_native_dpdk -- common/autobuild_common.sh@52 -- $ local dpdk_kmods
00:02:40.858 18:52:58 build_native_dpdk -- common/autobuild_common.sh@53 -- $ local repo=dpdk
00:02:40.858 18:52:58 build_native_dpdk -- common/autobuild_common.sh@55 -- $ compiler=gcc
00:02:40.858 18:52:58 build_native_dpdk -- common/autobuild_common.sh@61 -- $ export CC=gcc
00:02:40.858 18:52:58 build_native_dpdk -- common/autobuild_common.sh@61 -- $ CC=gcc
00:02:40.858 18:52:58 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *clang* ]]
00:02:40.858 18:52:58 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *gcc* ]]
00:02:40.858 18:52:58 build_native_dpdk -- common/autobuild_common.sh@68 -- $ gcc -dumpversion
00:02:40.858 18:52:58 build_native_dpdk -- common/autobuild_common.sh@68 -- $ compiler_version=13
00:02:40.858 18:52:58 build_native_dpdk -- common/autobuild_common.sh@69 -- $ compiler_version=13
00:02:40.858 18:52:58 build_native_dpdk -- common/autobuild_common.sh@70 -- $ external_dpdk_dir=/home/vagrant/spdk_repo/dpdk/build
00:02:40.858 18:52:58 build_native_dpdk -- common/autobuild_common.sh@71 -- $ dirname /home/vagrant/spdk_repo/dpdk/build
00:02:40.858 18:52:58 build_native_dpdk -- common/autobuild_common.sh@71 -- $ external_dpdk_base_dir=/home/vagrant/spdk_repo/dpdk
00:02:40.858 18:52:58 build_native_dpdk -- common/autobuild_common.sh@73 -- $ [[ ! -d /home/vagrant/spdk_repo/dpdk ]]
00:02:40.858 18:52:58 build_native_dpdk -- common/autobuild_common.sh@82 -- $ orgdir=/home/vagrant/spdk_repo/spdk
00:02:40.858 18:52:58 build_native_dpdk -- common/autobuild_common.sh@83 -- $ git -C /home/vagrant/spdk_repo/dpdk log --oneline -n 5
00:02:40.858 caf0f5d395 version: 22.11.4
00:02:40.858 7d6f1cc05f Revert "net/iavf: fix abnormal disable HW interrupt"
00:02:40.858 dc9c799c7d vhost: fix missing spinlock unlock
00:02:40.858 4307659a90 net/mlx5: fix LACP redirection in Rx domain
00:02:40.858 6ef77f2a5e net/gve: fix RX buffer size alignment
00:02:40.858 18:52:58 build_native_dpdk -- common/autobuild_common.sh@85 -- $ dpdk_cflags='-fPIC -g -fcommon'
00:02:40.858 18:52:58 build_native_dpdk -- common/autobuild_common.sh@86 -- $ dpdk_ldflags=
00:02:40.858 18:52:58 build_native_dpdk -- common/autobuild_common.sh@87 -- $ dpdk_ver=22.11.4
00:02:40.858 18:52:58 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ gcc == *gcc* ]]
00:02:40.858 18:52:58 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ 13 -ge 5 ]]
00:02:40.858 18:52:58 build_native_dpdk -- common/autobuild_common.sh@90 -- $ dpdk_cflags+=' -Werror'
00:02:40.858 18:52:58 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ gcc == *gcc* ]]
00:02:40.858 18:52:58 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ 13 -ge 10 ]]
00:02:40.858 18:52:58 build_native_dpdk -- common/autobuild_common.sh@94 -- $ dpdk_cflags+=' -Wno-stringop-overflow'
00:02:40.858 18:52:58 build_native_dpdk -- common/autobuild_common.sh@102 -- $ DPDK_DRIVERS=("bus" "bus/pci" "bus/vdev" "mempool/ring" "net/i40e" "net/i40e/base" "power/acpi" "power/amd_pstate" "power/cppc" "power/intel_pstate" "power/intel_uncore" "power/kvm_vm")
00:02:40.858 18:52:58 build_native_dpdk -- common/autobuild_common.sh@103 -- $ local mlx5_libs_added=n
00:02:40.858 18:52:58 build_native_dpdk -- common/autobuild_common.sh@104 -- $ [[ 0 -eq 1 ]]
00:02:40.858 18:52:58 build_native_dpdk -- common/autobuild_common.sh@104 -- $ [[ 0 -eq 1 ]]
00:02:40.858 18:52:58 build_native_dpdk -- common/autobuild_common.sh@146 -- $ [[ 0 -eq 1 ]]
00:02:40.858 18:52:58 build_native_dpdk -- common/autobuild_common.sh@174 -- $ cd /home/vagrant/spdk_repo/dpdk
00:02:40.858 18:52:58 build_native_dpdk -- common/autobuild_common.sh@175 -- $ uname -s
00:02:40.858 18:52:58 build_native_dpdk -- common/autobuild_common.sh@175 -- $ '[' Linux = Linux ']'
00:02:40.858 18:52:58 build_native_dpdk -- common/autobuild_common.sh@176 -- $ lt 22.11.4 21.11.0
00:02:40.858 18:52:58 build_native_dpdk -- scripts/common.sh@373 -- $ cmp_versions 22.11.4 '<' 21.11.0
00:02:40.858 18:52:58 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l
00:02:40.858 18:52:58 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l
00:02:40.858 18:52:58 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-:
00:02:40.858 18:52:58 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1
00:02:40.858 18:52:58 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-:
00:02:40.858 18:52:58 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2
00:02:40.858 18:52:58 build_native_dpdk -- scripts/common.sh@338 -- $ local 'op=<'
00:02:40.858 18:52:58 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=3
00:02:40.858 18:52:58 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3
00:02:40.858 18:52:58 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v
00:02:40.858 18:52:58 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in
00:02:40.859 18:52:58 build_native_dpdk -- scripts/common.sh@345 -- $ : 1
00:02:40.859 18:52:58 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 ))
00:02:40.859 18:52:58 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:02:40.859 18:52:58 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 22
00:02:40.859 18:52:58 build_native_dpdk -- scripts/common.sh@353 -- $ local d=22
00:02:40.859 18:52:58 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 22 =~ ^[0-9]+$ ]]
00:02:40.859 18:52:58 build_native_dpdk -- scripts/common.sh@355 -- $ echo 22
00:02:40.859 18:52:58 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=22
00:02:40.859 18:52:58 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 21
00:02:40.859 18:52:58 build_native_dpdk -- scripts/common.sh@353 -- $ local d=21
00:02:40.859 18:52:58 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 21 =~ ^[0-9]+$ ]]
00:02:40.859 18:52:58 build_native_dpdk -- scripts/common.sh@355 -- $ echo 21
00:02:40.859 18:52:58 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=21
00:02:40.859 18:52:58 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] ))
00:02:40.859 18:52:58 build_native_dpdk -- scripts/common.sh@367 -- $ return 1
00:02:40.859 18:52:58 build_native_dpdk -- common/autobuild_common.sh@180 -- $ patch -p1
00:02:40.859 patching file config/rte_config.h
00:02:40.859 Hunk #1 succeeded at 60 (offset 1 line).
00:02:40.859 18:52:58 build_native_dpdk -- common/autobuild_common.sh@183 -- $ lt 22.11.4 24.07.0
00:02:40.859 18:52:58 build_native_dpdk -- scripts/common.sh@373 -- $ cmp_versions 22.11.4 '<' 24.07.0
00:02:40.859 18:52:58 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l
00:02:40.859 18:52:58 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l
00:02:40.859 18:52:58 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-:
00:02:40.859 18:52:58 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1
00:02:40.859 18:52:58 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-:
00:02:40.859 18:52:58 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2
00:02:40.859 18:52:58 build_native_dpdk -- scripts/common.sh@338 -- $ local 'op=<'
00:02:40.859 18:52:58 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=3
00:02:40.859 18:52:58 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3
00:02:40.859 18:52:58 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v
00:02:40.859 18:52:58 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in
00:02:40.859 18:52:58 build_native_dpdk -- scripts/common.sh@345 -- $ : 1
00:02:40.859 18:52:58 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 ))
00:02:40.859 18:52:58 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:02:40.859 18:52:58 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 22
00:02:40.859 18:52:58 build_native_dpdk -- scripts/common.sh@353 -- $ local d=22
00:02:40.859 18:52:58 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 22 =~ ^[0-9]+$ ]]
00:02:40.859 18:52:58 build_native_dpdk -- scripts/common.sh@355 -- $ echo 22
00:02:40.859 18:52:58 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=22
00:02:40.859 18:52:58 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 24
00:02:40.859 18:52:58 build_native_dpdk -- scripts/common.sh@353 -- $ local d=24
00:02:40.859 18:52:58 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 24 =~ ^[0-9]+$ ]]
00:02:40.859 18:52:58 build_native_dpdk -- scripts/common.sh@355 -- $ echo 24
00:02:40.859 18:52:58 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=24
00:02:40.859 18:52:58 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] ))
00:02:40.859 18:52:58 build_native_dpdk -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] ))
00:02:40.859 18:52:58 build_native_dpdk -- scripts/common.sh@368 -- $ return 0
00:02:40.859 18:52:58 build_native_dpdk -- common/autobuild_common.sh@184 -- $ patch -p1
00:02:40.859 patching file lib/pcapng/rte_pcapng.c
00:02:40.859 Hunk #1 succeeded at 110 (offset -18 lines).
00:02:40.859 18:52:58 build_native_dpdk -- common/autobuild_common.sh@186 -- $ ge 22.11.4 24.07.0
00:02:40.859 18:52:58 build_native_dpdk -- scripts/common.sh@376 -- $ cmp_versions 22.11.4 '>=' 24.07.0
00:02:40.859 18:52:58 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l
00:02:40.859 18:52:58 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l
00:02:40.859 18:52:58 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-:
00:02:40.859 18:52:58 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1
00:02:40.859 18:52:58 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-:
00:02:40.859 18:52:58 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2
00:02:40.859 18:52:58 build_native_dpdk -- scripts/common.sh@338 -- $ local 'op=>='
00:02:40.859 18:52:58 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=3
00:02:40.859 18:52:58 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3
00:02:40.859 18:52:58 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v
00:02:40.859 18:52:58 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in
00:02:40.859 18:52:58 build_native_dpdk -- scripts/common.sh@348 -- $ : 1
00:02:40.859 18:52:58 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 ))
00:02:40.859 18:52:58 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:02:40.859 18:52:58 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 22
00:02:40.859 18:52:58 build_native_dpdk -- scripts/common.sh@353 -- $ local d=22
00:02:40.859 18:52:58 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 22 =~ ^[0-9]+$ ]]
00:02:40.859 18:52:58 build_native_dpdk -- scripts/common.sh@355 -- $ echo 22
00:02:40.859 18:52:58 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=22
00:02:40.859 18:52:58 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 24
00:02:40.859 18:52:58 build_native_dpdk -- scripts/common.sh@353 -- $ local d=24
00:02:40.859 18:52:58 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 24 =~ ^[0-9]+$ ]]
00:02:40.859 18:52:58 build_native_dpdk -- scripts/common.sh@355 -- $ echo 24
00:02:40.859 18:52:58 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=24
00:02:40.859 18:52:58 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] ))
00:02:40.859 18:52:58 build_native_dpdk -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] ))
00:02:40.859 18:52:58 build_native_dpdk -- scripts/common.sh@368 -- $ return 1
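The cmp_versions traces above spell out the version-gate algorithm: split each version string on '.', '-' and ':' (IFS=.-: with read -ra), validate each field with the decimal helper, compare the fields numerically from the left, and let the operator decide the return code at the first difference. A condensed sketch of the same logic; the real scripts/common.sh also tracks the lt/gt/eq counters seen in the trace, omitted here:

    # Condensed version compare in the style traced above.
    cmp_versions() {
        local -a ver1 ver2
        local v
        IFS=.-: read -ra ver1 <<< "$1"
        IFS=.-: read -ra ver2 <<< "$3"
        for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && { [[ $2 == '>'* ]]; return; }
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && { [[ $2 == '<'* ]]; return; }
        done
        [[ $2 == *'=' ]]   # equal versions: only <=, >= and == succeed
    }
    cmp_versions 22.11.4 '<' 24.07.0 && echo older   # succeeds, as in the trace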
00:02:45.066 Checking for size of "void *" : 8 00:02:45.066 Checking for size of "void *" : 8 (cached) 00:02:45.066 Library m found: YES 00:02:45.066 Library numa found: YES 00:02:45.066 Has header "numaif.h" : YES 00:02:45.066 Library fdt found: NO 00:02:45.066 Library execinfo found: NO 00:02:45.066 Has header "execinfo.h" : YES 00:02:45.066 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:02:45.066 Run-time dependency libarchive found: NO (tried pkgconfig) 00:02:45.066 Run-time dependency libbsd found: NO (tried pkgconfig) 00:02:45.066 Run-time dependency jansson found: NO (tried pkgconfig) 00:02:45.066 Run-time dependency openssl found: YES 3.1.1 00:02:45.066 Run-time dependency libpcap found: YES 1.10.4 00:02:45.066 Has header "pcap.h" with dependency libpcap: YES 00:02:45.066 Compiler for C supports arguments -Wcast-qual: YES 00:02:45.066 Compiler for C supports arguments -Wdeprecated: YES 00:02:45.066 Compiler for C supports arguments -Wformat: YES 00:02:45.066 Compiler for C supports arguments -Wformat-nonliteral: NO 00:02:45.066 Compiler for C supports arguments -Wformat-security: NO 00:02:45.066 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:45.066 Compiler for C supports arguments -Wmissing-prototypes: YES 00:02:45.066 Compiler for C supports arguments -Wnested-externs: YES 00:02:45.066 Compiler for C supports arguments -Wold-style-definition: YES 00:02:45.066 Compiler for C supports arguments -Wpointer-arith: YES 00:02:45.066 Compiler for C supports arguments -Wsign-compare: YES 00:02:45.066 Compiler for C supports arguments -Wstrict-prototypes: YES 00:02:45.066 Compiler for C supports arguments -Wundef: YES 00:02:45.066 Compiler for C supports arguments -Wwrite-strings: YES 00:02:45.066 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:02:45.066 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:02:45.066 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:45.066 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:02:45.066 Compiler for C supports arguments -mavx512f: YES 00:02:45.066 Checking if "AVX512 checking" compiles: YES 00:02:45.066 Fetching value of define "__SSE4_2__" : 1 00:02:45.066 Fetching value of define "__AES__" : 1 00:02:45.066 Fetching value of define "__AVX__" : 1 00:02:45.066 Fetching value of define "__AVX2__" : 1 00:02:45.066 Fetching value of define "__AVX512BW__" : 1 00:02:45.066 Fetching value of define "__AVX512CD__" : 1 00:02:45.066 Fetching value of define "__AVX512DQ__" : 1 00:02:45.066 Fetching value of define "__AVX512F__" : 1 00:02:45.066 Fetching value of define "__AVX512VL__" : 1 00:02:45.066 Fetching value of define "__PCLMUL__" : 1 00:02:45.066 Fetching value of define "__RDRND__" : 1 00:02:45.066 Fetching value of define "__RDSEED__" : 1 00:02:45.066 Fetching value of define "__VPCLMULQDQ__" : 1 00:02:45.066 Compiler for C supports arguments -Wno-format-truncation: YES 00:02:45.066 Message: lib/kvargs: Defining dependency "kvargs" 00:02:45.066 Message: lib/telemetry: Defining dependency "telemetry" 00:02:45.066 Checking for function "getentropy" : YES 00:02:45.066 Message: lib/eal: Defining dependency "eal" 00:02:45.066 Message: lib/ring: Defining dependency "ring" 00:02:45.066 Message: lib/rcu: Defining dependency "rcu" 00:02:45.066 Message: lib/mempool: Defining dependency "mempool" 00:02:45.066 Message: lib/mbuf: Defining dependency "mbuf" 00:02:45.066 Fetching value of define "__PCLMUL__" : 1 (cached) 00:02:45.066 Fetching value of 
define "__AVX512F__" : 1 (cached) 00:02:45.066 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:45.066 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:02:45.066 Fetching value of define "__AVX512VL__" : 1 (cached) 00:02:45.066 Fetching value of define "__VPCLMULQDQ__" : 1 (cached) 00:02:45.067 Compiler for C supports arguments -mpclmul: YES 00:02:45.067 Compiler for C supports arguments -maes: YES 00:02:45.067 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:45.067 Compiler for C supports arguments -mavx512bw: YES 00:02:45.067 Compiler for C supports arguments -mavx512dq: YES 00:02:45.067 Compiler for C supports arguments -mavx512vl: YES 00:02:45.067 Compiler for C supports arguments -mvpclmulqdq: YES 00:02:45.067 Compiler for C supports arguments -mavx2: YES 00:02:45.067 Compiler for C supports arguments -mavx: YES 00:02:45.067 Message: lib/net: Defining dependency "net" 00:02:45.067 Message: lib/meter: Defining dependency "meter" 00:02:45.067 Message: lib/ethdev: Defining dependency "ethdev" 00:02:45.067 Message: lib/pci: Defining dependency "pci" 00:02:45.067 Message: lib/cmdline: Defining dependency "cmdline" 00:02:45.067 Message: lib/metrics: Defining dependency "metrics" 00:02:45.067 Message: lib/hash: Defining dependency "hash" 00:02:45.067 Message: lib/timer: Defining dependency "timer" 00:02:45.067 Fetching value of define "__AVX2__" : 1 (cached) 00:02:45.067 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:45.067 Fetching value of define "__AVX512VL__" : 1 (cached) 00:02:45.067 Fetching value of define "__AVX512CD__" : 1 (cached) 00:02:45.067 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:45.067 Message: lib/acl: Defining dependency "acl" 00:02:45.067 Message: lib/bbdev: Defining dependency "bbdev" 00:02:45.067 Message: lib/bitratestats: Defining dependency "bitratestats" 00:02:45.067 Run-time dependency libelf found: YES 0.191 00:02:45.067 Message: lib/bpf: Defining dependency "bpf" 00:02:45.067 Message: lib/cfgfile: Defining dependency "cfgfile" 00:02:45.067 Message: lib/compressdev: Defining dependency "compressdev" 00:02:45.067 Message: lib/cryptodev: Defining dependency "cryptodev" 00:02:45.067 Message: lib/distributor: Defining dependency "distributor" 00:02:45.067 Message: lib/efd: Defining dependency "efd" 00:02:45.067 Message: lib/eventdev: Defining dependency "eventdev" 00:02:45.067 Message: lib/gpudev: Defining dependency "gpudev" 00:02:45.067 Message: lib/gro: Defining dependency "gro" 00:02:45.067 Message: lib/gso: Defining dependency "gso" 00:02:45.067 Message: lib/ip_frag: Defining dependency "ip_frag" 00:02:45.067 Message: lib/jobstats: Defining dependency "jobstats" 00:02:45.067 Message: lib/latencystats: Defining dependency "latencystats" 00:02:45.067 Message: lib/lpm: Defining dependency "lpm" 00:02:45.067 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:45.067 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:02:45.067 Fetching value of define "__AVX512IFMA__" : 1 00:02:45.067 Message: lib/member: Defining dependency "member" 00:02:45.067 Message: lib/pcapng: Defining dependency "pcapng" 00:02:45.067 Compiler for C supports arguments -Wno-cast-qual: YES 00:02:45.067 Message: lib/power: Defining dependency "power" 00:02:45.067 Message: lib/rawdev: Defining dependency "rawdev" 00:02:45.067 Message: lib/regexdev: Defining dependency "regexdev" 00:02:45.067 Message: lib/dmadev: Defining dependency "dmadev" 00:02:45.067 Message: lib/rib: Defining dependency "rib" 00:02:45.067 Message: lib/reorder: 
Defining dependency "reorder" 00:02:45.067 Message: lib/sched: Defining dependency "sched" 00:02:45.067 Message: lib/security: Defining dependency "security" 00:02:45.067 Message: lib/stack: Defining dependency "stack" 00:02:45.067 Has header "linux/userfaultfd.h" : YES 00:02:45.067 Message: lib/vhost: Defining dependency "vhost" 00:02:45.067 Message: lib/ipsec: Defining dependency "ipsec" 00:02:45.067 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:45.067 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:02:45.067 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:45.067 Message: lib/fib: Defining dependency "fib" 00:02:45.067 Message: lib/port: Defining dependency "port" 00:02:45.067 Message: lib/pdump: Defining dependency "pdump" 00:02:45.067 Message: lib/table: Defining dependency "table" 00:02:45.067 Message: lib/pipeline: Defining dependency "pipeline" 00:02:45.067 Message: lib/graph: Defining dependency "graph" 00:02:45.067 Message: lib/node: Defining dependency "node" 00:02:45.067 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:02:45.067 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:02:45.067 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:02:45.067 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:02:45.067 Compiler for C supports arguments -Wno-sign-compare: YES 00:02:45.067 Compiler for C supports arguments -Wno-unused-value: YES 00:02:45.067 Compiler for C supports arguments -Wno-format: YES 00:02:45.067 Compiler for C supports arguments -Wno-format-security: YES 00:02:45.067 Compiler for C supports arguments -Wno-format-nonliteral: YES 00:02:45.067 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:02:45.067 Compiler for C supports arguments -Wno-unused-but-set-variable: YES 00:02:45.067 Compiler for C supports arguments -Wno-unused-parameter: YES 00:02:46.443 Fetching value of define "__AVX2__" : 1 (cached) 00:02:46.443 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:46.443 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:46.443 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:46.443 Compiler for C supports arguments -mavx512bw: YES (cached) 00:02:46.443 Compiler for C supports arguments -march=skylake-avx512: YES 00:02:46.443 Message: drivers/net/i40e: Defining dependency "net_i40e" 00:02:46.443 Program doxygen found: YES (/usr/local/bin/doxygen) 00:02:46.443 Configuring doxy-api.conf using configuration 00:02:46.443 Program sphinx-build found: NO 00:02:46.443 Configuring rte_build_config.h using configuration 00:02:46.443 Message: 00:02:46.443 ================= 00:02:46.443 Applications Enabled 00:02:46.443 ================= 00:02:46.443 00:02:46.443 apps: 00:02:46.443 dumpcap, pdump, proc-info, test-acl, test-bbdev, test-cmdline, test-compress-perf, test-crypto-perf, 00:02:46.443 test-eventdev, test-fib, test-flow-perf, test-gpudev, test-pipeline, test-pmd, test-regex, test-sad, 00:02:46.443 test-security-perf, 00:02:46.443 00:02:46.443 Message: 00:02:46.443 ================= 00:02:46.443 Libraries Enabled 00:02:46.443 ================= 00:02:46.443 00:02:46.443 libs: 00:02:46.443 kvargs, telemetry, eal, ring, rcu, mempool, mbuf, net, 00:02:46.443 meter, ethdev, pci, cmdline, metrics, hash, timer, acl, 00:02:46.443 bbdev, bitratestats, bpf, cfgfile, compressdev, cryptodev, distributor, efd, 00:02:46.443 eventdev, gpudev, gro, gso, ip_frag, jobstats, latencystats, lpm, 00:02:46.443 member, pcapng, power, rawdev, regexdev, dmadev, rib, 
reorder, 00:02:46.443 sched, security, stack, vhost, ipsec, fib, port, pdump, 00:02:46.443 table, pipeline, graph, node, 00:02:46.443 00:02:46.443 Message: 00:02:46.443 =============== 00:02:46.443 Drivers Enabled 00:02:46.443 =============== 00:02:46.443 00:02:46.443 common: 00:02:46.443 00:02:46.443 bus: 00:02:46.443 pci, vdev, 00:02:46.443 mempool: 00:02:46.443 ring, 00:02:46.443 dma: 00:02:46.443 00:02:46.443 net: 00:02:46.443 i40e, 00:02:46.443 raw: 00:02:46.443 00:02:46.443 crypto: 00:02:46.443 00:02:46.443 compress: 00:02:46.443 00:02:46.443 regex: 00:02:46.443 00:02:46.443 vdpa: 00:02:46.443 00:02:46.443 event: 00:02:46.443 00:02:46.443 baseband: 00:02:46.443 00:02:46.443 gpu: 00:02:46.443 00:02:46.443 00:02:46.443 Message: 00:02:46.443 ================= 00:02:46.443 Content Skipped 00:02:46.443 ================= 00:02:46.443 00:02:46.443 apps: 00:02:46.443 00:02:46.443 libs: 00:02:46.443 kni: explicitly disabled via build config (deprecated lib) 00:02:46.443 flow_classify: explicitly disabled via build config (deprecated lib) 00:02:46.443 00:02:46.443 drivers: 00:02:46.443 common/cpt: not in enabled drivers build config 00:02:46.443 common/dpaax: not in enabled drivers build config 00:02:46.443 common/iavf: not in enabled drivers build config 00:02:46.443 common/idpf: not in enabled drivers build config 00:02:46.443 common/mvep: not in enabled drivers build config 00:02:46.443 common/octeontx: not in enabled drivers build config 00:02:46.443 bus/auxiliary: not in enabled drivers build config 00:02:46.443 bus/dpaa: not in enabled drivers build config 00:02:46.443 bus/fslmc: not in enabled drivers build config 00:02:46.443 bus/ifpga: not in enabled drivers build config 00:02:46.443 bus/vmbus: not in enabled drivers build config 00:02:46.443 common/cnxk: not in enabled drivers build config 00:02:46.443 common/mlx5: not in enabled drivers build config 00:02:46.443 common/qat: not in enabled drivers build config 00:02:46.443 common/sfc_efx: not in enabled drivers build config 00:02:46.443 mempool/bucket: not in enabled drivers build config 00:02:46.443 mempool/cnxk: not in enabled drivers build config 00:02:46.443 mempool/dpaa: not in enabled drivers build config 00:02:46.443 mempool/dpaa2: not in enabled drivers build config 00:02:46.443 mempool/octeontx: not in enabled drivers build config 00:02:46.443 mempool/stack: not in enabled drivers build config 00:02:46.443 dma/cnxk: not in enabled drivers build config 00:02:46.443 dma/dpaa: not in enabled drivers build config 00:02:46.443 dma/dpaa2: not in enabled drivers build config 00:02:46.443 dma/hisilicon: not in enabled drivers build config 00:02:46.443 dma/idxd: not in enabled drivers build config 00:02:46.443 dma/ioat: not in enabled drivers build config 00:02:46.443 dma/skeleton: not in enabled drivers build config 00:02:46.443 net/af_packet: not in enabled drivers build config 00:02:46.443 net/af_xdp: not in enabled drivers build config 00:02:46.443 net/ark: not in enabled drivers build config 00:02:46.443 net/atlantic: not in enabled drivers build config 00:02:46.443 net/avp: not in enabled drivers build config 00:02:46.443 net/axgbe: not in enabled drivers build config 00:02:46.443 net/bnx2x: not in enabled drivers build config 00:02:46.443 net/bnxt: not in enabled drivers build config 00:02:46.443 net/bonding: not in enabled drivers build config 00:02:46.443 net/cnxk: not in enabled drivers build config 00:02:46.443 net/cxgbe: not in enabled drivers build config 00:02:46.443 net/dpaa: not in enabled drivers build config 
00:02:46.443 net/dpaa2: not in enabled drivers build config 00:02:46.443 net/e1000: not in enabled drivers build config 00:02:46.443 net/ena: not in enabled drivers build config 00:02:46.443 net/enetc: not in enabled drivers build config 00:02:46.443 net/enetfec: not in enabled drivers build config 00:02:46.443 net/enic: not in enabled drivers build config 00:02:46.443 net/failsafe: not in enabled drivers build config 00:02:46.443 net/fm10k: not in enabled drivers build config 00:02:46.443 net/gve: not in enabled drivers build config 00:02:46.443 net/hinic: not in enabled drivers build config 00:02:46.443 net/hns3: not in enabled drivers build config 00:02:46.443 net/iavf: not in enabled drivers build config 00:02:46.443 net/ice: not in enabled drivers build config 00:02:46.443 net/idpf: not in enabled drivers build config 00:02:46.444 net/igc: not in enabled drivers build config 00:02:46.444 net/ionic: not in enabled drivers build config 00:02:46.444 net/ipn3ke: not in enabled drivers build config 00:02:46.444 net/ixgbe: not in enabled drivers build config 00:02:46.444 net/kni: not in enabled drivers build config 00:02:46.444 net/liquidio: not in enabled drivers build config 00:02:46.444 net/mana: not in enabled drivers build config 00:02:46.444 net/memif: not in enabled drivers build config 00:02:46.444 net/mlx4: not in enabled drivers build config 00:02:46.444 net/mlx5: not in enabled drivers build config 00:02:46.444 net/mvneta: not in enabled drivers build config 00:02:46.444 net/mvpp2: not in enabled drivers build config 00:02:46.444 net/netvsc: not in enabled drivers build config 00:02:46.444 net/nfb: not in enabled drivers build config 00:02:46.444 net/nfp: not in enabled drivers build config 00:02:46.444 net/ngbe: not in enabled drivers build config 00:02:46.444 net/null: not in enabled drivers build config 00:02:46.444 net/octeontx: not in enabled drivers build config 00:02:46.444 net/octeon_ep: not in enabled drivers build config 00:02:46.444 net/pcap: not in enabled drivers build config 00:02:46.444 net/pfe: not in enabled drivers build config 00:02:46.444 net/qede: not in enabled drivers build config 00:02:46.444 net/ring: not in enabled drivers build config 00:02:46.444 net/sfc: not in enabled drivers build config 00:02:46.444 net/softnic: not in enabled drivers build config 00:02:46.444 net/tap: not in enabled drivers build config 00:02:46.444 net/thunderx: not in enabled drivers build config 00:02:46.444 net/txgbe: not in enabled drivers build config 00:02:46.444 net/vdev_netvsc: not in enabled drivers build config 00:02:46.444 net/vhost: not in enabled drivers build config 00:02:46.444 net/virtio: not in enabled drivers build config 00:02:46.444 net/vmxnet3: not in enabled drivers build config 00:02:46.444 raw/cnxk_bphy: not in enabled drivers build config 00:02:46.444 raw/cnxk_gpio: not in enabled drivers build config 00:02:46.444 raw/dpaa2_cmdif: not in enabled drivers build config 00:02:46.444 raw/ifpga: not in enabled drivers build config 00:02:46.444 raw/ntb: not in enabled drivers build config 00:02:46.444 raw/skeleton: not in enabled drivers build config 00:02:46.444 crypto/armv8: not in enabled drivers build config 00:02:46.444 crypto/bcmfs: not in enabled drivers build config 00:02:46.444 crypto/caam_jr: not in enabled drivers build config 00:02:46.444 crypto/ccp: not in enabled drivers build config 00:02:46.444 crypto/cnxk: not in enabled drivers build config 00:02:46.444 crypto/dpaa_sec: not in enabled drivers build config 00:02:46.444 crypto/dpaa2_sec: not in 
enabled drivers build config 00:02:46.444 crypto/ipsec_mb: not in enabled drivers build config 00:02:46.444 crypto/mlx5: not in enabled drivers build config 00:02:46.444 crypto/mvsam: not in enabled drivers build config 00:02:46.444 crypto/nitrox: not in enabled drivers build config 00:02:46.444 crypto/null: not in enabled drivers build config 00:02:46.444 crypto/octeontx: not in enabled drivers build config 00:02:46.444 crypto/openssl: not in enabled drivers build config 00:02:46.444 crypto/scheduler: not in enabled drivers build config 00:02:46.444 crypto/uadk: not in enabled drivers build config 00:02:46.444 crypto/virtio: not in enabled drivers build config 00:02:46.444 compress/isal: not in enabled drivers build config 00:02:46.444 compress/mlx5: not in enabled drivers build config 00:02:46.444 compress/octeontx: not in enabled drivers build config 00:02:46.444 compress/zlib: not in enabled drivers build config 00:02:46.444 regex/mlx5: not in enabled drivers build config 00:02:46.444 regex/cn9k: not in enabled drivers build config 00:02:46.444 vdpa/ifc: not in enabled drivers build config 00:02:46.444 vdpa/mlx5: not in enabled drivers build config 00:02:46.444 vdpa/sfc: not in enabled drivers build config 00:02:46.444 event/cnxk: not in enabled drivers build config 00:02:46.444 event/dlb2: not in enabled drivers build config 00:02:46.444 event/dpaa: not in enabled drivers build config 00:02:46.444 event/dpaa2: not in enabled drivers build config 00:02:46.444 event/dsw: not in enabled drivers build config 00:02:46.444 event/opdl: not in enabled drivers build config 00:02:46.444 event/skeleton: not in enabled drivers build config 00:02:46.444 event/sw: not in enabled drivers build config 00:02:46.444 event/octeontx: not in enabled drivers build config 00:02:46.444 baseband/acc: not in enabled drivers build config 00:02:46.444 baseband/fpga_5gnr_fec: not in enabled drivers build config 00:02:46.444 baseband/fpga_lte_fec: not in enabled drivers build config 00:02:46.444 baseband/la12xx: not in enabled drivers build config 00:02:46.444 baseband/null: not in enabled drivers build config 00:02:46.444 baseband/turbo_sw: not in enabled drivers build config 00:02:46.444 gpu/cuda: not in enabled drivers build config 00:02:46.444 00:02:46.444 00:02:46.444 Build targets in project: 309 00:02:46.444 00:02:46.444 DPDK 22.11.4 00:02:46.444 00:02:46.444 User defined options 00:02:46.444 libdir : lib 00:02:46.444 prefix : /home/vagrant/spdk_repo/dpdk/build 00:02:46.444 c_args : -fPIC -g -fcommon -Werror -Wno-stringop-overflow 00:02:46.444 c_link_args : 00:02:46.444 enable_docs : false 00:02:46.444 enable_drivers: bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm, 00:02:46.444 enable_kmods : false 00:02:46.444 machine : native 00:02:46.444 tests : false 00:02:46.444 00:02:46.444 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:46.444 WARNING: Running the setup command as `meson [options]` instead of `meson setup [options]` is ambiguous and deprecated. 
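Note on the WARNING above: newer Meson releases want the setup verb spelled out rather than implied. The log does not show the exact command the harness ran, but a minimal sketch of the equivalent explicit configure step, reconstructed from the "User defined options" summary (run from the DPDK source checkout; everything not listed there is assumed to be a default), would look like:

    # Sketch only: options taken from the "User defined options" block above.
    meson setup /home/vagrant/spdk_repo/dpdk/build-tmp \
        --prefix=/home/vagrant/spdk_repo/dpdk/build \
        --libdir=lib \
        -Dc_args='-fPIC -g -fcommon -Werror -Wno-stringop-overflow' \
        -Denable_docs=false -Denable_kmods=false -Dtests=false -Dmachine=native \
        -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm
    # The build step that follows in the log:
    ninja -C /home/vagrant/spdk_repo/dpdk/build-tmp -j10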
00:02:46.444 18:53:03 build_native_dpdk -- common/autobuild_common.sh@199 -- $ ninja -C /home/vagrant/spdk_repo/dpdk/build-tmp -j10 00:02:46.444 ninja: Entering directory `/home/vagrant/spdk_repo/dpdk/build-tmp' 00:02:46.444 [1/738] Generating lib/rte_kvargs_def with a custom command 00:02:46.444 [2/738] Generating lib/rte_kvargs_mingw with a custom command 00:02:46.444 [3/738] Generating lib/rte_telemetry_def with a custom command 00:02:46.444 [4/738] Generating lib/rte_telemetry_mingw with a custom command 00:02:46.444 [5/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:46.444 [6/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:46.444 [7/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:46.444 [8/738] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:46.444 [9/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:46.444 [10/738] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:46.444 [11/738] Linking static target lib/librte_kvargs.a 00:02:46.444 [12/738] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:46.444 [13/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:46.444 [14/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:46.444 [15/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:46.444 [16/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:46.701 [17/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:46.701 [18/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:46.701 [19/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:46.701 [20/738] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:46.701 [21/738] Linking target lib/librte_kvargs.so.23.0 00:02:46.701 [22/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:46.701 [23/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_log.c.o 00:02:46.701 [24/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:46.701 [25/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:46.701 [26/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:46.701 [27/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:46.701 [28/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:46.959 [29/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:46.959 [30/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:46.959 [31/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:46.959 [32/738] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:46.959 [33/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:46.959 [34/738] Linking static target lib/librte_telemetry.a 00:02:46.959 [35/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:46.959 [36/738] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:46.959 [37/738] Generating symbol file lib/librte_kvargs.so.23.0.p/librte_kvargs.so.23.0.symbols 00:02:46.959 [38/738] Compiling C 
object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:46.959 [39/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:46.959 [40/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:46.959 [41/738] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:47.217 [42/738] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:47.217 [43/738] Linking target lib/librte_telemetry.so.23.0 00:02:47.217 [44/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:47.217 [45/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:47.217 [46/738] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:47.217 [47/738] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:47.217 [48/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:47.217 [49/738] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:47.217 [50/738] Generating symbol file lib/librte_telemetry.so.23.0.p/librte_telemetry.so.23.0.symbols 00:02:47.217 [51/738] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:47.217 [52/738] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:47.217 [53/738] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:47.217 [54/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:47.217 [55/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:47.217 [56/738] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:47.217 [57/738] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:47.486 [58/738] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:47.486 [59/738] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:47.486 [60/738] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:47.486 [61/738] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:47.486 [62/738] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:47.486 [63/738] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:47.486 [64/738] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:47.486 [65/738] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:47.486 [66/738] Compiling C object lib/librte_eal.a.p/eal_linux_eal_log.c.o 00:02:47.486 [67/738] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:47.486 [68/738] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:47.486 [69/738] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:47.486 [70/738] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:47.486 [71/738] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:47.486 [72/738] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:47.486 [73/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:47.486 [74/738] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:47.486 [75/738] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:47.486 [76/738] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:47.757 [77/738] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:47.757 [78/738] Compiling C object 
lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:47.757 [79/738] Generating lib/rte_eal_def with a custom command 00:02:47.757 [80/738] Generating lib/rte_eal_mingw with a custom command 00:02:47.757 [81/738] Generating lib/rte_ring_def with a custom command 00:02:47.757 [82/738] Generating lib/rte_ring_mingw with a custom command 00:02:47.757 [83/738] Generating lib/rte_rcu_def with a custom command 00:02:47.757 [84/738] Generating lib/rte_rcu_mingw with a custom command 00:02:47.757 [85/738] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:47.757 [86/738] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:47.757 [87/738] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:47.757 [88/738] Linking static target lib/librte_ring.a 00:02:47.757 [89/738] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:47.757 [90/738] Generating lib/rte_mempool_def with a custom command 00:02:47.757 [91/738] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:47.757 [92/738] Generating lib/rte_mempool_mingw with a custom command 00:02:48.014 [93/738] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:48.014 [94/738] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:48.014 [95/738] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:48.014 [96/738] Generating lib/rte_mbuf_def with a custom command 00:02:48.014 [97/738] Generating lib/rte_mbuf_mingw with a custom command 00:02:48.014 [98/738] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:48.014 [99/738] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:48.014 [100/738] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:48.014 [101/738] Linking static target lib/librte_eal.a 00:02:48.272 [102/738] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:48.272 [103/738] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:48.272 [104/738] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:48.272 [105/738] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:48.272 [106/738] Linking static target lib/librte_rcu.a 00:02:48.272 [107/738] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:48.272 [108/738] Linking static target lib/librte_mempool.a 00:02:48.529 [109/738] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:48.529 [110/738] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:48.529 [111/738] Generating lib/rte_net_def with a custom command 00:02:48.529 [112/738] Generating lib/rte_net_mingw with a custom command 00:02:48.529 [113/738] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:48.529 [114/738] Generating lib/rte_meter_def with a custom command 00:02:48.529 [115/738] Generating lib/rte_meter_mingw with a custom command 00:02:48.529 [116/738] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:48.529 [117/738] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:48.529 [118/738] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:48.529 [119/738] Linking static target lib/librte_meter.a 00:02:48.529 [120/738] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:48.529 [121/738] Compiling C object lib/librte_net.a.p/net_net_crc_avx512.c.o 00:02:48.529 [122/738] Linking static target lib/librte_net.a 00:02:48.787 
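Each [N/738] prefix is ninja counting through the 738 build edges derived from the configuration above: "Compiling C object" edges each produce one .o file, and "Linking static target" edges archive those objects into a .a library. A rough shell equivalent of the librte_ring steps, for illustration only (the real edges carry the full c_args and include paths from the configuration, and the source path is abbreviated here):

    # Illustration: compile one object, then archive it into the static lib.
    gcc -c -fPIC -g -O3 -Iinclude lib/ring/rte_ring.c -o ring_rte_ring.c.o   # flags are a simplified assumption
    ar rcs librte_ring.a ring_rte_ring.c.o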
[123/738] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:48.787 [124/738] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:48.787 [125/738] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:48.787 [126/738] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:48.787 [127/738] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:49.047 [128/738] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:49.047 [129/738] Linking static target lib/librte_mbuf.a 00:02:49.047 [130/738] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:49.047 [131/738] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:49.047 [132/738] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:49.047 [133/738] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:49.305 [134/738] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:49.305 [135/738] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:49.305 [136/738] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:49.305 [137/738] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:49.305 [138/738] Generating lib/rte_ethdev_def with a custom command 00:02:49.305 [139/738] Generating lib/rte_ethdev_mingw with a custom command 00:02:49.305 [140/738] Generating lib/rte_pci_def with a custom command 00:02:49.305 [141/738] Generating lib/rte_pci_mingw with a custom command 00:02:49.563 [142/738] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:49.563 [143/738] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:49.563 [144/738] Linking static target lib/librte_pci.a 00:02:49.563 [145/738] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:49.563 [146/738] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:49.563 [147/738] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:49.563 [148/738] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:49.563 [149/738] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:49.563 [150/738] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:49.563 [151/738] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:49.563 [152/738] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:49.563 [153/738] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:49.563 [154/738] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:49.563 [155/738] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:49.822 [156/738] Generating lib/rte_cmdline_def with a custom command 00:02:49.822 [157/738] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:49.822 [158/738] Generating lib/rte_cmdline_mingw with a custom command 00:02:49.822 [159/738] Generating lib/rte_metrics_def with a custom command 00:02:49.822 [160/738] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:49.822 [161/738] Generating lib/rte_metrics_mingw with a custom command 00:02:49.822 [162/738] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:49.822 [163/738] Compiling C object 
lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:49.822 [164/738] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics.c.o 00:02:49.822 [165/738] Generating lib/rte_hash_def with a custom command 00:02:49.822 [166/738] Generating lib/rte_hash_mingw with a custom command 00:02:49.822 [167/738] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:49.822 [168/738] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:49.822 [169/738] Generating lib/rte_timer_def with a custom command 00:02:49.822 [170/738] Linking static target lib/librte_cmdline.a 00:02:49.822 [171/738] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:49.822 [172/738] Generating lib/rte_timer_mingw with a custom command 00:02:50.080 [173/738] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics_telemetry.c.o 00:02:50.080 [174/738] Linking static target lib/librte_metrics.a 00:02:50.339 [175/738] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:50.339 [176/738] Linking static target lib/librte_timer.a 00:02:50.339 [177/738] Compiling C object lib/librte_acl.a.p/acl_acl_gen.c.o 00:02:50.339 [178/738] Generating lib/metrics.sym_chk with a custom command (wrapped by meson to capture output) 00:02:50.339 [179/738] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:50.339 [180/738] Compiling C object lib/librte_acl.a.p/acl_acl_run_scalar.c.o 00:02:50.339 [181/738] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:50.596 [182/738] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:50.596 [183/738] Generating lib/rte_acl_def with a custom command 00:02:50.596 [184/738] Generating lib/rte_acl_mingw with a custom command 00:02:50.596 [185/738] Compiling C object lib/librte_acl.a.p/acl_rte_acl.c.o 00:02:50.596 [186/738] Generating lib/rte_bbdev_def with a custom command 00:02:50.596 [187/738] Generating lib/rte_bbdev_mingw with a custom command 00:02:50.596 [188/738] Compiling C object lib/librte_acl.a.p/acl_tb_mem.c.o 00:02:50.596 [189/738] Generating lib/rte_bitratestats_def with a custom command 00:02:50.596 [190/738] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:50.596 [191/738] Generating lib/rte_bitratestats_mingw with a custom command 00:02:50.854 [192/738] Linking static target lib/librte_ethdev.a 00:02:50.854 [193/738] Compiling C object lib/librte_bitratestats.a.p/bitratestats_rte_bitrate.c.o 00:02:50.854 [194/738] Linking static target lib/librte_bitratestats.a 00:02:50.854 [195/738] Compiling C object lib/librte_acl.a.p/acl_acl_bld.c.o 00:02:51.112 [196/738] Compiling C object lib/librte_bpf.a.p/bpf_bpf.c.o 00:02:51.112 [197/738] Generating lib/bitratestats.sym_chk with a custom command (wrapped by meson to capture output) 00:02:51.112 [198/738] Compiling C object lib/librte_bbdev.a.p/bbdev_rte_bbdev.c.o 00:02:51.112 [199/738] Linking static target lib/librte_bbdev.a 00:02:51.369 [200/738] Compiling C object lib/librte_bpf.a.p/bpf_bpf_dump.c.o 00:02:51.369 [201/738] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load.c.o 00:02:51.369 [202/738] Compiling C object lib/librte_bpf.a.p/bpf_bpf_exec.c.o 00:02:51.627 [203/738] Compiling C object lib/librte_acl.a.p/acl_acl_run_sse.c.o 00:02:51.627 [204/738] Generating lib/bbdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:51.627 [205/738] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:51.627 [206/738] Linking 
static target lib/librte_hash.a 00:02:51.627 [207/738] Compiling C object lib/librte_bpf.a.p/bpf_bpf_stub.c.o 00:02:51.884 [208/738] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load_elf.c.o 00:02:51.884 [209/738] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:51.884 [210/738] Generating lib/rte_bpf_def with a custom command 00:02:51.884 [211/738] Generating lib/rte_bpf_mingw with a custom command 00:02:52.142 [212/738] Generating lib/rte_cfgfile_def with a custom command 00:02:52.142 [213/738] Compiling C object lib/librte_bpf.a.p/bpf_bpf_pkt.c.o 00:02:52.142 [214/738] Generating lib/rte_cfgfile_mingw with a custom command 00:02:52.142 [215/738] Compiling C object lib/librte_bpf.a.p/bpf_bpf_convert.c.o 00:02:52.142 [216/738] Compiling C object lib/librte_cfgfile.a.p/cfgfile_rte_cfgfile.c.o 00:02:52.142 [217/738] Linking static target lib/librte_cfgfile.a 00:02:52.142 [218/738] Compiling C object lib/librte_bpf.a.p/bpf_bpf_validate.c.o 00:02:52.400 [219/738] Generating lib/rte_compressdev_def with a custom command 00:02:52.400 [220/738] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:52.400 [221/738] Generating lib/rte_compressdev_mingw with a custom command 00:02:52.400 [222/738] Compiling C object lib/librte_bpf.a.p/bpf_bpf_jit_x86.c.o 00:02:52.400 [223/738] Linking static target lib/librte_bpf.a 00:02:52.400 [224/738] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:52.400 [225/738] Generating lib/cfgfile.sym_chk with a custom command (wrapped by meson to capture output) 00:02:52.400 [226/738] Generating lib/rte_cryptodev_def with a custom command 00:02:52.400 [227/738] Generating lib/rte_cryptodev_mingw with a custom command 00:02:52.400 [228/738] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:52.400 [229/738] Linking static target lib/librte_compressdev.a 00:02:52.400 [230/738] Compiling C object lib/librte_acl.a.p/acl_acl_run_avx2.c.o 00:02:52.657 [231/738] Generating lib/bpf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:52.657 [232/738] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:52.657 [233/738] Generating lib/rte_distributor_def with a custom command 00:02:52.657 [234/738] Generating lib/rte_distributor_mingw with a custom command 00:02:52.657 [235/738] Compiling C object lib/librte_acl.a.p/acl_acl_run_avx512.c.o 00:02:52.657 [236/738] Linking static target lib/librte_acl.a 00:02:52.914 [237/738] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:52.914 [238/738] Generating lib/rte_efd_def with a custom command 00:02:52.914 [239/738] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_match_sse.c.o 00:02:52.914 [240/738] Generating lib/rte_efd_mingw with a custom command 00:02:52.914 [241/738] Generating lib/acl.sym_chk with a custom command (wrapped by meson to capture output) 00:02:52.914 [242/738] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_single.c.o 00:02:52.914 [243/738] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:52.914 [244/738] Linking target lib/librte_eal.so.23.0 00:02:52.914 [245/738] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor.c.o 00:02:52.914 [246/738] Linking static target lib/librte_distributor.a 00:02:53.171 [247/738] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson 
to capture output) 00:02:53.171 [248/738] Generating symbol file lib/librte_eal.so.23.0.p/librte_eal.so.23.0.symbols 00:02:53.171 [249/738] Linking target lib/librte_ring.so.23.0 00:02:53.171 [250/738] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_private.c.o 00:02:53.171 [251/738] Generating lib/distributor.sym_chk with a custom command (wrapped by meson to capture output) 00:02:53.171 [252/738] Generating symbol file lib/librte_ring.so.23.0.p/librte_ring.so.23.0.symbols 00:02:53.171 [253/738] Linking target lib/librte_meter.so.23.0 00:02:53.171 [254/738] Linking target lib/librte_pci.so.23.0 00:02:53.171 [255/738] Linking target lib/librte_rcu.so.23.0 00:02:53.171 [256/738] Linking target lib/librte_mempool.so.23.0 00:02:53.171 [257/738] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_trace_points.c.o 00:02:53.429 [258/738] Generating symbol file lib/librte_meter.so.23.0.p/librte_meter.so.23.0.symbols 00:02:53.429 [259/738] Generating symbol file lib/librte_rcu.so.23.0.p/librte_rcu.so.23.0.symbols 00:02:53.429 [260/738] Generating symbol file lib/librte_mempool.so.23.0.p/librte_mempool.so.23.0.symbols 00:02:53.429 [261/738] Linking target lib/librte_timer.so.23.0 00:02:53.429 [262/738] Linking target lib/librte_mbuf.so.23.0 00:02:53.429 [263/738] Linking target lib/librte_acl.so.23.0 00:02:53.429 [264/738] Generating symbol file lib/librte_pci.so.23.0.p/librte_pci.so.23.0.symbols 00:02:53.429 [265/738] Linking target lib/librte_cfgfile.so.23.0 00:02:53.429 [266/738] Generating symbol file lib/librte_timer.so.23.0.p/librte_timer.so.23.0.symbols 00:02:53.429 [267/738] Generating symbol file lib/librte_mbuf.so.23.0.p/librte_mbuf.so.23.0.symbols 00:02:53.429 [268/738] Generating symbol file lib/librte_acl.so.23.0.p/librte_acl.so.23.0.symbols 00:02:53.429 [269/738] Compiling C object lib/librte_efd.a.p/efd_rte_efd.c.o 00:02:53.429 [270/738] Linking target lib/librte_bbdev.so.23.0 00:02:53.429 [271/738] Linking target lib/librte_net.so.23.0 00:02:53.429 [272/738] Linking target lib/librte_compressdev.so.23.0 00:02:53.429 [273/738] Linking static target lib/librte_efd.a 00:02:53.429 [274/738] Linking target lib/librte_distributor.so.23.0 00:02:53.686 [275/738] Generating symbol file lib/librte_net.so.23.0.p/librte_net.so.23.0.symbols 00:02:53.686 [276/738] Linking target lib/librte_cmdline.so.23.0 00:02:53.686 [277/738] Linking target lib/librte_hash.so.23.0 00:02:53.686 [278/738] Generating lib/efd.sym_chk with a custom command (wrapped by meson to capture output) 00:02:53.686 [279/738] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_crypto_adapter.c.o 00:02:53.686 [280/738] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_ring.c.o 00:02:53.686 [281/738] Generating lib/rte_eventdev_def with a custom command 00:02:53.686 [282/738] Generating lib/rte_gpudev_def with a custom command 00:02:53.686 [283/738] Generating lib/rte_eventdev_mingw with a custom command 00:02:53.686 [284/738] Generating lib/rte_gpudev_mingw with a custom command 00:02:53.686 [285/738] Generating symbol file lib/librte_hash.so.23.0.p/librte_hash.so.23.0.symbols 00:02:53.944 [286/738] Linking target lib/librte_efd.so.23.0 00:02:53.944 [287/738] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_tx_adapter.c.o 00:02:53.944 [288/738] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:53.944 [289/738] Linking static target lib/librte_cryptodev.a 00:02:53.944 [290/738] Generating lib/ethdev.sym_chk with a custom command (wrapped by 
meson to capture output) 00:02:53.944 [291/738] Linking target lib/librte_ethdev.so.23.0 00:02:54.201 [292/738] Generating symbol file lib/librte_ethdev.so.23.0.p/librte_ethdev.so.23.0.symbols 00:02:54.201 [293/738] Compiling C object lib/librte_gro.a.p/gro_rte_gro.c.o 00:02:54.201 [294/738] Linking target lib/librte_metrics.so.23.0 00:02:54.201 [295/738] Linking target lib/librte_bpf.so.23.0 00:02:54.201 [296/738] Compiling C object lib/librte_gro.a.p/gro_gro_tcp4.c.o 00:02:54.201 [297/738] Generating symbol file lib/librte_metrics.so.23.0.p/librte_metrics.so.23.0.symbols 00:02:54.201 [298/738] Generating symbol file lib/librte_bpf.so.23.0.p/librte_bpf.so.23.0.symbols 00:02:54.201 [299/738] Generating lib/rte_gro_def with a custom command 00:02:54.201 [300/738] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_tcp4.c.o 00:02:54.201 [301/738] Linking target lib/librte_bitratestats.so.23.0 00:02:54.201 [302/738] Generating lib/rte_gro_mingw with a custom command 00:02:54.201 [303/738] Compiling C object lib/librte_gpudev.a.p/gpudev_gpudev.c.o 00:02:54.459 [304/738] Linking static target lib/librte_gpudev.a 00:02:54.459 [305/738] Compiling C object lib/librte_gro.a.p/gro_gro_udp4.c.o 00:02:54.459 [306/738] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_eventdev.c.o 00:02:54.459 [307/738] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_timer_adapter.c.o 00:02:54.459 [308/738] Compiling C object lib/librte_gso.a.p/gso_gso_udp4.c.o 00:02:54.459 [309/738] Compiling C object lib/librte_gso.a.p/gso_gso_tcp4.c.o 00:02:54.459 [310/738] Generating lib/rte_gso_def with a custom command 00:02:54.717 [311/738] Generating lib/rte_gso_mingw with a custom command 00:02:54.717 [312/738] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_udp4.c.o 00:02:54.717 [313/738] Linking static target lib/librte_gro.a 00:02:54.717 [314/738] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_tcp4.c.o 00:02:54.717 [315/738] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_udp4.c.o 00:02:54.717 [316/738] Compiling C object lib/librte_gso.a.p/gso_gso_common.c.o 00:02:54.717 [317/738] Generating lib/gro.sym_chk with a custom command (wrapped by meson to capture output) 00:02:54.717 [318/738] Compiling C object lib/librte_gso.a.p/gso_rte_gso.c.o 00:02:54.717 [319/738] Linking static target lib/librte_gso.a 00:02:54.717 [320/738] Linking target lib/librte_gro.so.23.0 00:02:54.717 [321/738] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_rx_adapter.c.o 00:02:54.975 [322/738] Linking static target lib/librte_eventdev.a 00:02:54.975 [323/738] Generating lib/gpudev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:54.975 [324/738] Generating lib/gso.sym_chk with a custom command (wrapped by meson to capture output) 00:02:54.975 [325/738] Linking target lib/librte_gso.so.23.0 00:02:54.975 [326/738] Linking target lib/librte_gpudev.so.23.0 00:02:54.975 [327/738] Generating lib/rte_ip_frag_def with a custom command 00:02:54.975 [328/738] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_reassembly.c.o 00:02:54.975 [329/738] Generating lib/rte_ip_frag_mingw with a custom command 00:02:54.975 [330/738] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_fragmentation.c.o 00:02:54.975 [331/738] Generating lib/rte_jobstats_mingw with a custom command 00:02:54.975 [332/738] Generating lib/rte_jobstats_def with a custom command 00:02:54.975 [333/738] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_reassembly.c.o 00:02:54.975 [334/738] 
Generating lib/rte_latencystats_def with a custom command 00:02:54.975 [335/738] Generating lib/rte_latencystats_mingw with a custom command 00:02:54.975 [336/738] Generating lib/rte_lpm_def with a custom command 00:02:54.975 [337/738] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_fragmentation.c.o 00:02:55.233 [338/738] Compiling C object lib/librte_jobstats.a.p/jobstats_rte_jobstats.c.o 00:02:55.233 [339/738] Generating lib/rte_lpm_mingw with a custom command 00:02:55.233 [340/738] Linking static target lib/librte_jobstats.a 00:02:55.233 [341/738] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ip_frag_common.c.o 00:02:55.233 [342/738] Compiling C object lib/librte_ip_frag.a.p/ip_frag_ip_frag_internal.c.o 00:02:55.233 [343/738] Linking static target lib/librte_ip_frag.a 00:02:55.233 [344/738] Generating lib/jobstats.sym_chk with a custom command (wrapped by meson to capture output) 00:02:55.491 [345/738] Linking target lib/librte_jobstats.so.23.0 00:02:55.491 [346/738] Compiling C object lib/librte_latencystats.a.p/latencystats_rte_latencystats.c.o 00:02:55.491 [347/738] Linking static target lib/librte_latencystats.a 00:02:55.491 [348/738] Compiling C object lib/librte_member.a.p/member_rte_member.c.o 00:02:55.491 [349/738] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:55.491 [350/738] Linking target lib/librte_cryptodev.so.23.0 00:02:55.491 [351/738] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm.c.o 00:02:55.491 [352/738] Generating lib/ip_frag.sym_chk with a custom command (wrapped by meson to capture output) 00:02:55.491 [353/738] Generating lib/rte_member_def with a custom command 00:02:55.491 [354/738] Linking target lib/librte_ip_frag.so.23.0 00:02:55.491 [355/738] Generating lib/rte_member_mingw with a custom command 00:02:55.491 [356/738] Generating symbol file lib/librte_cryptodev.so.23.0.p/librte_cryptodev.so.23.0.symbols 00:02:55.491 [357/738] Generating lib/latencystats.sym_chk with a custom command (wrapped by meson to capture output) 00:02:55.491 [358/738] Generating lib/rte_pcapng_def with a custom command 00:02:55.491 [359/738] Linking target lib/librte_latencystats.so.23.0 00:02:55.491 [360/738] Generating lib/rte_pcapng_mingw with a custom command 00:02:55.749 [361/738] Generating symbol file lib/librte_ip_frag.so.23.0.p/librte_ip_frag.so.23.0.symbols 00:02:55.749 [362/738] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:55.749 [363/738] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:55.749 [364/738] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm6.c.o 00:02:55.749 [365/738] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:55.749 [366/738] Linking static target lib/librte_lpm.a 00:02:55.749 [367/738] Compiling C object lib/librte_member.a.p/member_rte_member_sketch_avx512.c.o 00:02:55.749 [368/738] Compiling C object lib/librte_member.a.p/member_rte_member_ht.c.o 00:02:56.007 [369/738] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:56.007 [370/738] Compiling C object lib/librte_member.a.p/member_rte_member_vbf.c.o 00:02:56.007 [371/738] Generating lib/lpm.sym_chk with a custom command (wrapped by meson to capture output) 00:02:56.007 [372/738] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:56.007 [373/738] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:56.007 [374/738] Compiling C object lib/librte_power.a.p/power_rte_power_empty_poll.c.o 00:02:56.007 
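The "Generating symbol file lib/librte_X.so.23.0.p/...symbols" steps are Meson recording each shared library's exported interface so that dependents are relinked only when that interface actually changes; the "Generating lib/X.sym_chk" steps come from DPDK's own build tooling, which cross-checks exported symbols against each library's version map. A hand-rolled approximation of what such a symbol listing captures (Meson uses its own extractor; nm is shown purely as an illustration):

    # List the dynamic, defined symbols of a built shared library.
    nm --dynamic --defined-only lib/librte_lpm.so.23.0 | sort > librte_lpm.symbols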
[375/738] Linking target lib/librte_lpm.so.23.0 00:02:56.007 [376/738] Generating lib/rte_power_def with a custom command 00:02:56.007 [377/738] Generating lib/rte_power_mingw with a custom command 00:02:56.007 [378/738] Generating lib/rte_rawdev_def with a custom command 00:02:56.007 [379/738] Generating lib/rte_rawdev_mingw with a custom command 00:02:56.007 [380/738] Compiling C object lib/librte_pcapng.a.p/pcapng_rte_pcapng.c.o 00:02:56.312 [381/738] Generating lib/rte_regexdev_def with a custom command 00:02:56.312 [382/738] Linking static target lib/librte_pcapng.a 00:02:56.312 [383/738] Generating lib/rte_regexdev_mingw with a custom command 00:02:56.312 [384/738] Generating symbol file lib/librte_lpm.so.23.0.p/librte_lpm.so.23.0.symbols 00:02:56.312 [385/738] Generating lib/rte_dmadev_def with a custom command 00:02:56.312 [386/738] Generating lib/eventdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:56.312 [387/738] Generating lib/rte_dmadev_mingw with a custom command 00:02:56.312 [388/738] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:56.312 [389/738] Compiling C object lib/librte_power.a.p/power_rte_power_intel_uncore.c.o 00:02:56.312 [390/738] Linking target lib/librte_eventdev.so.23.0 00:02:56.312 [391/738] Generating lib/rte_rib_def with a custom command 00:02:56.312 [392/738] Generating lib/rte_rib_mingw with a custom command 00:02:56.312 [393/738] Generating lib/pcapng.sym_chk with a custom command (wrapped by meson to capture output) 00:02:56.312 [394/738] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:56.312 [395/738] Generating symbol file lib/librte_eventdev.so.23.0.p/librte_eventdev.so.23.0.symbols 00:02:56.312 [396/738] Linking static target lib/librte_power.a 00:02:56.312 [397/738] Compiling C object lib/librte_rawdev.a.p/rawdev_rte_rawdev.c.o 00:02:56.312 [398/738] Generating lib/rte_reorder_def with a custom command 00:02:56.312 [399/738] Linking static target lib/librte_rawdev.a 00:02:56.312 [400/738] Linking target lib/librte_pcapng.so.23.0 00:02:56.312 [401/738] Generating lib/rte_reorder_mingw with a custom command 00:02:56.570 [402/738] Compiling C object lib/librte_regexdev.a.p/regexdev_rte_regexdev.c.o 00:02:56.570 [403/738] Linking static target lib/librte_regexdev.a 00:02:56.570 [404/738] Generating symbol file lib/librte_pcapng.so.23.0.p/librte_pcapng.so.23.0.symbols 00:02:56.570 [405/738] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:56.570 [406/738] Linking static target lib/librte_dmadev.a 00:02:56.570 [407/738] Compiling C object lib/librte_sched.a.p/sched_rte_red.c.o 00:02:56.570 [408/738] Compiling C object lib/librte_member.a.p/member_rte_member_sketch.c.o 00:02:56.571 [409/738] Linking static target lib/librte_member.a 00:02:56.829 [410/738] Compiling C object lib/librte_rib.a.p/rib_rte_rib.c.o 00:02:56.829 [411/738] Compiling C object lib/librte_sched.a.p/sched_rte_approx.c.o 00:02:56.829 [412/738] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:56.829 [413/738] Linking static target lib/librte_reorder.a 00:02:56.829 [414/738] Generating lib/rte_sched_def with a custom command 00:02:56.829 [415/738] Generating lib/rte_sched_mingw with a custom command 00:02:56.829 [416/738] Compiling C object lib/librte_sched.a.p/sched_rte_pie.c.o 00:02:56.829 [417/738] Generating lib/rawdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:56.829 [418/738] Generating lib/rte_security_def with a custom 
command 00:02:56.829 [419/738] Linking target lib/librte_rawdev.so.23.0 00:02:56.829 [420/738] Generating lib/rte_security_mingw with a custom command 00:02:56.829 [421/738] Compiling C object lib/librte_stack.a.p/stack_rte_stack_std.c.o 00:02:56.829 [422/738] Generating lib/member.sym_chk with a custom command (wrapped by meson to capture output) 00:02:56.829 [423/738] Compiling C object lib/librte_stack.a.p/stack_rte_stack.c.o 00:02:56.829 [424/738] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:56.829 [425/738] Generating lib/rte_stack_def with a custom command 00:02:56.829 [426/738] Compiling C object lib/librte_stack.a.p/stack_rte_stack_lf.c.o 00:02:56.829 [427/738] Linking target lib/librte_member.so.23.0 00:02:56.829 [428/738] Linking static target lib/librte_stack.a 00:02:56.829 [429/738] Linking target lib/librte_reorder.so.23.0 00:02:56.829 [430/738] Generating lib/rte_stack_mingw with a custom command 00:02:57.086 [431/738] Compiling C object lib/librte_rib.a.p/rib_rte_rib6.c.o 00:02:57.086 [432/738] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:57.086 [433/738] Linking static target lib/librte_rib.a 00:02:57.086 [434/738] Linking target lib/librte_dmadev.so.23.0 00:02:57.086 [435/738] Generating lib/regexdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:57.086 [436/738] Linking target lib/librte_regexdev.so.23.0 00:02:57.086 [437/738] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:57.086 [438/738] Generating lib/stack.sym_chk with a custom command (wrapped by meson to capture output) 00:02:57.086 [439/738] Generating symbol file lib/librte_dmadev.so.23.0.p/librte_dmadev.so.23.0.symbols 00:02:57.086 [440/738] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:57.086 [441/738] Linking target lib/librte_stack.so.23.0 00:02:57.086 [442/738] Linking target lib/librte_power.so.23.0 00:02:57.086 [443/738] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:57.086 [444/738] Linking static target lib/librte_security.a 00:02:57.342 [445/738] Generating lib/rib.sym_chk with a custom command (wrapped by meson to capture output) 00:02:57.342 [446/738] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:57.342 [447/738] Linking target lib/librte_rib.so.23.0 00:02:57.342 [448/738] Generating lib/rte_vhost_def with a custom command 00:02:57.342 [449/738] Generating lib/rte_vhost_mingw with a custom command 00:02:57.342 [450/738] Generating symbol file lib/librte_rib.so.23.0.p/librte_rib.so.23.0.symbols 00:02:57.342 [451/738] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:57.599 [452/738] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:57.599 [453/738] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:57.599 [454/738] Linking target lib/librte_security.so.23.0 00:02:57.599 [455/738] Generating symbol file lib/librte_security.so.23.0.p/librte_security.so.23.0.symbols 00:02:57.857 [456/738] Compiling C object lib/librte_sched.a.p/sched_rte_sched.c.o 00:02:57.857 [457/738] Linking static target lib/librte_sched.a 00:02:57.857 [458/738] Compiling C object lib/librte_ipsec.a.p/ipsec_ses.c.o 00:02:57.857 [459/738] Compiling C object lib/librte_ipsec.a.p/ipsec_sa.c.o 00:02:57.857 [460/738] Generating lib/rte_ipsec_def with a custom command 00:02:57.857 [461/738] Generating lib/rte_ipsec_mingw with a custom command 
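Once the build completes and the tree is installed under the prefix shown in the user options, a downstream build (such as the SPDK build this job feeds) would typically locate these libraries through pkg-config. A hypothetical consumer compile, assuming the libdpdk.pc file DPDK installs under <prefix>/lib/pkgconfig (app.c is a placeholder):

    # Hypothetical downstream build against the installed DPDK.
    export PKG_CONFIG_PATH=/home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig
    gcc app.c $(pkg-config --cflags --libs libdpdk) -o app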
00:02:58.114 [462/738] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:58.114 [463/738] Compiling C object lib/librte_fib.a.p/fib_rte_fib.c.o 00:02:58.114 [464/738] Generating lib/sched.sym_chk with a custom command (wrapped by meson to capture output) 00:02:58.114 [465/738] Linking target lib/librte_sched.so.23.0 00:02:58.114 [466/738] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_telemetry.c.o 00:02:58.114 [467/738] Generating symbol file lib/librte_sched.so.23.0.p/librte_sched.so.23.0.symbols 00:02:58.371 [468/738] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:58.371 [469/738] Compiling C object lib/librte_fib.a.p/fib_rte_fib6.c.o 00:02:58.371 [470/738] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_sad.c.o 00:02:58.371 [471/738] Generating lib/rte_fib_def with a custom command 00:02:58.371 [472/738] Generating lib/rte_fib_mingw with a custom command 00:02:58.371 [473/738] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_outb.c.o 00:02:58.670 [474/738] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_inb.c.o 00:02:58.670 [475/738] Linking static target lib/librte_ipsec.a 00:02:58.670 [476/738] Compiling C object lib/librte_fib.a.p/fib_dir24_8_avx512.c.o 00:02:58.670 [477/738] Compiling C object lib/librte_fib.a.p/fib_trie_avx512.c.o 00:02:58.670 [478/738] Generating lib/ipsec.sym_chk with a custom command (wrapped by meson to capture output) 00:02:58.670 [479/738] Linking target lib/librte_ipsec.so.23.0 00:02:58.927 [480/738] Compiling C object lib/librte_port.a.p/port_rte_port_ethdev.c.o 00:02:58.927 [481/738] Compiling C object lib/librte_port.a.p/port_rte_port_fd.c.o 00:02:58.927 [482/738] Compiling C object lib/librte_fib.a.p/fib_dir24_8.c.o 00:02:58.927 [483/738] Compiling C object lib/librte_port.a.p/port_rte_port_frag.c.o 00:02:58.927 [484/738] Compiling C object lib/librte_fib.a.p/fib_trie.c.o 00:02:58.927 [485/738] Linking static target lib/librte_fib.a 00:02:58.927 [486/738] Compiling C object lib/librte_port.a.p/port_rte_port_ras.c.o 00:02:59.184 [487/738] Compiling C object lib/librte_port.a.p/port_rte_port_sched.c.o 00:02:59.184 [488/738] Generating lib/fib.sym_chk with a custom command (wrapped by meson to capture output) 00:02:59.185 [489/738] Linking target lib/librte_fib.so.23.0 00:02:59.442 [490/738] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ethdev.c.o 00:02:59.442 [491/738] Generating lib/rte_port_def with a custom command 00:02:59.442 [492/738] Compiling C object lib/librte_port.a.p/port_rte_port_sym_crypto.c.o 00:02:59.442 [493/738] Generating lib/rte_port_mingw with a custom command 00:02:59.442 [494/738] Generating lib/rte_pdump_def with a custom command 00:02:59.442 [495/738] Compiling C object lib/librte_port.a.p/port_rte_port_source_sink.c.o 00:02:59.442 [496/738] Generating lib/rte_pdump_mingw with a custom command 00:02:59.442 [497/738] Compiling C object lib/librte_port.a.p/port_rte_port_eventdev.c.o 00:02:59.442 [498/738] Compiling C object lib/librte_port.a.p/port_rte_swx_port_fd.c.o 00:02:59.442 [499/738] Compiling C object lib/librte_table.a.p/table_rte_swx_keycmp.c.o 00:02:59.700 [500/738] Compiling C object lib/librte_table.a.p/table_rte_swx_table_em.c.o 00:02:59.700 [501/738] Compiling C object lib/librte_port.a.p/port_rte_swx_port_source_sink.c.o 00:02:59.700 [502/738] Compiling C object lib/librte_table.a.p/table_rte_swx_table_learner.c.o 00:02:59.700 [503/738] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ring.c.o 00:02:59.957 [504/738] Compiling C object 
lib/librte_table.a.p/table_rte_swx_table_wm.c.o 00:02:59.957 [505/738] Compiling C object lib/librte_table.a.p/table_rte_swx_table_selector.c.o 00:02:59.957 [506/738] Compiling C object lib/librte_table.a.p/table_rte_table_array.c.o 00:02:59.957 [507/738] Compiling C object lib/librte_table.a.p/table_rte_table_hash_cuckoo.c.o 00:02:59.957 [508/738] Compiling C object lib/librte_port.a.p/port_rte_port_ring.c.o 00:02:59.957 [509/738] Linking static target lib/librte_port.a 00:02:59.957 [510/738] Compiling C object lib/librte_table.a.p/table_rte_table_acl.c.o 00:03:00.215 [511/738] Compiling C object lib/librte_pdump.a.p/pdump_rte_pdump.c.o 00:03:00.215 [512/738] Linking static target lib/librte_pdump.a 00:03:00.215 [513/738] Generating lib/pdump.sym_chk with a custom command (wrapped by meson to capture output) 00:03:00.215 [514/738] Linking target lib/librte_pdump.so.23.0 00:03:00.473 [515/738] Compiling C object lib/librte_table.a.p/table_rte_table_lpm.c.o 00:03:00.473 [516/738] Generating lib/port.sym_chk with a custom command (wrapped by meson to capture output) 00:03:00.473 [517/738] Compiling C object lib/librte_table.a.p/table_rte_table_hash_ext.c.o 00:03:00.473 [518/738] Linking target lib/librte_port.so.23.0 00:03:00.473 [519/738] Generating lib/rte_table_def with a custom command 00:03:00.473 [520/738] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key8.c.o 00:03:00.473 [521/738] Generating lib/rte_table_mingw with a custom command 00:03:00.473 [522/738] Generating symbol file lib/librte_port.so.23.0.p/librte_port.so.23.0.symbols 00:03:00.473 [523/738] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key16.c.o 00:03:00.731 [524/738] Compiling C object lib/librte_table.a.p/table_rte_table_lpm_ipv6.c.o 00:03:00.731 [525/738] Compiling C object lib/librte_table.a.p/table_rte_table_stub.c.o 00:03:00.731 [526/738] Compiling C object lib/librte_table.a.p/table_rte_table_hash_lru.c.o 00:03:00.731 [527/738] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key32.c.o 00:03:00.731 [528/738] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_port_in_action.c.o 00:03:00.731 [529/738] Linking static target lib/librte_table.a 00:03:00.731 [530/738] Generating lib/rte_pipeline_def with a custom command 00:03:00.731 [531/738] Generating lib/rte_pipeline_mingw with a custom command 00:03:00.989 [532/738] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_pipeline.c.o 00:03:00.989 [533/738] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:03:01.247 [534/738] Compiling C object lib/librte_graph.a.p/graph_node.c.o 00:03:01.247 [535/738] Generating lib/table.sym_chk with a custom command (wrapped by meson to capture output) 00:03:01.247 [536/738] Compiling C object lib/librte_graph.a.p/graph_graph.c.o 00:03:01.247 [537/738] Linking target lib/librte_table.so.23.0 00:03:01.247 [538/738] Generating symbol file lib/librte_table.so.23.0.p/librte_table.so.23.0.symbols 00:03:01.247 [539/738] Compiling C object lib/librte_graph.a.p/graph_graph_debug.c.o 00:03:01.247 [540/738] Compiling C object lib/librte_graph.a.p/graph_graph_ops.c.o 00:03:01.247 [541/738] Generating lib/rte_graph_def with a custom command 00:03:01.247 [542/738] Generating lib/rte_graph_mingw with a custom command 00:03:01.505 [543/738] Compiling C object lib/librte_graph.a.p/graph_graph_populate.c.o 00:03:01.505 [544/738] Compiling C object lib/librte_graph.a.p/graph_graph_stats.c.o 00:03:01.505 [545/738] Linking static target lib/librte_graph.a 00:03:01.505 [546/738] Compiling 
C object lib/librte_pipeline.a.p/pipeline_rte_swx_ctl.c.o 00:03:01.505 [547/738] Compiling C object lib/librte_node.a.p/node_ethdev_ctrl.c.o 00:03:01.505 [548/738] Compiling C object lib/librte_node.a.p/node_ethdev_tx.c.o 00:03:01.764 [549/738] Compiling C object lib/librte_node.a.p/node_null.c.o 00:03:01.764 [550/738] Compiling C object lib/librte_node.a.p/node_ethdev_rx.c.o 00:03:01.764 [551/738] Compiling C object lib/librte_node.a.p/node_log.c.o 00:03:01.764 [552/738] Generating lib/rte_node_def with a custom command 00:03:01.764 [553/738] Generating lib/rte_node_mingw with a custom command 00:03:02.021 [554/738] Compiling C object lib/librte_node.a.p/node_pkt_drop.c.o 00:03:02.021 [555/738] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline_spec.c.o 00:03:02.021 [556/738] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:03:02.021 [557/738] Generating lib/graph.sym_chk with a custom command (wrapped by meson to capture output) 00:03:02.021 [558/738] Linking target lib/librte_graph.so.23.0 00:03:02.021 [559/738] Compiling C object lib/librte_node.a.p/node_ip4_lookup.c.o 00:03:02.021 [560/738] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:03:02.021 [561/738] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:03:02.021 [562/738] Generating drivers/rte_bus_pci_def with a custom command 00:03:02.021 [563/738] Generating drivers/rte_bus_pci_mingw with a custom command 00:03:02.021 [564/738] Generating symbol file lib/librte_graph.so.23.0.p/librte_graph.so.23.0.symbols 00:03:02.021 [565/738] Compiling C object lib/librte_node.a.p/node_pkt_cls.c.o 00:03:02.280 [566/738] Generating drivers/rte_bus_vdev_def with a custom command 00:03:02.280 [567/738] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:03:02.280 [568/738] Generating drivers/rte_bus_vdev_mingw with a custom command 00:03:02.280 [569/738] Generating drivers/rte_mempool_ring_def with a custom command 00:03:02.280 [570/738] Generating drivers/rte_mempool_ring_mingw with a custom command 00:03:02.280 [571/738] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:03:02.280 [572/738] Compiling C object lib/librte_node.a.p/node_ip4_rewrite.c.o 00:03:02.280 [573/738] Linking static target lib/librte_node.a 00:03:02.280 [574/738] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:03:02.280 [575/738] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:03:02.281 [576/738] Linking static target drivers/libtmp_rte_bus_vdev.a 00:03:02.281 [577/738] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:03:02.281 [578/738] Linking static target drivers/libtmp_rte_bus_pci.a 00:03:02.539 [579/738] Generating lib/node.sym_chk with a custom command (wrapped by meson to capture output) 00:03:02.539 [580/738] Linking target lib/librte_node.so.23.0 00:03:02.539 [581/738] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:03:02.539 [582/738] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:03:02.539 [583/738] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:02.539 [584/738] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:02.539 [585/738] Compiling C object drivers/librte_bus_pci.so.23.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:02.539 [586/738] Linking static target drivers/librte_bus_pci.a 00:03:02.539 [587/738] 
Linking static target drivers/librte_bus_vdev.a 00:03:02.539 [588/738] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:02.797 [589/738] Compiling C object drivers/librte_bus_vdev.so.23.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:02.797 [590/738] Linking target drivers/librte_bus_vdev.so.23.0 00:03:02.797 [591/738] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_diag.c.o 00:03:02.797 [592/738] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:03:02.797 [593/738] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_dcb.c.o 00:03:02.797 [594/738] Linking target drivers/librte_bus_pci.so.23.0 00:03:02.797 [595/738] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_adminq.c.o 00:03:02.797 [596/738] Generating symbol file drivers/librte_bus_vdev.so.23.0.p/librte_bus_vdev.so.23.0.symbols 00:03:02.797 [597/738] Generating symbol file drivers/librte_bus_pci.so.23.0.p/librte_bus_pci.so.23.0.symbols 00:03:03.056 [598/738] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:03:03.056 [599/738] Linking static target drivers/libtmp_rte_mempool_ring.a 00:03:03.056 [600/738] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_hmc.c.o 00:03:03.056 [601/738] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:03:03.056 [602/738] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:03.056 [603/738] Linking static target drivers/librte_mempool_ring.a 00:03:03.056 [604/738] Compiling C object drivers/librte_mempool_ring.so.23.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:03.056 [605/738] Linking target drivers/librte_mempool_ring.so.23.0 00:03:03.314 [606/738] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_lan_hmc.c.o 00:03:03.314 [607/738] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_nvm.c.o 00:03:03.574 [608/738] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_common.c.o 00:03:03.574 [609/738] Linking static target drivers/net/i40e/base/libi40e_base.a 00:03:03.834 [610/738] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_pf.c.o 00:03:03.834 [611/738] Compiling C object drivers/net/i40e/libi40e_avx512_lib.a.p/i40e_rxtx_vec_avx512.c.o 00:03:03.834 [612/738] Linking static target drivers/net/i40e/libi40e_avx512_lib.a 00:03:04.093 [613/738] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_tm.c.o 00:03:04.354 [614/738] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_hash.c.o 00:03:04.354 [615/738] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_flow.c.o 00:03:04.354 [616/738] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_vf_representor.c.o 00:03:04.354 [617/738] Generating drivers/rte_net_i40e_def with a custom command 00:03:04.354 [618/738] Generating drivers/rte_net_i40e_mingw with a custom command 00:03:04.354 [619/738] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_fdir.c.o 00:03:04.974 [620/738] Compiling C object app/dpdk-dumpcap.p/dumpcap_main.c.o 00:03:05.233 [621/738] Compiling C object app/dpdk-pdump.p/pdump_main.c.o 00:03:05.233 [622/738] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_rte_pmd_i40e.c.o 00:03:05.233 [623/738] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_sse.c.o 00:03:05.233 [624/738] Compiling C object 
app/dpdk-test-bbdev.p/test-bbdev_main.c.o 00:03:05.233 [625/738] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx.c.o 00:03:05.490 [626/738] Compiling C object app/dpdk-proc-info.p/proc-info_main.c.o 00:03:05.490 [627/738] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_commands.c.o 00:03:05.490 [628/738] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_avx2.c.o 00:03:05.491 [629/738] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_cmdline_test.c.o 00:03:05.491 [630/738] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline.c.o 00:03:05.748 [631/738] Compiling C object app/dpdk-test-acl.p/test-acl_main.c.o 00:03:05.748 [632/738] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_ethdev.c.o 00:03:06.007 [633/738] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_options_parse.c.o 00:03:06.007 [634/738] Linking static target drivers/libtmp_rte_net_i40e.a 00:03:06.007 [635/738] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_vector.c.o 00:03:06.007 [636/738] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev.c.o 00:03:06.007 [637/738] Generating drivers/rte_net_i40e.pmd.c with a custom command 00:03:06.007 [638/738] Compiling C object drivers/librte_net_i40e.a.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:03:06.265 [639/738] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_throughput.c.o 00:03:06.265 [640/738] Linking static target drivers/librte_net_i40e.a 00:03:06.265 [641/738] Compiling C object drivers/librte_net_i40e.so.23.0.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:03:06.265 [642/738] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_common.c.o 00:03:06.265 [643/738] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_main.c.o 00:03:06.523 [644/738] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_verify.c.o 00:03:06.523 [645/738] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_cyclecount.c.o 00:03:06.523 [646/738] Generating drivers/rte_net_i40e.sym_chk with a custom command (wrapped by meson to capture output) 00:03:06.523 [647/738] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_ops.c.o 00:03:06.523 [648/738] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_common.c.o 00:03:06.523 [649/738] Linking target drivers/librte_net_i40e.so.23.0 00:03:06.782 [650/738] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_options_parsing.c.o 00:03:06.782 [651/738] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vectors.c.o 00:03:07.040 [652/738] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vector_parsing.c.o 00:03:07.040 [653/738] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_latency.c.o 00:03:07.040 [654/738] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_test.c.o 00:03:07.040 [655/738] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_throughput.c.o 00:03:07.040 [656/738] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_pmd_cyclecount.c.o 00:03:07.040 [657/738] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_main.c.o 00:03:07.040 [658/738] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_verify.c.o 00:03:07.040 [659/738] Compiling C object 
app/dpdk-test-eventdev.p/test-eventdev_parser.c.o 00:03:07.299 [660/738] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_main.c.o 00:03:07.299 [661/738] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_options.c.o 00:03:07.557 [662/738] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_common.c.o 00:03:07.557 [663/738] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_atq.c.o 00:03:07.557 [664/738] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:03:07.557 [665/738] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_queue.c.o 00:03:07.557 [666/738] Linking static target lib/librte_vhost.a 00:03:07.815 [667/738] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_atq.c.o 00:03:07.815 [668/738] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_queue.c.o 00:03:08.072 [669/738] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_atq.c.o 00:03:08.072 [670/738] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_common.c.o 00:03:08.072 [671/738] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_flow_gen.c.o 00:03:08.329 [672/738] Compiling C object app/dpdk-test-fib.p/test-fib_main.c.o 00:03:08.329 [673/738] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_queue.c.o 00:03:08.329 [674/738] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_actions_gen.c.o 00:03:08.329 [675/738] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:03:08.329 [676/738] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_items_gen.c.o 00:03:08.329 [677/738] Linking target lib/librte_vhost.so.23.0 00:03:08.329 [678/738] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_common.c.o 00:03:08.587 [679/738] Compiling C object app/dpdk-test-gpudev.p/test-gpudev_main.c.o 00:03:08.587 [680/738] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_config.c.o 00:03:08.587 [681/738] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_main.c.o 00:03:08.587 [682/738] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_init.c.o 00:03:08.845 [683/738] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_acl.c.o 00:03:08.845 [684/738] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm.c.o 00:03:08.845 [685/738] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm_ipv6.c.o 00:03:08.845 [686/738] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_hash.c.o 00:03:08.845 [687/738] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_stub.c.o 00:03:09.103 [688/738] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_main.c.o 00:03:09.103 [689/738] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_runtime.c.o 00:03:09.103 [690/738] Compiling C object app/dpdk-testpmd.p/test-pmd_cmd_flex_item.c.o 00:03:09.360 [691/738] Compiling C object app/dpdk-testpmd.p/test-pmd_5tswap.c.o 00:03:09.360 [692/738] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_mtr.c.o 00:03:09.360 [693/738] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_tm.c.o 00:03:09.360 [694/738] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_perf.c.o 00:03:09.618 [695/738] Compiling C object app/dpdk-testpmd.p/test-pmd_flowgen.c.o 00:03:09.618 [696/738] Compiling C object app/dpdk-testpmd.p/test-pmd_ieee1588fwd.c.o 00:03:09.877 [697/738] Compiling C object 
app/dpdk-testpmd.p/test-pmd_iofwd.c.o 00:03:09.877 [698/738] Compiling C object app/dpdk-testpmd.p/test-pmd_macfwd.c.o 00:03:09.877 [699/738] Compiling C object app/dpdk-testpmd.p/test-pmd_icmpecho.c.o 00:03:10.135 [700/738] Compiling C object app/dpdk-testpmd.p/test-pmd_macswap.c.o 00:03:10.135 [701/738] Compiling C object app/dpdk-testpmd.p/test-pmd_csumonly.c.o 00:03:10.394 [702/738] Compiling C object app/dpdk-testpmd.p/test-pmd_parameters.c.o 00:03:10.394 [703/738] Compiling C object app/dpdk-testpmd.p/test-pmd_rxonly.c.o 00:03:10.394 [704/738] Compiling C object app/dpdk-testpmd.p/test-pmd_shared_rxq_fwd.c.o 00:03:10.394 [705/738] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_table_action.c.o 00:03:10.653 [706/738] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline.c.o 00:03:10.653 [707/738] Linking static target lib/librte_pipeline.a 00:03:10.912 [708/738] Compiling C object app/dpdk-testpmd.p/test-pmd_bpf_cmd.c.o 00:03:10.912 [709/738] Compiling C object app/dpdk-testpmd.p/test-pmd_util.c.o 00:03:10.912 [710/738] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_flow.c.o 00:03:10.912 [711/738] Compiling C object app/dpdk-testpmd.p/test-pmd_config.c.o 00:03:10.912 [712/738] Compiling C object app/dpdk-testpmd.p/test-pmd_txonly.c.o 00:03:10.912 [713/738] Compiling C object app/dpdk-testpmd.p/.._drivers_net_i40e_i40e_testpmd.c.o 00:03:11.171 [714/738] Linking target app/dpdk-pdump 00:03:11.171 [715/738] Linking target app/dpdk-dumpcap 00:03:11.171 [716/738] Compiling C object app/dpdk-test-regex.p/test-regex_main.c.o 00:03:11.171 [717/738] Linking target app/dpdk-proc-info 00:03:11.171 [718/738] Linking target app/dpdk-test-acl 00:03:11.171 [719/738] Linking target app/dpdk-test-bbdev 00:03:11.171 [720/738] Linking target app/dpdk-test-cmdline 00:03:11.429 [721/738] Compiling C object app/dpdk-testpmd.p/test-pmd_noisy_vnf.c.o 00:03:11.429 [722/738] Linking target app/dpdk-test-compress-perf 00:03:11.429 [723/738] Compiling C object app/dpdk-testpmd.p/test-pmd_testpmd.c.o 00:03:11.429 [724/738] Linking target app/dpdk-test-crypto-perf 00:03:11.429 [725/738] Linking target app/dpdk-test-eventdev 00:03:11.429 [726/738] Linking target app/dpdk-test-fib 00:03:11.429 [727/738] Linking target app/dpdk-test-gpudev 00:03:11.686 [728/738] Linking target app/dpdk-test-regex 00:03:11.686 [729/738] Linking target app/dpdk-test-pipeline 00:03:11.686 [730/738] Linking target app/dpdk-test-flow-perf 00:03:11.686 [731/738] Linking target app/dpdk-testpmd 00:03:11.686 [732/738] Compiling C object app/dpdk-test-sad.p/test-sad_main.c.o 00:03:11.686 [733/738] Compiling C object app/dpdk-test-security-perf.p/test-security-perf_test_security_perf.c.o 00:03:11.942 [734/738] Linking target app/dpdk-test-sad 00:03:12.200 [735/738] Compiling C object app/dpdk-test-security-perf.p/test_test_cryptodev_security_ipsec.c.o 00:03:12.456 [736/738] Linking target app/dpdk-test-security-perf 00:03:13.390 [737/738] Generating lib/pipeline.sym_chk with a custom command (wrapped by meson to capture output) 00:03:13.390 [738/738] Linking target lib/librte_pipeline.so.23.0 00:03:13.390 18:53:30 build_native_dpdk -- common/autobuild_common.sh@201 -- $ uname -s 00:03:13.390 18:53:30 build_native_dpdk -- common/autobuild_common.sh@201 -- $ [[ Linux == \F\r\e\e\B\S\D ]] 00:03:13.390 18:53:30 build_native_dpdk -- common/autobuild_common.sh@214 -- $ ninja -C /home/vagrant/spdk_repo/dpdk/build-tmp -j10 install 00:03:13.390 ninja: Entering directory `/home/vagrant/spdk_repo/dpdk/build-tmp' 00:03:13.390 [0/1] 
Installing files. 00:03:13.649 Installing subdir /home/vagrant/spdk_repo/dpdk/examples to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples 00:03:13.649 Installing /home/vagrant/spdk_repo/dpdk/examples/bbdev_app/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bbdev_app 00:03:13.649 Installing /home/vagrant/spdk_repo/dpdk/examples/bbdev_app/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bbdev_app 00:03:13.649 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:03:13.649 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:03:13.649 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:03:13.649 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/README to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:13.649 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/dummy.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:13.649 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t1.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:13.649 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t2.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:13.649 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t3.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:13.649 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:13.649 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:13.649 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:13.649 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:13.649 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/parse_obj_list.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:13.649 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/parse_obj_list.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:13.649 Installing /home/vagrant/spdk_repo/dpdk/examples/common/pkt_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common 00:03:13.649 Installing /home/vagrant/spdk_repo/dpdk/examples/common/altivec/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/altivec 00:03:13.649 Installing /home/vagrant/spdk_repo/dpdk/examples/common/neon/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/neon 00:03:13.649 Installing /home/vagrant/spdk_repo/dpdk/examples/common/sse/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/sse 00:03:13.649 Installing /home/vagrant/spdk_repo/dpdk/examples/distributor/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/distributor 00:03:13.649 Installing /home/vagrant/spdk_repo/dpdk/examples/distributor/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/distributor 00:03:13.649 Installing /home/vagrant/spdk_repo/dpdk/examples/dma/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/dma 00:03:13.649 Installing /home/vagrant/spdk_repo/dpdk/examples/dma/dmafwd.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/dma 00:03:13.649 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool 00:03:13.649 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:13.649 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/ethapp.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:13.649 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/ethapp.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:13.649 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:13.649 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:03:13.649 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/rte_ethtool.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:03:13.649 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/rte_ethtool.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:03:13.649 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:13.649 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:13.649 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:13.649 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_worker_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:13.649 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_worker_tx.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:13.649 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:13.649 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_dev_self_test.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:13.649 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_dev_self_test.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:13.649 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:13.649 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:13.649 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_aes.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:13.649 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_ccm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:13.649 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_cmac.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:13.649 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_ecdsa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:13.649 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_gcm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:13.649 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_hmac.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:13.649 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_rsa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:13.649 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_sha.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:13.649 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_tdes.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:13.649 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_xts.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:13.649 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:13.649 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_classify/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_classify 00:03:13.649 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_classify/flow_classify.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_classify 00:03:13.649 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_classify/ipv4_rules_file.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_classify 00:03:13.649 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:03:13.649 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/flow_blocks.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:03:13.649 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:03:13.649 Installing /home/vagrant/spdk_repo/dpdk/examples/helloworld/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/helloworld 00:03:13.649 Installing /home/vagrant/spdk_repo/dpdk/examples/helloworld/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/helloworld 00:03:13.650 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_fragmentation/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_fragmentation 00:03:13.650 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_fragmentation/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_fragmentation 00:03:13.650 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:13.650 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/action.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:13.650 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/action.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:13.650 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:13.650 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:13.650 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:13.650 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/conn.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:13.650 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/conn.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:13.650 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cryptodev.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:13.650 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cryptodev.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:13.650 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/kni.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:13.650 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/kni.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:13.650 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/link.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:13.650 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/link.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:13.650 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:13.650 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/mempool.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:13.650 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/mempool.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:13.650 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/parser.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:13.650 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/parser.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:13.650 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/pipeline.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:13.650 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/pipeline.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:13.650 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/swq.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:13.650 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/swq.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:13.650 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tap.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:13.650 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tap.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:13.650 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:13.650 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/thread.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:13.650 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tmgr.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:13.650 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tmgr.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:13.650 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/firewall.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:13.650 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/flow.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:13.650 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/flow_crypto.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:13.650 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/kni.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:13.650 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/l2fwd.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:13.650 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/route.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:13.650 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/route_ecmp.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:13.650 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/rss.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:13.650 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/tap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:13.650 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_reassembly/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_reassembly 00:03:13.650 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_reassembly/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_reassembly 00:03:13.650 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:13.650 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ep0.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:13.650 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ep1.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:13.650 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/esp.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:13.650 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/esp.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:13.650 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/event_helper.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:13.650 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/event_helper.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:13.650 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/flow.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:13.650 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/flow.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:13.910 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipip.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:13.910 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec-secgw.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:13.910 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec-secgw.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:13.910 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:13.910 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:13.910 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:13.910 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:13.910 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_process.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:13.910 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_worker.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:13.910 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_worker.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:13.910 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/parser.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:13.910 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/parser.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:13.910 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/rt.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:13.910 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:13.910 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sad.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:13.910 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sad.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:13.910 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sp4.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:13.910 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sp6.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:13.910 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/bypass_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:13.910 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:13.910 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/common_defs_secgw.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:13.910 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/data_rxtx.sh to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:13.910 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/linux_test.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:13.910 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/load_env.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:13.910 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/pkttest.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:13.910 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/pkttest.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:13.910 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/run_test.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:13.910 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:13.910 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:13.910 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:13.910 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:13.910 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:13.910 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:13.910 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesgcm_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:13.910 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesgcm_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:13.911 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_ipv6opts.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:13.911 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:13.911 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:13.911 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:13.911 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:13.911 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:13.911 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_defs.sh to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:13.911 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesgcm_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:13.911 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesgcm_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:13.911 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_null_header_reconstruct.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:13.911 Installing /home/vagrant/spdk_repo/dpdk/examples/ipv4_multicast/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipv4_multicast 00:03:13.911 Installing /home/vagrant/spdk_repo/dpdk/examples/ipv4_multicast/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipv4_multicast 00:03:13.911 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:13.911 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/cat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:13.911 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/cat.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:13.911 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/l2fwd-cat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:13.911 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-crypto/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:03:13.911 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-crypto/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:03:13.911 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:13.911 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_common.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:13.911 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:13.911 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:13.911 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:13.911 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:13.911 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event_internal_port.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:13.911 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_poll.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:13.911 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_poll.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:13.911 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:13.911 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-jobstats/Makefile to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:03:13.911 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-jobstats/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:03:13.911 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:13.911 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:13.911 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/shm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:13.911 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/shm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:13.911 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/ka-agent/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:03:13.911 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/ka-agent/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:03:13.911 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd 00:03:13.911 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd 00:03:13.911 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-graph/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-graph 00:03:13.911 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-graph/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-graph 00:03:13.911 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:13.911 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:13.911 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:13.911 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/perf_core.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:13.911 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/perf_core.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:13.911 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:13.911 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_default_v4.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:13.911 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_default_v6.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:13.911 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_route_parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:13.911 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:13.911 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:13.911 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl.h to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:13.911 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl_scalar.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:13.911 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_altivec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:13.911 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:13.911 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:13.911 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:13.911 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:13.911 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:13.911 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:13.911 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_sequential.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:13.911 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:13.911 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:13.911 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:13.911 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event_internal_port.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:13.911 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_fib.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:13.911 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:13.911 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:13.912 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_altivec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:13.912 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:13.912 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:13.912 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:13.912 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_route.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:13.912 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:13.912 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_default_v4.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:13.912 Installing 
/home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_default_v6.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:13.912 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_route_parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:13.912 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:13.912 Installing /home/vagrant/spdk_repo/dpdk/examples/link_status_interrupt/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/link_status_interrupt 00:03:13.912 Installing /home/vagrant/spdk_repo/dpdk/examples/link_status_interrupt/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/link_status_interrupt 00:03:13.912 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process 00:03:13.912 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp 00:03:13.912 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_client/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:03:13.912 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_client/client.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:03:13.912 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:13.912 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:13.912 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/args.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:13.912 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:13.912 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/init.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:13.912 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:13.912 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/shared/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/shared 00:03:13.912 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:13.912 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:13.912 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:13.912 
Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:13.912 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:13.912 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:13.912 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/mp_commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:13.912 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/mp_commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:13.912 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/symmetric_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:03:13.912 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/symmetric_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:03:13.912 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb 00:03:13.912 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/ntb_fwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb 00:03:13.912 Installing /home/vagrant/spdk_repo/dpdk/examples/packet_ordering/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/packet_ordering 00:03:13.912 Installing /home/vagrant/spdk_repo/dpdk/examples/packet_ordering/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/packet_ordering 00:03:13.912 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:13.912 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:13.912 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:13.912 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/conn.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:13.912 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/conn.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:13.912 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:13.912 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/obj.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:13.912 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/obj.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:13.912 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:13.912 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/thread.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:13.912 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ethdev.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:13.912 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib.cli to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:13.912 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:13.912 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_nexthop_group_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:13.912 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_nexthop_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:13.912 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_routing_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:13.912 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/hash_func.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:13.912 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/hash_func.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:13.912 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:13.912 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:13.912 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:13.912 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:13.912 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:13.912 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:13.912 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/learner.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:13.912 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/learner.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:13.912 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/meter.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:13.912 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/meter.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:13.913 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/mirroring.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:13.913 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/mirroring.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:13.913 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/packet.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:13.913 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/pcap.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:13.913 Installing 
/home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/recirculation.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:13.913 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/recirculation.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:13.913 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/registers.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:13.913 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/registers.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:13.913 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:13.913 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:13.913 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:13.913 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/varbit.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:13.913 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/varbit.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:13.913 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:13.913 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:13.913 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:13.913 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_table.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:13.913 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:13.913 Installing /home/vagrant/spdk_repo/dpdk/examples/ptpclient/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ptpclient 00:03:13.913 Installing /home/vagrant/spdk_repo/dpdk/examples/ptpclient/ptpclient.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ptpclient 00:03:13.913 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:13.913 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:13.913 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:13.913 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/rte_policer.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:13.913 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/rte_policer.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:13.913 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/Makefile to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:13.913 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/app_thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:13.913 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:13.913 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cfg_file.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:13.913 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cfg_file.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:13.913 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cmdline.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:13.913 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:13.913 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:13.913 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:13.913 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:13.913 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_ov.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:13.913 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_pie.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:13.913 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_red.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:13.913 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/stats.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:13.913 Installing /home/vagrant/spdk_repo/dpdk/examples/rxtx_callbacks/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:03:13.913 Installing /home/vagrant/spdk_repo/dpdk/examples/rxtx_callbacks/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:03:13.913 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd 00:03:13.913 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/node/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/node 00:03:13.913 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/node/node.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/node 00:03:13.913 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server 00:03:13.913 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server 00:03:13.913 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/args.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server 00:03:13.913 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server 00:03:13.913 
Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/init.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server 00:03:13.913 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server 00:03:13.913 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/shared/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/shared 00:03:13.913 Installing /home/vagrant/spdk_repo/dpdk/examples/service_cores/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/service_cores 00:03:13.913 Installing /home/vagrant/spdk_repo/dpdk/examples/service_cores/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/service_cores 00:03:13.913 Installing /home/vagrant/spdk_repo/dpdk/examples/skeleton/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/skeleton 00:03:13.913 Installing /home/vagrant/spdk_repo/dpdk/examples/skeleton/basicfwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/skeleton 00:03:13.913 Installing /home/vagrant/spdk_repo/dpdk/examples/timer/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/timer 00:03:13.913 Installing /home/vagrant/spdk_repo/dpdk/examples/timer/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/timer 00:03:13.913 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:03:13.913 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:03:13.913 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/vdpa_blk_compact.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:03:13.913 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:03:13.913 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:03:13.913 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:03:13.913 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/virtio_net.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:03:13.913 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:13.913 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/blk.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:13.913 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/blk_spec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:13.914 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:13.914 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:13.914 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk_compat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:13.914 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_crypto/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_crypto 00:03:13.914 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_crypto/main.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_crypto 00:03:13.914 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:13.914 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_manager.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:13.914 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_manager.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:13.914 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_monitor.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:13.914 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_monitor.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:13.914 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:13.914 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:13.914 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor_nop.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:13.914 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor_x86.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:13.914 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:13.914 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/parse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:13.914 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/power_manager.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:13.914 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/power_manager.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:13.914 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/vm_power_cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:13.914 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/vm_power_cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:13.914 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:13.914 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:13.914 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:13.914 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/parse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:13.914 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:13.914 Installing 
/home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:13.914 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq 00:03:13.914 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq 00:03:13.914 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq_dcb/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq_dcb 00:03:13.914 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq_dcb/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq_dcb 00:03:13.914 Installing lib/librte_kvargs.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:13.914 Installing lib/librte_kvargs.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:13.914 Installing lib/librte_telemetry.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:13.914 Installing lib/librte_telemetry.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:13.914 Installing lib/librte_eal.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:13.914 Installing lib/librte_eal.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:13.914 Installing lib/librte_ring.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:13.914 Installing lib/librte_ring.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:13.914 Installing lib/librte_rcu.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:13.914 Installing lib/librte_rcu.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:13.914 Installing lib/librte_mempool.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:13.914 Installing lib/librte_mempool.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:13.914 Installing lib/librte_mbuf.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:13.914 Installing lib/librte_mbuf.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:13.914 Installing lib/librte_net.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:13.914 Installing lib/librte_net.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:13.914 Installing lib/librte_meter.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:13.914 Installing lib/librte_meter.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:13.914 Installing lib/librte_ethdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:13.914 Installing lib/librte_ethdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:13.914 Installing lib/librte_pci.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:13.914 Installing lib/librte_pci.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:13.914 Installing lib/librte_cmdline.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:13.914 Installing lib/librte_cmdline.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:13.914 Installing lib/librte_metrics.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:13.914 Installing lib/librte_metrics.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:13.914 Installing lib/librte_hash.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:13.914 Installing lib/librte_hash.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:13.914 Installing lib/librte_timer.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:13.914 Installing lib/librte_timer.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:13.914 Installing lib/librte_acl.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:13.914 Installing lib/librte_acl.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 
00:03:13.914 Installing lib/librte_bbdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:13.914 Installing lib/librte_bbdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:13.914 Installing lib/librte_bitratestats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:13.914 Installing lib/librte_bitratestats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:13.914 Installing lib/librte_bpf.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:13.914 Installing lib/librte_bpf.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:13.914 Installing lib/librte_cfgfile.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:13.914 Installing lib/librte_cfgfile.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:13.914 Installing lib/librte_compressdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:13.914 Installing lib/librte_compressdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:13.914 Installing lib/librte_cryptodev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:13.914 Installing lib/librte_cryptodev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:13.914 Installing lib/librte_distributor.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:13.914 Installing lib/librte_distributor.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:13.914 Installing lib/librte_efd.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:13.914 Installing lib/librte_efd.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:13.914 Installing lib/librte_eventdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:13.914 Installing lib/librte_eventdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:13.914 Installing lib/librte_gpudev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:13.914 Installing lib/librte_gpudev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:13.914 Installing lib/librte_gro.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:13.914 Installing lib/librte_gro.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:13.914 Installing lib/librte_gso.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:13.914 Installing lib/librte_gso.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:13.915 Installing lib/librte_ip_frag.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:13.915 Installing lib/librte_ip_frag.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:13.915 Installing lib/librte_jobstats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:13.915 Installing lib/librte_jobstats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:13.915 Installing lib/librte_latencystats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:13.915 Installing lib/librte_latencystats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:13.915 Installing lib/librte_lpm.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:13.915 Installing lib/librte_lpm.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:13.915 Installing lib/librte_member.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:13.915 Installing lib/librte_member.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:13.915 Installing lib/librte_pcapng.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:13.915 Installing lib/librte_pcapng.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:13.915 Installing lib/librte_power.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:13.915 Installing lib/librte_power.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:13.915 Installing lib/librte_rawdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:13.915 Installing lib/librte_rawdev.so.23.0 to 
/home/vagrant/spdk_repo/dpdk/build/lib 00:03:13.915 Installing lib/librte_regexdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:13.915 Installing lib/librte_regexdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:13.915 Installing lib/librte_dmadev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:13.915 Installing lib/librte_dmadev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:13.915 Installing lib/librte_rib.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:13.915 Installing lib/librte_rib.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:13.915 Installing lib/librte_reorder.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:13.915 Installing lib/librte_reorder.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:13.915 Installing lib/librte_sched.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:13.915 Installing lib/librte_sched.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:13.915 Installing lib/librte_security.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:13.915 Installing lib/librte_security.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:13.915 Installing lib/librte_stack.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:13.915 Installing lib/librte_stack.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:13.915 Installing lib/librte_vhost.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:13.915 Installing lib/librte_vhost.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:13.915 Installing lib/librte_ipsec.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:13.915 Installing lib/librte_ipsec.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:13.915 Installing lib/librte_fib.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:13.915 Installing lib/librte_fib.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:13.915 Installing lib/librte_port.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:13.915 Installing lib/librte_port.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:13.915 Installing lib/librte_pdump.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:13.915 Installing lib/librte_pdump.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:13.915 Installing lib/librte_table.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:13.915 Installing lib/librte_table.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:13.915 Installing lib/librte_pipeline.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:13.915 Installing lib/librte_pipeline.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:13.915 Installing lib/librte_graph.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:13.915 Installing lib/librte_graph.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:13.915 Installing lib/librte_node.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:13.915 Installing lib/librte_node.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:13.915 Installing drivers/librte_bus_pci.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:13.915 Installing drivers/librte_bus_pci.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0 00:03:13.915 Installing drivers/librte_bus_vdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:13.915 Installing drivers/librte_bus_vdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0 00:03:13.915 Installing drivers/librte_mempool_ring.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:13.915 Installing drivers/librte_mempool_ring.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0 00:03:13.915 Installing drivers/librte_net_i40e.a to 
/home/vagrant/spdk_repo/dpdk/build/lib 00:03:13.915 Installing drivers/librte_net_i40e.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0 00:03:13.915 Installing app/dpdk-dumpcap to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:13.915 Installing app/dpdk-pdump to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:13.915 Installing app/dpdk-proc-info to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:13.915 Installing app/dpdk-test-acl to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:13.915 Installing app/dpdk-test-bbdev to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:13.915 Installing app/dpdk-test-cmdline to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:13.915 Installing app/dpdk-test-compress-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:13.915 Installing app/dpdk-test-crypto-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:13.915 Installing app/dpdk-test-eventdev to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:14.175 Installing app/dpdk-test-fib to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:14.175 Installing app/dpdk-test-flow-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:14.175 Installing app/dpdk-test-gpudev to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:14.175 Installing app/dpdk-test-pipeline to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:14.175 Installing app/dpdk-testpmd to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:14.175 Installing app/dpdk-test-regex to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:14.175 Installing app/dpdk-test-sad to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:14.175 Installing app/dpdk-test-security-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:14.175 Installing /home/vagrant/spdk_repo/dpdk/config/rte_config.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.175 Installing /home/vagrant/spdk_repo/dpdk/lib/kvargs/rte_kvargs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.175 Installing /home/vagrant/spdk_repo/dpdk/lib/telemetry/rte_telemetry.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.175 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_atomic.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:14.175 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_byteorder.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:14.175 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_cpuflags.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:14.175 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_cycles.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:14.175 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_io.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:14.175 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_memcpy.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:14.175 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_pause.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:14.175 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_power_intrinsics.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:14.175 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_prefetch.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:14.175 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_rwlock.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:14.175 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_spinlock.h to 
/home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:14.175 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_vect.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:14.175 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.175 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.175 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_cpuflags.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.175 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_cycles.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.175 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_io.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.175 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_memcpy.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.175 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_pause.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.176 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_power_intrinsics.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.176 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_prefetch.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.176 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_rtm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.176 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_rwlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.176 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_spinlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.176 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_vect.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.176 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic_32.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.176 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic_64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.176 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder_32.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.176 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder_64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.176 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_alarm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.176 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bitmap.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.176 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bitops.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.176 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_branch_prediction.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.176 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bus.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.176 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_class.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.176 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_common.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.176 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_compat.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.176 Installing 
/home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_debug.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.176 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_dev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.176 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_devargs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.176 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.176 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal_memconfig.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.176 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.176 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_errno.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.176 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_epoll.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.176 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_fbarray.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.176 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_hexdump.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.176 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_hypervisor.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.176 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_interrupts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.176 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_keepalive.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.176 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_launch.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.176 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_lcore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.176 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_log.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.176 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_malloc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.176 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_mcslock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.176 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_memory.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.176 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_memzone.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.176 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pci_dev_feature_defs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.176 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pci_dev_features.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.176 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_per_lcore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.176 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pflock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.176 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_random.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.176 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_reciprocal.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.176 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_seqcount.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.176 Installing 
/home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_seqlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.176 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_service.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.176 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_service_component.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.176 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_string_fns.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.176 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_tailq.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.176 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_thread.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.176 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_ticketlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.176 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_time.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.176 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.176 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace_point.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.176 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace_point_register.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.176 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_uuid.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.176 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_version.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.176 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_vfio.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.176 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/linux/include/rte_os.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.176 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.176 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.176 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_elem.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.176 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.176 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_c11_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.176 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_generic_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.176 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_hts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.176 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_hts_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.176 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.176 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.176 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek_zc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.176 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_rts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.176 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_rts_elem_pvt.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:03:14.176 Installing /home/vagrant/spdk_repo/dpdk/lib/rcu/rte_rcu_qsbr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.176 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.176 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.176 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.176 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.176 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.176 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_ptype.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.177 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_pool_ops.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.177 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_dyn.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.177 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ip.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.177 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_tcp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.177 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_udp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.177 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_esp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.177 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_sctp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.177 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_icmp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.177 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_arp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.177 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ether.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.177 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_macsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.177 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_vxlan.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.177 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_gre.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.177 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_gtp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.177 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_net.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.177 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_net_crc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.177 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_mpls.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.177 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_higig.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.177 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ecpri.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.177 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_geneve.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.177 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_l2tpv2.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.177 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ppp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.177 
Installing /home/vagrant/spdk_repo/dpdk/lib/meter/rte_meter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.177 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_cman.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.177 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.177 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.177 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.177 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_dev_info.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.177 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_flow.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.177 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_flow_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.177 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_mtr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.177 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_mtr_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.177 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_tm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.177 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_tm_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.177 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.177 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_eth_ctrl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.177 Installing /home/vagrant/spdk_repo/dpdk/lib/pci/rte_pci.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.177 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.177 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.177 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_num.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.177 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_ipaddr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.177 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_etheraddr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.177 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_string.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.177 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_rdline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.177 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_vt100.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.177 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_socket.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.177 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_cirbuf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.177 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_portlist.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.177 Installing /home/vagrant/spdk_repo/dpdk/lib/metrics/rte_metrics.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.177 Installing /home/vagrant/spdk_repo/dpdk/lib/metrics/rte_metrics_telemetry.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.177 
Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_fbk_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.177 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_hash_crc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.177 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.177 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_jhash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.177 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.177 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash_gfni.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.177 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.177 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_generic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.177 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_sw.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.177 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_x86.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.177 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash_x86_gfni.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.177 Installing /home/vagrant/spdk_repo/dpdk/lib/timer/rte_timer.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.177 Installing /home/vagrant/spdk_repo/dpdk/lib/acl/rte_acl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.177 Installing /home/vagrant/spdk_repo/dpdk/lib/acl/rte_acl_osdep.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.177 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.177 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev_pmd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.177 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev_op.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.177 Installing /home/vagrant/spdk_repo/dpdk/lib/bitratestats/rte_bitrate.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.177 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/bpf_def.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.177 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/rte_bpf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.177 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/rte_bpf_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.177 Installing /home/vagrant/spdk_repo/dpdk/lib/cfgfile/rte_cfgfile.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.177 Installing /home/vagrant/spdk_repo/dpdk/lib/compressdev/rte_compressdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.177 Installing /home/vagrant/spdk_repo/dpdk/lib/compressdev/rte_comp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.177 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.177 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.177 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.177 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.177 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto_sym.h 
to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.177 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto_asym.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.177 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.177 Installing /home/vagrant/spdk_repo/dpdk/lib/distributor/rte_distributor.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.177 Installing /home/vagrant/spdk_repo/dpdk/lib/efd/rte_efd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.177 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_crypto_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.177 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_eth_rx_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.178 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_eth_tx_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.178 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.178 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_timer_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.178 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.178 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.178 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.178 Installing /home/vagrant/spdk_repo/dpdk/lib/gpudev/rte_gpudev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.178 Installing /home/vagrant/spdk_repo/dpdk/lib/gro/rte_gro.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.178 Installing /home/vagrant/spdk_repo/dpdk/lib/gso/rte_gso.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.178 Installing /home/vagrant/spdk_repo/dpdk/lib/ip_frag/rte_ip_frag.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.178 Installing /home/vagrant/spdk_repo/dpdk/lib/jobstats/rte_jobstats.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.178 Installing /home/vagrant/spdk_repo/dpdk/lib/latencystats/rte_latencystats.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.178 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.178 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.178 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_altivec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.178 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.178 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_scalar.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.178 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_sse.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.178 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_sve.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.178 Installing /home/vagrant/spdk_repo/dpdk/lib/member/rte_member.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.178 Installing /home/vagrant/spdk_repo/dpdk/lib/pcapng/rte_pcapng.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.178 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power.h 
to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.178 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_empty_poll.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.178 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_intel_uncore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.178 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_pmd_mgmt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.178 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_guest_channel.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.178 Installing /home/vagrant/spdk_repo/dpdk/lib/rawdev/rte_rawdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.178 Installing /home/vagrant/spdk_repo/dpdk/lib/rawdev/rte_rawdev_pmd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.178 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.178 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.178 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.178 Installing /home/vagrant/spdk_repo/dpdk/lib/dmadev/rte_dmadev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.178 Installing /home/vagrant/spdk_repo/dpdk/lib/dmadev/rte_dmadev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.178 Installing /home/vagrant/spdk_repo/dpdk/lib/rib/rte_rib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.178 Installing /home/vagrant/spdk_repo/dpdk/lib/rib/rte_rib6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.178 Installing /home/vagrant/spdk_repo/dpdk/lib/reorder/rte_reorder.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.178 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_approx.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.178 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_red.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.178 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_sched.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.178 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_sched_common.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.178 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_pie.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.178 Installing /home/vagrant/spdk_repo/dpdk/lib/security/rte_security.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.178 Installing /home/vagrant/spdk_repo/dpdk/lib/security/rte_security_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.178 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.178 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_std.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.178 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.178 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_generic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.178 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_c11.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.178 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_stubs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.178 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vdpa.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:03:14.178 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.178 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost_async.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.178 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.178 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.178 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_sa.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.178 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_sad.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.178 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_group.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.178 Installing /home/vagrant/spdk_repo/dpdk/lib/fib/rte_fib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.178 Installing /home/vagrant/spdk_repo/dpdk/lib/fib/rte_fib6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.178 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.178 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_fd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.178 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_frag.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.178 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ras.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.178 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.178 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.178 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_sched.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.178 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_source_sink.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.178 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_sym_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.178 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_eventdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.178 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.178 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.178 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_fd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.178 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.178 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_source_sink.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.178 Installing /home/vagrant/spdk_repo/dpdk/lib/pdump/rte_pdump.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.178 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.178 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_hash_func.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.178 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.178 
Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_em.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.179 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_learner.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.179 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_selector.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.179 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_wm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.179 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.179 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_acl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.179 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_array.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.179 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.179 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_cuckoo.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.179 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_func.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.179 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_lpm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.179 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_lpm_ipv6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.179 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_stub.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.179 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.179 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru_x86.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.179 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_func_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.179 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_pipeline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.179 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_port_in_action.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.179 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_table_action.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.179 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_pipeline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.179 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_extern.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.179 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_ctl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.179 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.179 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_worker.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.179 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_ip4_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.179 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_eth_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.179 Installing /home/vagrant/spdk_repo/dpdk/drivers/bus/pci/rte_bus_pci.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.179 Installing /home/vagrant/spdk_repo/dpdk/drivers/bus/vdev/rte_bus_vdev.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:03:14.179 Installing /home/vagrant/spdk_repo/dpdk/drivers/net/i40e/rte_pmd_i40e.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.179 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-devbind.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:14.179 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-pmdinfo.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:14.179 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-telemetry.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:14.179 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-hugepages.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:14.179 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/rte_build_config.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.179 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/meson-private/libdpdk-libs.pc to /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig 00:03:14.179 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/meson-private/libdpdk.pc to /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig 00:03:14.179 Installing symlink pointing to librte_kvargs.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_kvargs.so.23 00:03:14.179 Installing symlink pointing to librte_kvargs.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_kvargs.so 00:03:14.179 Installing symlink pointing to librte_telemetry.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_telemetry.so.23 00:03:14.179 Installing symlink pointing to librte_telemetry.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_telemetry.so 00:03:14.179 Installing symlink pointing to librte_eal.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eal.so.23 00:03:14.179 Installing symlink pointing to librte_eal.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eal.so 00:03:14.179 Installing symlink pointing to librte_ring.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ring.so.23 00:03:14.179 Installing symlink pointing to librte_ring.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ring.so 00:03:14.179 Installing symlink pointing to librte_rcu.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rcu.so.23 00:03:14.179 Installing symlink pointing to librte_rcu.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rcu.so 00:03:14.179 Installing symlink pointing to librte_mempool.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mempool.so.23 00:03:14.179 Installing symlink pointing to librte_mempool.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mempool.so 00:03:14.179 Installing symlink pointing to librte_mbuf.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mbuf.so.23 00:03:14.179 Installing symlink pointing to librte_mbuf.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mbuf.so 00:03:14.179 Installing symlink pointing to librte_net.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_net.so.23 00:03:14.179 Installing symlink pointing to librte_net.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_net.so 00:03:14.179 Installing symlink pointing to librte_meter.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_meter.so.23 00:03:14.179 Installing symlink pointing to librte_meter.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_meter.so 00:03:14.179 Installing symlink pointing to librte_ethdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ethdev.so.23 00:03:14.179 Installing symlink pointing to librte_ethdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ethdev.so 
00:03:14.179 Installing symlink pointing to librte_pci.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pci.so.23 00:03:14.179 Installing symlink pointing to librte_pci.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pci.so 00:03:14.179 Installing symlink pointing to librte_cmdline.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cmdline.so.23 00:03:14.179 Installing symlink pointing to librte_cmdline.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cmdline.so 00:03:14.179 Installing symlink pointing to librte_metrics.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_metrics.so.23 00:03:14.179 Installing symlink pointing to librte_metrics.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_metrics.so 00:03:14.179 Installing symlink pointing to librte_hash.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_hash.so.23 00:03:14.179 Installing symlink pointing to librte_hash.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_hash.so 00:03:14.179 Installing symlink pointing to librte_timer.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_timer.so.23 00:03:14.179 Installing symlink pointing to librte_timer.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_timer.so 00:03:14.179 Installing symlink pointing to librte_acl.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_acl.so.23 00:03:14.179 Installing symlink pointing to librte_acl.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_acl.so 00:03:14.179 Installing symlink pointing to librte_bbdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bbdev.so.23 00:03:14.179 Installing symlink pointing to librte_bbdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bbdev.so 00:03:14.179 Installing symlink pointing to librte_bitratestats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bitratestats.so.23 00:03:14.179 Installing symlink pointing to librte_bitratestats.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bitratestats.so 00:03:14.179 Installing symlink pointing to librte_bpf.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bpf.so.23 00:03:14.179 Installing symlink pointing to librte_bpf.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bpf.so 00:03:14.179 Installing symlink pointing to librte_cfgfile.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cfgfile.so.23 00:03:14.179 Installing symlink pointing to librte_cfgfile.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cfgfile.so 00:03:14.179 Installing symlink pointing to librte_compressdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_compressdev.so.23 00:03:14.179 Installing symlink pointing to librte_compressdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_compressdev.so 00:03:14.179 Installing symlink pointing to librte_cryptodev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cryptodev.so.23 00:03:14.179 Installing symlink pointing to librte_cryptodev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cryptodev.so 00:03:14.180 Installing symlink pointing to librte_distributor.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_distributor.so.23 00:03:14.180 Installing symlink pointing to librte_distributor.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_distributor.so 00:03:14.180 Installing symlink pointing to librte_efd.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_efd.so.23 00:03:14.180 Installing symlink pointing to librte_efd.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_efd.so 00:03:14.180 
Installing symlink pointing to librte_eventdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eventdev.so.23 00:03:14.180 Installing symlink pointing to librte_eventdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eventdev.so 00:03:14.180 Installing symlink pointing to librte_gpudev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gpudev.so.23 00:03:14.180 './librte_bus_pci.so' -> 'dpdk/pmds-23.0/librte_bus_pci.so' 00:03:14.180 './librte_bus_pci.so.23' -> 'dpdk/pmds-23.0/librte_bus_pci.so.23' 00:03:14.180 './librte_bus_pci.so.23.0' -> 'dpdk/pmds-23.0/librte_bus_pci.so.23.0' 00:03:14.180 './librte_bus_vdev.so' -> 'dpdk/pmds-23.0/librte_bus_vdev.so' 00:03:14.180 './librte_bus_vdev.so.23' -> 'dpdk/pmds-23.0/librte_bus_vdev.so.23' 00:03:14.180 './librte_bus_vdev.so.23.0' -> 'dpdk/pmds-23.0/librte_bus_vdev.so.23.0' 00:03:14.180 './librte_mempool_ring.so' -> 'dpdk/pmds-23.0/librte_mempool_ring.so' 00:03:14.180 './librte_mempool_ring.so.23' -> 'dpdk/pmds-23.0/librte_mempool_ring.so.23' 00:03:14.180 './librte_mempool_ring.so.23.0' -> 'dpdk/pmds-23.0/librte_mempool_ring.so.23.0' 00:03:14.180 './librte_net_i40e.so' -> 'dpdk/pmds-23.0/librte_net_i40e.so' 00:03:14.180 './librte_net_i40e.so.23' -> 'dpdk/pmds-23.0/librte_net_i40e.so.23' 00:03:14.180 './librte_net_i40e.so.23.0' -> 'dpdk/pmds-23.0/librte_net_i40e.so.23.0' 00:03:14.180 Installing symlink pointing to librte_gpudev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gpudev.so 00:03:14.180 Installing symlink pointing to librte_gro.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gro.so.23 00:03:14.180 Installing symlink pointing to librte_gro.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gro.so 00:03:14.180 Installing symlink pointing to librte_gso.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gso.so.23 00:03:14.180 Installing symlink pointing to librte_gso.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gso.so 00:03:14.180 Installing symlink pointing to librte_ip_frag.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ip_frag.so.23 00:03:14.180 Installing symlink pointing to librte_ip_frag.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ip_frag.so 00:03:14.180 Installing symlink pointing to librte_jobstats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_jobstats.so.23 00:03:14.180 Installing symlink pointing to librte_jobstats.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_jobstats.so 00:03:14.180 Installing symlink pointing to librte_latencystats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_latencystats.so.23 00:03:14.180 Installing symlink pointing to librte_latencystats.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_latencystats.so 00:03:14.180 Installing symlink pointing to librte_lpm.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_lpm.so.23 00:03:14.180 Installing symlink pointing to librte_lpm.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_lpm.so 00:03:14.180 Installing symlink pointing to librte_member.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_member.so.23 00:03:14.180 Installing symlink pointing to librte_member.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_member.so 00:03:14.180 Installing symlink pointing to librte_pcapng.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pcapng.so.23 00:03:14.180 Installing symlink pointing to librte_pcapng.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pcapng.so 00:03:14.180 Installing symlink pointing to librte_power.so.23.0 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_power.so.23 00:03:14.180 Installing symlink pointing to librte_power.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_power.so 00:03:14.180 Installing symlink pointing to librte_rawdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rawdev.so.23 00:03:14.180 Installing symlink pointing to librte_rawdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rawdev.so 00:03:14.180 Installing symlink pointing to librte_regexdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_regexdev.so.23 00:03:14.180 Installing symlink pointing to librte_regexdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_regexdev.so 00:03:14.180 Installing symlink pointing to librte_dmadev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dmadev.so.23 00:03:14.180 Installing symlink pointing to librte_dmadev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dmadev.so 00:03:14.180 Installing symlink pointing to librte_rib.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rib.so.23 00:03:14.180 Installing symlink pointing to librte_rib.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rib.so 00:03:14.180 Installing symlink pointing to librte_reorder.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_reorder.so.23 00:03:14.180 Installing symlink pointing to librte_reorder.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_reorder.so 00:03:14.180 Installing symlink pointing to librte_sched.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_sched.so.23 00:03:14.180 Installing symlink pointing to librte_sched.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_sched.so 00:03:14.180 Installing symlink pointing to librte_security.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_security.so.23 00:03:14.180 Installing symlink pointing to librte_security.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_security.so 00:03:14.180 Installing symlink pointing to librte_stack.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_stack.so.23 00:03:14.180 Installing symlink pointing to librte_stack.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_stack.so 00:03:14.180 Installing symlink pointing to librte_vhost.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_vhost.so.23 00:03:14.180 Installing symlink pointing to librte_vhost.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_vhost.so 00:03:14.180 Installing symlink pointing to librte_ipsec.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ipsec.so.23 00:03:14.180 Installing symlink pointing to librte_ipsec.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ipsec.so 00:03:14.180 Installing symlink pointing to librte_fib.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_fib.so.23 00:03:14.180 Installing symlink pointing to librte_fib.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_fib.so 00:03:14.180 Installing symlink pointing to librte_port.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_port.so.23 00:03:14.180 Installing symlink pointing to librte_port.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_port.so 00:03:14.180 Installing symlink pointing to librte_pdump.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdump.so.23 00:03:14.180 Installing symlink pointing to librte_pdump.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdump.so 00:03:14.180 Installing symlink pointing to librte_table.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_table.so.23 00:03:14.180 
Installing symlink pointing to librte_table.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_table.so 00:03:14.180 Installing symlink pointing to librte_pipeline.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pipeline.so.23 00:03:14.180 Installing symlink pointing to librte_pipeline.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pipeline.so 00:03:14.180 Installing symlink pointing to librte_graph.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_graph.so.23 00:03:14.180 Installing symlink pointing to librte_graph.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_graph.so 00:03:14.180 Installing symlink pointing to librte_node.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_node.so.23 00:03:14.180 Installing symlink pointing to librte_node.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_node.so 00:03:14.180 Installing symlink pointing to librte_bus_pci.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so.23 00:03:14.180 Installing symlink pointing to librte_bus_pci.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so 00:03:14.180 Installing symlink pointing to librte_bus_vdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so.23 00:03:14.180 Installing symlink pointing to librte_bus_vdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so 00:03:14.180 Installing symlink pointing to librte_mempool_ring.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so.23 00:03:14.180 Installing symlink pointing to librte_mempool_ring.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so 00:03:14.180 Installing symlink pointing to librte_net_i40e.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so.23 00:03:14.180 Installing symlink pointing to librte_net_i40e.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so 00:03:14.180 Running custom install script '/bin/sh /home/vagrant/spdk_repo/dpdk/config/../buildtools/symlink-drivers-solibs.sh lib dpdk/pmds-23.0' 00:03:14.180 18:53:31 build_native_dpdk -- common/autobuild_common.sh@220 -- $ cat 00:03:14.180 18:53:31 build_native_dpdk -- common/autobuild_common.sh@225 -- $ cd /home/vagrant/spdk_repo/spdk 00:03:14.180 00:03:14.180 real 0m33.454s 00:03:14.180 user 3m39.467s 00:03:14.180 sys 0m34.403s 00:03:14.180 18:53:31 build_native_dpdk -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:03:14.180 18:53:31 build_native_dpdk -- common/autotest_common.sh@10 -- $ set +x 00:03:14.180 ************************************ 00:03:14.180 END TEST build_native_dpdk 00:03:14.181 ************************************ 00:03:14.181 18:53:31 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:03:14.181 18:53:31 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:03:14.181 18:53:31 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:03:14.181 18:53:31 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:03:14.181 18:53:31 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:03:14.181 18:53:31 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:03:14.181 18:53:31 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:03:14.181 18:53:31 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk 
--with-dpdk=/home/vagrant/spdk_repo/dpdk/build --with-xnvme --with-shared 00:03:14.439 Using /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig for additional libs... 00:03:14.439 DPDK libraries: /home/vagrant/spdk_repo/dpdk/build/lib 00:03:14.439 DPDK includes: //home/vagrant/spdk_repo/dpdk/build/include 00:03:14.439 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:03:14.713 Using 'verbs' RDMA provider 00:03:25.620 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done. 00:03:37.852 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done. 00:03:37.853 Creating mk/config.mk...done. 00:03:37.853 Creating mk/cc.flags.mk...done. 00:03:37.853 Type 'make' to build. 00:03:37.853 18:53:53 -- spdk/autobuild.sh@70 -- $ run_test make make -j10 00:03:37.853 18:53:53 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:03:37.853 18:53:53 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:03:37.853 18:53:53 -- common/autotest_common.sh@10 -- $ set +x 00:03:37.853 ************************************ 00:03:37.853 START TEST make 00:03:37.853 ************************************ 00:03:37.853 18:53:53 make -- common/autotest_common.sh@1129 -- $ make -j10 00:03:37.853 (cd /home/vagrant/spdk_repo/spdk/xnvme && \ 00:03:37.853 export PKG_CONFIG_PATH=$PKG_CONFIG_PATH:/usr/lib/pkgconfig:/usr/lib64/pkgconfig && \ 00:03:37.853 meson setup builddir \ 00:03:37.853 -Dwith-libaio=enabled \ 00:03:37.853 -Dwith-liburing=enabled \ 00:03:37.853 -Dwith-libvfn=disabled \ 00:03:37.853 -Dwith-spdk=disabled \ 00:03:37.853 -Dexamples=false \ 00:03:37.853 -Dtests=false \ 00:03:37.853 -Dtools=false && \ 00:03:37.853 meson compile -C builddir && \ 00:03:37.853 cd -) 00:03:37.853 make[1]: Nothing to be done for 'all'. 
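(For reference, the configure-and-build step recorded above can be reproduced outside the autobuild wrapper; the following is a minimal sketch only, assuming the /home/vagrant/spdk_repo/{spdk,dpdk} workspace layout shown in this log and using just a subset of the flags from the full invocation above — it is not the exact autobuild.sh command.)

    # Sketch (assumed layout from this log): point SPDK's configure at the DPDK
    # build installed in the preceding step, then build with 10 jobs as the
    # run_test make step above does.
    cd /home/vagrant/spdk_repo/spdk
    ./configure --enable-debug --enable-werror \
        --with-dpdk=/home/vagrant/spdk_repo/dpdk/build \
        --with-xnvme --with-shared
    make -j10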
00:03:38.424 The Meson build system 00:03:38.424 Version: 1.5.0 00:03:38.424 Source dir: /home/vagrant/spdk_repo/spdk/xnvme 00:03:38.424 Build dir: /home/vagrant/spdk_repo/spdk/xnvme/builddir 00:03:38.424 Build type: native build 00:03:38.424 Project name: xnvme 00:03:38.424 Project version: 0.7.5 00:03:38.424 C compiler for the host machine: gcc (gcc 13.3.1 "gcc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:03:38.424 C linker for the host machine: gcc ld.bfd 2.40-14 00:03:38.424 Host machine cpu family: x86_64 00:03:38.424 Host machine cpu: x86_64 00:03:38.424 Message: host_machine.system: linux 00:03:38.424 Compiler for C supports arguments -Wno-missing-braces: YES 00:03:38.424 Compiler for C supports arguments -Wno-cast-function-type: YES 00:03:38.424 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:03:38.424 Run-time dependency threads found: YES 00:03:38.424 Has header "setupapi.h" : NO 00:03:38.424 Has header "linux/blkzoned.h" : YES 00:03:38.424 Has header "linux/blkzoned.h" : YES (cached) 00:03:38.424 Has header "libaio.h" : YES 00:03:38.424 Library aio found: YES 00:03:38.424 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:03:38.424 Run-time dependency liburing found: YES 2.2 00:03:38.424 Dependency libvfn skipped: feature with-libvfn disabled 00:03:38.424 Found CMake: /usr/bin/cmake (3.27.7) 00:03:38.424 Run-time dependency libisal found: NO (tried pkgconfig and cmake) 00:03:38.424 Subproject spdk : skipped: feature with-spdk disabled 00:03:38.424 Run-time dependency appleframeworks found: NO (tried framework) 00:03:38.424 Run-time dependency appleframeworks found: NO (tried framework) 00:03:38.424 Library rt found: YES 00:03:38.424 Checking for function "clock_gettime" with dependency -lrt: YES 00:03:38.424 Configuring xnvme_config.h using configuration 00:03:38.424 Configuring xnvme.spec using configuration 00:03:38.424 Run-time dependency bash-completion found: YES 2.11 00:03:38.424 Message: Bash-completions: /usr/share/bash-completion/completions 00:03:38.424 Program cp found: YES (/usr/bin/cp) 00:03:38.424 Build targets in project: 3 00:03:38.424 00:03:38.424 xnvme 0.7.5 00:03:38.424 00:03:38.424 Subprojects 00:03:38.424 spdk : NO Feature 'with-spdk' disabled 00:03:38.424 00:03:38.424 User defined options 00:03:38.424 examples : false 00:03:38.424 tests : false 00:03:38.424 tools : false 00:03:38.424 with-libaio : enabled 00:03:38.424 with-liburing: enabled 00:03:38.424 with-libvfn : disabled 00:03:38.424 with-spdk : disabled 00:03:38.424 00:03:38.424 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:03:38.998 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/xnvme/builddir' 00:03:38.998 [1/76] Generating toolbox/xnvme-driver-script with a custom command 00:03:38.998 [2/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd.c.o 00:03:38.998 [3/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_nil.c.o 00:03:38.998 [4/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_dev.c.o 00:03:38.998 [5/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_mem_posix.c.o 00:03:38.998 [6/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_admin_shim.c.o 00:03:38.998 [7/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_async.c.o 00:03:38.998 [8/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_adm.c.o 00:03:38.998 [9/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_emu.c.o 00:03:38.998 [10/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_sync_psync.c.o 
00:03:38.998 [11/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_posix.c.o 00:03:38.998 [12/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_nvme.c.o 00:03:38.998 [13/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux.c.o 00:03:38.998 [14/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_libaio.c.o 00:03:38.998 [15/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos.c.o 00:03:38.998 [16/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be.c.o 00:03:38.998 [17/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_admin.c.o 00:03:38.998 [18/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_dev.c.o 00:03:39.259 [19/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_hugepage.c.o 00:03:39.259 [20/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_nvme.c.o 00:03:39.259 [21/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_dev.c.o 00:03:39.259 [22/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_thrpool.c.o 00:03:39.259 [23/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk.c.o 00:03:39.259 [24/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_admin.c.o 00:03:39.259 [25/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_sync.c.o 00:03:39.259 [26/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_nosys.c.o 00:03:39.259 [27/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_block.c.o 00:03:39.259 [28/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_dev.c.o 00:03:39.259 [29/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_mem.c.o 00:03:39.259 [30/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_liburing.c.o 00:03:39.259 [31/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_ucmd.c.o 00:03:39.259 [32/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk.c.o 00:03:39.259 [33/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_admin.c.o 00:03:39.259 [34/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_async.c.o 00:03:39.259 [35/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_dev.c.o 00:03:39.259 [36/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_sync.c.o 00:03:39.259 [37/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_dev.c.o 00:03:39.259 [38/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio.c.o 00:03:39.259 [39/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_admin.c.o 00:03:39.259 [40/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_async.c.o 00:03:39.259 [41/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_sync.c.o 00:03:39.259 [42/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows.c.o 00:03:39.259 [43/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_mem.c.o 00:03:39.259 [44/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_sync.c.o 00:03:39.259 [45/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_iocp.c.o 00:03:39.259 [46/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_ioring.c.o 00:03:39.259 [47/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_iocp_th.c.o 00:03:39.259 [48/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_dev.c.o 00:03:39.259 [49/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_block.c.o 00:03:39.259 [50/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_fs.c.o 
00:03:39.259 [51/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_mem.c.o 00:03:39.259 [52/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_nvme.c.o 00:03:39.259 [53/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_libconf_entries.c.o 00:03:39.520 [54/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_file.c.o 00:03:39.520 [55/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_cmd.c.o 00:03:39.520 [56/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_geo.c.o 00:03:39.520 [57/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_libconf.c.o 00:03:39.520 [58/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_req.c.o 00:03:39.520 [59/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_ident.c.o 00:03:39.520 [60/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_buf.c.o 00:03:39.520 [61/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_lba.c.o 00:03:39.520 [62/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_kvs.c.o 00:03:39.520 [63/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_opts.c.o 00:03:39.520 [64/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_nvm.c.o 00:03:39.520 [65/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_ver.c.o 00:03:39.520 [66/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_queue.c.o 00:03:39.520 [67/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_topology.c.o 00:03:39.520 [68/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_dev.c.o 00:03:39.520 [69/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_crc.c.o 00:03:39.520 [70/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_spec_pp.c.o 00:03:39.520 [71/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_pi.c.o 00:03:39.781 [72/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_cli.c.o 00:03:39.781 [73/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_znd.c.o 00:03:40.043 [74/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_spec.c.o 00:03:40.043 [75/76] Linking static target lib/libxnvme.a 00:03:40.043 [76/76] Linking target lib/libxnvme.so.0.7.5 00:03:40.043 INFO: autodetecting backend as ninja 00:03:40.043 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/xnvme/builddir 00:03:40.043 /home/vagrant/spdk_repo/spdk/xnvmebuild 00:04:12.132 CC lib/log/log.o 00:04:12.132 CC lib/log/log_deprecated.o 00:04:12.132 CC lib/log/log_flags.o 00:04:12.132 CC lib/ut_mock/mock.o 00:04:12.132 CC lib/ut/ut.o 00:04:12.132 LIB libspdk_ut_mock.a 00:04:12.132 LIB libspdk_log.a 00:04:12.132 SO libspdk_ut_mock.so.6.0 00:04:12.132 LIB libspdk_ut.a 00:04:12.132 SO libspdk_log.so.7.1 00:04:12.132 SO libspdk_ut.so.2.0 00:04:12.132 SYMLINK libspdk_ut_mock.so 00:04:12.132 SYMLINK libspdk_log.so 00:04:12.132 SYMLINK libspdk_ut.so 00:04:12.132 CC lib/dma/dma.o 00:04:12.132 CC lib/util/base64.o 00:04:12.132 CC lib/util/bit_array.o 00:04:12.132 CXX lib/trace_parser/trace.o 00:04:12.132 CC lib/util/cpuset.o 00:04:12.132 CC lib/util/crc16.o 00:04:12.132 CC lib/util/crc32.o 00:04:12.132 CC lib/ioat/ioat.o 00:04:12.132 CC lib/util/crc32c.o 00:04:12.132 CC lib/vfio_user/host/vfio_user_pci.o 00:04:12.132 CC lib/util/crc32_ieee.o 00:04:12.132 CC lib/util/crc64.o 00:04:12.132 CC lib/util/dif.o 00:04:12.132 CC lib/util/fd.o 00:04:12.132 CC lib/vfio_user/host/vfio_user.o 00:04:12.132 LIB libspdk_dma.a 00:04:12.132 SO libspdk_dma.so.5.0 00:04:12.132 CC lib/util/fd_group.o 00:04:12.132 CC lib/util/file.o 00:04:12.132 CC lib/util/hexlify.o 00:04:12.132 SYMLINK libspdk_dma.so 00:04:12.132 CC lib/util/iov.o 00:04:12.132 CC 
lib/util/math.o 00:04:12.132 LIB libspdk_ioat.a 00:04:12.132 SO libspdk_ioat.so.7.0 00:04:12.132 CC lib/util/net.o 00:04:12.132 LIB libspdk_vfio_user.a 00:04:12.132 CC lib/util/pipe.o 00:04:12.132 CC lib/util/strerror_tls.o 00:04:12.132 SO libspdk_vfio_user.so.5.0 00:04:12.132 SYMLINK libspdk_ioat.so 00:04:12.132 CC lib/util/string.o 00:04:12.132 CC lib/util/uuid.o 00:04:12.132 CC lib/util/xor.o 00:04:12.132 SYMLINK libspdk_vfio_user.so 00:04:12.132 CC lib/util/zipf.o 00:04:12.132 CC lib/util/md5.o 00:04:12.133 LIB libspdk_util.a 00:04:12.133 LIB libspdk_trace_parser.a 00:04:12.133 SO libspdk_util.so.10.1 00:04:12.133 SO libspdk_trace_parser.so.6.0 00:04:12.133 SYMLINK libspdk_trace_parser.so 00:04:12.133 SYMLINK libspdk_util.so 00:04:12.133 CC lib/json/json_parse.o 00:04:12.133 CC lib/conf/conf.o 00:04:12.133 CC lib/json/json_util.o 00:04:12.133 CC lib/json/json_write.o 00:04:12.133 CC lib/idxd/idxd.o 00:04:12.133 CC lib/rdma_utils/rdma_utils.o 00:04:12.133 CC lib/idxd/idxd_kernel.o 00:04:12.133 CC lib/idxd/idxd_user.o 00:04:12.133 CC lib/vmd/vmd.o 00:04:12.133 CC lib/env_dpdk/env.o 00:04:12.133 CC lib/env_dpdk/memory.o 00:04:12.133 CC lib/env_dpdk/pci.o 00:04:12.133 LIB libspdk_conf.a 00:04:12.133 LIB libspdk_rdma_utils.a 00:04:12.133 SO libspdk_conf.so.6.0 00:04:12.133 SO libspdk_rdma_utils.so.1.0 00:04:12.133 CC lib/env_dpdk/init.o 00:04:12.133 CC lib/env_dpdk/threads.o 00:04:12.133 SYMLINK libspdk_conf.so 00:04:12.133 CC lib/env_dpdk/pci_ioat.o 00:04:12.133 SYMLINK libspdk_rdma_utils.so 00:04:12.133 CC lib/vmd/led.o 00:04:12.133 LIB libspdk_json.a 00:04:12.133 SO libspdk_json.so.6.0 00:04:12.133 CC lib/env_dpdk/pci_virtio.o 00:04:12.133 CC lib/env_dpdk/pci_vmd.o 00:04:12.133 SYMLINK libspdk_json.so 00:04:12.133 CC lib/env_dpdk/pci_idxd.o 00:04:12.133 CC lib/env_dpdk/pci_event.o 00:04:12.133 CC lib/env_dpdk/sigbus_handler.o 00:04:12.133 CC lib/rdma_provider/common.o 00:04:12.133 CC lib/jsonrpc/jsonrpc_server.o 00:04:12.133 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:04:12.133 CC lib/jsonrpc/jsonrpc_client.o 00:04:12.133 CC lib/env_dpdk/pci_dpdk.o 00:04:12.133 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:04:12.133 CC lib/env_dpdk/pci_dpdk_2207.o 00:04:12.133 LIB libspdk_idxd.a 00:04:12.133 CC lib/env_dpdk/pci_dpdk_2211.o 00:04:12.133 SO libspdk_idxd.so.12.1 00:04:12.133 CC lib/rdma_provider/rdma_provider_verbs.o 00:04:12.133 LIB libspdk_vmd.a 00:04:12.393 SO libspdk_vmd.so.6.0 00:04:12.393 SYMLINK libspdk_idxd.so 00:04:12.393 SYMLINK libspdk_vmd.so 00:04:12.393 LIB libspdk_jsonrpc.a 00:04:12.393 SO libspdk_jsonrpc.so.6.0 00:04:12.393 LIB libspdk_rdma_provider.a 00:04:12.393 SYMLINK libspdk_jsonrpc.so 00:04:12.393 SO libspdk_rdma_provider.so.7.0 00:04:12.393 SYMLINK libspdk_rdma_provider.so 00:04:12.653 CC lib/rpc/rpc.o 00:04:12.913 LIB libspdk_rpc.a 00:04:12.913 SO libspdk_rpc.so.6.0 00:04:12.913 LIB libspdk_env_dpdk.a 00:04:12.913 SYMLINK libspdk_rpc.so 00:04:12.913 SO libspdk_env_dpdk.so.15.1 00:04:13.173 CC lib/notify/notify.o 00:04:13.173 CC lib/notify/notify_rpc.o 00:04:13.173 SYMLINK libspdk_env_dpdk.so 00:04:13.173 CC lib/trace/trace_flags.o 00:04:13.173 CC lib/trace/trace_rpc.o 00:04:13.173 CC lib/trace/trace.o 00:04:13.173 CC lib/keyring/keyring.o 00:04:13.173 CC lib/keyring/keyring_rpc.o 00:04:13.173 LIB libspdk_notify.a 00:04:13.173 SO libspdk_notify.so.6.0 00:04:13.429 LIB libspdk_keyring.a 00:04:13.429 SYMLINK libspdk_notify.so 00:04:13.429 SO libspdk_keyring.so.2.0 00:04:13.429 LIB libspdk_trace.a 00:04:13.429 SYMLINK libspdk_keyring.so 00:04:13.429 SO libspdk_trace.so.11.0 
00:04:13.429 SYMLINK libspdk_trace.so 00:04:13.685 CC lib/thread/thread.o 00:04:13.686 CC lib/thread/iobuf.o 00:04:13.686 CC lib/sock/sock_rpc.o 00:04:13.686 CC lib/sock/sock.o 00:04:13.942 LIB libspdk_sock.a 00:04:13.942 SO libspdk_sock.so.10.0 00:04:14.199 SYMLINK libspdk_sock.so 00:04:14.199 CC lib/nvme/nvme_ctrlr_cmd.o 00:04:14.199 CC lib/nvme/nvme_fabric.o 00:04:14.199 CC lib/nvme/nvme_ctrlr.o 00:04:14.199 CC lib/nvme/nvme_ns_cmd.o 00:04:14.199 CC lib/nvme/nvme_ns.o 00:04:14.199 CC lib/nvme/nvme.o 00:04:14.199 CC lib/nvme/nvme_pcie.o 00:04:14.199 CC lib/nvme/nvme_pcie_common.o 00:04:14.199 CC lib/nvme/nvme_qpair.o 00:04:15.131 CC lib/nvme/nvme_quirks.o 00:04:15.131 CC lib/nvme/nvme_transport.o 00:04:15.131 CC lib/nvme/nvme_discovery.o 00:04:15.131 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:04:15.131 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:04:15.131 CC lib/nvme/nvme_tcp.o 00:04:15.131 CC lib/nvme/nvme_opal.o 00:04:15.131 CC lib/nvme/nvme_io_msg.o 00:04:15.131 LIB libspdk_thread.a 00:04:15.131 SO libspdk_thread.so.11.0 00:04:15.389 SYMLINK libspdk_thread.so 00:04:15.389 CC lib/nvme/nvme_poll_group.o 00:04:15.389 CC lib/nvme/nvme_zns.o 00:04:15.389 CC lib/nvme/nvme_stubs.o 00:04:15.389 CC lib/nvme/nvme_auth.o 00:04:15.389 CC lib/nvme/nvme_cuse.o 00:04:15.648 CC lib/nvme/nvme_rdma.o 00:04:15.648 CC lib/accel/accel.o 00:04:15.648 CC lib/blob/blobstore.o 00:04:15.906 CC lib/blob/request.o 00:04:15.906 CC lib/blob/zeroes.o 00:04:15.906 CC lib/accel/accel_rpc.o 00:04:16.164 CC lib/accel/accel_sw.o 00:04:16.164 CC lib/blob/blob_bs_dev.o 00:04:16.164 CC lib/init/json_config.o 00:04:16.164 CC lib/init/subsystem.o 00:04:16.164 CC lib/virtio/virtio.o 00:04:16.164 CC lib/fsdev/fsdev.o 00:04:16.421 CC lib/fsdev/fsdev_io.o 00:04:16.421 CC lib/virtio/virtio_vhost_user.o 00:04:16.421 CC lib/virtio/virtio_vfio_user.o 00:04:16.421 CC lib/init/subsystem_rpc.o 00:04:16.421 CC lib/init/rpc.o 00:04:16.421 CC lib/virtio/virtio_pci.o 00:04:16.421 LIB libspdk_accel.a 00:04:16.678 CC lib/fsdev/fsdev_rpc.o 00:04:16.678 SO libspdk_accel.so.16.0 00:04:16.678 LIB libspdk_init.a 00:04:16.678 SYMLINK libspdk_accel.so 00:04:16.678 SO libspdk_init.so.6.0 00:04:16.678 SYMLINK libspdk_init.so 00:04:16.678 LIB libspdk_virtio.a 00:04:16.678 CC lib/bdev/bdev_zone.o 00:04:16.678 CC lib/bdev/part.o 00:04:16.678 CC lib/bdev/bdev.o 00:04:16.678 CC lib/bdev/bdev_rpc.o 00:04:16.678 CC lib/bdev/scsi_nvme.o 00:04:16.678 SO libspdk_virtio.so.7.0 00:04:16.936 LIB libspdk_nvme.a 00:04:16.936 CC lib/event/app.o 00:04:16.936 SYMLINK libspdk_virtio.so 00:04:16.936 CC lib/event/reactor.o 00:04:16.936 LIB libspdk_fsdev.a 00:04:16.936 SO libspdk_fsdev.so.2.0 00:04:16.936 SO libspdk_nvme.so.15.0 00:04:16.936 CC lib/event/log_rpc.o 00:04:16.936 SYMLINK libspdk_fsdev.so 00:04:16.936 CC lib/event/app_rpc.o 00:04:16.936 CC lib/event/scheduler_static.o 00:04:17.194 SYMLINK libspdk_nvme.so 00:04:17.194 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:04:17.452 LIB libspdk_event.a 00:04:17.452 SO libspdk_event.so.14.0 00:04:17.452 SYMLINK libspdk_event.so 00:04:18.016 LIB libspdk_fuse_dispatcher.a 00:04:18.016 SO libspdk_fuse_dispatcher.so.1.0 00:04:18.016 SYMLINK libspdk_fuse_dispatcher.so 00:04:18.582 LIB libspdk_blob.a 00:04:18.582 SO libspdk_blob.so.12.0 00:04:18.582 SYMLINK libspdk_blob.so 00:04:18.841 CC lib/blobfs/blobfs.o 00:04:18.841 CC lib/blobfs/tree.o 00:04:18.841 CC lib/lvol/lvol.o 00:04:19.099 LIB libspdk_bdev.a 00:04:19.099 SO libspdk_bdev.so.17.0 00:04:19.099 SYMLINK libspdk_bdev.so 00:04:19.358 CC lib/scsi/dev.o 00:04:19.358 CC lib/nvmf/ctrlr.o 
00:04:19.358 CC lib/scsi/port.o 00:04:19.358 CC lib/scsi/lun.o 00:04:19.358 CC lib/nvmf/ctrlr_discovery.o 00:04:19.358 CC lib/nbd/nbd.o 00:04:19.358 CC lib/ublk/ublk.o 00:04:19.358 CC lib/ftl/ftl_core.o 00:04:19.358 CC lib/ftl/ftl_init.o 00:04:19.358 LIB libspdk_blobfs.a 00:04:19.617 SO libspdk_blobfs.so.11.0 00:04:19.617 SYMLINK libspdk_blobfs.so 00:04:19.617 CC lib/ublk/ublk_rpc.o 00:04:19.617 CC lib/scsi/scsi.o 00:04:19.617 CC lib/nbd/nbd_rpc.o 00:04:19.617 CC lib/scsi/scsi_bdev.o 00:04:19.617 CC lib/scsi/scsi_pr.o 00:04:19.617 CC lib/ftl/ftl_layout.o 00:04:19.617 CC lib/ftl/ftl_debug.o 00:04:19.617 CC lib/nvmf/ctrlr_bdev.o 00:04:19.876 LIB libspdk_nbd.a 00:04:19.876 LIB libspdk_lvol.a 00:04:19.876 CC lib/ftl/ftl_io.o 00:04:19.876 SO libspdk_nbd.so.7.0 00:04:19.876 SO libspdk_lvol.so.11.0 00:04:19.876 SYMLINK libspdk_nbd.so 00:04:19.876 CC lib/nvmf/subsystem.o 00:04:19.876 SYMLINK libspdk_lvol.so 00:04:19.876 CC lib/nvmf/nvmf.o 00:04:19.876 LIB libspdk_ublk.a 00:04:19.876 SO libspdk_ublk.so.3.0 00:04:19.876 CC lib/nvmf/nvmf_rpc.o 00:04:19.876 CC lib/nvmf/transport.o 00:04:19.876 SYMLINK libspdk_ublk.so 00:04:19.876 CC lib/nvmf/tcp.o 00:04:20.134 CC lib/scsi/scsi_rpc.o 00:04:20.134 CC lib/ftl/ftl_sb.o 00:04:20.134 CC lib/ftl/ftl_l2p.o 00:04:20.134 CC lib/scsi/task.o 00:04:20.134 CC lib/ftl/ftl_l2p_flat.o 00:04:20.392 CC lib/ftl/ftl_nv_cache.o 00:04:20.392 CC lib/nvmf/stubs.o 00:04:20.392 LIB libspdk_scsi.a 00:04:20.392 CC lib/ftl/ftl_band.o 00:04:20.392 SO libspdk_scsi.so.9.0 00:04:20.392 SYMLINK libspdk_scsi.so 00:04:20.392 CC lib/nvmf/mdns_server.o 00:04:20.392 CC lib/nvmf/rdma.o 00:04:20.650 CC lib/nvmf/auth.o 00:04:20.650 CC lib/ftl/ftl_band_ops.o 00:04:20.650 CC lib/ftl/ftl_writer.o 00:04:20.908 CC lib/ftl/ftl_rq.o 00:04:20.908 CC lib/vhost/vhost.o 00:04:20.908 CC lib/iscsi/conn.o 00:04:20.908 CC lib/vhost/vhost_rpc.o 00:04:20.908 CC lib/vhost/vhost_scsi.o 00:04:20.908 CC lib/vhost/vhost_blk.o 00:04:21.166 CC lib/ftl/ftl_reloc.o 00:04:21.166 CC lib/vhost/rte_vhost_user.o 00:04:21.425 CC lib/ftl/ftl_l2p_cache.o 00:04:21.425 CC lib/iscsi/init_grp.o 00:04:21.425 CC lib/iscsi/iscsi.o 00:04:21.425 CC lib/iscsi/param.o 00:04:21.684 CC lib/iscsi/portal_grp.o 00:04:21.684 CC lib/iscsi/tgt_node.o 00:04:21.684 CC lib/iscsi/iscsi_subsystem.o 00:04:21.684 CC lib/iscsi/iscsi_rpc.o 00:04:21.684 CC lib/iscsi/task.o 00:04:21.943 CC lib/ftl/ftl_p2l.o 00:04:21.943 CC lib/ftl/ftl_p2l_log.o 00:04:21.943 CC lib/ftl/mngt/ftl_mngt.o 00:04:21.943 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:04:21.943 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:04:21.943 CC lib/ftl/mngt/ftl_mngt_startup.o 00:04:21.943 CC lib/ftl/mngt/ftl_mngt_md.o 00:04:21.943 CC lib/ftl/mngt/ftl_mngt_misc.o 00:04:21.943 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:04:22.202 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:04:22.202 CC lib/ftl/mngt/ftl_mngt_band.o 00:04:22.202 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:04:22.202 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:04:22.202 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:04:22.202 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:04:22.202 LIB libspdk_vhost.a 00:04:22.202 CC lib/ftl/utils/ftl_conf.o 00:04:22.202 CC lib/ftl/utils/ftl_md.o 00:04:22.202 SO libspdk_vhost.so.8.0 00:04:22.202 CC lib/ftl/utils/ftl_mempool.o 00:04:22.202 CC lib/ftl/utils/ftl_bitmap.o 00:04:22.459 CC lib/ftl/utils/ftl_property.o 00:04:22.460 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:04:22.460 SYMLINK libspdk_vhost.so 00:04:22.460 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:04:22.460 LIB libspdk_nvmf.a 00:04:22.460 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:04:22.460 CC 
lib/ftl/upgrade/ftl_p2l_upgrade.o 00:04:22.460 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:04:22.460 SO libspdk_nvmf.so.20.0 00:04:22.460 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:04:22.460 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:04:22.460 CC lib/ftl/upgrade/ftl_sb_v3.o 00:04:22.460 CC lib/ftl/upgrade/ftl_sb_v5.o 00:04:22.718 CC lib/ftl/nvc/ftl_nvc_dev.o 00:04:22.718 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:04:22.718 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:04:22.718 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:04:22.718 SYMLINK libspdk_nvmf.so 00:04:22.718 CC lib/ftl/base/ftl_base_dev.o 00:04:22.718 CC lib/ftl/base/ftl_base_bdev.o 00:04:22.718 CC lib/ftl/ftl_trace.o 00:04:22.975 LIB libspdk_iscsi.a 00:04:22.975 LIB libspdk_ftl.a 00:04:22.975 SO libspdk_iscsi.so.8.0 00:04:22.975 SYMLINK libspdk_iscsi.so 00:04:22.975 SO libspdk_ftl.so.9.0 00:04:23.234 SYMLINK libspdk_ftl.so 00:04:23.492 CC module/env_dpdk/env_dpdk_rpc.o 00:04:23.750 CC module/keyring/file/keyring.o 00:04:23.750 CC module/blob/bdev/blob_bdev.o 00:04:23.750 CC module/keyring/linux/keyring.o 00:04:23.750 CC module/sock/posix/posix.o 00:04:23.750 CC module/fsdev/aio/fsdev_aio.o 00:04:23.750 CC module/scheduler/dynamic/scheduler_dynamic.o 00:04:23.750 CC module/accel/error/accel_error.o 00:04:23.750 CC module/scheduler/gscheduler/gscheduler.o 00:04:23.750 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:04:23.750 LIB libspdk_env_dpdk_rpc.a 00:04:23.750 SO libspdk_env_dpdk_rpc.so.6.0 00:04:23.750 CC module/keyring/file/keyring_rpc.o 00:04:23.750 SYMLINK libspdk_env_dpdk_rpc.so 00:04:23.750 CC module/fsdev/aio/fsdev_aio_rpc.o 00:04:23.750 CC module/keyring/linux/keyring_rpc.o 00:04:23.750 LIB libspdk_scheduler_dpdk_governor.a 00:04:23.750 SO libspdk_scheduler_dpdk_governor.so.4.0 00:04:23.750 LIB libspdk_scheduler_gscheduler.a 00:04:23.750 LIB libspdk_keyring_file.a 00:04:23.750 CC module/accel/error/accel_error_rpc.o 00:04:23.750 LIB libspdk_scheduler_dynamic.a 00:04:23.750 SO libspdk_scheduler_gscheduler.so.4.0 00:04:23.750 SO libspdk_keyring_file.so.2.0 00:04:23.750 SYMLINK libspdk_scheduler_dpdk_governor.so 00:04:24.053 SO libspdk_scheduler_dynamic.so.4.0 00:04:24.053 SYMLINK libspdk_scheduler_gscheduler.so 00:04:24.053 LIB libspdk_keyring_linux.a 00:04:24.053 LIB libspdk_blob_bdev.a 00:04:24.053 SYMLINK libspdk_keyring_file.so 00:04:24.053 CC module/fsdev/aio/linux_aio_mgr.o 00:04:24.053 SYMLINK libspdk_scheduler_dynamic.so 00:04:24.053 SO libspdk_blob_bdev.so.12.0 00:04:24.053 SO libspdk_keyring_linux.so.1.0 00:04:24.053 LIB libspdk_accel_error.a 00:04:24.053 SO libspdk_accel_error.so.2.0 00:04:24.053 SYMLINK libspdk_blob_bdev.so 00:04:24.053 SYMLINK libspdk_keyring_linux.so 00:04:24.053 CC module/accel/dsa/accel_dsa.o 00:04:24.053 CC module/accel/dsa/accel_dsa_rpc.o 00:04:24.053 SYMLINK libspdk_accel_error.so 00:04:24.053 CC module/accel/ioat/accel_ioat.o 00:04:24.053 CC module/accel/ioat/accel_ioat_rpc.o 00:04:24.053 CC module/accel/iaa/accel_iaa.o 00:04:24.053 CC module/accel/iaa/accel_iaa_rpc.o 00:04:24.334 LIB libspdk_accel_ioat.a 00:04:24.334 CC module/bdev/delay/vbdev_delay.o 00:04:24.334 CC module/blobfs/bdev/blobfs_bdev.o 00:04:24.334 SO libspdk_accel_ioat.so.6.0 00:04:24.334 CC module/bdev/delay/vbdev_delay_rpc.o 00:04:24.334 LIB libspdk_accel_iaa.a 00:04:24.334 CC module/bdev/error/vbdev_error.o 00:04:24.334 SO libspdk_accel_iaa.so.3.0 00:04:24.334 SYMLINK libspdk_accel_ioat.so 00:04:24.334 CC module/bdev/gpt/gpt.o 00:04:24.334 LIB libspdk_sock_posix.a 00:04:24.334 SYMLINK libspdk_accel_iaa.so 00:04:24.334 LIB 
libspdk_accel_dsa.a 00:04:24.334 SO libspdk_sock_posix.so.6.0 00:04:24.334 SO libspdk_accel_dsa.so.5.0 00:04:24.334 LIB libspdk_fsdev_aio.a 00:04:24.334 SO libspdk_fsdev_aio.so.1.0 00:04:24.334 SYMLINK libspdk_sock_posix.so 00:04:24.334 SYMLINK libspdk_accel_dsa.so 00:04:24.334 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:04:24.334 CC module/bdev/error/vbdev_error_rpc.o 00:04:24.334 CC module/bdev/gpt/vbdev_gpt.o 00:04:24.334 CC module/bdev/lvol/vbdev_lvol.o 00:04:24.334 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:04:24.334 CC module/bdev/malloc/bdev_malloc.o 00:04:24.334 SYMLINK libspdk_fsdev_aio.so 00:04:24.334 CC module/bdev/null/bdev_null.o 00:04:24.599 LIB libspdk_bdev_error.a 00:04:24.599 LIB libspdk_bdev_delay.a 00:04:24.600 LIB libspdk_blobfs_bdev.a 00:04:24.600 SO libspdk_bdev_error.so.6.0 00:04:24.600 SO libspdk_bdev_delay.so.6.0 00:04:24.600 SO libspdk_blobfs_bdev.so.6.0 00:04:24.600 CC module/bdev/nvme/bdev_nvme.o 00:04:24.600 SYMLINK libspdk_blobfs_bdev.so 00:04:24.600 SYMLINK libspdk_bdev_delay.so 00:04:24.600 SYMLINK libspdk_bdev_error.so 00:04:24.600 LIB libspdk_bdev_gpt.a 00:04:24.600 CC module/bdev/passthru/vbdev_passthru.o 00:04:24.600 SO libspdk_bdev_gpt.so.6.0 00:04:24.600 SYMLINK libspdk_bdev_gpt.so 00:04:24.600 CC module/bdev/null/bdev_null_rpc.o 00:04:24.857 CC module/bdev/split/vbdev_split.o 00:04:24.857 CC module/bdev/zone_block/vbdev_zone_block.o 00:04:24.857 CC module/bdev/raid/bdev_raid.o 00:04:24.857 LIB libspdk_bdev_lvol.a 00:04:24.857 SO libspdk_bdev_lvol.so.6.0 00:04:24.857 CC module/bdev/malloc/bdev_malloc_rpc.o 00:04:24.857 CC module/bdev/xnvme/bdev_xnvme.o 00:04:24.857 SYMLINK libspdk_bdev_lvol.so 00:04:24.857 CC module/bdev/aio/bdev_aio.o 00:04:24.857 CC module/bdev/aio/bdev_aio_rpc.o 00:04:24.857 LIB libspdk_bdev_null.a 00:04:24.857 CC module/bdev/split/vbdev_split_rpc.o 00:04:24.857 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:04:24.857 SO libspdk_bdev_null.so.6.0 00:04:24.857 LIB libspdk_bdev_malloc.a 00:04:24.857 SO libspdk_bdev_malloc.so.6.0 00:04:25.116 SYMLINK libspdk_bdev_null.so 00:04:25.116 CC module/bdev/xnvme/bdev_xnvme_rpc.o 00:04:25.116 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:04:25.116 SYMLINK libspdk_bdev_malloc.so 00:04:25.116 LIB libspdk_bdev_split.a 00:04:25.116 CC module/bdev/nvme/bdev_nvme_rpc.o 00:04:25.116 CC module/bdev/nvme/nvme_rpc.o 00:04:25.116 LIB libspdk_bdev_passthru.a 00:04:25.116 SO libspdk_bdev_split.so.6.0 00:04:25.116 SO libspdk_bdev_passthru.so.6.0 00:04:25.116 LIB libspdk_bdev_aio.a 00:04:25.116 SYMLINK libspdk_bdev_split.so 00:04:25.116 CC module/bdev/raid/bdev_raid_rpc.o 00:04:25.116 SYMLINK libspdk_bdev_passthru.so 00:04:25.116 CC module/bdev/raid/bdev_raid_sb.o 00:04:25.116 LIB libspdk_bdev_xnvme.a 00:04:25.116 SO libspdk_bdev_aio.so.6.0 00:04:25.116 LIB libspdk_bdev_zone_block.a 00:04:25.116 SO libspdk_bdev_xnvme.so.3.0 00:04:25.116 CC module/bdev/ftl/bdev_ftl.o 00:04:25.116 SO libspdk_bdev_zone_block.so.6.0 00:04:25.116 SYMLINK libspdk_bdev_aio.so 00:04:25.116 CC module/bdev/nvme/bdev_mdns_client.o 00:04:25.116 SYMLINK libspdk_bdev_xnvme.so 00:04:25.116 SYMLINK libspdk_bdev_zone_block.so 00:04:25.116 CC module/bdev/nvme/vbdev_opal.o 00:04:25.116 CC module/bdev/nvme/vbdev_opal_rpc.o 00:04:25.374 CC module/bdev/ftl/bdev_ftl_rpc.o 00:04:25.374 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:04:25.374 CC module/bdev/raid/raid0.o 00:04:25.374 CC module/bdev/raid/raid1.o 00:04:25.374 CC module/bdev/iscsi/bdev_iscsi.o 00:04:25.374 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:04:25.374 LIB libspdk_bdev_ftl.a 
00:04:25.374 CC module/bdev/raid/concat.o 00:04:25.374 SO libspdk_bdev_ftl.so.6.0 00:04:25.374 CC module/bdev/virtio/bdev_virtio_scsi.o 00:04:25.633 CC module/bdev/virtio/bdev_virtio_blk.o 00:04:25.633 SYMLINK libspdk_bdev_ftl.so 00:04:25.633 CC module/bdev/virtio/bdev_virtio_rpc.o 00:04:25.633 LIB libspdk_bdev_iscsi.a 00:04:25.633 SO libspdk_bdev_iscsi.so.6.0 00:04:25.890 SYMLINK libspdk_bdev_iscsi.so 00:04:25.890 LIB libspdk_bdev_raid.a 00:04:25.890 SO libspdk_bdev_raid.so.6.0 00:04:25.890 LIB libspdk_bdev_virtio.a 00:04:25.890 SYMLINK libspdk_bdev_raid.so 00:04:25.890 SO libspdk_bdev_virtio.so.6.0 00:04:25.890 SYMLINK libspdk_bdev_virtio.so 00:04:26.828 LIB libspdk_bdev_nvme.a 00:04:26.828 SO libspdk_bdev_nvme.so.7.1 00:04:27.088 SYMLINK libspdk_bdev_nvme.so 00:04:27.348 CC module/event/subsystems/iobuf/iobuf.o 00:04:27.348 CC module/event/subsystems/sock/sock.o 00:04:27.348 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:04:27.348 CC module/event/subsystems/keyring/keyring.o 00:04:27.348 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:04:27.348 CC module/event/subsystems/vmd/vmd.o 00:04:27.348 CC module/event/subsystems/scheduler/scheduler.o 00:04:27.348 CC module/event/subsystems/vmd/vmd_rpc.o 00:04:27.348 CC module/event/subsystems/fsdev/fsdev.o 00:04:27.348 LIB libspdk_event_keyring.a 00:04:27.348 LIB libspdk_event_sock.a 00:04:27.348 SO libspdk_event_keyring.so.1.0 00:04:27.608 SO libspdk_event_sock.so.5.0 00:04:27.608 LIB libspdk_event_vmd.a 00:04:27.608 LIB libspdk_event_scheduler.a 00:04:27.608 LIB libspdk_event_fsdev.a 00:04:27.608 SO libspdk_event_vmd.so.6.0 00:04:27.608 SYMLINK libspdk_event_keyring.so 00:04:27.608 SYMLINK libspdk_event_sock.so 00:04:27.608 SO libspdk_event_scheduler.so.4.0 00:04:27.608 SO libspdk_event_fsdev.so.1.0 00:04:27.608 LIB libspdk_event_vhost_blk.a 00:04:27.608 LIB libspdk_event_iobuf.a 00:04:27.608 SO libspdk_event_vhost_blk.so.3.0 00:04:27.608 SYMLINK libspdk_event_fsdev.so 00:04:27.608 SYMLINK libspdk_event_vmd.so 00:04:27.608 SO libspdk_event_iobuf.so.3.0 00:04:27.608 SYMLINK libspdk_event_scheduler.so 00:04:27.608 SYMLINK libspdk_event_vhost_blk.so 00:04:27.608 SYMLINK libspdk_event_iobuf.so 00:04:27.868 CC module/event/subsystems/accel/accel.o 00:04:27.868 LIB libspdk_event_accel.a 00:04:27.868 SO libspdk_event_accel.so.6.0 00:04:28.128 SYMLINK libspdk_event_accel.so 00:04:28.388 CC module/event/subsystems/bdev/bdev.o 00:04:28.388 LIB libspdk_event_bdev.a 00:04:28.388 SO libspdk_event_bdev.so.6.0 00:04:28.388 SYMLINK libspdk_event_bdev.so 00:04:28.647 CC module/event/subsystems/ublk/ublk.o 00:04:28.647 CC module/event/subsystems/nbd/nbd.o 00:04:28.647 CC module/event/subsystems/scsi/scsi.o 00:04:28.647 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:04:28.647 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:04:28.906 LIB libspdk_event_nbd.a 00:04:28.906 LIB libspdk_event_ublk.a 00:04:28.906 LIB libspdk_event_scsi.a 00:04:28.906 SO libspdk_event_nbd.so.6.0 00:04:28.906 SO libspdk_event_ublk.so.3.0 00:04:28.906 SO libspdk_event_scsi.so.6.0 00:04:28.906 SYMLINK libspdk_event_nbd.so 00:04:28.906 SYMLINK libspdk_event_ublk.so 00:04:28.906 SYMLINK libspdk_event_scsi.so 00:04:28.906 LIB libspdk_event_nvmf.a 00:04:28.906 SO libspdk_event_nvmf.so.6.0 00:04:28.906 SYMLINK libspdk_event_nvmf.so 00:04:29.164 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:04:29.164 CC module/event/subsystems/iscsi/iscsi.o 00:04:29.164 LIB libspdk_event_vhost_scsi.a 00:04:29.164 LIB libspdk_event_iscsi.a 00:04:29.164 SO libspdk_event_vhost_scsi.so.3.0 00:04:29.164 SO 
libspdk_event_iscsi.so.6.0 00:04:29.164 SYMLINK libspdk_event_vhost_scsi.so 00:04:29.164 SYMLINK libspdk_event_iscsi.so 00:04:29.422 SO libspdk.so.6.0 00:04:29.422 SYMLINK libspdk.so 00:04:29.681 CXX app/trace/trace.o 00:04:29.681 CC app/spdk_nvme_perf/perf.o 00:04:29.681 CC app/spdk_nvme_identify/identify.o 00:04:29.681 CC app/trace_record/trace_record.o 00:04:29.681 CC app/spdk_lspci/spdk_lspci.o 00:04:29.681 CC app/iscsi_tgt/iscsi_tgt.o 00:04:29.681 CC app/nvmf_tgt/nvmf_main.o 00:04:29.681 CC app/spdk_tgt/spdk_tgt.o 00:04:29.681 CC examples/util/zipf/zipf.o 00:04:29.681 CC test/thread/poller_perf/poller_perf.o 00:04:29.681 LINK spdk_lspci 00:04:29.940 LINK poller_perf 00:04:29.940 LINK zipf 00:04:29.940 LINK nvmf_tgt 00:04:29.940 LINK spdk_tgt 00:04:29.940 LINK iscsi_tgt 00:04:29.940 LINK spdk_trace_record 00:04:29.940 LINK spdk_trace 00:04:29.940 CC test/dma/test_dma/test_dma.o 00:04:29.940 CC app/spdk_nvme_discover/discovery_aer.o 00:04:30.200 CC examples/ioat/perf/perf.o 00:04:30.200 CC examples/interrupt_tgt/interrupt_tgt.o 00:04:30.200 CC test/app/bdev_svc/bdev_svc.o 00:04:30.200 CC examples/idxd/perf/perf.o 00:04:30.200 CC examples/vmd/lsvmd/lsvmd.o 00:04:30.200 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:04:30.200 LINK interrupt_tgt 00:04:30.200 LINK spdk_nvme_discover 00:04:30.200 LINK spdk_nvme_identify 00:04:30.200 LINK lsvmd 00:04:30.200 LINK ioat_perf 00:04:30.200 LINK bdev_svc 00:04:30.458 LINK spdk_nvme_perf 00:04:30.458 CC examples/vmd/led/led.o 00:04:30.458 TEST_HEADER include/spdk/accel.h 00:04:30.458 TEST_HEADER include/spdk/accel_module.h 00:04:30.458 LINK idxd_perf 00:04:30.458 TEST_HEADER include/spdk/assert.h 00:04:30.458 CC examples/ioat/verify/verify.o 00:04:30.458 TEST_HEADER include/spdk/barrier.h 00:04:30.458 TEST_HEADER include/spdk/base64.h 00:04:30.458 TEST_HEADER include/spdk/bdev.h 00:04:30.458 TEST_HEADER include/spdk/bdev_module.h 00:04:30.458 TEST_HEADER include/spdk/bdev_zone.h 00:04:30.458 TEST_HEADER include/spdk/bit_array.h 00:04:30.458 TEST_HEADER include/spdk/bit_pool.h 00:04:30.458 TEST_HEADER include/spdk/blob_bdev.h 00:04:30.458 LINK nvme_fuzz 00:04:30.458 TEST_HEADER include/spdk/blobfs_bdev.h 00:04:30.458 TEST_HEADER include/spdk/blobfs.h 00:04:30.458 TEST_HEADER include/spdk/blob.h 00:04:30.458 TEST_HEADER include/spdk/conf.h 00:04:30.458 TEST_HEADER include/spdk/config.h 00:04:30.458 TEST_HEADER include/spdk/cpuset.h 00:04:30.459 TEST_HEADER include/spdk/crc16.h 00:04:30.459 TEST_HEADER include/spdk/crc32.h 00:04:30.459 TEST_HEADER include/spdk/crc64.h 00:04:30.459 CC examples/thread/thread/thread_ex.o 00:04:30.459 TEST_HEADER include/spdk/dif.h 00:04:30.459 TEST_HEADER include/spdk/dma.h 00:04:30.459 TEST_HEADER include/spdk/endian.h 00:04:30.459 CC app/spdk_top/spdk_top.o 00:04:30.459 LINK test_dma 00:04:30.459 TEST_HEADER include/spdk/env_dpdk.h 00:04:30.459 TEST_HEADER include/spdk/env.h 00:04:30.459 TEST_HEADER include/spdk/event.h 00:04:30.459 TEST_HEADER include/spdk/fd_group.h 00:04:30.459 TEST_HEADER include/spdk/fd.h 00:04:30.459 TEST_HEADER include/spdk/file.h 00:04:30.459 TEST_HEADER include/spdk/fsdev.h 00:04:30.459 TEST_HEADER include/spdk/fsdev_module.h 00:04:30.459 TEST_HEADER include/spdk/ftl.h 00:04:30.459 TEST_HEADER include/spdk/fuse_dispatcher.h 00:04:30.459 TEST_HEADER include/spdk/gpt_spec.h 00:04:30.459 TEST_HEADER include/spdk/hexlify.h 00:04:30.459 TEST_HEADER include/spdk/histogram_data.h 00:04:30.459 TEST_HEADER include/spdk/idxd.h 00:04:30.459 TEST_HEADER include/spdk/idxd_spec.h 00:04:30.459 TEST_HEADER 
include/spdk/init.h 00:04:30.459 TEST_HEADER include/spdk/ioat.h 00:04:30.459 TEST_HEADER include/spdk/ioat_spec.h 00:04:30.459 TEST_HEADER include/spdk/iscsi_spec.h 00:04:30.459 TEST_HEADER include/spdk/json.h 00:04:30.459 TEST_HEADER include/spdk/jsonrpc.h 00:04:30.459 TEST_HEADER include/spdk/keyring.h 00:04:30.459 TEST_HEADER include/spdk/keyring_module.h 00:04:30.459 TEST_HEADER include/spdk/likely.h 00:04:30.459 TEST_HEADER include/spdk/log.h 00:04:30.459 TEST_HEADER include/spdk/lvol.h 00:04:30.459 TEST_HEADER include/spdk/md5.h 00:04:30.459 TEST_HEADER include/spdk/memory.h 00:04:30.459 TEST_HEADER include/spdk/mmio.h 00:04:30.459 TEST_HEADER include/spdk/nbd.h 00:04:30.459 LINK led 00:04:30.718 TEST_HEADER include/spdk/net.h 00:04:30.718 TEST_HEADER include/spdk/notify.h 00:04:30.718 TEST_HEADER include/spdk/nvme.h 00:04:30.718 TEST_HEADER include/spdk/nvme_intel.h 00:04:30.718 TEST_HEADER include/spdk/nvme_ocssd.h 00:04:30.718 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:04:30.718 TEST_HEADER include/spdk/nvme_spec.h 00:04:30.718 TEST_HEADER include/spdk/nvme_zns.h 00:04:30.718 TEST_HEADER include/spdk/nvmf_cmd.h 00:04:30.718 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:04:30.718 TEST_HEADER include/spdk/nvmf.h 00:04:30.718 CC test/event/event_perf/event_perf.o 00:04:30.718 TEST_HEADER include/spdk/nvmf_spec.h 00:04:30.718 TEST_HEADER include/spdk/nvmf_transport.h 00:04:30.718 CC test/env/mem_callbacks/mem_callbacks.o 00:04:30.718 TEST_HEADER include/spdk/opal.h 00:04:30.718 TEST_HEADER include/spdk/opal_spec.h 00:04:30.718 TEST_HEADER include/spdk/pci_ids.h 00:04:30.718 TEST_HEADER include/spdk/pipe.h 00:04:30.718 TEST_HEADER include/spdk/queue.h 00:04:30.718 TEST_HEADER include/spdk/reduce.h 00:04:30.718 TEST_HEADER include/spdk/rpc.h 00:04:30.718 TEST_HEADER include/spdk/scheduler.h 00:04:30.718 TEST_HEADER include/spdk/scsi.h 00:04:30.718 TEST_HEADER include/spdk/scsi_spec.h 00:04:30.718 TEST_HEADER include/spdk/sock.h 00:04:30.718 TEST_HEADER include/spdk/stdinc.h 00:04:30.718 TEST_HEADER include/spdk/string.h 00:04:30.718 TEST_HEADER include/spdk/thread.h 00:04:30.718 TEST_HEADER include/spdk/trace.h 00:04:30.718 TEST_HEADER include/spdk/trace_parser.h 00:04:30.718 TEST_HEADER include/spdk/tree.h 00:04:30.718 TEST_HEADER include/spdk/ublk.h 00:04:30.718 TEST_HEADER include/spdk/util.h 00:04:30.718 TEST_HEADER include/spdk/uuid.h 00:04:30.718 TEST_HEADER include/spdk/version.h 00:04:30.718 TEST_HEADER include/spdk/vfio_user_pci.h 00:04:30.718 TEST_HEADER include/spdk/vfio_user_spec.h 00:04:30.718 TEST_HEADER include/spdk/vhost.h 00:04:30.718 TEST_HEADER include/spdk/vmd.h 00:04:30.718 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:04:30.718 TEST_HEADER include/spdk/xor.h 00:04:30.718 TEST_HEADER include/spdk/zipf.h 00:04:30.718 LINK verify 00:04:30.718 CXX test/cpp_headers/accel.o 00:04:30.718 CC examples/sock/hello_world/hello_sock.o 00:04:30.718 LINK thread 00:04:30.718 LINK event_perf 00:04:30.718 CC test/env/vtophys/vtophys.o 00:04:30.718 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:04:30.718 LINK mem_callbacks 00:04:30.976 CXX test/cpp_headers/accel_module.o 00:04:30.976 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:04:30.976 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:04:30.976 CC test/event/reactor/reactor.o 00:04:30.976 LINK vtophys 00:04:30.976 LINK hello_sock 00:04:30.976 CXX test/cpp_headers/assert.o 00:04:30.976 CC test/env/memory/memory_ut.o 00:04:30.976 LINK env_dpdk_post_init 00:04:30.976 CC app/vhost/vhost.o 00:04:30.976 LINK reactor 00:04:31.234 CXX 
test/cpp_headers/barrier.o 00:04:31.234 CC test/app/histogram_perf/histogram_perf.o 00:04:31.234 LINK vhost 00:04:31.234 CC examples/accel/perf/accel_perf.o 00:04:31.234 CC test/event/reactor_perf/reactor_perf.o 00:04:31.234 CC test/env/pci/pci_ut.o 00:04:31.234 CXX test/cpp_headers/base64.o 00:04:31.234 LINK vhost_fuzz 00:04:31.234 LINK histogram_perf 00:04:31.234 LINK reactor_perf 00:04:31.492 CC test/app/jsoncat/jsoncat.o 00:04:31.492 LINK spdk_top 00:04:31.492 CXX test/cpp_headers/bdev.o 00:04:31.492 CXX test/cpp_headers/bdev_module.o 00:04:31.492 CC test/event/app_repeat/app_repeat.o 00:04:31.492 LINK jsoncat 00:04:31.492 CC test/event/scheduler/scheduler.o 00:04:31.492 CXX test/cpp_headers/bdev_zone.o 00:04:31.492 LINK app_repeat 00:04:31.750 CXX test/cpp_headers/bit_array.o 00:04:31.750 CC app/spdk_dd/spdk_dd.o 00:04:31.750 LINK pci_ut 00:04:31.750 LINK accel_perf 00:04:31.750 LINK scheduler 00:04:31.750 CXX test/cpp_headers/bit_pool.o 00:04:31.750 CC app/fio/nvme/fio_plugin.o 00:04:31.750 CXX test/cpp_headers/blob_bdev.o 00:04:31.750 LINK memory_ut 00:04:31.750 CC test/app/stub/stub.o 00:04:32.008 CXX test/cpp_headers/blobfs_bdev.o 00:04:32.008 CXX test/cpp_headers/blobfs.o 00:04:32.008 CXX test/cpp_headers/blob.o 00:04:32.008 CC test/rpc_client/rpc_client_test.o 00:04:32.008 LINK stub 00:04:32.008 LINK spdk_dd 00:04:32.008 CC app/fio/bdev/fio_plugin.o 00:04:32.008 LINK iscsi_fuzz 00:04:32.008 CC examples/blob/hello_world/hello_blob.o 00:04:32.008 CXX test/cpp_headers/conf.o 00:04:32.008 CC examples/blob/cli/blobcli.o 00:04:32.008 LINK rpc_client_test 00:04:32.268 CXX test/cpp_headers/config.o 00:04:32.268 CXX test/cpp_headers/cpuset.o 00:04:32.268 LINK hello_blob 00:04:32.268 CC examples/nvme/hello_world/hello_world.o 00:04:32.268 CC examples/nvme/reconnect/reconnect.o 00:04:32.268 CC test/blobfs/mkfs/mkfs.o 00:04:32.268 LINK spdk_nvme 00:04:32.268 CC test/accel/dif/dif.o 00:04:32.527 CXX test/cpp_headers/crc16.o 00:04:32.527 CC examples/fsdev/hello_world/hello_fsdev.o 00:04:32.527 CXX test/cpp_headers/crc32.o 00:04:32.527 CXX test/cpp_headers/crc64.o 00:04:32.527 LINK hello_world 00:04:32.527 LINK blobcli 00:04:32.527 LINK mkfs 00:04:32.527 LINK spdk_bdev 00:04:32.527 CXX test/cpp_headers/dif.o 00:04:32.527 CXX test/cpp_headers/dma.o 00:04:32.527 CC examples/nvme/nvme_manage/nvme_manage.o 00:04:32.527 CXX test/cpp_headers/endian.o 00:04:32.527 CC examples/nvme/arbitration/arbitration.o 00:04:32.527 LINK reconnect 00:04:32.527 LINK hello_fsdev 00:04:32.785 CXX test/cpp_headers/env_dpdk.o 00:04:32.785 CC test/lvol/esnap/esnap.o 00:04:32.785 CXX test/cpp_headers/env.o 00:04:32.785 CC test/nvme/aer/aer.o 00:04:32.785 CC examples/nvme/hotplug/hotplug.o 00:04:32.785 CC examples/bdev/hello_world/hello_bdev.o 00:04:32.785 LINK arbitration 00:04:32.785 CC test/nvme/reset/reset.o 00:04:33.043 CC examples/bdev/bdevperf/bdevperf.o 00:04:33.043 CXX test/cpp_headers/event.o 00:04:33.043 LINK dif 00:04:33.043 CXX test/cpp_headers/fd_group.o 00:04:33.043 LINK aer 00:04:33.043 LINK hotplug 00:04:33.043 CC examples/nvme/cmb_copy/cmb_copy.o 00:04:33.043 LINK nvme_manage 00:04:33.043 LINK hello_bdev 00:04:33.043 CXX test/cpp_headers/fd.o 00:04:33.043 LINK reset 00:04:33.043 CC test/nvme/sgl/sgl.o 00:04:33.303 LINK cmb_copy 00:04:33.303 CC test/nvme/e2edp/nvme_dp.o 00:04:33.303 CXX test/cpp_headers/file.o 00:04:33.303 CC test/bdev/bdevio/bdevio.o 00:04:33.303 CC examples/nvme/abort/abort.o 00:04:33.303 CC test/nvme/overhead/overhead.o 00:04:33.303 CXX test/cpp_headers/fsdev.o 00:04:33.303 CC 
test/nvme/err_injection/err_injection.o 00:04:33.303 LINK sgl 00:04:33.563 CXX test/cpp_headers/fsdev_module.o 00:04:33.563 CC test/nvme/startup/startup.o 00:04:33.563 LINK err_injection 00:04:33.563 LINK nvme_dp 00:04:33.563 LINK bdevio 00:04:33.563 LINK overhead 00:04:33.563 LINK abort 00:04:33.563 CC test/nvme/reserve/reserve.o 00:04:33.563 LINK startup 00:04:33.563 CXX test/cpp_headers/ftl.o 00:04:33.563 CC test/nvme/simple_copy/simple_copy.o 00:04:33.821 LINK bdevperf 00:04:33.821 CC test/nvme/connect_stress/connect_stress.o 00:04:33.821 LINK reserve 00:04:33.821 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:04:33.821 CXX test/cpp_headers/fuse_dispatcher.o 00:04:33.821 CC test/nvme/compliance/nvme_compliance.o 00:04:33.821 CC test/nvme/boot_partition/boot_partition.o 00:04:33.821 CC test/nvme/fused_ordering/fused_ordering.o 00:04:33.821 LINK simple_copy 00:04:33.821 CXX test/cpp_headers/gpt_spec.o 00:04:33.821 LINK boot_partition 00:04:33.821 LINK connect_stress 00:04:33.821 LINK pmr_persistence 00:04:33.821 CC test/nvme/fdp/fdp.o 00:04:33.821 CC test/nvme/doorbell_aers/doorbell_aers.o 00:04:34.080 LINK fused_ordering 00:04:34.080 CC test/nvme/cuse/cuse.o 00:04:34.080 CXX test/cpp_headers/hexlify.o 00:04:34.080 CXX test/cpp_headers/histogram_data.o 00:04:34.080 CXX test/cpp_headers/idxd.o 00:04:34.080 CXX test/cpp_headers/idxd_spec.o 00:04:34.080 LINK doorbell_aers 00:04:34.080 LINK nvme_compliance 00:04:34.080 CXX test/cpp_headers/init.o 00:04:34.080 CXX test/cpp_headers/ioat.o 00:04:34.080 CXX test/cpp_headers/ioat_spec.o 00:04:34.080 CXX test/cpp_headers/iscsi_spec.o 00:04:34.338 CXX test/cpp_headers/json.o 00:04:34.338 CC examples/nvmf/nvmf/nvmf.o 00:04:34.338 CXX test/cpp_headers/jsonrpc.o 00:04:34.338 LINK fdp 00:04:34.338 CXX test/cpp_headers/keyring.o 00:04:34.338 CXX test/cpp_headers/keyring_module.o 00:04:34.338 CXX test/cpp_headers/likely.o 00:04:34.338 CXX test/cpp_headers/log.o 00:04:34.338 CXX test/cpp_headers/lvol.o 00:04:34.338 CXX test/cpp_headers/md5.o 00:04:34.338 CXX test/cpp_headers/memory.o 00:04:34.338 CXX test/cpp_headers/mmio.o 00:04:34.338 CXX test/cpp_headers/nbd.o 00:04:34.338 CXX test/cpp_headers/net.o 00:04:34.338 CXX test/cpp_headers/notify.o 00:04:34.338 CXX test/cpp_headers/nvme_intel.o 00:04:34.338 CXX test/cpp_headers/nvme.o 00:04:34.597 CXX test/cpp_headers/nvme_ocssd.o 00:04:34.597 LINK nvmf 00:04:34.597 CXX test/cpp_headers/nvme_ocssd_spec.o 00:04:34.597 CXX test/cpp_headers/nvme_spec.o 00:04:34.597 CXX test/cpp_headers/nvme_zns.o 00:04:34.597 CXX test/cpp_headers/nvmf_cmd.o 00:04:34.597 CXX test/cpp_headers/nvmf_fc_spec.o 00:04:34.597 CXX test/cpp_headers/nvmf.o 00:04:34.597 CXX test/cpp_headers/nvmf_spec.o 00:04:34.597 CXX test/cpp_headers/nvmf_transport.o 00:04:34.597 CXX test/cpp_headers/opal.o 00:04:34.597 CXX test/cpp_headers/opal_spec.o 00:04:34.597 CXX test/cpp_headers/pci_ids.o 00:04:34.597 CXX test/cpp_headers/pipe.o 00:04:34.597 CXX test/cpp_headers/queue.o 00:04:34.856 CXX test/cpp_headers/reduce.o 00:04:34.856 CXX test/cpp_headers/rpc.o 00:04:34.856 CXX test/cpp_headers/scheduler.o 00:04:34.856 CXX test/cpp_headers/scsi.o 00:04:34.856 CXX test/cpp_headers/scsi_spec.o 00:04:34.856 CXX test/cpp_headers/sock.o 00:04:34.856 CXX test/cpp_headers/stdinc.o 00:04:34.856 CXX test/cpp_headers/string.o 00:04:34.856 CXX test/cpp_headers/thread.o 00:04:34.856 CXX test/cpp_headers/trace.o 00:04:34.856 CXX test/cpp_headers/trace_parser.o 00:04:34.856 CXX test/cpp_headers/tree.o 00:04:34.856 CXX test/cpp_headers/ublk.o 00:04:34.856 CXX 
test/cpp_headers/util.o 00:04:34.856 CXX test/cpp_headers/uuid.o 00:04:34.856 CXX test/cpp_headers/version.o 00:04:34.856 CXX test/cpp_headers/vfio_user_pci.o 00:04:34.856 CXX test/cpp_headers/vfio_user_spec.o 00:04:35.115 CXX test/cpp_headers/vhost.o 00:04:35.115 CXX test/cpp_headers/vmd.o 00:04:35.115 CXX test/cpp_headers/xor.o 00:04:35.115 CXX test/cpp_headers/zipf.o 00:04:35.115 LINK cuse 00:04:37.667 LINK esnap 00:04:37.667 00:04:37.667 real 1m1.244s 00:04:37.667 user 5m6.863s 00:04:37.667 sys 0m51.521s 00:04:37.667 18:54:55 make -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:04:37.667 18:54:55 make -- common/autotest_common.sh@10 -- $ set +x 00:04:37.667 ************************************ 00:04:37.667 END TEST make 00:04:37.667 ************************************ 00:04:37.667 18:54:55 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:04:37.667 18:54:55 -- pm/common@29 -- $ signal_monitor_resources TERM 00:04:37.667 18:54:55 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:04:37.667 18:54:55 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:37.667 18:54:55 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:04:37.668 18:54:55 -- pm/common@44 -- $ pid=5803 00:04:37.668 18:54:55 -- pm/common@50 -- $ kill -TERM 5803 00:04:37.668 18:54:55 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:37.668 18:54:55 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:04:37.668 18:54:55 -- pm/common@44 -- $ pid=5804 00:04:37.668 18:54:55 -- pm/common@50 -- $ kill -TERM 5804 00:04:37.668 18:54:55 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:04:37.668 18:54:55 -- spdk/autorun.sh@27 -- $ sudo -E /home/vagrant/spdk_repo/spdk/autotest.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:04:37.668 18:54:55 -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:37.668 18:54:55 -- common/autotest_common.sh@1711 -- # lcov --version 00:04:37.668 18:54:55 -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:37.929 18:54:55 -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:37.929 18:54:55 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:37.929 18:54:55 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:37.929 18:54:55 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:37.929 18:54:55 -- scripts/common.sh@336 -- # IFS=.-: 00:04:37.929 18:54:55 -- scripts/common.sh@336 -- # read -ra ver1 00:04:37.929 18:54:55 -- scripts/common.sh@337 -- # IFS=.-: 00:04:37.929 18:54:55 -- scripts/common.sh@337 -- # read -ra ver2 00:04:37.929 18:54:55 -- scripts/common.sh@338 -- # local 'op=<' 00:04:37.929 18:54:55 -- scripts/common.sh@340 -- # ver1_l=2 00:04:37.929 18:54:55 -- scripts/common.sh@341 -- # ver2_l=1 00:04:37.929 18:54:55 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:37.929 18:54:55 -- scripts/common.sh@344 -- # case "$op" in 00:04:37.929 18:54:55 -- scripts/common.sh@345 -- # : 1 00:04:37.929 18:54:55 -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:37.929 18:54:55 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:37.929 18:54:55 -- scripts/common.sh@365 -- # decimal 1 00:04:37.929 18:54:55 -- scripts/common.sh@353 -- # local d=1 00:04:37.929 18:54:55 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:37.929 18:54:55 -- scripts/common.sh@355 -- # echo 1 00:04:37.929 18:54:55 -- scripts/common.sh@365 -- # ver1[v]=1 00:04:37.929 18:54:55 -- scripts/common.sh@366 -- # decimal 2 00:04:37.929 18:54:55 -- scripts/common.sh@353 -- # local d=2 00:04:37.929 18:54:55 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:37.929 18:54:55 -- scripts/common.sh@355 -- # echo 2 00:04:37.929 18:54:55 -- scripts/common.sh@366 -- # ver2[v]=2 00:04:37.929 18:54:55 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:37.929 18:54:55 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:37.929 18:54:55 -- scripts/common.sh@368 -- # return 0 00:04:37.929 18:54:55 -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:37.929 18:54:55 -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:37.929 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:37.929 --rc genhtml_branch_coverage=1 00:04:37.929 --rc genhtml_function_coverage=1 00:04:37.929 --rc genhtml_legend=1 00:04:37.929 --rc geninfo_all_blocks=1 00:04:37.929 --rc geninfo_unexecuted_blocks=1 00:04:37.929 00:04:37.929 ' 00:04:37.929 18:54:55 -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:37.929 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:37.929 --rc genhtml_branch_coverage=1 00:04:37.929 --rc genhtml_function_coverage=1 00:04:37.929 --rc genhtml_legend=1 00:04:37.929 --rc geninfo_all_blocks=1 00:04:37.929 --rc geninfo_unexecuted_blocks=1 00:04:37.929 00:04:37.929 ' 00:04:37.929 18:54:55 -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:37.929 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:37.929 --rc genhtml_branch_coverage=1 00:04:37.929 --rc genhtml_function_coverage=1 00:04:37.929 --rc genhtml_legend=1 00:04:37.929 --rc geninfo_all_blocks=1 00:04:37.929 --rc geninfo_unexecuted_blocks=1 00:04:37.929 00:04:37.929 ' 00:04:37.929 18:54:55 -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:37.929 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:37.929 --rc genhtml_branch_coverage=1 00:04:37.929 --rc genhtml_function_coverage=1 00:04:37.929 --rc genhtml_legend=1 00:04:37.929 --rc geninfo_all_blocks=1 00:04:37.929 --rc geninfo_unexecuted_blocks=1 00:04:37.929 00:04:37.929 ' 00:04:37.929 18:54:55 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:37.929 18:54:55 -- nvmf/common.sh@7 -- # uname -s 00:04:37.929 18:54:55 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:37.930 18:54:55 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:37.930 18:54:55 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:37.930 18:54:55 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:37.930 18:54:55 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:37.930 18:54:55 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:37.930 18:54:55 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:37.930 18:54:55 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:37.930 18:54:55 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:37.930 18:54:55 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:37.930 18:54:55 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:1be40edb-665b-45cb-a0af-6c19b063a797 00:04:37.930 
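The cmp_versions walk traced above is how these scripts decide whether the installed lcov predates the 2.x option names: split both version strings on `.`, `-`, and `:`, then compare field by field as integers. A minimal sketch of the same approach, assuming purely numeric fields (the ver_lt name is illustrative, not part of scripts/common.sh):

ver_lt() {                        # usage: ver_lt 1.15 2 -> exit 0 iff $1 < $2
    local IFS=.-:                 # same separators the trace above splits on
    local -a v1 v2
    read -ra v1 <<< "$1"
    read -ra v2 <<< "$2"
    local i len=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
    for (( i = 0; i < len; i++ )); do
        local a=${v1[i]:-0} b=${v2[i]:-0}   # missing fields compare as 0
        (( a < b )) && return 0
        (( a > b )) && return 1
    done
    return 1                      # equal is not less-than
}
ver_lt 1.15 2 && echo "lcov 1.15 predates 2.x"   # decided in field 0: 1 < 2

The real helper additionally validates each field with a ^[0-9]+$ regex, as the `decimal` calls in the trace show.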
18:54:55 -- nvmf/common.sh@18 -- # NVME_HOSTID=1be40edb-665b-45cb-a0af-6c19b063a797 00:04:37.930 18:54:55 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:37.930 18:54:55 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:37.930 18:54:55 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:37.930 18:54:55 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:37.930 18:54:55 -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:37.930 18:54:55 -- scripts/common.sh@15 -- # shopt -s extglob 00:04:37.930 18:54:55 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:37.930 18:54:55 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:37.930 18:54:55 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:37.930 18:54:55 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:37.930 18:54:55 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:37.930 18:54:55 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:37.930 18:54:55 -- paths/export.sh@5 -- # export PATH 00:04:37.930 18:54:55 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:37.930 18:54:55 -- nvmf/common.sh@51 -- # : 0 00:04:37.930 18:54:55 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:37.930 18:54:55 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:37.930 18:54:55 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:37.930 18:54:55 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:37.930 18:54:55 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:37.930 18:54:55 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:37.930 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:37.930 18:54:55 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:37.930 18:54:55 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:37.930 18:54:55 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:37.930 18:54:55 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:04:37.930 18:54:55 -- spdk/autotest.sh@32 -- # uname -s 00:04:37.930 18:54:55 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:04:37.930 18:54:55 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:04:37.930 18:54:55 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:04:37.930 18:54:55 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:04:37.930 18:54:55 -- 
spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:04:37.930 18:54:55 -- spdk/autotest.sh@44 -- # modprobe nbd 00:04:37.930 18:54:55 -- spdk/autotest.sh@46 -- # type -P udevadm 00:04:37.930 18:54:55 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:04:37.930 18:54:55 -- spdk/autotest.sh@48 -- # udevadm_pid=66220 00:04:37.930 18:54:55 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:04:37.930 18:54:55 -- pm/common@17 -- # local monitor 00:04:37.930 18:54:55 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:04:37.930 18:54:55 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:37.930 18:54:55 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:37.930 18:54:55 -- pm/common@25 -- # sleep 1 00:04:37.930 18:54:55 -- pm/common@21 -- # date +%s 00:04:37.930 18:54:55 -- pm/common@21 -- # date +%s 00:04:37.930 18:54:55 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1733424895 00:04:37.930 18:54:55 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1733424895 00:04:37.930 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1733424895_collect-vmstat.pm.log 00:04:37.930 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1733424895_collect-cpu-load.pm.log 00:04:38.871 18:54:56 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:04:38.871 18:54:56 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:04:38.871 18:54:56 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:38.871 18:54:56 -- common/autotest_common.sh@10 -- # set +x 00:04:38.871 18:54:56 -- spdk/autotest.sh@59 -- # create_test_list 00:04:38.871 18:54:56 -- common/autotest_common.sh@752 -- # xtrace_disable 00:04:38.871 18:54:56 -- common/autotest_common.sh@10 -- # set +x 00:04:39.133 18:54:56 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:04:39.133 18:54:56 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:04:39.133 18:54:56 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:04:39.133 18:54:56 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:04:39.133 18:54:56 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:04:39.133 18:54:56 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:04:39.133 18:54:56 -- common/autotest_common.sh@1457 -- # uname 00:04:39.133 18:54:56 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:04:39.133 18:54:56 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:04:39.133 18:54:56 -- common/autotest_common.sh@1477 -- # uname 00:04:39.133 18:54:56 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:04:39.133 18:54:56 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:04:39.133 18:54:56 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:04:39.133 lcov: LCOV version 1.15 00:04:39.133 18:54:56 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc 
geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:04:54.034 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:04:54.034 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:05:09.012 18:55:25 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:05:09.012 18:55:25 -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:09.012 18:55:25 -- common/autotest_common.sh@10 -- # set +x 00:05:09.012 18:55:25 -- spdk/autotest.sh@78 -- # rm -f 00:05:09.012 18:55:25 -- spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:09.012 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:09.012 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:05:09.012 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:05:09.012 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:05:09.012 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:05:09.012 18:55:26 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:05:09.012 18:55:26 -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:05:09.012 18:55:26 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:05:09.012 18:55:26 -- common/autotest_common.sh@1658 -- # zoned_ctrls=() 00:05:09.012 18:55:26 -- common/autotest_common.sh@1658 -- # local -A zoned_ctrls 00:05:09.012 18:55:26 -- common/autotest_common.sh@1659 -- # local nvme bdf ns 00:05:09.012 18:55:26 -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:05:09.012 18:55:26 -- common/autotest_common.sh@1669 -- # bdf=0000:00:10.0 00:05:09.012 18:55:26 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:05:09.012 18:55:26 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n1 00:05:09.012 18:55:26 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:05:09.012 18:55:26 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:05:09.012 18:55:26 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:05:09.012 18:55:26 -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:05:09.012 18:55:26 -- common/autotest_common.sh@1669 -- # bdf=0000:00:11.0 00:05:09.012 18:55:26 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:05:09.012 18:55:26 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme1n1 00:05:09.012 18:55:26 -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:05:09.012 18:55:26 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:05:09.012 18:55:26 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:05:09.012 18:55:26 -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:05:09.012 18:55:26 -- common/autotest_common.sh@1669 -- # bdf=0000:00:12.0 00:05:09.012 18:55:26 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:05:09.012 18:55:26 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme2n1 00:05:09.012 18:55:26 -- common/autotest_common.sh@1650 -- # local device=nvme2n1 00:05:09.012 18:55:26 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:05:09.012 18:55:26 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:05:09.012 18:55:26 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:05:09.012 18:55:26 -- common/autotest_common.sh@1671 
-- # is_block_zoned nvme2n2 00:05:09.012 18:55:26 -- common/autotest_common.sh@1650 -- # local device=nvme2n2 00:05:09.012 18:55:26 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:05:09.012 18:55:26 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:05:09.012 18:55:26 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:05:09.012 18:55:26 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme2n3 00:05:09.012 18:55:26 -- common/autotest_common.sh@1650 -- # local device=nvme2n3 00:05:09.012 18:55:26 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:05:09.012 18:55:26 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:05:09.012 18:55:26 -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:05:09.012 18:55:26 -- common/autotest_common.sh@1669 -- # bdf=0000:00:13.0 00:05:09.013 18:55:26 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:05:09.013 18:55:26 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme3c3n1 00:05:09.013 18:55:26 -- common/autotest_common.sh@1650 -- # local device=nvme3c3n1 00:05:09.013 18:55:26 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:05:09.013 18:55:26 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:05:09.013 18:55:26 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:05:09.013 18:55:26 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:09.013 18:55:26 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:09.013 18:55:26 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:05:09.013 18:55:26 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:05:09.013 18:55:26 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:05:09.013 No valid GPT data, bailing 00:05:09.013 18:55:26 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:05:09.013 18:55:26 -- scripts/common.sh@394 -- # pt= 00:05:09.013 18:55:26 -- scripts/common.sh@395 -- # return 1 00:05:09.013 18:55:26 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:05:09.013 1+0 records in 00:05:09.013 1+0 records out 00:05:09.013 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.027157 s, 38.6 MB/s 00:05:09.013 18:55:26 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:09.013 18:55:26 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:09.013 18:55:26 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1 00:05:09.013 18:55:26 -- scripts/common.sh@381 -- # local block=/dev/nvme1n1 pt 00:05:09.013 18:55:26 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:05:09.013 No valid GPT data, bailing 00:05:09.013 18:55:26 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:05:09.013 18:55:26 -- scripts/common.sh@394 -- # pt= 00:05:09.013 18:55:26 -- scripts/common.sh@395 -- # return 1 00:05:09.013 18:55:26 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:05:09.013 1+0 records in 00:05:09.013 1+0 records out 00:05:09.013 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00627155 s, 167 MB/s 00:05:09.013 18:55:26 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:09.013 18:55:26 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:09.013 18:55:26 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n1 00:05:09.013 18:55:26 -- scripts/common.sh@381 -- # local block=/dev/nvme2n1 pt 00:05:09.013 18:55:26 -- scripts/common.sh@390 -- # 
/home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n1 00:05:09.013 No valid GPT data, bailing 00:05:09.013 18:55:26 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n1 00:05:09.013 18:55:26 -- scripts/common.sh@394 -- # pt= 00:05:09.013 18:55:26 -- scripts/common.sh@395 -- # return 1 00:05:09.013 18:55:26 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n1 bs=1M count=1 00:05:09.013 1+0 records in 00:05:09.013 1+0 records out 00:05:09.013 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00650742 s, 161 MB/s 00:05:09.013 18:55:26 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:09.013 18:55:26 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:09.013 18:55:26 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n2 00:05:09.013 18:55:26 -- scripts/common.sh@381 -- # local block=/dev/nvme2n2 pt 00:05:09.013 18:55:26 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n2 00:05:09.013 No valid GPT data, bailing 00:05:09.013 18:55:26 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n2 00:05:09.013 18:55:26 -- scripts/common.sh@394 -- # pt= 00:05:09.013 18:55:26 -- scripts/common.sh@395 -- # return 1 00:05:09.013 18:55:26 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n2 bs=1M count=1 00:05:09.013 1+0 records in 00:05:09.013 1+0 records out 00:05:09.013 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00624793 s, 168 MB/s 00:05:09.013 18:55:26 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:09.013 18:55:26 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:09.013 18:55:26 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n3 00:05:09.013 18:55:26 -- scripts/common.sh@381 -- # local block=/dev/nvme2n3 pt 00:05:09.013 18:55:26 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n3 00:05:09.013 No valid GPT data, bailing 00:05:09.013 18:55:26 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n3 00:05:09.013 18:55:26 -- scripts/common.sh@394 -- # pt= 00:05:09.013 18:55:26 -- scripts/common.sh@395 -- # return 1 00:05:09.013 18:55:26 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n3 bs=1M count=1 00:05:09.275 1+0 records in 00:05:09.275 1+0 records out 00:05:09.275 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00653927 s, 160 MB/s 00:05:09.275 18:55:26 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:09.275 18:55:26 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:09.275 18:55:26 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme3n1 00:05:09.275 18:55:26 -- scripts/common.sh@381 -- # local block=/dev/nvme3n1 pt 00:05:09.275 18:55:26 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme3n1 00:05:09.275 No valid GPT data, bailing 00:05:09.275 18:55:26 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme3n1 00:05:09.275 18:55:26 -- scripts/common.sh@394 -- # pt= 00:05:09.275 18:55:26 -- scripts/common.sh@395 -- # return 1 00:05:09.275 18:55:26 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme3n1 bs=1M count=1 00:05:09.275 1+0 records in 00:05:09.275 1+0 records out 00:05:09.275 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00550054 s, 191 MB/s 00:05:09.275 18:55:26 -- spdk/autotest.sh@105 -- # sync 00:05:09.275 18:55:26 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:05:09.275 18:55:26 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:05:09.275 18:55:26 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:05:11.193 
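Every namespace above passes the same gate before autotest claims it: spdk-gpt.py and `blkid -s PTTYPE` find no partition table ("No valid GPT data, bailing"), so the device is treated as free and its first MiB is zero-filled to clear stale metadata. A compact sketch of that gate, assuming the same tools are on PATH (claim_if_unused is an illustrative name, not the script's):

claim_if_unused() {               # usage: claim_if_unused /dev/nvme0n1
    local dev=$1
    # Treat the device as busy if blkid can name a partition-table type.
    if [[ -n $(blkid -s PTTYPE -o value "$dev") ]]; then
        echo "$dev has a partition table, skipping" >&2
        return 1
    fi
    # Free device: wipe the first MiB, as each dd invocation above does.
    dd if=/dev/zero of="$dev" bs=1M count=1
}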
18:55:28 -- spdk/autotest.sh@111 -- # uname -s 00:05:11.193 18:55:28 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:05:11.193 18:55:28 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:05:11.193 18:55:28 -- spdk/autotest.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:05:11.454 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:12.027 Hugepages 00:05:12.027 node hugesize free / total 00:05:12.027 node0 1048576kB 0 / 0 00:05:12.027 node0 2048kB 0 / 0 00:05:12.027 00:05:12.027 Type BDF Vendor Device NUMA Driver Device Block devices 00:05:12.027 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:05:12.027 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:05:12.289 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 00:05:12.289 NVMe 0000:00:12.0 1b36 0010 unknown nvme nvme2 nvme2n1 nvme2n2 nvme2n3 00:05:12.289 NVMe 0000:00:13.0 1b36 0010 unknown nvme nvme3 nvme3n1 00:05:12.289 18:55:29 -- spdk/autotest.sh@117 -- # uname -s 00:05:12.289 18:55:29 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:05:12.289 18:55:29 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:05:12.289 18:55:29 -- common/autotest_common.sh@1516 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:12.863 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:13.435 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:05:13.435 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:05:13.435 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:05:13.435 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:05:13.435 18:55:30 -- common/autotest_common.sh@1517 -- # sleep 1 00:05:14.377 18:55:31 -- common/autotest_common.sh@1518 -- # bdfs=() 00:05:14.377 18:55:31 -- common/autotest_common.sh@1518 -- # local bdfs 00:05:14.377 18:55:31 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:05:14.377 18:55:31 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:05:14.377 18:55:31 -- common/autotest_common.sh@1498 -- # bdfs=() 00:05:14.377 18:55:31 -- common/autotest_common.sh@1498 -- # local bdfs 00:05:14.377 18:55:31 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:14.377 18:55:31 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:05:14.377 18:55:31 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:05:14.639 18:55:31 -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:05:14.639 18:55:31 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:05:14.639 18:55:31 -- common/autotest_common.sh@1522 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:14.901 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:14.901 Waiting for block devices as requested 00:05:15.163 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:05:15.163 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:05:15.164 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:05:15.426 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:05:20.724 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:05:20.724 18:55:37 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:05:20.724 18:55:37 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 
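The loop that begins here resolves each PCI address to its /dev/nvme controller purely through sysfs, as the readlink/grep/basename triple traced below shows: every /sys/class/nvme/nvmeN symlink resolves to a path that embeds the owning function's BDF. A sketch of the same resolution (ctrlr_from_bdf is an illustrative name):

ctrlr_from_bdf() {                # usage: ctrlr_from_bdf 0000:00:10.0
    local bdf=$1 path
    # The resolved symlink looks like .../0000:00:10.0/nvme/nvme1.
    path=$(readlink -f /sys/class/nvme/nvme* | grep "$bdf/nvme/nvme") || return 1
    printf '/dev/%s\n' "$(basename "$path")"
}
ctrlr_from_bdf 0000:00:10.0       # prints /dev/nvme1 on this rig, per the trace below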
00:05:20.724 18:55:37 -- common/autotest_common.sh@1487 -- # grep 0000:00:10.0/nvme/nvme 00:05:20.724 18:55:37 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:05:20.724 18:55:37 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:05:20.724 18:55:37 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:05:20.724 18:55:37 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:05:20.724 18:55:37 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme1 00:05:20.724 18:55:37 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme1 00:05:20.724 18:55:37 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme1 ]] 00:05:20.724 18:55:37 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:05:20.724 18:55:37 -- common/autotest_common.sh@1531 -- # grep oacs 00:05:20.724 18:55:37 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme1 00:05:20.724 18:55:37 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:05:20.724 18:55:37 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:05:20.724 18:55:37 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:05:20.724 18:55:37 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:05:20.724 18:55:37 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme1 00:05:20.724 18:55:37 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:05:20.724 18:55:37 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:05:20.724 18:55:37 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:05:20.724 18:55:37 -- common/autotest_common.sh@1543 -- # continue 00:05:20.724 18:55:37 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:05:20.724 18:55:37 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:05:20.724 18:55:37 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:05:20.724 18:55:37 -- common/autotest_common.sh@1487 -- # grep 0000:00:11.0/nvme/nvme 00:05:20.724 18:55:37 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:05:20.724 18:55:37 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:05:20.724 18:55:37 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:05:20.724 18:55:37 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:05:20.724 18:55:37 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:05:20.724 18:55:37 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:05:20.724 18:55:37 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:05:20.724 18:55:37 -- common/autotest_common.sh@1531 -- # grep oacs 00:05:20.724 18:55:37 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:05:20.724 18:55:37 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:05:20.724 18:55:37 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:05:20.724 18:55:37 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:05:20.724 18:55:37 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:05:20.724 18:55:37 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:05:20.724 18:55:37 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:05:20.724 18:55:37 -- common/autotest_common.sh@1540 -- # 
unvmcap=' 0' 00:05:20.724 18:55:37 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:05:20.724 18:55:37 -- common/autotest_common.sh@1543 -- # continue 00:05:20.724 18:55:37 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:05:20.724 18:55:37 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:12.0 00:05:20.724 18:55:37 -- common/autotest_common.sh@1487 -- # grep 0000:00:12.0/nvme/nvme 00:05:20.724 18:55:37 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:05:20.724 18:55:37 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 00:05:20.724 18:55:37 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 ]] 00:05:20.724 18:55:37 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 00:05:20.724 18:55:37 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme2 00:05:20.724 18:55:37 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme2 00:05:20.724 18:55:37 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme2 ]] 00:05:20.724 18:55:37 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme2 00:05:20.724 18:55:37 -- common/autotest_common.sh@1531 -- # grep oacs 00:05:20.724 18:55:37 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:05:20.724 18:55:37 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:05:20.724 18:55:37 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:05:20.724 18:55:37 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:05:20.724 18:55:37 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme2 00:05:20.724 18:55:37 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:05:20.724 18:55:37 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:05:20.724 18:55:37 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:05:20.724 18:55:37 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:05:20.724 18:55:37 -- common/autotest_common.sh@1543 -- # continue 00:05:20.724 18:55:37 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:05:20.724 18:55:37 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:13.0 00:05:20.724 18:55:37 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:05:20.724 18:55:37 -- common/autotest_common.sh@1487 -- # grep 0000:00:13.0/nvme/nvme 00:05:20.724 18:55:37 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 00:05:20.724 18:55:37 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 ]] 00:05:20.724 18:55:37 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 00:05:20.724 18:55:37 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme3 00:05:20.724 18:55:37 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme3 00:05:20.724 18:55:37 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme3 ]] 00:05:20.724 18:55:37 -- common/autotest_common.sh@1531 -- # grep oacs 00:05:20.724 18:55:37 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme3 00:05:20.724 18:55:37 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:05:20.724 18:55:37 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:05:20.724 18:55:37 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:05:20.724 18:55:37 -- 
common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:05:20.724 18:55:37 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme3 00:05:20.724 18:55:37 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:05:20.724 18:55:37 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:05:20.724 18:55:37 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:05:20.724 18:55:37 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:05:20.724 18:55:37 -- common/autotest_common.sh@1543 -- # continue 00:05:20.724 18:55:37 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:05:20.724 18:55:37 -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:20.724 18:55:37 -- common/autotest_common.sh@10 -- # set +x 00:05:20.724 18:55:37 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:05:20.724 18:55:37 -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:20.724 18:55:37 -- common/autotest_common.sh@10 -- # set +x 00:05:20.724 18:55:37 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:20.981 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:21.547 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:05:21.547 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:05:21.547 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:05:21.547 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:05:21.547 18:55:38 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:05:21.547 18:55:38 -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:21.547 18:55:38 -- common/autotest_common.sh@10 -- # set +x 00:05:21.547 18:55:38 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:05:21.547 18:55:38 -- common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:05:21.547 18:55:38 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:05:21.547 18:55:38 -- common/autotest_common.sh@1563 -- # bdfs=() 00:05:21.547 18:55:38 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:05:21.547 18:55:38 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:05:21.547 18:55:38 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:05:21.547 18:55:38 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:05:21.547 18:55:38 -- common/autotest_common.sh@1498 -- # bdfs=() 00:05:21.547 18:55:38 -- common/autotest_common.sh@1498 -- # local bdfs 00:05:21.547 18:55:38 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:21.547 18:55:38 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:05:21.547 18:55:38 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:05:21.547 18:55:39 -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:05:21.547 18:55:39 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:05:21.547 18:55:39 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:05:21.547 18:55:39 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:05:21.547 18:55:39 -- common/autotest_common.sh@1566 -- # device=0x0010 00:05:21.547 18:55:39 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:21.547 18:55:39 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:05:21.547 18:55:39 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:05:21.547 18:55:39 -- common/autotest_common.sh@1566 -- # device=0x0010 00:05:21.547 
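Each controller above cleared the same two checks before pre-cleanup moved on: OACS bit 3 (the 0x8 inside the 0x12a value) confirms namespace management is supported, and an unvmcap of 0 means no unallocated capacity needs reverting. A sketch of that id-ctrl parsing, assuming nvme-cli's plain-text output (ctrlr_supports_ns_mgmt is an illustrative name):

ctrlr_supports_ns_mgmt() {        # usage: ctrlr_supports_ns_mgmt /dev/nvme1
    local oacs
    # id-ctrl prints "oacs : 0x12a"; keep everything after the colon.
    oacs=$(nvme id-ctrl "$1" | grep oacs | cut -d: -f2)
    # Bit 3 of OACS advertises namespace management.
    (( (oacs & 0x8) != 0 ))
}
ctrlr_supports_ns_mgmt /dev/nvme1 && echo "nvme1 can manage namespaces"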
18:55:39 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:21.547 18:55:39 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:05:21.547 18:55:39 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:12.0/device 00:05:21.547 18:55:39 -- common/autotest_common.sh@1566 -- # device=0x0010 00:05:21.547 18:55:39 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:21.547 18:55:39 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:05:21.547 18:55:39 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:13.0/device 00:05:21.547 18:55:39 -- common/autotest_common.sh@1566 -- # device=0x0010 00:05:21.547 18:55:39 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:21.547 18:55:39 -- common/autotest_common.sh@1572 -- # (( 0 > 0 )) 00:05:21.547 18:55:39 -- common/autotest_common.sh@1572 -- # return 0 00:05:21.547 18:55:39 -- common/autotest_common.sh@1579 -- # [[ -z '' ]] 00:05:21.547 18:55:39 -- common/autotest_common.sh@1580 -- # return 0 00:05:21.547 18:55:39 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:05:21.547 18:55:39 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:05:21.547 18:55:39 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:05:21.547 18:55:39 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:05:21.547 18:55:39 -- spdk/autotest.sh@149 -- # timing_enter lib 00:05:21.547 18:55:39 -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:21.547 18:55:39 -- common/autotest_common.sh@10 -- # set +x 00:05:21.547 18:55:39 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:05:21.547 18:55:39 -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:05:21.547 18:55:39 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:21.547 18:55:39 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:21.547 18:55:39 -- common/autotest_common.sh@10 -- # set +x 00:05:21.547 ************************************ 00:05:21.547 START TEST env 00:05:21.547 ************************************ 00:05:21.547 18:55:39 env -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:05:21.806 * Looking for test storage... 
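The scan that just returned 0 is opal_revert_cleanup deciding there is nothing to revert: it reads each function's device id from sysfs and keeps only 0x0a54 controllers (the glob \0\x\0\a\5\4 is bash pattern-quoting for that literal), and this VM's 0x0010 devices never match. A sketch of that filter (bdfs_with_device_id is an illustrative name; the BDF list is the one enumerated in this run):

bdfs_with_device_id() {           # usage: bdfs_with_device_id 0x0a54
    local want=$1 bdf
    for bdf in 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0; do
        # sysfs exposes the PCI device id of every function.
        [[ $(cat "/sys/bus/pci/devices/$bdf/device") == "$want" ]] && echo "$bdf"
    done
    return 0                      # an empty result is not an error, as above
}
bdfs_with_device_id 0x0a54        # prints nothing here: all four ids are 0x0010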
00:05:21.806 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:05:21.806 18:55:39 env -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:21.806 18:55:39 env -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:21.806 18:55:39 env -- common/autotest_common.sh@1711 -- # lcov --version 00:05:21.806 18:55:39 env -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:21.806 18:55:39 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:21.806 18:55:39 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:21.806 18:55:39 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:21.806 18:55:39 env -- scripts/common.sh@336 -- # IFS=.-: 00:05:21.806 18:55:39 env -- scripts/common.sh@336 -- # read -ra ver1 00:05:21.806 18:55:39 env -- scripts/common.sh@337 -- # IFS=.-: 00:05:21.806 18:55:39 env -- scripts/common.sh@337 -- # read -ra ver2 00:05:21.806 18:55:39 env -- scripts/common.sh@338 -- # local 'op=<' 00:05:21.806 18:55:39 env -- scripts/common.sh@340 -- # ver1_l=2 00:05:21.806 18:55:39 env -- scripts/common.sh@341 -- # ver2_l=1 00:05:21.806 18:55:39 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:21.806 18:55:39 env -- scripts/common.sh@344 -- # case "$op" in 00:05:21.806 18:55:39 env -- scripts/common.sh@345 -- # : 1 00:05:21.806 18:55:39 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:21.806 18:55:39 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:21.806 18:55:39 env -- scripts/common.sh@365 -- # decimal 1 00:05:21.806 18:55:39 env -- scripts/common.sh@353 -- # local d=1 00:05:21.806 18:55:39 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:21.806 18:55:39 env -- scripts/common.sh@355 -- # echo 1 00:05:21.806 18:55:39 env -- scripts/common.sh@365 -- # ver1[v]=1 00:05:21.806 18:55:39 env -- scripts/common.sh@366 -- # decimal 2 00:05:21.806 18:55:39 env -- scripts/common.sh@353 -- # local d=2 00:05:21.806 18:55:39 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:21.806 18:55:39 env -- scripts/common.sh@355 -- # echo 2 00:05:21.806 18:55:39 env -- scripts/common.sh@366 -- # ver2[v]=2 00:05:21.806 18:55:39 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:21.806 18:55:39 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:21.806 18:55:39 env -- scripts/common.sh@368 -- # return 0 00:05:21.806 18:55:39 env -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:21.806 18:55:39 env -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:21.806 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:21.806 --rc genhtml_branch_coverage=1 00:05:21.806 --rc genhtml_function_coverage=1 00:05:21.806 --rc genhtml_legend=1 00:05:21.806 --rc geninfo_all_blocks=1 00:05:21.806 --rc geninfo_unexecuted_blocks=1 00:05:21.806 00:05:21.806 ' 00:05:21.806 18:55:39 env -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:21.806 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:21.806 --rc genhtml_branch_coverage=1 00:05:21.806 --rc genhtml_function_coverage=1 00:05:21.806 --rc genhtml_legend=1 00:05:21.806 --rc geninfo_all_blocks=1 00:05:21.806 --rc geninfo_unexecuted_blocks=1 00:05:21.806 00:05:21.806 ' 00:05:21.806 18:55:39 env -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:21.806 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:21.806 --rc genhtml_branch_coverage=1 00:05:21.806 --rc genhtml_function_coverage=1 00:05:21.806 --rc 
genhtml_legend=1 00:05:21.806 --rc geninfo_all_blocks=1 00:05:21.806 --rc geninfo_unexecuted_blocks=1 00:05:21.806 00:05:21.806 ' 00:05:21.806 18:55:39 env -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:21.806 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:21.806 --rc genhtml_branch_coverage=1 00:05:21.806 --rc genhtml_function_coverage=1 00:05:21.806 --rc genhtml_legend=1 00:05:21.806 --rc geninfo_all_blocks=1 00:05:21.806 --rc geninfo_unexecuted_blocks=1 00:05:21.806 00:05:21.806 ' 00:05:21.806 18:55:39 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:05:21.806 18:55:39 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:21.806 18:55:39 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:21.806 18:55:39 env -- common/autotest_common.sh@10 -- # set +x 00:05:21.806 ************************************ 00:05:21.806 START TEST env_memory 00:05:21.806 ************************************ 00:05:21.806 18:55:39 env.env_memory -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:05:21.806 00:05:21.806 00:05:21.806 CUnit - A unit testing framework for C - Version 2.1-3 00:05:21.806 http://cunit.sourceforge.net/ 00:05:21.806 00:05:21.806 00:05:21.806 Suite: memory 00:05:21.806 Test: alloc and free memory map ...[2024-12-05 18:55:39.271768] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:05:21.806 passed 00:05:21.806 Test: mem map translation ...[2024-12-05 18:55:39.310601] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:05:21.806 [2024-12-05 18:55:39.310710] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:05:21.806 [2024-12-05 18:55:39.310817] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:05:21.806 [2024-12-05 18:55:39.310852] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:05:22.065 passed 00:05:22.065 Test: mem map registration ...[2024-12-05 18:55:39.378752] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:05:22.065 [2024-12-05 18:55:39.378787] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:05:22.065 passed 00:05:22.065 Test: mem map adjacent registrations ...passed 00:05:22.065 00:05:22.065 Run Summary: Type Total Ran Passed Failed Inactive 00:05:22.065 suites 1 1 n/a 0 0 00:05:22.065 tests 4 4 4 0 0 00:05:22.065 asserts 152 152 152 0 n/a 00:05:22.065 00:05:22.065 Elapsed time = 0.236 seconds 00:05:22.065 00:05:22.065 real 0m0.272s 00:05:22.065 user 0m0.241s 00:05:22.065 sys 0m0.022s 00:05:22.065 18:55:39 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:22.065 18:55:39 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:05:22.065 ************************************ 00:05:22.065 END TEST env_memory 00:05:22.065 ************************************ 00:05:22.065 18:55:39 env -- env/env.sh@11 -- # run_test env_vtophys 
/home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:05:22.065 18:55:39 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:22.065 18:55:39 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:22.065 18:55:39 env -- common/autotest_common.sh@10 -- # set +x 00:05:22.065 ************************************ 00:05:22.065 START TEST env_vtophys 00:05:22.065 ************************************ 00:05:22.065 18:55:39 env.env_vtophys -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:05:22.065 EAL: lib.eal log level changed from notice to debug 00:05:22.065 EAL: Detected lcore 0 as core 0 on socket 0 00:05:22.065 EAL: Detected lcore 1 as core 0 on socket 0 00:05:22.065 EAL: Detected lcore 2 as core 0 on socket 0 00:05:22.065 EAL: Detected lcore 3 as core 0 on socket 0 00:05:22.065 EAL: Detected lcore 4 as core 0 on socket 0 00:05:22.065 EAL: Detected lcore 5 as core 0 on socket 0 00:05:22.065 EAL: Detected lcore 6 as core 0 on socket 0 00:05:22.065 EAL: Detected lcore 7 as core 0 on socket 0 00:05:22.065 EAL: Detected lcore 8 as core 0 on socket 0 00:05:22.065 EAL: Detected lcore 9 as core 0 on socket 0 00:05:22.065 EAL: Maximum logical cores by configuration: 128 00:05:22.065 EAL: Detected CPU lcores: 10 00:05:22.065 EAL: Detected NUMA nodes: 1 00:05:22.065 EAL: Checking presence of .so 'librte_eal.so.23.0' 00:05:22.065 EAL: Detected shared linkage of DPDK 00:05:22.065 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so.23.0 00:05:22.065 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so.23.0 00:05:22.065 EAL: Registered [vdev] bus. 00:05:22.065 EAL: bus.vdev log level changed from disabled to notice 00:05:22.065 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so.23.0 00:05:22.065 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so.23.0 00:05:22.065 EAL: pmd.net.i40e.init log level changed from disabled to notice 00:05:22.065 EAL: pmd.net.i40e.driver log level changed from disabled to notice 00:05:22.065 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so 00:05:22.065 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so 00:05:22.065 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so 00:05:22.065 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so 00:05:22.065 EAL: No shared files mode enabled, IPC will be disabled 00:05:22.065 EAL: No shared files mode enabled, IPC is disabled 00:05:22.065 EAL: Selected IOVA mode 'PA' 00:05:22.065 EAL: Probing VFIO support... 00:05:22.065 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:05:22.065 EAL: VFIO modules not loaded, skipping VFIO support... 00:05:22.065 EAL: Ask a virtual area of 0x2e000 bytes 00:05:22.065 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:05:22.065 EAL: Setting up physically contiguous memory... 
00:05:22.065 EAL: Setting maximum number of open files to 524288 00:05:22.065 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:05:22.065 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:05:22.065 EAL: Ask a virtual area of 0x61000 bytes 00:05:22.065 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:05:22.065 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:22.065 EAL: Ask a virtual area of 0x400000000 bytes 00:05:22.065 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:05:22.065 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:05:22.065 EAL: Ask a virtual area of 0x61000 bytes 00:05:22.065 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:05:22.065 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:22.065 EAL: Ask a virtual area of 0x400000000 bytes 00:05:22.065 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:05:22.065 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:05:22.065 EAL: Ask a virtual area of 0x61000 bytes 00:05:22.065 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:05:22.065 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:22.065 EAL: Ask a virtual area of 0x400000000 bytes 00:05:22.065 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:05:22.065 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:05:22.065 EAL: Ask a virtual area of 0x61000 bytes 00:05:22.065 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:05:22.065 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:22.065 EAL: Ask a virtual area of 0x400000000 bytes 00:05:22.065 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:05:22.065 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:05:22.065 EAL: Hugepages will be freed exactly as allocated. 00:05:22.065 EAL: No shared files mode enabled, IPC is disabled 00:05:22.065 EAL: No shared files mode enabled, IPC is disabled 00:05:22.323 EAL: TSC frequency is ~2600000 KHz 00:05:22.323 EAL: Main lcore 0 is ready (tid=7f0452a2da40;cpuset=[0]) 00:05:22.323 EAL: Trying to obtain current memory policy. 00:05:22.323 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:22.323 EAL: Restoring previous memory policy: 0 00:05:22.323 EAL: request: mp_malloc_sync 00:05:22.324 EAL: No shared files mode enabled, IPC is disabled 00:05:22.324 EAL: Heap on socket 0 was expanded by 2MB 00:05:22.324 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:05:22.324 EAL: No shared files mode enabled, IPC is disabled 00:05:22.324 EAL: No PCI address specified using 'addr=' in: bus=pci 00:05:22.324 EAL: Mem event callback 'spdk:(nil)' registered 00:05:22.324 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:05:22.324 00:05:22.324 00:05:22.324 CUnit - A unit testing framework for C - Version 2.1-3 00:05:22.324 http://cunit.sourceforge.net/ 00:05:22.324 00:05:22.324 00:05:22.324 Suite: components_suite 00:05:22.582 Test: vtophys_malloc_test ...passed 00:05:22.582 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 
00:05:22.582 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:22.582 EAL: Restoring previous memory policy: 4 00:05:22.582 EAL: Calling mem event callback 'spdk:(nil)' 00:05:22.582 EAL: request: mp_malloc_sync 00:05:22.582 EAL: No shared files mode enabled, IPC is disabled 00:05:22.582 EAL: Heap on socket 0 was expanded by 4MB 00:05:22.582 EAL: Calling mem event callback 'spdk:(nil)' 00:05:22.582 EAL: request: mp_malloc_sync 00:05:22.582 EAL: No shared files mode enabled, IPC is disabled 00:05:22.582 EAL: Heap on socket 0 was shrunk by 4MB 00:05:22.582 EAL: Trying to obtain current memory policy. 00:05:22.582 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:22.582 EAL: Restoring previous memory policy: 4 00:05:22.582 EAL: Calling mem event callback 'spdk:(nil)' 00:05:22.582 EAL: request: mp_malloc_sync 00:05:22.582 EAL: No shared files mode enabled, IPC is disabled 00:05:22.582 EAL: Heap on socket 0 was expanded by 6MB 00:05:22.582 EAL: Calling mem event callback 'spdk:(nil)' 00:05:22.582 EAL: request: mp_malloc_sync 00:05:22.582 EAL: No shared files mode enabled, IPC is disabled 00:05:22.582 EAL: Heap on socket 0 was shrunk by 6MB 00:05:22.582 EAL: Trying to obtain current memory policy. 00:05:22.582 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:22.582 EAL: Restoring previous memory policy: 4 00:05:22.582 EAL: Calling mem event callback 'spdk:(nil)' 00:05:22.582 EAL: request: mp_malloc_sync 00:05:22.582 EAL: No shared files mode enabled, IPC is disabled 00:05:22.582 EAL: Heap on socket 0 was expanded by 10MB 00:05:22.582 EAL: Calling mem event callback 'spdk:(nil)' 00:05:22.582 EAL: request: mp_malloc_sync 00:05:22.582 EAL: No shared files mode enabled, IPC is disabled 00:05:22.582 EAL: Heap on socket 0 was shrunk by 10MB 00:05:22.582 EAL: Trying to obtain current memory policy. 00:05:22.582 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:22.582 EAL: Restoring previous memory policy: 4 00:05:22.582 EAL: Calling mem event callback 'spdk:(nil)' 00:05:22.582 EAL: request: mp_malloc_sync 00:05:22.582 EAL: No shared files mode enabled, IPC is disabled 00:05:22.582 EAL: Heap on socket 0 was expanded by 18MB 00:05:22.582 EAL: Calling mem event callback 'spdk:(nil)' 00:05:22.582 EAL: request: mp_malloc_sync 00:05:22.582 EAL: No shared files mode enabled, IPC is disabled 00:05:22.582 EAL: Heap on socket 0 was shrunk by 18MB 00:05:22.582 EAL: Trying to obtain current memory policy. 00:05:22.582 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:22.582 EAL: Restoring previous memory policy: 4 00:05:22.582 EAL: Calling mem event callback 'spdk:(nil)' 00:05:22.582 EAL: request: mp_malloc_sync 00:05:22.582 EAL: No shared files mode enabled, IPC is disabled 00:05:22.582 EAL: Heap on socket 0 was expanded by 34MB 00:05:22.582 EAL: Calling mem event callback 'spdk:(nil)' 00:05:22.582 EAL: request: mp_malloc_sync 00:05:22.582 EAL: No shared files mode enabled, IPC is disabled 00:05:22.582 EAL: Heap on socket 0 was shrunk by 34MB 00:05:22.582 EAL: Trying to obtain current memory policy. 
00:05:22.582 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:22.582 EAL: Restoring previous memory policy: 4 00:05:22.582 EAL: Calling mem event callback 'spdk:(nil)' 00:05:22.582 EAL: request: mp_malloc_sync 00:05:22.582 EAL: No shared files mode enabled, IPC is disabled 00:05:22.582 EAL: Heap on socket 0 was expanded by 66MB 00:05:22.582 EAL: Calling mem event callback 'spdk:(nil)' 00:05:22.582 EAL: request: mp_malloc_sync 00:05:22.582 EAL: No shared files mode enabled, IPC is disabled 00:05:22.582 EAL: Heap on socket 0 was shrunk by 66MB 00:05:22.582 EAL: Trying to obtain current memory policy. 00:05:22.582 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:22.582 EAL: Restoring previous memory policy: 4 00:05:22.582 EAL: Calling mem event callback 'spdk:(nil)' 00:05:22.582 EAL: request: mp_malloc_sync 00:05:22.582 EAL: No shared files mode enabled, IPC is disabled 00:05:22.582 EAL: Heap on socket 0 was expanded by 130MB 00:05:22.582 EAL: Calling mem event callback 'spdk:(nil)' 00:05:22.582 EAL: request: mp_malloc_sync 00:05:22.582 EAL: No shared files mode enabled, IPC is disabled 00:05:22.582 EAL: Heap on socket 0 was shrunk by 130MB 00:05:22.582 EAL: Trying to obtain current memory policy. 00:05:22.582 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:22.582 EAL: Restoring previous memory policy: 4 00:05:22.582 EAL: Calling mem event callback 'spdk:(nil)' 00:05:22.582 EAL: request: mp_malloc_sync 00:05:22.582 EAL: No shared files mode enabled, IPC is disabled 00:05:22.582 EAL: Heap on socket 0 was expanded by 258MB 00:05:22.582 EAL: Calling mem event callback 'spdk:(nil)' 00:05:22.582 EAL: request: mp_malloc_sync 00:05:22.582 EAL: No shared files mode enabled, IPC is disabled 00:05:22.582 EAL: Heap on socket 0 was shrunk by 258MB 00:05:22.582 EAL: Trying to obtain current memory policy. 00:05:22.582 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:22.840 EAL: Restoring previous memory policy: 4 00:05:22.840 EAL: Calling mem event callback 'spdk:(nil)' 00:05:22.840 EAL: request: mp_malloc_sync 00:05:22.840 EAL: No shared files mode enabled, IPC is disabled 00:05:22.840 EAL: Heap on socket 0 was expanded by 514MB 00:05:22.840 EAL: Calling mem event callback 'spdk:(nil)' 00:05:22.840 EAL: request: mp_malloc_sync 00:05:22.840 EAL: No shared files mode enabled, IPC is disabled 00:05:22.840 EAL: Heap on socket 0 was shrunk by 514MB 00:05:22.840 EAL: Trying to obtain current memory policy. 
00:05:22.840 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:23.098 EAL: Restoring previous memory policy: 4 00:05:23.098 EAL: Calling mem event callback 'spdk:(nil)' 00:05:23.098 EAL: request: mp_malloc_sync 00:05:23.098 EAL: No shared files mode enabled, IPC is disabled 00:05:23.098 EAL: Heap on socket 0 was expanded by 1026MB 00:05:23.098 EAL: Calling mem event callback 'spdk:(nil)' 00:05:23.357 EAL: request: mp_malloc_sync 00:05:23.357 passed 00:05:23.357 00:05:23.357 Run Summary: Type Total Ran Passed Failed Inactive 00:05:23.357 suites 1 1 n/a 0 0 00:05:23.357 tests 2 2 2 0 0 00:05:23.357 asserts 5442 5442 5442 0 n/a 00:05:23.357 00:05:23.357 Elapsed time = 0.934 seconds 00:05:23.357 EAL: No shared files mode enabled, IPC is disabled 00:05:23.357 EAL: Heap on socket 0 was shrunk by 1026MB 00:05:23.357 EAL: Calling mem event callback 'spdk:(nil)' 00:05:23.357 EAL: request: mp_malloc_sync 00:05:23.357 EAL: No shared files mode enabled, IPC is disabled 00:05:23.357 EAL: Heap on socket 0 was shrunk by 2MB 00:05:23.357 EAL: No shared files mode enabled, IPC is disabled 00:05:23.357 EAL: No shared files mode enabled, IPC is disabled 00:05:23.357 EAL: No shared files mode enabled, IPC is disabled 00:05:23.357 00:05:23.357 real 0m1.150s 00:05:23.357 user 0m0.459s 00:05:23.357 sys 0m0.561s 00:05:23.357 18:55:40 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:23.357 ************************************ 00:05:23.357 END TEST env_vtophys 00:05:23.357 ************************************ 00:05:23.357 18:55:40 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:05:23.357 18:55:40 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:05:23.357 18:55:40 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:23.357 18:55:40 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:23.357 18:55:40 env -- common/autotest_common.sh@10 -- # set +x 00:05:23.357 ************************************ 00:05:23.357 START TEST env_pci 00:05:23.357 ************************************ 00:05:23.357 18:55:40 env.env_pci -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:05:23.357 00:05:23.357 00:05:23.357 CUnit - A unit testing framework for C - Version 2.1-3 00:05:23.357 http://cunit.sourceforge.net/ 00:05:23.357 00:05:23.357 00:05:23.357 Suite: pci 00:05:23.357 Test: pci_hook ...[2024-12-05 18:55:40.737235] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 68938 has claimed it 00:05:23.357 passed 00:05:23.357 00:05:23.357 Run Summary: Type Total Ran Passed Failed Inactive 00:05:23.357 suites 1 1 n/a 0 0 00:05:23.357 tests 1 1 1 0 0 00:05:23.357 asserts 25 25 25 0 n/a 00:05:23.357 00:05:23.357 Elapsed time = 0.006 seconds 00:05:23.357 EAL: Cannot find device (10000:00:01.0) 00:05:23.357 EAL: Failed to attach device on primary process 00:05:23.357 00:05:23.357 real 0m0.062s 00:05:23.357 user 0m0.023s 00:05:23.357 sys 0m0.038s 00:05:23.357 ************************************ 00:05:23.357 END TEST env_pci 00:05:23.357 ************************************ 00:05:23.357 18:55:40 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:23.357 18:55:40 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:05:23.357 18:55:40 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:05:23.357 18:55:40 env -- env/env.sh@15 -- # uname 00:05:23.357 18:55:40 env 
-- env/env.sh@15 -- # '[' Linux = Linux ']' 00:05:23.357 18:55:40 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:05:23.357 18:55:40 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:23.358 18:55:40 env -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:05:23.358 18:55:40 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:23.358 18:55:40 env -- common/autotest_common.sh@10 -- # set +x 00:05:23.358 ************************************ 00:05:23.358 START TEST env_dpdk_post_init 00:05:23.358 ************************************ 00:05:23.358 18:55:40 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:23.358 EAL: Detected CPU lcores: 10 00:05:23.358 EAL: Detected NUMA nodes: 1 00:05:23.358 EAL: Detected shared linkage of DPDK 00:05:23.358 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:23.358 EAL: Selected IOVA mode 'PA' 00:05:23.616 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:23.616 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:05:23.616 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:05:23.616 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:12.0 (socket -1) 00:05:23.616 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:13.0 (socket -1) 00:05:23.616 Starting DPDK initialization... 00:05:23.616 Starting SPDK post initialization... 00:05:23.616 SPDK NVMe probe 00:05:23.616 Attaching to 0000:00:10.0 00:05:23.616 Attaching to 0000:00:11.0 00:05:23.616 Attaching to 0000:00:12.0 00:05:23.616 Attaching to 0000:00:13.0 00:05:23.616 Attached to 0000:00:10.0 00:05:23.616 Attached to 0000:00:11.0 00:05:23.616 Attached to 0000:00:13.0 00:05:23.616 Attached to 0000:00:12.0 00:05:23.616 Cleaning up... 
00:05:23.616 ************************************ 00:05:23.616 END TEST env_dpdk_post_init 00:05:23.616 ************************************ 00:05:23.616 00:05:23.616 real 0m0.206s 00:05:23.616 user 0m0.050s 00:05:23.616 sys 0m0.057s 00:05:23.616 18:55:41 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:23.616 18:55:41 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:05:23.616 18:55:41 env -- env/env.sh@26 -- # uname 00:05:23.616 18:55:41 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:05:23.616 18:55:41 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:05:23.616 18:55:41 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:23.616 18:55:41 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:23.616 18:55:41 env -- common/autotest_common.sh@10 -- # set +x 00:05:23.616 ************************************ 00:05:23.616 START TEST env_mem_callbacks 00:05:23.616 ************************************ 00:05:23.616 18:55:41 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:05:23.616 EAL: Detected CPU lcores: 10 00:05:23.616 EAL: Detected NUMA nodes: 1 00:05:23.616 EAL: Detected shared linkage of DPDK 00:05:23.616 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:23.616 EAL: Selected IOVA mode 'PA' 00:05:23.875 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:23.875 00:05:23.875 00:05:23.875 CUnit - A unit testing framework for C - Version 2.1-3 00:05:23.875 http://cunit.sourceforge.net/ 00:05:23.875 00:05:23.875 00:05:23.875 Suite: memory 00:05:23.875 Test: test ... 00:05:23.875 register 0x200000200000 2097152 00:05:23.875 malloc 3145728 00:05:23.875 register 0x200000400000 4194304 00:05:23.875 buf 0x200000500000 len 3145728 PASSED 00:05:23.875 malloc 64 00:05:23.875 buf 0x2000004fff40 len 64 PASSED 00:05:23.875 malloc 4194304 00:05:23.875 register 0x200000800000 6291456 00:05:23.875 buf 0x200000a00000 len 4194304 PASSED 00:05:23.875 free 0x200000500000 3145728 00:05:23.875 free 0x2000004fff40 64 00:05:23.875 unregister 0x200000400000 4194304 PASSED 00:05:23.875 free 0x200000a00000 4194304 00:05:23.875 unregister 0x200000800000 6291456 PASSED 00:05:23.875 malloc 8388608 00:05:23.875 register 0x200000400000 10485760 00:05:23.875 buf 0x200000600000 len 8388608 PASSED 00:05:23.875 free 0x200000600000 8388608 00:05:23.875 unregister 0x200000400000 10485760 PASSED 00:05:23.875 passed 00:05:23.875 00:05:23.875 Run Summary: Type Total Ran Passed Failed Inactive 00:05:23.875 suites 1 1 n/a 0 0 00:05:23.875 tests 1 1 1 0 0 00:05:23.875 asserts 15 15 15 0 n/a 00:05:23.875 00:05:23.875 Elapsed time = 0.009 seconds 00:05:23.875 00:05:23.875 real 0m0.150s 00:05:23.875 user 0m0.013s 00:05:23.875 sys 0m0.036s 00:05:23.875 18:55:41 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:23.875 18:55:41 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:05:23.875 ************************************ 00:05:23.875 END TEST env_mem_callbacks 00:05:23.875 ************************************ 00:05:23.875 00:05:23.875 real 0m2.184s 00:05:23.875 user 0m0.941s 00:05:23.875 sys 0m0.907s 00:05:23.875 18:55:41 env -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:23.875 18:55:41 env -- common/autotest_common.sh@10 -- # set +x 00:05:23.875 ************************************ 00:05:23.875 END TEST env 00:05:23.875 
************************************ 00:05:23.875 18:55:41 -- spdk/autotest.sh@156 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:05:23.875 18:55:41 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:23.875 18:55:41 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:23.875 18:55:41 -- common/autotest_common.sh@10 -- # set +x 00:05:23.875 ************************************ 00:05:23.875 START TEST rpc 00:05:23.875 ************************************ 00:05:23.875 18:55:41 rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:05:23.875 * Looking for test storage... 00:05:23.875 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:05:23.875 18:55:41 rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:23.875 18:55:41 rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:23.875 18:55:41 rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:05:23.875 18:55:41 rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:23.875 18:55:41 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:23.875 18:55:41 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:23.875 18:55:41 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:23.875 18:55:41 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:23.875 18:55:41 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:23.875 18:55:41 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:23.875 18:55:41 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:23.875 18:55:41 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:23.875 18:55:41 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:23.875 18:55:41 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:23.875 18:55:41 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:23.875 18:55:41 rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:23.875 18:55:41 rpc -- scripts/common.sh@345 -- # : 1 00:05:23.875 18:55:41 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:23.875 18:55:41 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:23.875 18:55:41 rpc -- scripts/common.sh@365 -- # decimal 1 00:05:23.875 18:55:41 rpc -- scripts/common.sh@353 -- # local d=1 00:05:23.875 18:55:41 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:23.875 18:55:41 rpc -- scripts/common.sh@355 -- # echo 1 00:05:23.875 18:55:41 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:23.875 18:55:41 rpc -- scripts/common.sh@366 -- # decimal 2 00:05:23.875 18:55:41 rpc -- scripts/common.sh@353 -- # local d=2 00:05:23.875 18:55:41 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:23.875 18:55:41 rpc -- scripts/common.sh@355 -- # echo 2 00:05:23.875 18:55:41 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:23.875 18:55:41 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:23.875 18:55:41 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:23.875 18:55:41 rpc -- scripts/common.sh@368 -- # return 0 00:05:23.875 18:55:41 rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:23.875 18:55:41 rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:23.875 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:23.875 --rc genhtml_branch_coverage=1 00:05:23.875 --rc genhtml_function_coverage=1 00:05:23.875 --rc genhtml_legend=1 00:05:23.875 --rc geninfo_all_blocks=1 00:05:23.875 --rc geninfo_unexecuted_blocks=1 00:05:23.875 00:05:23.875 ' 00:05:23.875 18:55:41 rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:23.875 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:23.875 --rc genhtml_branch_coverage=1 00:05:23.875 --rc genhtml_function_coverage=1 00:05:23.875 --rc genhtml_legend=1 00:05:23.875 --rc geninfo_all_blocks=1 00:05:23.875 --rc geninfo_unexecuted_blocks=1 00:05:23.875 00:05:23.875 ' 00:05:23.875 18:55:41 rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:23.875 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:23.875 --rc genhtml_branch_coverage=1 00:05:23.875 --rc genhtml_function_coverage=1 00:05:23.875 --rc genhtml_legend=1 00:05:23.875 --rc geninfo_all_blocks=1 00:05:23.875 --rc geninfo_unexecuted_blocks=1 00:05:23.875 00:05:23.875 ' 00:05:23.876 18:55:41 rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:23.876 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:23.876 --rc genhtml_branch_coverage=1 00:05:23.876 --rc genhtml_function_coverage=1 00:05:23.876 --rc genhtml_legend=1 00:05:23.876 --rc geninfo_all_blocks=1 00:05:23.876 --rc geninfo_unexecuted_blocks=1 00:05:23.876 00:05:23.876 ' 00:05:23.876 18:55:41 rpc -- rpc/rpc.sh@65 -- # spdk_pid=69059 00:05:23.876 18:55:41 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:23.876 18:55:41 rpc -- rpc/rpc.sh@67 -- # waitforlisten 69059 00:05:23.876 18:55:41 rpc -- common/autotest_common.sh@835 -- # '[' -z 69059 ']' 00:05:23.876 18:55:41 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:05:23.876 18:55:41 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:23.876 18:55:41 rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:23.876 18:55:41 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:23.876 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:23.876 18:55:41 rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:23.876 18:55:41 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:24.134 [2024-12-05 18:55:41.490800] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 22.11.4 initialization... 00:05:24.134 [2024-12-05 18:55:41.490910] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69059 ] 00:05:24.134 [2024-12-05 18:55:41.637358] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:24.134 [2024-12-05 18:55:41.654764] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:05:24.134 [2024-12-05 18:55:41.654810] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 69059' to capture a snapshot of events at runtime. 00:05:24.134 [2024-12-05 18:55:41.654822] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:24.134 [2024-12-05 18:55:41.654830] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:24.134 [2024-12-05 18:55:41.654839] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid69059 for offline analysis/debug. 00:05:24.134 [2024-12-05 18:55:41.655130] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:25.069 18:55:42 rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:25.069 18:55:42 rpc -- common/autotest_common.sh@868 -- # return 0 00:05:25.069 18:55:42 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:05:25.069 18:55:42 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:05:25.069 18:55:42 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:05:25.069 18:55:42 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:05:25.069 18:55:42 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:25.069 18:55:42 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:25.069 18:55:42 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:25.069 ************************************ 00:05:25.069 START TEST rpc_integrity 00:05:25.069 ************************************ 00:05:25.069 18:55:42 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:05:25.069 18:55:42 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:25.069 18:55:42 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:25.069 18:55:42 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:25.069 18:55:42 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:25.069 18:55:42 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:25.069 18:55:42 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:25.069 18:55:42 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:25.069 18:55:42 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:25.069 18:55:42 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:25.069 18:55:42 
rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:25.069 18:55:42 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:25.069 18:55:42 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:05:25.069 18:55:42 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:25.069 18:55:42 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:25.069 18:55:42 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:25.069 18:55:42 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:25.069 18:55:42 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:25.069 { 00:05:25.069 "name": "Malloc0", 00:05:25.069 "aliases": [ 00:05:25.069 "d62345de-84f3-437f-9fbe-815cc64657fa" 00:05:25.069 ], 00:05:25.069 "product_name": "Malloc disk", 00:05:25.069 "block_size": 512, 00:05:25.069 "num_blocks": 16384, 00:05:25.069 "uuid": "d62345de-84f3-437f-9fbe-815cc64657fa", 00:05:25.069 "assigned_rate_limits": { 00:05:25.069 "rw_ios_per_sec": 0, 00:05:25.069 "rw_mbytes_per_sec": 0, 00:05:25.069 "r_mbytes_per_sec": 0, 00:05:25.069 "w_mbytes_per_sec": 0 00:05:25.069 }, 00:05:25.069 "claimed": false, 00:05:25.069 "zoned": false, 00:05:25.069 "supported_io_types": { 00:05:25.069 "read": true, 00:05:25.069 "write": true, 00:05:25.069 "unmap": true, 00:05:25.069 "flush": true, 00:05:25.069 "reset": true, 00:05:25.069 "nvme_admin": false, 00:05:25.069 "nvme_io": false, 00:05:25.069 "nvme_io_md": false, 00:05:25.069 "write_zeroes": true, 00:05:25.069 "zcopy": true, 00:05:25.069 "get_zone_info": false, 00:05:25.069 "zone_management": false, 00:05:25.069 "zone_append": false, 00:05:25.069 "compare": false, 00:05:25.069 "compare_and_write": false, 00:05:25.069 "abort": true, 00:05:25.069 "seek_hole": false, 00:05:25.069 "seek_data": false, 00:05:25.069 "copy": true, 00:05:25.069 "nvme_iov_md": false 00:05:25.069 }, 00:05:25.069 "memory_domains": [ 00:05:25.069 { 00:05:25.069 "dma_device_id": "system", 00:05:25.069 "dma_device_type": 1 00:05:25.069 }, 00:05:25.069 { 00:05:25.069 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:25.069 "dma_device_type": 2 00:05:25.069 } 00:05:25.069 ], 00:05:25.069 "driver_specific": {} 00:05:25.069 } 00:05:25.069 ]' 00:05:25.069 18:55:42 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:25.069 18:55:42 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:25.069 18:55:42 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:05:25.069 18:55:42 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:25.069 18:55:42 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:25.069 [2024-12-05 18:55:42.445214] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:05:25.069 [2024-12-05 18:55:42.445287] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:25.069 [2024-12-05 18:55:42.445317] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:05:25.069 [2024-12-05 18:55:42.445326] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:25.069 [2024-12-05 18:55:42.447560] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:25.069 [2024-12-05 18:55:42.447598] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:25.069 Passthru0 00:05:25.069 18:55:42 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:25.069 
18:55:42 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:25.069 18:55:42 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:25.069 18:55:42 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:25.069 18:55:42 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:25.069 18:55:42 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:25.069 { 00:05:25.069 "name": "Malloc0", 00:05:25.069 "aliases": [ 00:05:25.069 "d62345de-84f3-437f-9fbe-815cc64657fa" 00:05:25.069 ], 00:05:25.069 "product_name": "Malloc disk", 00:05:25.069 "block_size": 512, 00:05:25.069 "num_blocks": 16384, 00:05:25.069 "uuid": "d62345de-84f3-437f-9fbe-815cc64657fa", 00:05:25.069 "assigned_rate_limits": { 00:05:25.069 "rw_ios_per_sec": 0, 00:05:25.069 "rw_mbytes_per_sec": 0, 00:05:25.069 "r_mbytes_per_sec": 0, 00:05:25.069 "w_mbytes_per_sec": 0 00:05:25.069 }, 00:05:25.069 "claimed": true, 00:05:25.069 "claim_type": "exclusive_write", 00:05:25.069 "zoned": false, 00:05:25.069 "supported_io_types": { 00:05:25.069 "read": true, 00:05:25.069 "write": true, 00:05:25.069 "unmap": true, 00:05:25.069 "flush": true, 00:05:25.069 "reset": true, 00:05:25.069 "nvme_admin": false, 00:05:25.069 "nvme_io": false, 00:05:25.069 "nvme_io_md": false, 00:05:25.069 "write_zeroes": true, 00:05:25.069 "zcopy": true, 00:05:25.069 "get_zone_info": false, 00:05:25.069 "zone_management": false, 00:05:25.069 "zone_append": false, 00:05:25.069 "compare": false, 00:05:25.069 "compare_and_write": false, 00:05:25.069 "abort": true, 00:05:25.069 "seek_hole": false, 00:05:25.069 "seek_data": false, 00:05:25.069 "copy": true, 00:05:25.069 "nvme_iov_md": false 00:05:25.069 }, 00:05:25.069 "memory_domains": [ 00:05:25.069 { 00:05:25.069 "dma_device_id": "system", 00:05:25.069 "dma_device_type": 1 00:05:25.070 }, 00:05:25.070 { 00:05:25.070 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:25.070 "dma_device_type": 2 00:05:25.070 } 00:05:25.070 ], 00:05:25.070 "driver_specific": {} 00:05:25.070 }, 00:05:25.070 { 00:05:25.070 "name": "Passthru0", 00:05:25.070 "aliases": [ 00:05:25.070 "f79b0edd-51c5-5416-9b59-d34194850775" 00:05:25.070 ], 00:05:25.070 "product_name": "passthru", 00:05:25.070 "block_size": 512, 00:05:25.070 "num_blocks": 16384, 00:05:25.070 "uuid": "f79b0edd-51c5-5416-9b59-d34194850775", 00:05:25.070 "assigned_rate_limits": { 00:05:25.070 "rw_ios_per_sec": 0, 00:05:25.070 "rw_mbytes_per_sec": 0, 00:05:25.070 "r_mbytes_per_sec": 0, 00:05:25.070 "w_mbytes_per_sec": 0 00:05:25.070 }, 00:05:25.070 "claimed": false, 00:05:25.070 "zoned": false, 00:05:25.070 "supported_io_types": { 00:05:25.070 "read": true, 00:05:25.070 "write": true, 00:05:25.070 "unmap": true, 00:05:25.070 "flush": true, 00:05:25.070 "reset": true, 00:05:25.070 "nvme_admin": false, 00:05:25.070 "nvme_io": false, 00:05:25.070 "nvme_io_md": false, 00:05:25.070 "write_zeroes": true, 00:05:25.070 "zcopy": true, 00:05:25.070 "get_zone_info": false, 00:05:25.070 "zone_management": false, 00:05:25.070 "zone_append": false, 00:05:25.070 "compare": false, 00:05:25.070 "compare_and_write": false, 00:05:25.070 "abort": true, 00:05:25.070 "seek_hole": false, 00:05:25.070 "seek_data": false, 00:05:25.070 "copy": true, 00:05:25.070 "nvme_iov_md": false 00:05:25.070 }, 00:05:25.070 "memory_domains": [ 00:05:25.070 { 00:05:25.070 "dma_device_id": "system", 00:05:25.070 "dma_device_type": 1 00:05:25.070 }, 00:05:25.070 { 00:05:25.070 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:25.070 "dma_device_type": 2 
00:05:25.070 } 00:05:25.070 ], 00:05:25.070 "driver_specific": { 00:05:25.070 "passthru": { 00:05:25.070 "name": "Passthru0", 00:05:25.070 "base_bdev_name": "Malloc0" 00:05:25.070 } 00:05:25.070 } 00:05:25.070 } 00:05:25.070 ]' 00:05:25.070 18:55:42 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:25.070 18:55:42 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:25.070 18:55:42 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:25.070 18:55:42 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:25.070 18:55:42 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:25.070 18:55:42 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:25.070 18:55:42 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:05:25.070 18:55:42 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:25.070 18:55:42 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:25.070 18:55:42 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:25.070 18:55:42 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:25.070 18:55:42 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:25.070 18:55:42 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:25.070 18:55:42 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:25.070 18:55:42 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:25.070 18:55:42 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:25.070 18:55:42 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:25.070 00:05:25.070 real 0m0.233s 00:05:25.070 user 0m0.131s 00:05:25.070 sys 0m0.037s 00:05:25.070 18:55:42 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:25.070 18:55:42 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:25.070 ************************************ 00:05:25.070 END TEST rpc_integrity 00:05:25.070 ************************************ 00:05:25.070 18:55:42 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:05:25.070 18:55:42 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:25.070 18:55:42 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:25.070 18:55:42 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:25.070 ************************************ 00:05:25.070 START TEST rpc_plugins 00:05:25.070 ************************************ 00:05:25.070 18:55:42 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:05:25.070 18:55:42 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:05:25.070 18:55:42 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:25.070 18:55:42 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:25.070 18:55:42 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:25.070 18:55:42 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:05:25.070 18:55:42 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:05:25.070 18:55:42 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:25.070 18:55:42 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:25.070 18:55:42 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:25.070 18:55:42 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:05:25.070 { 00:05:25.070 "name": "Malloc1", 00:05:25.070 "aliases": 
[ 00:05:25.070 "6628a8d5-c43e-4f52-9e00-94f3decb5c9c" 00:05:25.070 ], 00:05:25.070 "product_name": "Malloc disk", 00:05:25.070 "block_size": 4096, 00:05:25.070 "num_blocks": 256, 00:05:25.070 "uuid": "6628a8d5-c43e-4f52-9e00-94f3decb5c9c", 00:05:25.070 "assigned_rate_limits": { 00:05:25.070 "rw_ios_per_sec": 0, 00:05:25.070 "rw_mbytes_per_sec": 0, 00:05:25.070 "r_mbytes_per_sec": 0, 00:05:25.070 "w_mbytes_per_sec": 0 00:05:25.070 }, 00:05:25.070 "claimed": false, 00:05:25.070 "zoned": false, 00:05:25.070 "supported_io_types": { 00:05:25.070 "read": true, 00:05:25.070 "write": true, 00:05:25.070 "unmap": true, 00:05:25.070 "flush": true, 00:05:25.070 "reset": true, 00:05:25.070 "nvme_admin": false, 00:05:25.070 "nvme_io": false, 00:05:25.070 "nvme_io_md": false, 00:05:25.070 "write_zeroes": true, 00:05:25.070 "zcopy": true, 00:05:25.070 "get_zone_info": false, 00:05:25.070 "zone_management": false, 00:05:25.070 "zone_append": false, 00:05:25.070 "compare": false, 00:05:25.070 "compare_and_write": false, 00:05:25.070 "abort": true, 00:05:25.070 "seek_hole": false, 00:05:25.070 "seek_data": false, 00:05:25.070 "copy": true, 00:05:25.070 "nvme_iov_md": false 00:05:25.070 }, 00:05:25.070 "memory_domains": [ 00:05:25.070 { 00:05:25.070 "dma_device_id": "system", 00:05:25.070 "dma_device_type": 1 00:05:25.070 }, 00:05:25.070 { 00:05:25.070 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:25.071 "dma_device_type": 2 00:05:25.071 } 00:05:25.071 ], 00:05:25.071 "driver_specific": {} 00:05:25.071 } 00:05:25.071 ]' 00:05:25.071 18:55:42 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:05:25.329 18:55:42 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:05:25.329 18:55:42 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:05:25.329 18:55:42 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:25.329 18:55:42 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:25.329 18:55:42 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:25.329 18:55:42 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:05:25.329 18:55:42 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:25.329 18:55:42 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:25.329 18:55:42 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:25.329 18:55:42 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:05:25.329 18:55:42 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:05:25.329 18:55:42 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:05:25.329 00:05:25.329 real 0m0.109s 00:05:25.329 user 0m0.062s 00:05:25.329 sys 0m0.015s 00:05:25.329 18:55:42 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:25.329 18:55:42 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:25.329 ************************************ 00:05:25.329 END TEST rpc_plugins 00:05:25.329 ************************************ 00:05:25.329 18:55:42 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:05:25.329 18:55:42 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:25.329 18:55:42 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:25.329 18:55:42 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:25.329 ************************************ 00:05:25.329 START TEST rpc_trace_cmd_test 00:05:25.329 ************************************ 00:05:25.329 18:55:42 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 
-- # rpc_trace_cmd_test 00:05:25.329 18:55:42 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:05:25.329 18:55:42 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:05:25.329 18:55:42 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:25.329 18:55:42 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:25.329 18:55:42 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:25.329 18:55:42 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:05:25.329 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid69059", 00:05:25.329 "tpoint_group_mask": "0x8", 00:05:25.329 "iscsi_conn": { 00:05:25.329 "mask": "0x2", 00:05:25.329 "tpoint_mask": "0x0" 00:05:25.329 }, 00:05:25.329 "scsi": { 00:05:25.329 "mask": "0x4", 00:05:25.329 "tpoint_mask": "0x0" 00:05:25.329 }, 00:05:25.329 "bdev": { 00:05:25.329 "mask": "0x8", 00:05:25.329 "tpoint_mask": "0xffffffffffffffff" 00:05:25.329 }, 00:05:25.329 "nvmf_rdma": { 00:05:25.329 "mask": "0x10", 00:05:25.329 "tpoint_mask": "0x0" 00:05:25.329 }, 00:05:25.329 "nvmf_tcp": { 00:05:25.329 "mask": "0x20", 00:05:25.329 "tpoint_mask": "0x0" 00:05:25.329 }, 00:05:25.329 "ftl": { 00:05:25.329 "mask": "0x40", 00:05:25.329 "tpoint_mask": "0x0" 00:05:25.329 }, 00:05:25.329 "blobfs": { 00:05:25.329 "mask": "0x80", 00:05:25.329 "tpoint_mask": "0x0" 00:05:25.329 }, 00:05:25.329 "dsa": { 00:05:25.329 "mask": "0x200", 00:05:25.329 "tpoint_mask": "0x0" 00:05:25.329 }, 00:05:25.329 "thread": { 00:05:25.329 "mask": "0x400", 00:05:25.329 "tpoint_mask": "0x0" 00:05:25.329 }, 00:05:25.329 "nvme_pcie": { 00:05:25.329 "mask": "0x800", 00:05:25.329 "tpoint_mask": "0x0" 00:05:25.329 }, 00:05:25.329 "iaa": { 00:05:25.329 "mask": "0x1000", 00:05:25.329 "tpoint_mask": "0x0" 00:05:25.329 }, 00:05:25.329 "nvme_tcp": { 00:05:25.329 "mask": "0x2000", 00:05:25.329 "tpoint_mask": "0x0" 00:05:25.329 }, 00:05:25.329 "bdev_nvme": { 00:05:25.329 "mask": "0x4000", 00:05:25.329 "tpoint_mask": "0x0" 00:05:25.329 }, 00:05:25.329 "sock": { 00:05:25.329 "mask": "0x8000", 00:05:25.329 "tpoint_mask": "0x0" 00:05:25.329 }, 00:05:25.329 "blob": { 00:05:25.329 "mask": "0x10000", 00:05:25.329 "tpoint_mask": "0x0" 00:05:25.329 }, 00:05:25.329 "bdev_raid": { 00:05:25.329 "mask": "0x20000", 00:05:25.329 "tpoint_mask": "0x0" 00:05:25.329 }, 00:05:25.329 "scheduler": { 00:05:25.329 "mask": "0x40000", 00:05:25.329 "tpoint_mask": "0x0" 00:05:25.329 } 00:05:25.329 }' 00:05:25.329 18:55:42 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:05:25.329 18:55:42 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:05:25.329 18:55:42 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:05:25.329 18:55:42 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:05:25.329 18:55:42 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:05:25.329 18:55:42 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:05:25.329 18:55:42 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:05:25.329 18:55:42 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:05:25.329 18:55:42 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:05:25.589 18:55:42 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:05:25.589 00:05:25.589 real 0m0.164s 00:05:25.589 user 0m0.122s 00:05:25.589 sys 0m0.025s 00:05:25.589 18:55:42 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # 
xtrace_disable 00:05:25.589 18:55:42 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:25.589 ************************************ 00:05:25.589 END TEST rpc_trace_cmd_test 00:05:25.589 ************************************ 00:05:25.589 18:55:42 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:05:25.589 18:55:42 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:05:25.589 18:55:42 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:05:25.589 18:55:42 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:25.589 18:55:42 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:25.589 18:55:42 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:25.589 ************************************ 00:05:25.589 START TEST rpc_daemon_integrity 00:05:25.589 ************************************ 00:05:25.589 18:55:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:05:25.589 18:55:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:25.589 18:55:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:25.589 18:55:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:25.589 18:55:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:25.589 18:55:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:25.589 18:55:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:25.589 18:55:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:25.589 18:55:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:25.589 18:55:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:25.589 18:55:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:25.589 18:55:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:25.589 18:55:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:05:25.589 18:55:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:25.589 18:55:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:25.589 18:55:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:25.589 18:55:43 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:25.589 18:55:43 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:25.589 { 00:05:25.589 "name": "Malloc2", 00:05:25.589 "aliases": [ 00:05:25.589 "432bb12f-1417-4e9d-b75d-f055f97a28b5" 00:05:25.589 ], 00:05:25.589 "product_name": "Malloc disk", 00:05:25.589 "block_size": 512, 00:05:25.589 "num_blocks": 16384, 00:05:25.589 "uuid": "432bb12f-1417-4e9d-b75d-f055f97a28b5", 00:05:25.589 "assigned_rate_limits": { 00:05:25.589 "rw_ios_per_sec": 0, 00:05:25.589 "rw_mbytes_per_sec": 0, 00:05:25.589 "r_mbytes_per_sec": 0, 00:05:25.589 "w_mbytes_per_sec": 0 00:05:25.589 }, 00:05:25.589 "claimed": false, 00:05:25.589 "zoned": false, 00:05:25.589 "supported_io_types": { 00:05:25.589 "read": true, 00:05:25.589 "write": true, 00:05:25.589 "unmap": true, 00:05:25.589 "flush": true, 00:05:25.589 "reset": true, 00:05:25.589 "nvme_admin": false, 00:05:25.589 "nvme_io": false, 00:05:25.589 "nvme_io_md": false, 00:05:25.589 "write_zeroes": true, 00:05:25.589 "zcopy": true, 00:05:25.589 "get_zone_info": false, 00:05:25.589 "zone_management": false, 00:05:25.589 "zone_append": false, 00:05:25.589 "compare": false, 00:05:25.589 
"compare_and_write": false, 00:05:25.589 "abort": true, 00:05:25.589 "seek_hole": false, 00:05:25.589 "seek_data": false, 00:05:25.589 "copy": true, 00:05:25.589 "nvme_iov_md": false 00:05:25.589 }, 00:05:25.589 "memory_domains": [ 00:05:25.589 { 00:05:25.589 "dma_device_id": "system", 00:05:25.589 "dma_device_type": 1 00:05:25.589 }, 00:05:25.589 { 00:05:25.589 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:25.589 "dma_device_type": 2 00:05:25.589 } 00:05:25.589 ], 00:05:25.589 "driver_specific": {} 00:05:25.589 } 00:05:25.589 ]' 00:05:25.589 18:55:43 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:25.589 18:55:43 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:25.589 18:55:43 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:05:25.589 18:55:43 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:25.589 18:55:43 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:25.589 [2024-12-05 18:55:43.053480] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:05:25.589 [2024-12-05 18:55:43.053532] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:25.589 [2024-12-05 18:55:43.053552] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:05:25.589 [2024-12-05 18:55:43.053560] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:25.589 [2024-12-05 18:55:43.055653] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:25.589 [2024-12-05 18:55:43.055687] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:25.589 Passthru0 00:05:25.589 18:55:43 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:25.589 18:55:43 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:25.589 18:55:43 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:25.589 18:55:43 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:25.589 18:55:43 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:25.589 18:55:43 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:25.589 { 00:05:25.589 "name": "Malloc2", 00:05:25.589 "aliases": [ 00:05:25.589 "432bb12f-1417-4e9d-b75d-f055f97a28b5" 00:05:25.589 ], 00:05:25.589 "product_name": "Malloc disk", 00:05:25.589 "block_size": 512, 00:05:25.589 "num_blocks": 16384, 00:05:25.589 "uuid": "432bb12f-1417-4e9d-b75d-f055f97a28b5", 00:05:25.589 "assigned_rate_limits": { 00:05:25.589 "rw_ios_per_sec": 0, 00:05:25.589 "rw_mbytes_per_sec": 0, 00:05:25.589 "r_mbytes_per_sec": 0, 00:05:25.589 "w_mbytes_per_sec": 0 00:05:25.589 }, 00:05:25.589 "claimed": true, 00:05:25.589 "claim_type": "exclusive_write", 00:05:25.589 "zoned": false, 00:05:25.589 "supported_io_types": { 00:05:25.589 "read": true, 00:05:25.589 "write": true, 00:05:25.589 "unmap": true, 00:05:25.589 "flush": true, 00:05:25.589 "reset": true, 00:05:25.589 "nvme_admin": false, 00:05:25.589 "nvme_io": false, 00:05:25.589 "nvme_io_md": false, 00:05:25.589 "write_zeroes": true, 00:05:25.589 "zcopy": true, 00:05:25.589 "get_zone_info": false, 00:05:25.589 "zone_management": false, 00:05:25.590 "zone_append": false, 00:05:25.590 "compare": false, 00:05:25.590 "compare_and_write": false, 00:05:25.590 "abort": true, 00:05:25.590 "seek_hole": false, 00:05:25.590 "seek_data": false, 
00:05:25.590 "copy": true, 00:05:25.590 "nvme_iov_md": false 00:05:25.590 }, 00:05:25.590 "memory_domains": [ 00:05:25.590 { 00:05:25.590 "dma_device_id": "system", 00:05:25.590 "dma_device_type": 1 00:05:25.590 }, 00:05:25.590 { 00:05:25.590 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:25.590 "dma_device_type": 2 00:05:25.590 } 00:05:25.590 ], 00:05:25.590 "driver_specific": {} 00:05:25.590 }, 00:05:25.590 { 00:05:25.590 "name": "Passthru0", 00:05:25.590 "aliases": [ 00:05:25.590 "c29c2943-4ee8-540e-8dd5-97d2444c6be8" 00:05:25.590 ], 00:05:25.590 "product_name": "passthru", 00:05:25.590 "block_size": 512, 00:05:25.590 "num_blocks": 16384, 00:05:25.590 "uuid": "c29c2943-4ee8-540e-8dd5-97d2444c6be8", 00:05:25.590 "assigned_rate_limits": { 00:05:25.590 "rw_ios_per_sec": 0, 00:05:25.590 "rw_mbytes_per_sec": 0, 00:05:25.590 "r_mbytes_per_sec": 0, 00:05:25.590 "w_mbytes_per_sec": 0 00:05:25.590 }, 00:05:25.590 "claimed": false, 00:05:25.590 "zoned": false, 00:05:25.590 "supported_io_types": { 00:05:25.590 "read": true, 00:05:25.590 "write": true, 00:05:25.590 "unmap": true, 00:05:25.590 "flush": true, 00:05:25.590 "reset": true, 00:05:25.590 "nvme_admin": false, 00:05:25.590 "nvme_io": false, 00:05:25.590 "nvme_io_md": false, 00:05:25.590 "write_zeroes": true, 00:05:25.590 "zcopy": true, 00:05:25.590 "get_zone_info": false, 00:05:25.590 "zone_management": false, 00:05:25.590 "zone_append": false, 00:05:25.590 "compare": false, 00:05:25.590 "compare_and_write": false, 00:05:25.590 "abort": true, 00:05:25.590 "seek_hole": false, 00:05:25.590 "seek_data": false, 00:05:25.590 "copy": true, 00:05:25.590 "nvme_iov_md": false 00:05:25.590 }, 00:05:25.590 "memory_domains": [ 00:05:25.590 { 00:05:25.590 "dma_device_id": "system", 00:05:25.590 "dma_device_type": 1 00:05:25.590 }, 00:05:25.590 { 00:05:25.590 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:25.590 "dma_device_type": 2 00:05:25.590 } 00:05:25.590 ], 00:05:25.590 "driver_specific": { 00:05:25.590 "passthru": { 00:05:25.590 "name": "Passthru0", 00:05:25.590 "base_bdev_name": "Malloc2" 00:05:25.590 } 00:05:25.590 } 00:05:25.590 } 00:05:25.590 ]' 00:05:25.590 18:55:43 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:25.590 18:55:43 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:25.590 18:55:43 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:25.590 18:55:43 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:25.590 18:55:43 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:25.590 18:55:43 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:25.590 18:55:43 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:05:25.590 18:55:43 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:25.590 18:55:43 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:25.590 18:55:43 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:25.590 18:55:43 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:25.590 18:55:43 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:25.590 18:55:43 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:25.590 18:55:43 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:25.590 18:55:43 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 
00:05:25.590 18:55:43 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:25.848 18:55:43 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:25.848 00:05:25.848 real 0m0.212s 00:05:25.848 user 0m0.123s 00:05:25.848 sys 0m0.031s 00:05:25.848 18:55:43 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:25.848 18:55:43 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:25.848 ************************************ 00:05:25.848 END TEST rpc_daemon_integrity 00:05:25.848 ************************************ 00:05:25.848 18:55:43 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:05:25.848 18:55:43 rpc -- rpc/rpc.sh@84 -- # killprocess 69059 00:05:25.848 18:55:43 rpc -- common/autotest_common.sh@954 -- # '[' -z 69059 ']' 00:05:25.848 18:55:43 rpc -- common/autotest_common.sh@958 -- # kill -0 69059 00:05:25.848 18:55:43 rpc -- common/autotest_common.sh@959 -- # uname 00:05:25.848 18:55:43 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:25.848 18:55:43 rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69059 00:05:25.848 18:55:43 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:25.848 18:55:43 rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:25.848 killing process with pid 69059 00:05:25.848 18:55:43 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69059' 00:05:25.848 18:55:43 rpc -- common/autotest_common.sh@973 -- # kill 69059 00:05:25.848 18:55:43 rpc -- common/autotest_common.sh@978 -- # wait 69059 00:05:26.107 00:05:26.107 real 0m2.174s 00:05:26.107 user 0m2.635s 00:05:26.107 sys 0m0.538s 00:05:26.107 18:55:43 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:26.107 18:55:43 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:26.107 ************************************ 00:05:26.107 END TEST rpc 00:05:26.107 ************************************ 00:05:26.107 18:55:43 -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:05:26.107 18:55:43 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:26.107 18:55:43 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:26.107 18:55:43 -- common/autotest_common.sh@10 -- # set +x 00:05:26.107 ************************************ 00:05:26.107 START TEST skip_rpc 00:05:26.107 ************************************ 00:05:26.107 18:55:43 skip_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:05:26.107 * Looking for test storage... 
00:05:26.107 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:05:26.107 18:55:43 skip_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:26.107 18:55:43 skip_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:05:26.107 18:55:43 skip_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:26.107 18:55:43 skip_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:26.107 18:55:43 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:26.107 18:55:43 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:26.107 18:55:43 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:26.107 18:55:43 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:26.107 18:55:43 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:26.107 18:55:43 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:26.107 18:55:43 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:26.107 18:55:43 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:26.107 18:55:43 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:26.107 18:55:43 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:26.107 18:55:43 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:26.107 18:55:43 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:26.107 18:55:43 skip_rpc -- scripts/common.sh@345 -- # : 1 00:05:26.107 18:55:43 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:26.107 18:55:43 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:26.107 18:55:43 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:05:26.107 18:55:43 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:05:26.107 18:55:43 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:26.107 18:55:43 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:05:26.107 18:55:43 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:26.107 18:55:43 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:05:26.107 18:55:43 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:05:26.107 18:55:43 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:26.107 18:55:43 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:05:26.107 18:55:43 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:26.107 18:55:43 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:26.107 18:55:43 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:26.107 18:55:43 skip_rpc -- scripts/common.sh@368 -- # return 0 00:05:26.107 18:55:43 skip_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:26.107 18:55:43 skip_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:26.107 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:26.107 --rc genhtml_branch_coverage=1 00:05:26.107 --rc genhtml_function_coverage=1 00:05:26.107 --rc genhtml_legend=1 00:05:26.107 --rc geninfo_all_blocks=1 00:05:26.107 --rc geninfo_unexecuted_blocks=1 00:05:26.107 00:05:26.107 ' 00:05:26.107 18:55:43 skip_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:26.107 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:26.107 --rc genhtml_branch_coverage=1 00:05:26.107 --rc genhtml_function_coverage=1 00:05:26.107 --rc genhtml_legend=1 00:05:26.107 --rc geninfo_all_blocks=1 00:05:26.107 --rc geninfo_unexecuted_blocks=1 00:05:26.107 00:05:26.107 ' 00:05:26.107 18:55:43 skip_rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:05:26.107 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:26.107 --rc genhtml_branch_coverage=1 00:05:26.107 --rc genhtml_function_coverage=1 00:05:26.107 --rc genhtml_legend=1 00:05:26.107 --rc geninfo_all_blocks=1 00:05:26.107 --rc geninfo_unexecuted_blocks=1 00:05:26.107 00:05:26.107 ' 00:05:26.107 18:55:43 skip_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:26.107 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:26.107 --rc genhtml_branch_coverage=1 00:05:26.107 --rc genhtml_function_coverage=1 00:05:26.107 --rc genhtml_legend=1 00:05:26.107 --rc geninfo_all_blocks=1 00:05:26.107 --rc geninfo_unexecuted_blocks=1 00:05:26.107 00:05:26.107 ' 00:05:26.107 18:55:43 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:26.107 18:55:43 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:26.107 18:55:43 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:05:26.107 18:55:43 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:26.107 18:55:43 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:26.107 18:55:43 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:26.107 ************************************ 00:05:26.107 START TEST skip_rpc 00:05:26.107 ************************************ 00:05:26.107 18:55:43 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:05:26.107 18:55:43 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=69261 00:05:26.107 18:55:43 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:26.107 18:55:43 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:05:26.107 18:55:43 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:05:26.367 [2024-12-05 18:55:43.721195] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 22.11.4 initialization... 
00:05:26.367 [2024-12-05 18:55:43.721340] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69261 ] 00:05:26.367 [2024-12-05 18:55:43.866636] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:26.367 [2024-12-05 18:55:43.886172] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:31.664 18:55:48 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:05:31.664 18:55:48 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:05:31.664 18:55:48 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:05:31.664 18:55:48 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:05:31.664 18:55:48 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:31.664 18:55:48 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:05:31.664 18:55:48 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:31.664 18:55:48 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:05:31.664 18:55:48 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:31.664 18:55:48 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:31.664 18:55:48 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:05:31.664 18:55:48 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:05:31.664 18:55:48 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:31.664 18:55:48 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:31.664 18:55:48 skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:31.664 18:55:48 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:05:31.664 18:55:48 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 69261 00:05:31.664 18:55:48 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 69261 ']' 00:05:31.664 18:55:48 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 69261 00:05:31.664 18:55:48 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:05:31.664 18:55:48 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:31.664 18:55:48 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69261 00:05:31.664 18:55:48 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:31.664 killing process with pid 69261 00:05:31.664 18:55:48 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:31.664 18:55:48 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69261' 00:05:31.664 18:55:48 skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 69261 00:05:31.664 18:55:48 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 69261 00:05:31.664 00:05:31.664 real 0m5.249s 00:05:31.664 user 0m4.910s 00:05:31.664 sys 0m0.232s 00:05:31.664 18:55:48 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:31.664 18:55:48 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:31.664 ************************************ 00:05:31.664 END TEST skip_rpc 00:05:31.664 
************************************ 00:05:31.664 18:55:48 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:05:31.664 18:55:48 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:31.664 18:55:48 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:31.664 18:55:48 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:31.664 ************************************ 00:05:31.664 START TEST skip_rpc_with_json 00:05:31.664 ************************************ 00:05:31.664 18:55:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:05:31.664 18:55:48 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:05:31.664 18:55:48 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=69343 00:05:31.664 18:55:48 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:31.664 18:55:48 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 69343 00:05:31.664 18:55:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 69343 ']' 00:05:31.664 18:55:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:31.664 18:55:48 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:31.664 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:31.664 18:55:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:31.664 18:55:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:31.664 18:55:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:31.664 18:55:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:31.664 [2024-12-05 18:55:49.025415] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 22.11.4 initialization... 
00:05:31.664 [2024-12-05 18:55:49.025537] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69343 ] 00:05:31.664 [2024-12-05 18:55:49.167905] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:31.664 [2024-12-05 18:55:49.190230] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:32.606 18:55:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:32.606 18:55:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:05:32.606 18:55:49 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:05:32.606 18:55:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:32.606 18:55:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:32.606 [2024-12-05 18:55:49.870709] nvmf_rpc.c:2707:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:05:32.606 request: 00:05:32.606 { 00:05:32.606 "trtype": "tcp", 00:05:32.606 "method": "nvmf_get_transports", 00:05:32.606 "req_id": 1 00:05:32.606 } 00:05:32.606 Got JSON-RPC error response 00:05:32.606 response: 00:05:32.606 { 00:05:32.606 "code": -19, 00:05:32.606 "message": "No such device" 00:05:32.606 } 00:05:32.606 18:55:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:05:32.606 18:55:49 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:05:32.606 18:55:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:32.606 18:55:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:32.606 [2024-12-05 18:55:49.882802] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:32.606 18:55:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:32.606 18:55:49 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:05:32.606 18:55:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:32.606 18:55:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:32.606 18:55:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:32.606 18:55:50 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:32.606 { 00:05:32.606 "subsystems": [ 00:05:32.606 { 00:05:32.606 "subsystem": "fsdev", 00:05:32.606 "config": [ 00:05:32.606 { 00:05:32.606 "method": "fsdev_set_opts", 00:05:32.606 "params": { 00:05:32.606 "fsdev_io_pool_size": 65535, 00:05:32.606 "fsdev_io_cache_size": 256 00:05:32.606 } 00:05:32.606 } 00:05:32.606 ] 00:05:32.606 }, 00:05:32.606 { 00:05:32.606 "subsystem": "keyring", 00:05:32.606 "config": [] 00:05:32.606 }, 00:05:32.606 { 00:05:32.606 "subsystem": "iobuf", 00:05:32.606 "config": [ 00:05:32.606 { 00:05:32.606 "method": "iobuf_set_options", 00:05:32.606 "params": { 00:05:32.606 "small_pool_count": 8192, 00:05:32.606 "large_pool_count": 1024, 00:05:32.606 "small_bufsize": 8192, 00:05:32.606 "large_bufsize": 135168, 00:05:32.606 "enable_numa": false 00:05:32.606 } 00:05:32.606 } 00:05:32.606 ] 00:05:32.606 }, 00:05:32.606 { 00:05:32.606 "subsystem": "sock", 00:05:32.606 "config": [ 00:05:32.606 { 
00:05:32.606 "method": "sock_set_default_impl", 00:05:32.606 "params": { 00:05:32.606 "impl_name": "posix" 00:05:32.606 } 00:05:32.606 }, 00:05:32.606 { 00:05:32.606 "method": "sock_impl_set_options", 00:05:32.606 "params": { 00:05:32.606 "impl_name": "ssl", 00:05:32.606 "recv_buf_size": 4096, 00:05:32.606 "send_buf_size": 4096, 00:05:32.606 "enable_recv_pipe": true, 00:05:32.606 "enable_quickack": false, 00:05:32.606 "enable_placement_id": 0, 00:05:32.606 "enable_zerocopy_send_server": true, 00:05:32.606 "enable_zerocopy_send_client": false, 00:05:32.606 "zerocopy_threshold": 0, 00:05:32.606 "tls_version": 0, 00:05:32.606 "enable_ktls": false 00:05:32.606 } 00:05:32.606 }, 00:05:32.606 { 00:05:32.606 "method": "sock_impl_set_options", 00:05:32.606 "params": { 00:05:32.606 "impl_name": "posix", 00:05:32.606 "recv_buf_size": 2097152, 00:05:32.606 "send_buf_size": 2097152, 00:05:32.606 "enable_recv_pipe": true, 00:05:32.606 "enable_quickack": false, 00:05:32.606 "enable_placement_id": 0, 00:05:32.606 "enable_zerocopy_send_server": true, 00:05:32.606 "enable_zerocopy_send_client": false, 00:05:32.606 "zerocopy_threshold": 0, 00:05:32.606 "tls_version": 0, 00:05:32.606 "enable_ktls": false 00:05:32.606 } 00:05:32.606 } 00:05:32.606 ] 00:05:32.606 }, 00:05:32.606 { 00:05:32.606 "subsystem": "vmd", 00:05:32.606 "config": [] 00:05:32.606 }, 00:05:32.606 { 00:05:32.606 "subsystem": "accel", 00:05:32.606 "config": [ 00:05:32.606 { 00:05:32.606 "method": "accel_set_options", 00:05:32.606 "params": { 00:05:32.606 "small_cache_size": 128, 00:05:32.606 "large_cache_size": 16, 00:05:32.606 "task_count": 2048, 00:05:32.606 "sequence_count": 2048, 00:05:32.606 "buf_count": 2048 00:05:32.606 } 00:05:32.606 } 00:05:32.606 ] 00:05:32.606 }, 00:05:32.606 { 00:05:32.606 "subsystem": "bdev", 00:05:32.606 "config": [ 00:05:32.606 { 00:05:32.606 "method": "bdev_set_options", 00:05:32.606 "params": { 00:05:32.606 "bdev_io_pool_size": 65535, 00:05:32.606 "bdev_io_cache_size": 256, 00:05:32.606 "bdev_auto_examine": true, 00:05:32.606 "iobuf_small_cache_size": 128, 00:05:32.606 "iobuf_large_cache_size": 16 00:05:32.606 } 00:05:32.606 }, 00:05:32.606 { 00:05:32.606 "method": "bdev_raid_set_options", 00:05:32.606 "params": { 00:05:32.606 "process_window_size_kb": 1024, 00:05:32.606 "process_max_bandwidth_mb_sec": 0 00:05:32.606 } 00:05:32.606 }, 00:05:32.606 { 00:05:32.606 "method": "bdev_iscsi_set_options", 00:05:32.606 "params": { 00:05:32.606 "timeout_sec": 30 00:05:32.606 } 00:05:32.606 }, 00:05:32.606 { 00:05:32.606 "method": "bdev_nvme_set_options", 00:05:32.606 "params": { 00:05:32.606 "action_on_timeout": "none", 00:05:32.606 "timeout_us": 0, 00:05:32.606 "timeout_admin_us": 0, 00:05:32.606 "keep_alive_timeout_ms": 10000, 00:05:32.606 "arbitration_burst": 0, 00:05:32.606 "low_priority_weight": 0, 00:05:32.606 "medium_priority_weight": 0, 00:05:32.606 "high_priority_weight": 0, 00:05:32.606 "nvme_adminq_poll_period_us": 10000, 00:05:32.606 "nvme_ioq_poll_period_us": 0, 00:05:32.606 "io_queue_requests": 0, 00:05:32.606 "delay_cmd_submit": true, 00:05:32.606 "transport_retry_count": 4, 00:05:32.606 "bdev_retry_count": 3, 00:05:32.606 "transport_ack_timeout": 0, 00:05:32.606 "ctrlr_loss_timeout_sec": 0, 00:05:32.606 "reconnect_delay_sec": 0, 00:05:32.606 "fast_io_fail_timeout_sec": 0, 00:05:32.606 "disable_auto_failback": false, 00:05:32.606 "generate_uuids": false, 00:05:32.606 "transport_tos": 0, 00:05:32.606 "nvme_error_stat": false, 00:05:32.606 "rdma_srq_size": 0, 00:05:32.606 "io_path_stat": false, 
00:05:32.606 "allow_accel_sequence": false, 00:05:32.606 "rdma_max_cq_size": 0, 00:05:32.606 "rdma_cm_event_timeout_ms": 0, 00:05:32.606 "dhchap_digests": [ 00:05:32.606 "sha256", 00:05:32.606 "sha384", 00:05:32.606 "sha512" 00:05:32.606 ], 00:05:32.606 "dhchap_dhgroups": [ 00:05:32.606 "null", 00:05:32.606 "ffdhe2048", 00:05:32.606 "ffdhe3072", 00:05:32.606 "ffdhe4096", 00:05:32.606 "ffdhe6144", 00:05:32.606 "ffdhe8192" 00:05:32.606 ] 00:05:32.606 } 00:05:32.606 }, 00:05:32.606 { 00:05:32.606 "method": "bdev_nvme_set_hotplug", 00:05:32.606 "params": { 00:05:32.606 "period_us": 100000, 00:05:32.606 "enable": false 00:05:32.606 } 00:05:32.606 }, 00:05:32.606 { 00:05:32.606 "method": "bdev_wait_for_examine" 00:05:32.606 } 00:05:32.606 ] 00:05:32.606 }, 00:05:32.606 { 00:05:32.606 "subsystem": "scsi", 00:05:32.606 "config": null 00:05:32.606 }, 00:05:32.606 { 00:05:32.606 "subsystem": "scheduler", 00:05:32.606 "config": [ 00:05:32.606 { 00:05:32.606 "method": "framework_set_scheduler", 00:05:32.606 "params": { 00:05:32.606 "name": "static" 00:05:32.606 } 00:05:32.606 } 00:05:32.606 ] 00:05:32.606 }, 00:05:32.606 { 00:05:32.606 "subsystem": "vhost_scsi", 00:05:32.606 "config": [] 00:05:32.606 }, 00:05:32.606 { 00:05:32.606 "subsystem": "vhost_blk", 00:05:32.606 "config": [] 00:05:32.606 }, 00:05:32.606 { 00:05:32.606 "subsystem": "ublk", 00:05:32.606 "config": [] 00:05:32.606 }, 00:05:32.606 { 00:05:32.606 "subsystem": "nbd", 00:05:32.606 "config": [] 00:05:32.606 }, 00:05:32.606 { 00:05:32.606 "subsystem": "nvmf", 00:05:32.606 "config": [ 00:05:32.606 { 00:05:32.606 "method": "nvmf_set_config", 00:05:32.607 "params": { 00:05:32.607 "discovery_filter": "match_any", 00:05:32.607 "admin_cmd_passthru": { 00:05:32.607 "identify_ctrlr": false 00:05:32.607 }, 00:05:32.607 "dhchap_digests": [ 00:05:32.607 "sha256", 00:05:32.607 "sha384", 00:05:32.607 "sha512" 00:05:32.607 ], 00:05:32.607 "dhchap_dhgroups": [ 00:05:32.607 "null", 00:05:32.607 "ffdhe2048", 00:05:32.607 "ffdhe3072", 00:05:32.607 "ffdhe4096", 00:05:32.607 "ffdhe6144", 00:05:32.607 "ffdhe8192" 00:05:32.607 ] 00:05:32.607 } 00:05:32.607 }, 00:05:32.607 { 00:05:32.607 "method": "nvmf_set_max_subsystems", 00:05:32.607 "params": { 00:05:32.607 "max_subsystems": 1024 00:05:32.607 } 00:05:32.607 }, 00:05:32.607 { 00:05:32.607 "method": "nvmf_set_crdt", 00:05:32.607 "params": { 00:05:32.607 "crdt1": 0, 00:05:32.607 "crdt2": 0, 00:05:32.607 "crdt3": 0 00:05:32.607 } 00:05:32.607 }, 00:05:32.607 { 00:05:32.607 "method": "nvmf_create_transport", 00:05:32.607 "params": { 00:05:32.607 "trtype": "TCP", 00:05:32.607 "max_queue_depth": 128, 00:05:32.607 "max_io_qpairs_per_ctrlr": 127, 00:05:32.607 "in_capsule_data_size": 4096, 00:05:32.607 "max_io_size": 131072, 00:05:32.607 "io_unit_size": 131072, 00:05:32.607 "max_aq_depth": 128, 00:05:32.607 "num_shared_buffers": 511, 00:05:32.607 "buf_cache_size": 4294967295, 00:05:32.607 "dif_insert_or_strip": false, 00:05:32.607 "zcopy": false, 00:05:32.607 "c2h_success": true, 00:05:32.607 "sock_priority": 0, 00:05:32.607 "abort_timeout_sec": 1, 00:05:32.607 "ack_timeout": 0, 00:05:32.607 "data_wr_pool_size": 0 00:05:32.607 } 00:05:32.607 } 00:05:32.607 ] 00:05:32.607 }, 00:05:32.607 { 00:05:32.607 "subsystem": "iscsi", 00:05:32.607 "config": [ 00:05:32.607 { 00:05:32.607 "method": "iscsi_set_options", 00:05:32.607 "params": { 00:05:32.607 "node_base": "iqn.2016-06.io.spdk", 00:05:32.607 "max_sessions": 128, 00:05:32.607 "max_connections_per_session": 2, 00:05:32.607 "max_queue_depth": 64, 00:05:32.607 
"default_time2wait": 2, 00:05:32.607 "default_time2retain": 20, 00:05:32.607 "first_burst_length": 8192, 00:05:32.607 "immediate_data": true, 00:05:32.607 "allow_duplicated_isid": false, 00:05:32.607 "error_recovery_level": 0, 00:05:32.607 "nop_timeout": 60, 00:05:32.607 "nop_in_interval": 30, 00:05:32.607 "disable_chap": false, 00:05:32.607 "require_chap": false, 00:05:32.607 "mutual_chap": false, 00:05:32.607 "chap_group": 0, 00:05:32.607 "max_large_datain_per_connection": 64, 00:05:32.607 "max_r2t_per_connection": 4, 00:05:32.607 "pdu_pool_size": 36864, 00:05:32.607 "immediate_data_pool_size": 16384, 00:05:32.607 "data_out_pool_size": 2048 00:05:32.607 } 00:05:32.607 } 00:05:32.607 ] 00:05:32.607 } 00:05:32.607 ] 00:05:32.607 } 00:05:32.607 18:55:50 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:05:32.607 18:55:50 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 69343 00:05:32.607 18:55:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 69343 ']' 00:05:32.607 18:55:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 69343 00:05:32.607 18:55:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:05:32.607 18:55:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:32.607 18:55:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69343 00:05:32.607 18:55:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:32.607 18:55:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:32.607 killing process with pid 69343 00:05:32.607 18:55:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69343' 00:05:32.607 18:55:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 69343 00:05:32.607 18:55:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 69343 00:05:32.868 18:55:50 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=69371 00:05:32.868 18:55:50 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:05:32.868 18:55:50 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:38.140 18:55:55 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 69371 00:05:38.140 18:55:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 69371 ']' 00:05:38.140 18:55:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 69371 00:05:38.140 18:55:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:05:38.140 18:55:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:38.140 18:55:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69371 00:05:38.140 18:55:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:38.140 18:55:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:38.140 killing process with pid 69371 00:05:38.140 18:55:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69371' 00:05:38.140 18:55:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- 
# kill 69371 00:05:38.140 18:55:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 69371 00:05:38.140 18:55:55 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:38.140 18:55:55 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:38.140 00:05:38.140 real 0m6.603s 00:05:38.140 user 0m6.324s 00:05:38.140 sys 0m0.523s 00:05:38.140 18:55:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:38.140 ************************************ 00:05:38.140 END TEST skip_rpc_with_json 00:05:38.140 ************************************ 00:05:38.140 18:55:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:38.140 18:55:55 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:05:38.140 18:55:55 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:38.140 18:55:55 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:38.140 18:55:55 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:38.140 ************************************ 00:05:38.140 START TEST skip_rpc_with_delay 00:05:38.140 ************************************ 00:05:38.140 18:55:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:05:38.140 18:55:55 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:38.140 18:55:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 00:05:38.140 18:55:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:38.140 18:55:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:38.140 18:55:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:38.140 18:55:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:38.140 18:55:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:38.140 18:55:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:38.140 18:55:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:38.140 18:55:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:38.140 18:55:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:05:38.140 18:55:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:38.140 [2024-12-05 18:55:55.675513] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
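That error is the expected outcome: --wait-for-rpc holds application startup until the framework_start_init RPC arrives, which can never happen once --no-rpc-server disables the RPC listener, so spdk_tgt exits non-zero and the NOT wrapper below records es=1. A minimal reproduction, assuming a built SPDK tree (binary path illustrative):
  build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc   # fails with the error logged above
  echo $?                                                    # non-zero, which is what the test requires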
00:05:38.400 18:55:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:05:38.400 18:55:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:38.400 18:55:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:38.400 18:55:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:38.400 00:05:38.400 real 0m0.105s 00:05:38.400 user 0m0.053s 00:05:38.400 sys 0m0.050s 00:05:38.400 18:55:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:38.400 18:55:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:05:38.400 ************************************ 00:05:38.400 END TEST skip_rpc_with_delay 00:05:38.400 ************************************ 00:05:38.400 18:55:55 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:05:38.400 18:55:55 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:05:38.400 18:55:55 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:05:38.400 18:55:55 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:38.400 18:55:55 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:38.400 18:55:55 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:38.400 ************************************ 00:05:38.400 START TEST exit_on_failed_rpc_init 00:05:38.400 ************************************ 00:05:38.400 18:55:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:05:38.400 18:55:55 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=69477 00:05:38.400 18:55:55 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 69477 00:05:38.400 18:55:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 69477 ']' 00:05:38.400 18:55:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:38.400 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:38.400 18:55:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:38.400 18:55:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:38.400 18:55:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:38.400 18:55:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:38.400 18:55:55 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:38.400 [2024-12-05 18:55:55.845858] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 22.11.4 initialization... 
00:05:38.400 [2024-12-05 18:55:55.845989] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69477 ] 00:05:38.660 [2024-12-05 18:55:55.994238] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:38.660 [2024-12-05 18:55:56.027024] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:39.230 18:55:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:39.230 18:55:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:05:39.230 18:55:56 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:39.230 18:55:56 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:39.230 18:55:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 00:05:39.230 18:55:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:39.230 18:55:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:39.230 18:55:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:39.230 18:55:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:39.230 18:55:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:39.230 18:55:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:39.230 18:55:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:39.230 18:55:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:39.230 18:55:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:05:39.230 18:55:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:39.230 [2024-12-05 18:55:56.788243] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 22.11.4 initialization... 00:05:39.230 [2024-12-05 18:55:56.788410] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69495 ] 00:05:39.490 [2024-12-05 18:55:56.932974] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:39.490 [2024-12-05 18:55:56.961020] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:39.490 [2024-12-05 18:55:56.961115] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
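The 'in use' error above, together with the shutdown messages that follow, is exactly what test_exit_on_failed_rpc_init asserts: the second spdk_tgt (-m 0x2) tries to bind the default RPC socket /var/tmp/spdk.sock, which the first instance (pid 69477) still holds, so the listen fails and the app stops with a non-zero code. Two targets can coexist only on distinct sockets, e.g. (a sketch; socket paths are illustrative):
  build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk_a.sock &
  build/bin/spdk_tgt -m 0x2 -r /var/tmp/spdk_b.sock &
  scripts/rpc.py -s /var/tmp/spdk_b.sock spdk_get_version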
00:05:39.490 [2024-12-05 18:55:56.961135] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:05:39.490 [2024-12-05 18:55:56.961146] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:39.490 18:55:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:05:39.490 18:55:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:39.490 18:55:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:05:39.490 18:55:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:05:39.490 18:55:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:05:39.490 18:55:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:39.490 18:55:57 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:05:39.490 18:55:57 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 69477 00:05:39.490 18:55:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 69477 ']' 00:05:39.490 18:55:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 69477 00:05:39.490 18:55:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:05:39.490 18:55:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:39.490 18:55:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69477 00:05:39.749 18:55:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:39.749 18:55:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:39.749 18:55:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69477' 00:05:39.749 killing process with pid 69477 00:05:39.749 18:55:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 69477 00:05:39.749 18:55:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 69477 00:05:40.010 00:05:40.010 real 0m1.541s 00:05:40.010 user 0m1.629s 00:05:40.010 sys 0m0.452s 00:05:40.010 18:55:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:40.010 ************************************ 00:05:40.010 END TEST exit_on_failed_rpc_init 00:05:40.010 ************************************ 00:05:40.010 18:55:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:40.010 18:55:57 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:40.010 00:05:40.010 real 0m13.871s 00:05:40.010 user 0m13.071s 00:05:40.010 sys 0m1.427s 00:05:40.010 18:55:57 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:40.010 18:55:57 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:40.010 ************************************ 00:05:40.010 END TEST skip_rpc 00:05:40.010 ************************************ 00:05:40.010 18:55:57 -- spdk/autotest.sh@158 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:05:40.010 18:55:57 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:40.011 18:55:57 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:40.011 18:55:57 -- common/autotest_common.sh@10 -- # set +x 00:05:40.011 
************************************ 00:05:40.011 START TEST rpc_client 00:05:40.011 ************************************ 00:05:40.011 18:55:57 rpc_client -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:05:40.011 * Looking for test storage... 00:05:40.011 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:05:40.011 18:55:57 rpc_client -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:40.011 18:55:57 rpc_client -- common/autotest_common.sh@1711 -- # lcov --version 00:05:40.011 18:55:57 rpc_client -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:40.011 18:55:57 rpc_client -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:40.011 18:55:57 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:40.011 18:55:57 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:40.011 18:55:57 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:40.011 18:55:57 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:05:40.011 18:55:57 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:05:40.011 18:55:57 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:05:40.011 18:55:57 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:05:40.011 18:55:57 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:05:40.011 18:55:57 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:05:40.011 18:55:57 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:05:40.011 18:55:57 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:40.011 18:55:57 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:05:40.011 18:55:57 rpc_client -- scripts/common.sh@345 -- # : 1 00:05:40.011 18:55:57 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:40.011 18:55:57 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:40.011 18:55:57 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:05:40.011 18:55:57 rpc_client -- scripts/common.sh@353 -- # local d=1 00:05:40.011 18:55:57 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:40.011 18:55:57 rpc_client -- scripts/common.sh@355 -- # echo 1 00:05:40.011 18:55:57 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:05:40.273 18:55:57 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:05:40.273 18:55:57 rpc_client -- scripts/common.sh@353 -- # local d=2 00:05:40.273 18:55:57 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:40.273 18:55:57 rpc_client -- scripts/common.sh@355 -- # echo 2 00:05:40.273 18:55:57 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:05:40.273 18:55:57 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:40.273 18:55:57 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:40.273 18:55:57 rpc_client -- scripts/common.sh@368 -- # return 0 00:05:40.273 18:55:57 rpc_client -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:40.273 18:55:57 rpc_client -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:40.273 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:40.273 --rc genhtml_branch_coverage=1 00:05:40.273 --rc genhtml_function_coverage=1 00:05:40.273 --rc genhtml_legend=1 00:05:40.273 --rc geninfo_all_blocks=1 00:05:40.273 --rc geninfo_unexecuted_blocks=1 00:05:40.273 00:05:40.273 ' 00:05:40.273 18:55:57 rpc_client -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:40.273 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:40.273 --rc genhtml_branch_coverage=1 00:05:40.273 --rc genhtml_function_coverage=1 00:05:40.273 --rc genhtml_legend=1 00:05:40.273 --rc geninfo_all_blocks=1 00:05:40.273 --rc geninfo_unexecuted_blocks=1 00:05:40.273 00:05:40.273 ' 00:05:40.273 18:55:57 rpc_client -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:40.273 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:40.273 --rc genhtml_branch_coverage=1 00:05:40.273 --rc genhtml_function_coverage=1 00:05:40.273 --rc genhtml_legend=1 00:05:40.273 --rc geninfo_all_blocks=1 00:05:40.273 --rc geninfo_unexecuted_blocks=1 00:05:40.273 00:05:40.273 ' 00:05:40.273 18:55:57 rpc_client -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:40.273 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:40.273 --rc genhtml_branch_coverage=1 00:05:40.273 --rc genhtml_function_coverage=1 00:05:40.273 --rc genhtml_legend=1 00:05:40.273 --rc geninfo_all_blocks=1 00:05:40.273 --rc geninfo_unexecuted_blocks=1 00:05:40.273 00:05:40.273 ' 00:05:40.273 18:55:57 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:05:40.273 OK 00:05:40.273 18:55:57 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:05:40.273 00:05:40.273 real 0m0.199s 00:05:40.273 user 0m0.102s 00:05:40.273 sys 0m0.098s 00:05:40.273 ************************************ 00:05:40.273 18:55:57 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:40.273 18:55:57 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:05:40.273 END TEST rpc_client 00:05:40.273 ************************************ 00:05:40.273 18:55:57 -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:05:40.273 18:55:57 -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:40.273 18:55:57 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:40.273 18:55:57 -- common/autotest_common.sh@10 -- # set +x 00:05:40.273 ************************************ 00:05:40.273 START TEST json_config 00:05:40.273 ************************************ 00:05:40.273 18:55:57 json_config -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:05:40.273 18:55:57 json_config -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:40.273 18:55:57 json_config -- common/autotest_common.sh@1711 -- # lcov --version 00:05:40.273 18:55:57 json_config -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:40.273 18:55:57 json_config -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:40.273 18:55:57 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:40.273 18:55:57 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:40.273 18:55:57 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:40.273 18:55:57 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:05:40.273 18:55:57 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:05:40.273 18:55:57 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:05:40.273 18:55:57 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:05:40.273 18:55:57 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:05:40.273 18:55:57 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:05:40.273 18:55:57 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:05:40.273 18:55:57 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:40.273 18:55:57 json_config -- scripts/common.sh@344 -- # case "$op" in 00:05:40.273 18:55:57 json_config -- scripts/common.sh@345 -- # : 1 00:05:40.273 18:55:57 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:40.273 18:55:57 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:40.273 18:55:57 json_config -- scripts/common.sh@365 -- # decimal 1 00:05:40.273 18:55:57 json_config -- scripts/common.sh@353 -- # local d=1 00:05:40.273 18:55:57 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:40.273 18:55:57 json_config -- scripts/common.sh@355 -- # echo 1 00:05:40.273 18:55:57 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:05:40.273 18:55:57 json_config -- scripts/common.sh@366 -- # decimal 2 00:05:40.273 18:55:57 json_config -- scripts/common.sh@353 -- # local d=2 00:05:40.273 18:55:57 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:40.273 18:55:57 json_config -- scripts/common.sh@355 -- # echo 2 00:05:40.273 18:55:57 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:05:40.273 18:55:57 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:40.273 18:55:57 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:40.273 18:55:57 json_config -- scripts/common.sh@368 -- # return 0 00:05:40.273 18:55:57 json_config -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:40.273 18:55:57 json_config -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:40.273 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:40.273 --rc genhtml_branch_coverage=1 00:05:40.273 --rc genhtml_function_coverage=1 00:05:40.273 --rc genhtml_legend=1 00:05:40.273 --rc geninfo_all_blocks=1 00:05:40.273 --rc geninfo_unexecuted_blocks=1 00:05:40.273 00:05:40.273 ' 00:05:40.273 18:55:57 json_config -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:40.273 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:40.273 --rc genhtml_branch_coverage=1 00:05:40.273 --rc genhtml_function_coverage=1 00:05:40.273 --rc genhtml_legend=1 00:05:40.273 --rc geninfo_all_blocks=1 00:05:40.273 --rc geninfo_unexecuted_blocks=1 00:05:40.273 00:05:40.273 ' 00:05:40.273 18:55:57 json_config -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:40.273 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:40.273 --rc genhtml_branch_coverage=1 00:05:40.273 --rc genhtml_function_coverage=1 00:05:40.273 --rc genhtml_legend=1 00:05:40.273 --rc geninfo_all_blocks=1 00:05:40.273 --rc geninfo_unexecuted_blocks=1 00:05:40.273 00:05:40.273 ' 00:05:40.274 18:55:57 json_config -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:40.274 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:40.274 --rc genhtml_branch_coverage=1 00:05:40.274 --rc genhtml_function_coverage=1 00:05:40.274 --rc genhtml_legend=1 00:05:40.274 --rc geninfo_all_blocks=1 00:05:40.274 --rc geninfo_unexecuted_blocks=1 00:05:40.274 00:05:40.274 ' 00:05:40.274 18:55:57 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:40.274 18:55:57 json_config -- nvmf/common.sh@7 -- # uname -s 00:05:40.274 18:55:57 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:40.274 18:55:57 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:40.274 18:55:57 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:40.274 18:55:57 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:40.274 18:55:57 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:40.274 18:55:57 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:40.274 18:55:57 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:40.274 18:55:57 
json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:40.274 18:55:57 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:40.274 18:55:57 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:40.274 18:55:57 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:1be40edb-665b-45cb-a0af-6c19b063a797 00:05:40.274 18:55:57 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=1be40edb-665b-45cb-a0af-6c19b063a797 00:05:40.274 18:55:57 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:40.274 18:55:57 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:40.274 18:55:57 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:40.274 18:55:57 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:40.274 18:55:57 json_config -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:40.274 18:55:57 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:05:40.274 18:55:57 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:40.274 18:55:57 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:40.274 18:55:57 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:40.274 18:55:57 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:40.274 18:55:57 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:40.274 18:55:57 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:40.274 18:55:57 json_config -- paths/export.sh@5 -- # export PATH 00:05:40.274 18:55:57 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:40.274 18:55:57 json_config -- nvmf/common.sh@51 -- # : 0 00:05:40.274 18:55:57 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:40.274 18:55:57 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:40.274 18:55:57 json_config -- 
nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:40.274 18:55:57 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:40.274 18:55:57 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:40.274 18:55:57 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:40.274 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:40.274 18:55:57 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:40.274 18:55:57 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:40.274 18:55:57 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:40.274 18:55:57 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:05:40.274 18:55:57 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:05:40.274 WARNING: No tests are enabled so not running JSON configuration tests 00:05:40.274 18:55:57 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:05:40.274 18:55:57 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:05:40.274 18:55:57 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:05:40.274 18:55:57 json_config -- json_config/json_config.sh@27 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests' 00:05:40.274 18:55:57 json_config -- json_config/json_config.sh@28 -- # exit 0 00:05:40.274 00:05:40.274 real 0m0.139s 00:05:40.274 user 0m0.084s 00:05:40.274 sys 0m0.058s 00:05:40.274 ************************************ 00:05:40.274 END TEST json_config 00:05:40.274 ************************************ 00:05:40.274 18:55:57 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:40.274 18:55:57 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:40.535 18:55:57 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:05:40.535 18:55:57 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:40.535 18:55:57 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:40.535 18:55:57 -- common/autotest_common.sh@10 -- # set +x 00:05:40.535 ************************************ 00:05:40.535 START TEST json_config_extra_key 00:05:40.535 ************************************ 00:05:40.535 18:55:57 json_config_extra_key -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:05:40.535 18:55:57 json_config_extra_key -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:40.535 18:55:57 json_config_extra_key -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:40.536 18:55:57 json_config_extra_key -- common/autotest_common.sh@1711 -- # lcov --version 00:05:40.536 18:55:57 json_config_extra_key -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:40.536 18:55:57 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:40.536 18:55:57 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:40.536 18:55:57 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:40.536 18:55:57 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:05:40.536 18:55:57 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:05:40.536 18:55:57 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:05:40.536 18:55:57 
json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:05:40.536 18:55:57 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:05:40.536 18:55:57 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:05:40.536 18:55:57 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:05:40.536 18:55:57 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:40.536 18:55:57 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:05:40.536 18:55:57 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:05:40.536 18:55:57 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:40.536 18:55:57 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:40.536 18:55:57 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:05:40.536 18:55:57 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:05:40.536 18:55:57 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:40.536 18:55:57 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:05:40.536 18:55:57 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:05:40.536 18:55:57 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:05:40.536 18:55:57 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:05:40.536 18:55:57 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:40.536 18:55:57 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:05:40.536 18:55:57 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:05:40.536 18:55:57 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:40.536 18:55:57 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:40.536 18:55:57 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:05:40.536 18:55:57 json_config_extra_key -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:40.536 18:55:57 json_config_extra_key -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:40.536 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:40.536 --rc genhtml_branch_coverage=1 00:05:40.536 --rc genhtml_function_coverage=1 00:05:40.536 --rc genhtml_legend=1 00:05:40.536 --rc geninfo_all_blocks=1 00:05:40.536 --rc geninfo_unexecuted_blocks=1 00:05:40.536 00:05:40.536 ' 00:05:40.536 18:55:57 json_config_extra_key -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:40.536 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:40.536 --rc genhtml_branch_coverage=1 00:05:40.536 --rc genhtml_function_coverage=1 00:05:40.536 --rc genhtml_legend=1 00:05:40.536 --rc geninfo_all_blocks=1 00:05:40.536 --rc geninfo_unexecuted_blocks=1 00:05:40.536 00:05:40.536 ' 00:05:40.536 18:55:57 json_config_extra_key -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:40.536 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:40.536 --rc genhtml_branch_coverage=1 00:05:40.536 --rc genhtml_function_coverage=1 00:05:40.536 --rc genhtml_legend=1 00:05:40.536 --rc geninfo_all_blocks=1 00:05:40.536 --rc geninfo_unexecuted_blocks=1 00:05:40.536 00:05:40.536 ' 00:05:40.536 18:55:57 json_config_extra_key -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:40.536 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:40.536 --rc genhtml_branch_coverage=1 00:05:40.536 --rc 
genhtml_function_coverage=1 00:05:40.536 --rc genhtml_legend=1 00:05:40.536 --rc geninfo_all_blocks=1 00:05:40.536 --rc geninfo_unexecuted_blocks=1 00:05:40.536 00:05:40.536 ' 00:05:40.536 18:55:57 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:40.536 18:55:57 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:05:40.536 18:55:57 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:40.536 18:55:57 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:40.536 18:55:57 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:40.536 18:55:57 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:40.536 18:55:57 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:40.536 18:55:57 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:40.536 18:55:57 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:40.536 18:55:57 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:40.536 18:55:57 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:40.536 18:55:57 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:40.536 18:55:57 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:1be40edb-665b-45cb-a0af-6c19b063a797 00:05:40.536 18:55:57 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=1be40edb-665b-45cb-a0af-6c19b063a797 00:05:40.536 18:55:57 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:40.536 18:55:57 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:40.536 18:55:57 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:40.536 18:55:57 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:40.536 18:55:57 json_config_extra_key -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:40.536 18:55:57 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:05:40.536 18:55:57 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:40.536 18:55:57 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:40.536 18:55:57 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:40.536 18:55:57 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:40.536 18:55:57 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:40.536 18:55:57 json_config_extra_key -- paths/export.sh@4 
-- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:40.536 18:55:57 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:05:40.536 18:55:57 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:40.536 18:55:57 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:05:40.536 18:55:57 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:40.536 18:55:57 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:40.536 18:55:57 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:40.536 18:55:57 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:40.536 18:55:58 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:40.536 18:55:58 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:40.536 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:40.536 18:55:58 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:40.536 18:55:58 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:40.536 18:55:58 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:40.536 18:55:58 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:05:40.536 18:55:58 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:05:40.536 18:55:58 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:05:40.536 18:55:58 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:05:40.536 18:55:58 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:05:40.536 18:55:58 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:05:40.536 18:55:58 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:05:40.536 18:55:58 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:05:40.536 18:55:58 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:05:40.536 18:55:58 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:40.536 18:55:58 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:05:40.536 INFO: launching applications... 
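The start helper being driven here reduces to: launch spdk_tgt with the per-app parameters and the extra-key JSON, record the PID, and poll the RPC socket until it answers. A minimal bash sketch, assuming this shape for the readiness loop (the binary path, flags, and the max_retries=100 bound are taken from the trace; the probe itself is an assumption, not the verbatim common.sh source):

    app=target
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 \
        -r "${app_socket[$app]}" --json "${configs_path[$app]}" &
    app_pid[$app]=$!
    # Assumed readiness probe: retry a cheap RPC until the socket accepts it.
    for ((i = 0; i < 100; i++)); do
        scripts/rpc.py -s "${app_socket[$app]}" rpc_get_methods &>/dev/null && break
        sleep 0.1
    done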
00:05:40.536 18:55:58 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:05:40.536 18:55:58 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:05:40.536 18:55:58 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:05:40.536 18:55:58 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:40.536 18:55:58 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:40.536 18:55:58 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:05:40.536 18:55:58 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:40.536 18:55:58 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:40.536 18:55:58 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=69678 00:05:40.536 18:55:58 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:40.536 Waiting for target to run... 00:05:40.537 18:55:58 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 69678 /var/tmp/spdk_tgt.sock 00:05:40.537 18:55:58 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 69678 ']' 00:05:40.537 18:55:58 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:05:40.537 18:55:58 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:40.537 18:55:58 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:40.537 18:55:58 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:40.537 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:40.537 18:55:58 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:40.537 18:55:58 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:40.537 [2024-12-05 18:55:58.076814] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 22.11.4 initialization... 00:05:40.537 [2024-12-05 18:55:58.077116] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69678 ] 00:05:41.105 [2024-12-05 18:55:58.412635] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:41.105 [2024-12-05 18:55:58.424651] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:41.366 18:55:58 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:41.366 18:55:58 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:05:41.366 18:55:58 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:05:41.366 00:05:41.366 INFO: shutting down applications... 00:05:41.366 18:55:58 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
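The shutdown that follows is SIGINT plus a bounded liveness poll; the (( i < 30 )) and sleep 0.5 bounds appear verbatim in the common.sh trace, while the surrounding wording is a hedged reconstruction rather than the exact source:

    pid=${app_pid[$app]}
    kill -SIGINT "$pid"
    for ((i = 0; i < 30; i++)); do
        kill -0 "$pid" 2>/dev/null || break   # kill -0 only tests liveness
        sleep 0.5
    done
    echo 'SPDK target shutdown done'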
00:05:41.366 18:55:58 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:05:41.366 18:55:58 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:05:41.366 18:55:58 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:41.366 18:55:58 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 69678 ]] 00:05:41.366 18:55:58 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 69678 00:05:41.366 18:55:58 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:41.366 18:55:58 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:41.366 18:55:58 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 69678 00:05:41.366 18:55:58 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:41.933 18:55:59 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:41.933 18:55:59 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:41.933 18:55:59 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 69678 00:05:41.933 18:55:59 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:41.933 18:55:59 json_config_extra_key -- json_config/common.sh@43 -- # break 00:05:41.933 18:55:59 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:41.933 18:55:59 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:41.933 SPDK target shutdown done 00:05:41.933 18:55:59 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:05:41.933 Success 00:05:41.933 00:05:41.933 real 0m1.557s 00:05:41.933 user 0m1.247s 00:05:41.933 sys 0m0.362s 00:05:41.933 18:55:59 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:41.933 18:55:59 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:41.933 ************************************ 00:05:41.933 END TEST json_config_extra_key 00:05:41.933 ************************************ 00:05:41.933 18:55:59 -- spdk/autotest.sh@161 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:41.933 18:55:59 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:41.933 18:55:59 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:41.933 18:55:59 -- common/autotest_common.sh@10 -- # set +x 00:05:41.933 ************************************ 00:05:41.933 START TEST alias_rpc 00:05:41.933 ************************************ 00:05:41.933 18:55:59 alias_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:42.192 * Looking for test storage... 
00:05:42.192 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:05:42.192 18:55:59 alias_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:42.192 18:55:59 alias_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:05:42.192 18:55:59 alias_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:42.192 18:55:59 alias_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:42.192 18:55:59 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:42.192 18:55:59 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:42.192 18:55:59 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:42.192 18:55:59 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:42.192 18:55:59 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:42.192 18:55:59 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:42.192 18:55:59 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:42.192 18:55:59 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:42.192 18:55:59 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:42.192 18:55:59 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:42.192 18:55:59 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:42.192 18:55:59 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:42.192 18:55:59 alias_rpc -- scripts/common.sh@345 -- # : 1 00:05:42.192 18:55:59 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:42.192 18:55:59 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:42.192 18:55:59 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:05:42.192 18:55:59 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:05:42.192 18:55:59 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:42.192 18:55:59 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:05:42.192 18:55:59 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:42.192 18:55:59 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:05:42.192 18:55:59 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:05:42.192 18:55:59 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:42.192 18:55:59 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:05:42.192 18:55:59 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:42.192 18:55:59 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:42.192 18:55:59 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:42.192 18:55:59 alias_rpc -- scripts/common.sh@368 -- # return 0 00:05:42.192 18:55:59 alias_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:42.192 18:55:59 alias_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:42.192 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:42.192 --rc genhtml_branch_coverage=1 00:05:42.192 --rc genhtml_function_coverage=1 00:05:42.192 --rc genhtml_legend=1 00:05:42.192 --rc geninfo_all_blocks=1 00:05:42.192 --rc geninfo_unexecuted_blocks=1 00:05:42.192 00:05:42.192 ' 00:05:42.192 18:55:59 alias_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:42.192 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:42.192 --rc genhtml_branch_coverage=1 00:05:42.192 --rc genhtml_function_coverage=1 00:05:42.192 --rc genhtml_legend=1 00:05:42.192 --rc geninfo_all_blocks=1 00:05:42.192 --rc geninfo_unexecuted_blocks=1 00:05:42.192 00:05:42.192 ' 00:05:42.192 18:55:59 alias_rpc -- 
common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:42.192 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:42.192 --rc genhtml_branch_coverage=1 00:05:42.192 --rc genhtml_function_coverage=1 00:05:42.192 --rc genhtml_legend=1 00:05:42.192 --rc geninfo_all_blocks=1 00:05:42.192 --rc geninfo_unexecuted_blocks=1 00:05:42.192 00:05:42.192 ' 00:05:42.192 18:55:59 alias_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:42.192 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:42.192 --rc genhtml_branch_coverage=1 00:05:42.192 --rc genhtml_function_coverage=1 00:05:42.192 --rc genhtml_legend=1 00:05:42.192 --rc geninfo_all_blocks=1 00:05:42.192 --rc geninfo_unexecuted_blocks=1 00:05:42.192 00:05:42.192 ' 00:05:42.192 18:55:59 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:42.192 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:42.192 18:55:59 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=69751 00:05:42.192 18:55:59 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 69751 00:05:42.192 18:55:59 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 69751 ']' 00:05:42.192 18:55:59 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:42.192 18:55:59 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:42.192 18:55:59 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:42.192 18:55:59 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:42.192 18:55:59 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:42.192 18:55:59 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:42.192 [2024-12-05 18:55:59.667754] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 22.11.4 initialization... 
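The alias_rpc test then pokes the freshly started target through plain rpc.py calls; the trace shows load_config -i against /var/tmp/spdk.sock. As an illustration only (the empty config fed on stdin is hypothetical, and reading -i as "include deprecated method aliases" is an assumption, not taken from the log):

    echo '{"subsystems": []}' | \
        /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i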
00:05:42.192 [2024-12-05 18:55:59.667866] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69751 ] 00:05:42.450 [2024-12-05 18:55:59.813810] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:42.450 [2024-12-05 18:55:59.833270] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:43.016 18:56:00 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:43.016 18:56:00 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:43.016 18:56:00 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:05:43.273 18:56:00 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 69751 00:05:43.273 18:56:00 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 69751 ']' 00:05:43.273 18:56:00 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 69751 00:05:43.273 18:56:00 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:05:43.273 18:56:00 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:43.273 18:56:00 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69751 00:05:43.273 killing process with pid 69751 00:05:43.273 18:56:00 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:43.273 18:56:00 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:43.273 18:56:00 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69751' 00:05:43.273 18:56:00 alias_rpc -- common/autotest_common.sh@973 -- # kill 69751 00:05:43.273 18:56:00 alias_rpc -- common/autotest_common.sh@978 -- # wait 69751 00:05:43.531 ************************************ 00:05:43.531 END TEST alias_rpc 00:05:43.531 ************************************ 00:05:43.531 00:05:43.531 real 0m1.534s 00:05:43.531 user 0m1.629s 00:05:43.531 sys 0m0.387s 00:05:43.531 18:56:00 alias_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:43.531 18:56:00 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:43.531 18:56:01 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:05:43.531 18:56:01 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:05:43.531 18:56:01 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:43.531 18:56:01 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:43.531 18:56:01 -- common/autotest_common.sh@10 -- # set +x 00:05:43.531 ************************************ 00:05:43.531 START TEST spdkcli_tcp 00:05:43.531 ************************************ 00:05:43.531 18:56:01 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:05:43.531 * Looking for test storage... 
00:05:43.531 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:05:43.531 18:56:01 spdkcli_tcp -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:43.531 18:56:01 spdkcli_tcp -- common/autotest_common.sh@1711 -- # lcov --version 00:05:43.531 18:56:01 spdkcli_tcp -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:43.789 18:56:01 spdkcli_tcp -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:43.789 18:56:01 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:43.789 18:56:01 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:43.789 18:56:01 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:43.789 18:56:01 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:05:43.789 18:56:01 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:05:43.789 18:56:01 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:05:43.789 18:56:01 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:05:43.789 18:56:01 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:05:43.789 18:56:01 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:05:43.789 18:56:01 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:05:43.789 18:56:01 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:43.789 18:56:01 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:05:43.789 18:56:01 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:05:43.789 18:56:01 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:43.789 18:56:01 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:43.789 18:56:01 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:05:43.789 18:56:01 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:05:43.789 18:56:01 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:43.789 18:56:01 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:05:43.789 18:56:01 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:05:43.789 18:56:01 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:05:43.789 18:56:01 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:05:43.789 18:56:01 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:43.789 18:56:01 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:05:43.789 18:56:01 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:05:43.789 18:56:01 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:43.789 18:56:01 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:43.789 18:56:01 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:05:43.789 18:56:01 spdkcli_tcp -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:43.789 18:56:01 spdkcli_tcp -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:43.789 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:43.789 --rc genhtml_branch_coverage=1 00:05:43.789 --rc genhtml_function_coverage=1 00:05:43.789 --rc genhtml_legend=1 00:05:43.789 --rc geninfo_all_blocks=1 00:05:43.789 --rc geninfo_unexecuted_blocks=1 00:05:43.789 00:05:43.789 ' 00:05:43.789 18:56:01 spdkcli_tcp -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:43.789 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:43.789 --rc genhtml_branch_coverage=1 00:05:43.789 --rc genhtml_function_coverage=1 00:05:43.789 --rc genhtml_legend=1 00:05:43.789 --rc geninfo_all_blocks=1 00:05:43.789 --rc geninfo_unexecuted_blocks=1 00:05:43.789 
00:05:43.789 ' 00:05:43.789 18:56:01 spdkcli_tcp -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:43.789 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:43.789 --rc genhtml_branch_coverage=1 00:05:43.789 --rc genhtml_function_coverage=1 00:05:43.789 --rc genhtml_legend=1 00:05:43.789 --rc geninfo_all_blocks=1 00:05:43.789 --rc geninfo_unexecuted_blocks=1 00:05:43.789 00:05:43.789 ' 00:05:43.789 18:56:01 spdkcli_tcp -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:43.790 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:43.790 --rc genhtml_branch_coverage=1 00:05:43.790 --rc genhtml_function_coverage=1 00:05:43.790 --rc genhtml_legend=1 00:05:43.790 --rc geninfo_all_blocks=1 00:05:43.790 --rc geninfo_unexecuted_blocks=1 00:05:43.790 00:05:43.790 ' 00:05:43.790 18:56:01 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:05:43.790 18:56:01 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:05:43.790 18:56:01 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:05:43.790 18:56:01 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:05:43.790 18:56:01 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:05:43.790 18:56:01 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:05:43.790 18:56:01 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:05:43.790 18:56:01 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:43.790 18:56:01 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:43.790 18:56:01 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=69831 00:05:43.790 18:56:01 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 69831 00:05:43.790 18:56:01 spdkcli_tcp -- common/autotest_common.sh@835 -- # '[' -z 69831 ']' 00:05:43.790 18:56:01 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:05:43.790 18:56:01 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:43.790 18:56:01 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:43.790 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:43.790 18:56:01 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:43.790 18:56:01 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:43.790 18:56:01 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:43.790 [2024-12-05 18:56:01.233690] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 22.11.4 initialization... 
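The TCP leg is the point of this test: socat publishes the target's UNIX-domain RPC socket on 127.0.0.1:9998 so that rpc.py can reach it over plain TCP. A sketch assembled from the two commands visible in the trace below (the backgrounding and cleanup lines are assumptions):

    socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock &
    socat_pid=$!
    # Flags as invoked in the log: -r 100 and -t 2 read as retry count and
    # timeout, with -s/-p giving a TCP address instead of a socket path.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 \
        -s 127.0.0.1 -p 9998 rpc_get_methods
    kill "$socat_pid"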
00:05:43.790 [2024-12-05 18:56:01.233807] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69831 ] 00:05:44.047 [2024-12-05 18:56:01.381492] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:44.047 [2024-12-05 18:56:01.402246] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:44.047 [2024-12-05 18:56:01.402347] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:44.614 18:56:02 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:44.614 18:56:02 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:05:44.614 18:56:02 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=69848 00:05:44.614 18:56:02 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:05:44.614 18:56:02 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:05:44.873 [ 00:05:44.873 "bdev_malloc_delete", 00:05:44.873 "bdev_malloc_create", 00:05:44.873 "bdev_null_resize", 00:05:44.873 "bdev_null_delete", 00:05:44.873 "bdev_null_create", 00:05:44.873 "bdev_nvme_cuse_unregister", 00:05:44.873 "bdev_nvme_cuse_register", 00:05:44.873 "bdev_opal_new_user", 00:05:44.873 "bdev_opal_set_lock_state", 00:05:44.873 "bdev_opal_delete", 00:05:44.873 "bdev_opal_get_info", 00:05:44.873 "bdev_opal_create", 00:05:44.873 "bdev_nvme_opal_revert", 00:05:44.873 "bdev_nvme_opal_init", 00:05:44.873 "bdev_nvme_send_cmd", 00:05:44.873 "bdev_nvme_set_keys", 00:05:44.873 "bdev_nvme_get_path_iostat", 00:05:44.873 "bdev_nvme_get_mdns_discovery_info", 00:05:44.873 "bdev_nvme_stop_mdns_discovery", 00:05:44.873 "bdev_nvme_start_mdns_discovery", 00:05:44.873 "bdev_nvme_set_multipath_policy", 00:05:44.873 "bdev_nvme_set_preferred_path", 00:05:44.873 "bdev_nvme_get_io_paths", 00:05:44.873 "bdev_nvme_remove_error_injection", 00:05:44.873 "bdev_nvme_add_error_injection", 00:05:44.873 "bdev_nvme_get_discovery_info", 00:05:44.873 "bdev_nvme_stop_discovery", 00:05:44.873 "bdev_nvme_start_discovery", 00:05:44.873 "bdev_nvme_get_controller_health_info", 00:05:44.873 "bdev_nvme_disable_controller", 00:05:44.873 "bdev_nvme_enable_controller", 00:05:44.873 "bdev_nvme_reset_controller", 00:05:44.873 "bdev_nvme_get_transport_statistics", 00:05:44.873 "bdev_nvme_apply_firmware", 00:05:44.873 "bdev_nvme_detach_controller", 00:05:44.873 "bdev_nvme_get_controllers", 00:05:44.873 "bdev_nvme_attach_controller", 00:05:44.873 "bdev_nvme_set_hotplug", 00:05:44.873 "bdev_nvme_set_options", 00:05:44.873 "bdev_passthru_delete", 00:05:44.873 "bdev_passthru_create", 00:05:44.873 "bdev_lvol_set_parent_bdev", 00:05:44.873 "bdev_lvol_set_parent", 00:05:44.873 "bdev_lvol_check_shallow_copy", 00:05:44.873 "bdev_lvol_start_shallow_copy", 00:05:44.873 "bdev_lvol_grow_lvstore", 00:05:44.873 "bdev_lvol_get_lvols", 00:05:44.873 "bdev_lvol_get_lvstores", 00:05:44.873 "bdev_lvol_delete", 00:05:44.873 "bdev_lvol_set_read_only", 00:05:44.873 "bdev_lvol_resize", 00:05:44.873 "bdev_lvol_decouple_parent", 00:05:44.873 "bdev_lvol_inflate", 00:05:44.873 "bdev_lvol_rename", 00:05:44.873 "bdev_lvol_clone_bdev", 00:05:44.873 "bdev_lvol_clone", 00:05:44.873 "bdev_lvol_snapshot", 00:05:44.873 "bdev_lvol_create", 00:05:44.873 "bdev_lvol_delete_lvstore", 00:05:44.873 "bdev_lvol_rename_lvstore", 00:05:44.873 
"bdev_lvol_create_lvstore", 00:05:44.873 "bdev_raid_set_options", 00:05:44.873 "bdev_raid_remove_base_bdev", 00:05:44.873 "bdev_raid_add_base_bdev", 00:05:44.873 "bdev_raid_delete", 00:05:44.873 "bdev_raid_create", 00:05:44.873 "bdev_raid_get_bdevs", 00:05:44.874 "bdev_error_inject_error", 00:05:44.874 "bdev_error_delete", 00:05:44.874 "bdev_error_create", 00:05:44.874 "bdev_split_delete", 00:05:44.874 "bdev_split_create", 00:05:44.874 "bdev_delay_delete", 00:05:44.874 "bdev_delay_create", 00:05:44.874 "bdev_delay_update_latency", 00:05:44.874 "bdev_zone_block_delete", 00:05:44.874 "bdev_zone_block_create", 00:05:44.874 "blobfs_create", 00:05:44.874 "blobfs_detect", 00:05:44.874 "blobfs_set_cache_size", 00:05:44.874 "bdev_xnvme_delete", 00:05:44.874 "bdev_xnvme_create", 00:05:44.874 "bdev_aio_delete", 00:05:44.874 "bdev_aio_rescan", 00:05:44.874 "bdev_aio_create", 00:05:44.874 "bdev_ftl_set_property", 00:05:44.874 "bdev_ftl_get_properties", 00:05:44.874 "bdev_ftl_get_stats", 00:05:44.874 "bdev_ftl_unmap", 00:05:44.874 "bdev_ftl_unload", 00:05:44.874 "bdev_ftl_delete", 00:05:44.874 "bdev_ftl_load", 00:05:44.874 "bdev_ftl_create", 00:05:44.874 "bdev_virtio_attach_controller", 00:05:44.874 "bdev_virtio_scsi_get_devices", 00:05:44.874 "bdev_virtio_detach_controller", 00:05:44.874 "bdev_virtio_blk_set_hotplug", 00:05:44.874 "bdev_iscsi_delete", 00:05:44.874 "bdev_iscsi_create", 00:05:44.874 "bdev_iscsi_set_options", 00:05:44.874 "accel_error_inject_error", 00:05:44.874 "ioat_scan_accel_module", 00:05:44.874 "dsa_scan_accel_module", 00:05:44.874 "iaa_scan_accel_module", 00:05:44.874 "keyring_file_remove_key", 00:05:44.874 "keyring_file_add_key", 00:05:44.874 "keyring_linux_set_options", 00:05:44.874 "fsdev_aio_delete", 00:05:44.874 "fsdev_aio_create", 00:05:44.874 "iscsi_get_histogram", 00:05:44.874 "iscsi_enable_histogram", 00:05:44.874 "iscsi_set_options", 00:05:44.874 "iscsi_get_auth_groups", 00:05:44.874 "iscsi_auth_group_remove_secret", 00:05:44.874 "iscsi_auth_group_add_secret", 00:05:44.874 "iscsi_delete_auth_group", 00:05:44.874 "iscsi_create_auth_group", 00:05:44.874 "iscsi_set_discovery_auth", 00:05:44.874 "iscsi_get_options", 00:05:44.874 "iscsi_target_node_request_logout", 00:05:44.874 "iscsi_target_node_set_redirect", 00:05:44.874 "iscsi_target_node_set_auth", 00:05:44.874 "iscsi_target_node_add_lun", 00:05:44.874 "iscsi_get_stats", 00:05:44.874 "iscsi_get_connections", 00:05:44.874 "iscsi_portal_group_set_auth", 00:05:44.874 "iscsi_start_portal_group", 00:05:44.874 "iscsi_delete_portal_group", 00:05:44.874 "iscsi_create_portal_group", 00:05:44.874 "iscsi_get_portal_groups", 00:05:44.874 "iscsi_delete_target_node", 00:05:44.874 "iscsi_target_node_remove_pg_ig_maps", 00:05:44.874 "iscsi_target_node_add_pg_ig_maps", 00:05:44.874 "iscsi_create_target_node", 00:05:44.874 "iscsi_get_target_nodes", 00:05:44.874 "iscsi_delete_initiator_group", 00:05:44.874 "iscsi_initiator_group_remove_initiators", 00:05:44.874 "iscsi_initiator_group_add_initiators", 00:05:44.874 "iscsi_create_initiator_group", 00:05:44.874 "iscsi_get_initiator_groups", 00:05:44.874 "nvmf_set_crdt", 00:05:44.874 "nvmf_set_config", 00:05:44.874 "nvmf_set_max_subsystems", 00:05:44.874 "nvmf_stop_mdns_prr", 00:05:44.874 "nvmf_publish_mdns_prr", 00:05:44.874 "nvmf_subsystem_get_listeners", 00:05:44.874 "nvmf_subsystem_get_qpairs", 00:05:44.874 "nvmf_subsystem_get_controllers", 00:05:44.874 "nvmf_get_stats", 00:05:44.874 "nvmf_get_transports", 00:05:44.874 "nvmf_create_transport", 00:05:44.874 "nvmf_get_targets", 00:05:44.874 
"nvmf_delete_target", 00:05:44.874 "nvmf_create_target", 00:05:44.874 "nvmf_subsystem_allow_any_host", 00:05:44.874 "nvmf_subsystem_set_keys", 00:05:44.874 "nvmf_subsystem_remove_host", 00:05:44.874 "nvmf_subsystem_add_host", 00:05:44.874 "nvmf_ns_remove_host", 00:05:44.874 "nvmf_ns_add_host", 00:05:44.874 "nvmf_subsystem_remove_ns", 00:05:44.874 "nvmf_subsystem_set_ns_ana_group", 00:05:44.874 "nvmf_subsystem_add_ns", 00:05:44.874 "nvmf_subsystem_listener_set_ana_state", 00:05:44.874 "nvmf_discovery_get_referrals", 00:05:44.874 "nvmf_discovery_remove_referral", 00:05:44.874 "nvmf_discovery_add_referral", 00:05:44.874 "nvmf_subsystem_remove_listener", 00:05:44.874 "nvmf_subsystem_add_listener", 00:05:44.874 "nvmf_delete_subsystem", 00:05:44.874 "nvmf_create_subsystem", 00:05:44.874 "nvmf_get_subsystems", 00:05:44.874 "env_dpdk_get_mem_stats", 00:05:44.874 "nbd_get_disks", 00:05:44.874 "nbd_stop_disk", 00:05:44.874 "nbd_start_disk", 00:05:44.874 "ublk_recover_disk", 00:05:44.874 "ublk_get_disks", 00:05:44.874 "ublk_stop_disk", 00:05:44.874 "ublk_start_disk", 00:05:44.874 "ublk_destroy_target", 00:05:44.874 "ublk_create_target", 00:05:44.874 "virtio_blk_create_transport", 00:05:44.874 "virtio_blk_get_transports", 00:05:44.874 "vhost_controller_set_coalescing", 00:05:44.874 "vhost_get_controllers", 00:05:44.874 "vhost_delete_controller", 00:05:44.874 "vhost_create_blk_controller", 00:05:44.874 "vhost_scsi_controller_remove_target", 00:05:44.874 "vhost_scsi_controller_add_target", 00:05:44.874 "vhost_start_scsi_controller", 00:05:44.874 "vhost_create_scsi_controller", 00:05:44.874 "thread_set_cpumask", 00:05:44.874 "scheduler_set_options", 00:05:44.874 "framework_get_governor", 00:05:44.874 "framework_get_scheduler", 00:05:44.874 "framework_set_scheduler", 00:05:44.874 "framework_get_reactors", 00:05:44.874 "thread_get_io_channels", 00:05:44.874 "thread_get_pollers", 00:05:44.874 "thread_get_stats", 00:05:44.874 "framework_monitor_context_switch", 00:05:44.874 "spdk_kill_instance", 00:05:44.874 "log_enable_timestamps", 00:05:44.874 "log_get_flags", 00:05:44.874 "log_clear_flag", 00:05:44.874 "log_set_flag", 00:05:44.874 "log_get_level", 00:05:44.874 "log_set_level", 00:05:44.874 "log_get_print_level", 00:05:44.874 "log_set_print_level", 00:05:44.874 "framework_enable_cpumask_locks", 00:05:44.874 "framework_disable_cpumask_locks", 00:05:44.874 "framework_wait_init", 00:05:44.874 "framework_start_init", 00:05:44.874 "scsi_get_devices", 00:05:44.874 "bdev_get_histogram", 00:05:44.874 "bdev_enable_histogram", 00:05:44.874 "bdev_set_qos_limit", 00:05:44.874 "bdev_set_qd_sampling_period", 00:05:44.874 "bdev_get_bdevs", 00:05:44.874 "bdev_reset_iostat", 00:05:44.874 "bdev_get_iostat", 00:05:44.874 "bdev_examine", 00:05:44.874 "bdev_wait_for_examine", 00:05:44.874 "bdev_set_options", 00:05:44.874 "accel_get_stats", 00:05:44.874 "accel_set_options", 00:05:44.874 "accel_set_driver", 00:05:44.874 "accel_crypto_key_destroy", 00:05:44.874 "accel_crypto_keys_get", 00:05:44.874 "accel_crypto_key_create", 00:05:44.874 "accel_assign_opc", 00:05:44.874 "accel_get_module_info", 00:05:44.874 "accel_get_opc_assignments", 00:05:44.874 "vmd_rescan", 00:05:44.874 "vmd_remove_device", 00:05:44.874 "vmd_enable", 00:05:44.874 "sock_get_default_impl", 00:05:44.874 "sock_set_default_impl", 00:05:44.874 "sock_impl_set_options", 00:05:44.874 "sock_impl_get_options", 00:05:44.874 "iobuf_get_stats", 00:05:44.874 "iobuf_set_options", 00:05:44.874 "keyring_get_keys", 00:05:44.874 "framework_get_pci_devices", 00:05:44.874 
"framework_get_config", 00:05:44.874 "framework_get_subsystems", 00:05:44.874 "fsdev_set_opts", 00:05:44.874 "fsdev_get_opts", 00:05:44.874 "trace_get_info", 00:05:44.874 "trace_get_tpoint_group_mask", 00:05:44.874 "trace_disable_tpoint_group", 00:05:44.874 "trace_enable_tpoint_group", 00:05:44.874 "trace_clear_tpoint_mask", 00:05:44.874 "trace_set_tpoint_mask", 00:05:44.874 "notify_get_notifications", 00:05:44.874 "notify_get_types", 00:05:44.874 "spdk_get_version", 00:05:44.874 "rpc_get_methods" 00:05:44.874 ] 00:05:44.874 18:56:02 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:05:44.874 18:56:02 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:44.874 18:56:02 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:44.874 18:56:02 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:05:44.874 18:56:02 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 69831 00:05:44.874 18:56:02 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 69831 ']' 00:05:44.874 18:56:02 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 69831 00:05:44.874 18:56:02 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 00:05:44.874 18:56:02 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:44.874 18:56:02 spdkcli_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69831 00:05:44.874 killing process with pid 69831 00:05:44.874 18:56:02 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:44.874 18:56:02 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:44.874 18:56:02 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69831' 00:05:44.874 18:56:02 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 69831 00:05:44.874 18:56:02 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 69831 00:05:45.168 ************************************ 00:05:45.168 END TEST spdkcli_tcp 00:05:45.168 ************************************ 00:05:45.168 00:05:45.168 real 0m1.544s 00:05:45.168 user 0m2.779s 00:05:45.168 sys 0m0.381s 00:05:45.168 18:56:02 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:45.168 18:56:02 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:45.168 18:56:02 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:45.168 18:56:02 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:45.168 18:56:02 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:45.168 18:56:02 -- common/autotest_common.sh@10 -- # set +x 00:05:45.168 ************************************ 00:05:45.168 START TEST dpdk_mem_utility 00:05:45.168 ************************************ 00:05:45.168 18:56:02 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:45.168 * Looking for test storage... 
00:05:45.168 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:05:45.168 18:56:02 dpdk_mem_utility -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:45.168 18:56:02 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:45.168 18:56:02 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # lcov --version 00:05:45.427 18:56:02 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:45.427 18:56:02 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:45.427 18:56:02 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:45.427 18:56:02 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:45.427 18:56:02 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:05:45.427 18:56:02 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:05:45.427 18:56:02 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:05:45.427 18:56:02 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:05:45.427 18:56:02 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:05:45.427 18:56:02 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:05:45.427 18:56:02 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:05:45.427 18:56:02 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:45.427 18:56:02 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:05:45.427 18:56:02 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:05:45.427 18:56:02 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:45.427 18:56:02 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:45.427 18:56:02 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:05:45.427 18:56:02 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:05:45.427 18:56:02 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:45.427 18:56:02 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:05:45.427 18:56:02 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:05:45.427 18:56:02 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:05:45.427 18:56:02 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:05:45.427 18:56:02 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:45.427 18:56:02 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:05:45.427 18:56:02 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:05:45.427 18:56:02 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:45.427 18:56:02 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:45.427 18:56:02 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:05:45.427 18:56:02 dpdk_mem_utility -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:45.427 18:56:02 dpdk_mem_utility -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:45.427 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:45.427 --rc genhtml_branch_coverage=1 00:05:45.427 --rc genhtml_function_coverage=1 00:05:45.427 --rc genhtml_legend=1 00:05:45.427 --rc geninfo_all_blocks=1 00:05:45.427 --rc geninfo_unexecuted_blocks=1 00:05:45.427 00:05:45.427 ' 00:05:45.427 18:56:02 dpdk_mem_utility -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:45.427 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:45.427 --rc 
genhtml_branch_coverage=1 00:05:45.427 --rc genhtml_function_coverage=1 00:05:45.427 --rc genhtml_legend=1 00:05:45.427 --rc geninfo_all_blocks=1 00:05:45.427 --rc geninfo_unexecuted_blocks=1 00:05:45.427 00:05:45.427 ' 00:05:45.427 18:56:02 dpdk_mem_utility -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:45.427 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:45.427 --rc genhtml_branch_coverage=1 00:05:45.427 --rc genhtml_function_coverage=1 00:05:45.427 --rc genhtml_legend=1 00:05:45.427 --rc geninfo_all_blocks=1 00:05:45.427 --rc geninfo_unexecuted_blocks=1 00:05:45.427 00:05:45.427 ' 00:05:45.427 18:56:02 dpdk_mem_utility -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:45.427 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:45.427 --rc genhtml_branch_coverage=1 00:05:45.427 --rc genhtml_function_coverage=1 00:05:45.427 --rc genhtml_legend=1 00:05:45.427 --rc geninfo_all_blocks=1 00:05:45.427 --rc geninfo_unexecuted_blocks=1 00:05:45.427 00:05:45.427 ' 00:05:45.427 18:56:02 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:05:45.427 18:56:02 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=69925 00:05:45.427 18:56:02 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 69925 00:05:45.427 18:56:02 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:45.427 18:56:02 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 69925 ']' 00:05:45.427 18:56:02 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:45.427 18:56:02 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:45.427 18:56:02 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:45.427 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:45.427 18:56:02 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:45.427 18:56:02 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:45.427 [2024-12-05 18:56:02.823997] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 22.11.4 initialization... 
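The flow exercised just below pairs one RPC with one parser script: env_dpdk_get_mem_stats makes the target write its DPDK heap and mempool statistics to /tmp/spdk_mem_dump.txt, and scripts/dpdk_mem_info.py renders that dump as the heap/mempool/memzone summary that follows. A hedged sketch of the invocations the test script makes (paths from the log; the meaning of -m 0 is not asserted here):

    /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats
    /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py          # full summary
    /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0     # variant seen in the log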
00:05:45.427 [2024-12-05 18:56:02.824264] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69925 ] 00:05:45.427 [2024-12-05 18:56:02.966502] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:45.685 [2024-12-05 18:56:02.985943] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:46.255 18:56:03 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:46.255 18:56:03 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:05:46.255 18:56:03 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:46.255 18:56:03 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:46.255 18:56:03 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:46.255 18:56:03 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:46.255 { 00:05:46.255 "filename": "/tmp/spdk_mem_dump.txt" 00:05:46.255 } 00:05:46.255 18:56:03 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:46.255 18:56:03 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:05:46.255 DPDK memory size 818.000000 MiB in 1 heap(s) 00:05:46.255 1 heaps totaling size 818.000000 MiB 00:05:46.255 size: 818.000000 MiB heap id: 0 00:05:46.255 end heaps---------- 00:05:46.255 9 mempools totaling size 603.782043 MiB 00:05:46.255 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:05:46.255 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:46.255 size: 100.555481 MiB name: bdev_io_69925 00:05:46.255 size: 50.003479 MiB name: msgpool_69925 00:05:46.255 size: 36.509338 MiB name: fsdev_io_69925 00:05:46.255 size: 21.763794 MiB name: PDU_Pool 00:05:46.255 size: 19.513306 MiB name: SCSI_TASK_Pool 00:05:46.255 size: 4.133484 MiB name: evtpool_69925 00:05:46.255 size: 0.026123 MiB name: Session_Pool 00:05:46.255 end mempools------- 00:05:46.255 6 memzones totaling size 4.142822 MiB 00:05:46.255 size: 1.000366 MiB name: RG_ring_0_69925 00:05:46.255 size: 1.000366 MiB name: RG_ring_1_69925 00:05:46.255 size: 1.000366 MiB name: RG_ring_4_69925 00:05:46.255 size: 1.000366 MiB name: RG_ring_5_69925 00:05:46.255 size: 0.125366 MiB name: RG_ring_2_69925 00:05:46.255 size: 0.015991 MiB name: RG_ring_3_69925 00:05:46.255 end memzones------- 00:05:46.255 18:56:03 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:05:46.255 heap id: 0 total size: 818.000000 MiB number of busy elements: 317 number of free elements: 15 00:05:46.255 list of free elements. 
size: 10.802490 MiB 00:05:46.255 element at address: 0x200019200000 with size: 0.999878 MiB 00:05:46.255 element at address: 0x200019400000 with size: 0.999878 MiB 00:05:46.255 element at address: 0x200032000000 with size: 0.994446 MiB 00:05:46.255 element at address: 0x200000400000 with size: 0.993958 MiB 00:05:46.255 element at address: 0x200006400000 with size: 0.959839 MiB 00:05:46.255 element at address: 0x200012c00000 with size: 0.944275 MiB 00:05:46.255 element at address: 0x200019600000 with size: 0.936584 MiB 00:05:46.255 element at address: 0x200000200000 with size: 0.717346 MiB 00:05:46.255 element at address: 0x20001ae00000 with size: 0.567688 MiB 00:05:46.255 element at address: 0x20000a600000 with size: 0.488892 MiB 00:05:46.255 element at address: 0x200000c00000 with size: 0.486267 MiB 00:05:46.255 element at address: 0x200019800000 with size: 0.485657 MiB 00:05:46.255 element at address: 0x200003e00000 with size: 0.480286 MiB 00:05:46.255 element at address: 0x200028200000 with size: 0.395752 MiB 00:05:46.255 element at address: 0x200000800000 with size: 0.351746 MiB 00:05:46.255 list of standard malloc elements. size: 199.268616 MiB 00:05:46.255 element at address: 0x20000a7fff80 with size: 132.000122 MiB 00:05:46.255 element at address: 0x2000065fff80 with size: 64.000122 MiB 00:05:46.255 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:05:46.255 element at address: 0x2000194fff80 with size: 1.000122 MiB 00:05:46.255 element at address: 0x2000196fff80 with size: 1.000122 MiB 00:05:46.255 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:05:46.255 element at address: 0x2000196eff00 with size: 0.062622 MiB 00:05:46.255 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:05:46.255 element at address: 0x2000196efdc0 with size: 0.000305 MiB 00:05:46.255 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:05:46.255 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:05:46.255 element at address: 0x2000004fe740 with size: 0.000183 MiB 00:05:46.255 element at address: 0x2000004fe800 with size: 0.000183 MiB 00:05:46.255 element at address: 0x2000004fe8c0 with size: 0.000183 MiB 00:05:46.255 element at address: 0x2000004fe980 with size: 0.000183 MiB 00:05:46.255 element at address: 0x2000004fea40 with size: 0.000183 MiB 00:05:46.255 element at address: 0x2000004feb00 with size: 0.000183 MiB 00:05:46.255 element at address: 0x2000004febc0 with size: 0.000183 MiB 00:05:46.255 element at address: 0x2000004fec80 with size: 0.000183 MiB 00:05:46.255 element at address: 0x2000004fed40 with size: 0.000183 MiB 00:05:46.255 element at address: 0x2000004fee00 with size: 0.000183 MiB 00:05:46.255 element at address: 0x2000004feec0 with size: 0.000183 MiB 00:05:46.255 element at address: 0x2000004fef80 with size: 0.000183 MiB 00:05:46.255 element at address: 0x2000004ff040 with size: 0.000183 MiB 00:05:46.255 element at address: 0x2000004ff100 with size: 0.000183 MiB 00:05:46.255 element at address: 0x2000004ff1c0 with size: 0.000183 MiB 00:05:46.255 element at address: 0x2000004ff280 with size: 0.000183 MiB 00:05:46.255 element at address: 0x2000004ff340 with size: 0.000183 MiB 00:05:46.255 element at address: 0x2000004ff400 with size: 0.000183 MiB 00:05:46.255 element at address: 0x2000004ff4c0 with size: 0.000183 MiB 00:05:46.255 element at address: 0x2000004ff580 with size: 0.000183 MiB 00:05:46.255 element at address: 0x2000004ff640 with size: 0.000183 MiB 00:05:46.255 element at address: 0x2000004ff700 with size: 0.000183 MiB 
00:05:46.255 element at address: 0x2000004ff7c0 with size: 0.000183 MiB 00:05:46.255 element at address: 0x2000004ff880 with size: 0.000183 MiB 00:05:46.255 element at address: 0x2000004ff940 with size: 0.000183 MiB 00:05:46.255 element at address: 0x2000004ffa00 with size: 0.000183 MiB 00:05:46.255 element at address: 0x2000004ffac0 with size: 0.000183 MiB 00:05:46.255 element at address: 0x2000004ffcc0 with size: 0.000183 MiB 00:05:46.255 element at address: 0x2000004ffd80 with size: 0.000183 MiB 00:05:46.255 element at address: 0x2000004ffe40 with size: 0.000183 MiB 00:05:46.255 element at address: 0x20000085a0c0 with size: 0.000183 MiB 00:05:46.255 element at address: 0x20000085a2c0 with size: 0.000183 MiB 00:05:46.255 element at address: 0x20000085e580 with size: 0.000183 MiB 00:05:46.255 element at address: 0x20000087e840 with size: 0.000183 MiB 00:05:46.255 element at address: 0x20000087e900 with size: 0.000183 MiB 00:05:46.255 element at address: 0x20000087e9c0 with size: 0.000183 MiB 00:05:46.255 element at address: 0x20000087ea80 with size: 0.000183 MiB 00:05:46.255 element at address: 0x20000087eb40 with size: 0.000183 MiB 00:05:46.255 element at address: 0x20000087ec00 with size: 0.000183 MiB 00:05:46.255 element at address: 0x20000087ecc0 with size: 0.000183 MiB 00:05:46.255 element at address: 0x20000087ed80 with size: 0.000183 MiB 00:05:46.255 element at address: 0x20000087ee40 with size: 0.000183 MiB 00:05:46.255 element at address: 0x20000087ef00 with size: 0.000183 MiB 00:05:46.255 element at address: 0x20000087efc0 with size: 0.000183 MiB 00:05:46.255 element at address: 0x20000087f080 with size: 0.000183 MiB 00:05:46.255 element at address: 0x20000087f140 with size: 0.000183 MiB 00:05:46.255 element at address: 0x20000087f200 with size: 0.000183 MiB 00:05:46.255 element at address: 0x20000087f2c0 with size: 0.000183 MiB 00:05:46.255 element at address: 0x20000087f380 with size: 0.000183 MiB 00:05:46.255 element at address: 0x20000087f440 with size: 0.000183 MiB 00:05:46.255 element at address: 0x20000087f500 with size: 0.000183 MiB 00:05:46.255 element at address: 0x20000087f5c0 with size: 0.000183 MiB 00:05:46.255 element at address: 0x20000087f680 with size: 0.000183 MiB 00:05:46.255 element at address: 0x2000008ff940 with size: 0.000183 MiB 00:05:46.255 element at address: 0x2000008ffb40 with size: 0.000183 MiB 00:05:46.255 element at address: 0x200000c7c7c0 with size: 0.000183 MiB 00:05:46.255 element at address: 0x200000c7c880 with size: 0.000183 MiB 00:05:46.255 element at address: 0x200000c7c940 with size: 0.000183 MiB 00:05:46.255 element at address: 0x200000c7ca00 with size: 0.000183 MiB 00:05:46.255 element at address: 0x200000c7cac0 with size: 0.000183 MiB 00:05:46.255 element at address: 0x200000c7cb80 with size: 0.000183 MiB 00:05:46.255 element at address: 0x200000c7cc40 with size: 0.000183 MiB 00:05:46.255 element at address: 0x200000c7cd00 with size: 0.000183 MiB 00:05:46.255 element at address: 0x200000c7cdc0 with size: 0.000183 MiB 00:05:46.255 element at address: 0x200000c7ce80 with size: 0.000183 MiB 00:05:46.256 element at address: 0x200000c7cf40 with size: 0.000183 MiB 00:05:46.256 element at address: 0x200000c7d000 with size: 0.000183 MiB 00:05:46.256 element at address: 0x200000c7d0c0 with size: 0.000183 MiB 00:05:46.256 element at address: 0x200000c7d180 with size: 0.000183 MiB 00:05:46.256 element at address: 0x200000c7d240 with size: 0.000183 MiB 00:05:46.256 element at address: 0x200000c7d300 with size: 0.000183 MiB 00:05:46.256 element at 
address: 0x200000c7d3c0 with size: 0.000183 MiB 00:05:46.256 element at address: 0x200000c7d480 with size: 0.000183 MiB 00:05:46.256 element at address: 0x200000c7d540 with size: 0.000183 MiB 00:05:46.256 element at address: 0x200000c7d600 with size: 0.000183 MiB 00:05:46.256 element at address: 0x200000c7d6c0 with size: 0.000183 MiB 00:05:46.256 element at address: 0x200000c7d780 with size: 0.000183 MiB 00:05:46.256 element at address: 0x200000c7d840 with size: 0.000183 MiB 00:05:46.256 element at address: 0x200000c7d900 with size: 0.000183 MiB 00:05:46.256 element at address: 0x200000c7d9c0 with size: 0.000183 MiB 00:05:46.256 element at address: 0x200000c7da80 with size: 0.000183 MiB 00:05:46.256 element at address: 0x200000c7db40 with size: 0.000183 MiB 00:05:46.256 element at address: 0x200000c7dc00 with size: 0.000183 MiB 00:05:46.256 element at address: 0x200000c7dcc0 with size: 0.000183 MiB 00:05:46.256 element at address: 0x200000c7dd80 with size: 0.000183 MiB 00:05:46.256 element at address: 0x200000c7de40 with size: 0.000183 MiB 00:05:46.256 element at address: 0x200000c7df00 with size: 0.000183 MiB 00:05:46.256 element at address: 0x200000c7dfc0 with size: 0.000183 MiB 00:05:46.256 element at address: 0x200000c7e080 with size: 0.000183 MiB 00:05:46.256 element at address: 0x200000c7e140 with size: 0.000183 MiB 00:05:46.256 element at address: 0x200000c7e200 with size: 0.000183 MiB 00:05:46.256 element at address: 0x200000c7e2c0 with size: 0.000183 MiB 00:05:46.256 element at address: 0x200000c7e380 with size: 0.000183 MiB 00:05:46.256 element at address: 0x200000c7e440 with size: 0.000183 MiB 00:05:46.256 element at address: 0x200000c7e500 with size: 0.000183 MiB 00:05:46.256 element at address: 0x200000c7e5c0 with size: 0.000183 MiB 00:05:46.256 element at address: 0x200000c7e680 with size: 0.000183 MiB 00:05:46.256 element at address: 0x200000c7e740 with size: 0.000183 MiB 00:05:46.256 element at address: 0x200000c7e800 with size: 0.000183 MiB 00:05:46.256 element at address: 0x200000c7e8c0 with size: 0.000183 MiB 00:05:46.256 element at address: 0x200000c7e980 with size: 0.000183 MiB 00:05:46.256 element at address: 0x200000c7ea40 with size: 0.000183 MiB 00:05:46.256 element at address: 0x200000c7eb00 with size: 0.000183 MiB 00:05:46.256 element at address: 0x200000c7ebc0 with size: 0.000183 MiB 00:05:46.256 element at address: 0x200000c7ec80 with size: 0.000183 MiB 00:05:46.256 element at address: 0x200000c7ed40 with size: 0.000183 MiB 00:05:46.256 element at address: 0x200000cff000 with size: 0.000183 MiB 00:05:46.256 element at address: 0x200000cff0c0 with size: 0.000183 MiB 00:05:46.256 element at address: 0x200003e7af40 with size: 0.000183 MiB 00:05:46.256 element at address: 0x200003e7b000 with size: 0.000183 MiB 00:05:46.256 element at address: 0x200003e7b0c0 with size: 0.000183 MiB 00:05:46.256 element at address: 0x200003e7b180 with size: 0.000183 MiB 00:05:46.256 element at address: 0x200003e7b240 with size: 0.000183 MiB 00:05:46.256 element at address: 0x200003e7b300 with size: 0.000183 MiB 00:05:46.256 element at address: 0x200003e7b3c0 with size: 0.000183 MiB 00:05:46.256 element at address: 0x200003e7b480 with size: 0.000183 MiB 00:05:46.256 element at address: 0x200003e7b540 with size: 0.000183 MiB 00:05:46.256 element at address: 0x200003e7b600 with size: 0.000183 MiB 00:05:46.256 element at address: 0x200003e7b6c0 with size: 0.000183 MiB 00:05:46.256 element at address: 0x200003efb980 with size: 0.000183 MiB 00:05:46.256 element at address: 0x2000064fdd80 
with size: 0.000183 MiB 00:05:46.256 element at address: 0x20000a67d280 with size: 0.000183 MiB 00:05:46.256 element at address: 0x20000a67d340 with size: 0.000183 MiB 00:05:46.256 element at address: 0x20000a67d400 with size: 0.000183 MiB 00:05:46.256 element at address: 0x20000a67d4c0 with size: 0.000183 MiB 00:05:46.256 element at address: 0x20000a67d580 with size: 0.000183 MiB 00:05:46.256 element at address: 0x20000a67d640 with size: 0.000183 MiB 00:05:46.256 element at address: 0x20000a67d700 with size: 0.000183 MiB 00:05:46.256 element at address: 0x20000a67d7c0 with size: 0.000183 MiB 00:05:46.256 element at address: 0x20000a67d880 with size: 0.000183 MiB 00:05:46.256 element at address: 0x20000a67d940 with size: 0.000183 MiB 00:05:46.256 element at address: 0x20000a67da00 with size: 0.000183 MiB 00:05:46.256 element at address: 0x20000a67dac0 with size: 0.000183 MiB 00:05:46.256 element at address: 0x20000a6fdd80 with size: 0.000183 MiB 00:05:46.256 element at address: 0x200012cf1bc0 with size: 0.000183 MiB 00:05:46.256 element at address: 0x2000196efc40 with size: 0.000183 MiB 00:05:46.256 element at address: 0x2000196efd00 with size: 0.000183 MiB 00:05:46.256 element at address: 0x2000198bc740 with size: 0.000183 MiB 00:05:46.256 element at address: 0x20001ae91540 with size: 0.000183 MiB 00:05:46.256 element at address: 0x20001ae91600 with size: 0.000183 MiB 00:05:46.256 element at address: 0x20001ae916c0 with size: 0.000183 MiB 00:05:46.256 element at address: 0x20001ae91780 with size: 0.000183 MiB 00:05:46.256 element at address: 0x20001ae91840 with size: 0.000183 MiB 00:05:46.256 element at address: 0x20001ae91900 with size: 0.000183 MiB 00:05:46.256 element at address: 0x20001ae919c0 with size: 0.000183 MiB 00:05:46.256 element at address: 0x20001ae91a80 with size: 0.000183 MiB 00:05:46.256 element at address: 0x20001ae91b40 with size: 0.000183 MiB 00:05:46.256 element at address: 0x20001ae91c00 with size: 0.000183 MiB 00:05:46.256 element at address: 0x20001ae91cc0 with size: 0.000183 MiB 00:05:46.256 element at address: 0x20001ae91d80 with size: 0.000183 MiB 00:05:46.256 element at address: 0x20001ae91e40 with size: 0.000183 MiB 00:05:46.256 element at address: 0x20001ae91f00 with size: 0.000183 MiB 00:05:46.256 element at address: 0x20001ae91fc0 with size: 0.000183 MiB 00:05:46.256 element at address: 0x20001ae92080 with size: 0.000183 MiB 00:05:46.256 element at address: 0x20001ae92140 with size: 0.000183 MiB 00:05:46.256 element at address: 0x20001ae92200 with size: 0.000183 MiB 00:05:46.256 element at address: 0x20001ae922c0 with size: 0.000183 MiB 00:05:46.256 element at address: 0x20001ae92380 with size: 0.000183 MiB 00:05:46.256 element at address: 0x20001ae92440 with size: 0.000183 MiB 00:05:46.256 element at address: 0x20001ae92500 with size: 0.000183 MiB 00:05:46.256 element at address: 0x20001ae925c0 with size: 0.000183 MiB 00:05:46.256 element at address: 0x20001ae92680 with size: 0.000183 MiB 00:05:46.256 element at address: 0x20001ae92740 with size: 0.000183 MiB 00:05:46.256 element at address: 0x20001ae92800 with size: 0.000183 MiB 00:05:46.256 element at address: 0x20001ae928c0 with size: 0.000183 MiB 00:05:46.256 element at address: 0x20001ae92980 with size: 0.000183 MiB 00:05:46.256 element at address: 0x20001ae92a40 with size: 0.000183 MiB 00:05:46.256 element at address: 0x20001ae92b00 with size: 0.000183 MiB 00:05:46.256 element at address: 0x20001ae92bc0 with size: 0.000183 MiB 00:05:46.256 element at address: 0x20001ae92c80 with size: 0.000183 MiB 
00:05:46.256 element at address: 0x20001ae92d40 with size: 0.000183 MiB 00:05:46.256 element at address: 0x20001ae92e00 with size: 0.000183 MiB 00:05:46.256 element at address: 0x20001ae92ec0 with size: 0.000183 MiB 00:05:46.256 element at address: 0x20001ae92f80 with size: 0.000183 MiB 00:05:46.256 element at address: 0x20001ae93040 with size: 0.000183 MiB 00:05:46.256 element at address: 0x20001ae93100 with size: 0.000183 MiB 00:05:46.256 element at address: 0x20001ae931c0 with size: 0.000183 MiB 00:05:46.256 element at address: 0x20001ae93280 with size: 0.000183 MiB 00:05:46.256 element at address: 0x20001ae93340 with size: 0.000183 MiB 00:05:46.256 element at address: 0x20001ae93400 with size: 0.000183 MiB 00:05:46.256 element at address: 0x20001ae934c0 with size: 0.000183 MiB 00:05:46.256 element at address: 0x20001ae93580 with size: 0.000183 MiB 00:05:46.256 element at address: 0x20001ae93640 with size: 0.000183 MiB 00:05:46.256 element at address: 0x20001ae93700 with size: 0.000183 MiB 00:05:46.256 element at address: 0x20001ae937c0 with size: 0.000183 MiB 00:05:46.256 element at address: 0x20001ae93880 with size: 0.000183 MiB 00:05:46.256 element at address: 0x20001ae93940 with size: 0.000183 MiB 00:05:46.256 element at address: 0x20001ae93a00 with size: 0.000183 MiB 00:05:46.256 element at address: 0x20001ae93ac0 with size: 0.000183 MiB 00:05:46.256 element at address: 0x20001ae93b80 with size: 0.000183 MiB 00:05:46.256 element at address: 0x20001ae93c40 with size: 0.000183 MiB 00:05:46.256 element at address: 0x20001ae93d00 with size: 0.000183 MiB 00:05:46.256 element at address: 0x20001ae93dc0 with size: 0.000183 MiB 00:05:46.256 element at address: 0x20001ae93e80 with size: 0.000183 MiB 00:05:46.256 element at address: 0x20001ae93f40 with size: 0.000183 MiB 00:05:46.256 element at address: 0x20001ae94000 with size: 0.000183 MiB 00:05:46.256 element at address: 0x20001ae940c0 with size: 0.000183 MiB 00:05:46.256 element at address: 0x20001ae94180 with size: 0.000183 MiB 00:05:46.256 element at address: 0x20001ae94240 with size: 0.000183 MiB 00:05:46.256 element at address: 0x20001ae94300 with size: 0.000183 MiB 00:05:46.256 element at address: 0x20001ae943c0 with size: 0.000183 MiB 00:05:46.256 element at address: 0x20001ae94480 with size: 0.000183 MiB 00:05:46.256 element at address: 0x20001ae94540 with size: 0.000183 MiB 00:05:46.256 element at address: 0x20001ae94600 with size: 0.000183 MiB 00:05:46.256 element at address: 0x20001ae946c0 with size: 0.000183 MiB 00:05:46.256 element at address: 0x20001ae94780 with size: 0.000183 MiB 00:05:46.256 element at address: 0x20001ae94840 with size: 0.000183 MiB 00:05:46.256 element at address: 0x20001ae94900 with size: 0.000183 MiB 00:05:46.256 element at address: 0x20001ae949c0 with size: 0.000183 MiB 00:05:46.256 element at address: 0x20001ae94a80 with size: 0.000183 MiB 00:05:46.256 element at address: 0x20001ae94b40 with size: 0.000183 MiB 00:05:46.256 element at address: 0x20001ae94c00 with size: 0.000183 MiB 00:05:46.256 element at address: 0x20001ae94cc0 with size: 0.000183 MiB 00:05:46.256 element at address: 0x20001ae94d80 with size: 0.000183 MiB 00:05:46.256 element at address: 0x20001ae94e40 with size: 0.000183 MiB 00:05:46.256 element at address: 0x20001ae94f00 with size: 0.000183 MiB 00:05:46.256 element at address: 0x20001ae94fc0 with size: 0.000183 MiB 00:05:46.257 element at address: 0x20001ae95080 with size: 0.000183 MiB 00:05:46.257 element at address: 0x20001ae95140 with size: 0.000183 MiB 00:05:46.257 element at 
address: 0x20001ae95200 with size: 0.000183 MiB 00:05:46.257 element at address: 0x20001ae952c0 with size: 0.000183 MiB 00:05:46.257 element at address: 0x20001ae95380 with size: 0.000183 MiB 00:05:46.257 element at address: 0x20001ae95440 with size: 0.000183 MiB 00:05:46.257 element at address: 0x200028265500 with size: 0.000183 MiB 00:05:46.257 element at address: 0x2000282655c0 with size: 0.000183 MiB 00:05:46.257 element at address: 0x20002826c1c0 with size: 0.000183 MiB 00:05:46.257 element at address: 0x20002826c3c0 with size: 0.000183 MiB 00:05:46.257 element at address: 0x20002826c480 with size: 0.000183 MiB 00:05:46.257 element at address: 0x20002826c540 with size: 0.000183 MiB 00:05:46.257 element at address: 0x20002826c600 with size: 0.000183 MiB 00:05:46.257 element at address: 0x20002826c6c0 with size: 0.000183 MiB 00:05:46.257 element at address: 0x20002826c780 with size: 0.000183 MiB 00:05:46.257 element at address: 0x20002826c840 with size: 0.000183 MiB 00:05:46.257 element at address: 0x20002826c900 with size: 0.000183 MiB 00:05:46.257 element at address: 0x20002826c9c0 with size: 0.000183 MiB 00:05:46.257 element at address: 0x20002826ca80 with size: 0.000183 MiB 00:05:46.257 element at address: 0x20002826cb40 with size: 0.000183 MiB 00:05:46.257 element at address: 0x20002826cc00 with size: 0.000183 MiB 00:05:46.257 element at address: 0x20002826ccc0 with size: 0.000183 MiB 00:05:46.257 element at address: 0x20002826cd80 with size: 0.000183 MiB 00:05:46.257 element at address: 0x20002826ce40 with size: 0.000183 MiB 00:05:46.257 element at address: 0x20002826cf00 with size: 0.000183 MiB 00:05:46.257 element at address: 0x20002826cfc0 with size: 0.000183 MiB 00:05:46.257 element at address: 0x20002826d080 with size: 0.000183 MiB 00:05:46.257 element at address: 0x20002826d140 with size: 0.000183 MiB 00:05:46.257 element at address: 0x20002826d200 with size: 0.000183 MiB 00:05:46.257 element at address: 0x20002826d2c0 with size: 0.000183 MiB 00:05:46.257 element at address: 0x20002826d380 with size: 0.000183 MiB 00:05:46.257 element at address: 0x20002826d440 with size: 0.000183 MiB 00:05:46.257 element at address: 0x20002826d500 with size: 0.000183 MiB 00:05:46.257 element at address: 0x20002826d5c0 with size: 0.000183 MiB 00:05:46.257 element at address: 0x20002826d680 with size: 0.000183 MiB 00:05:46.257 element at address: 0x20002826d740 with size: 0.000183 MiB 00:05:46.257 element at address: 0x20002826d800 with size: 0.000183 MiB 00:05:46.257 element at address: 0x20002826d8c0 with size: 0.000183 MiB 00:05:46.257 element at address: 0x20002826d980 with size: 0.000183 MiB 00:05:46.257 element at address: 0x20002826da40 with size: 0.000183 MiB 00:05:46.257 element at address: 0x20002826db00 with size: 0.000183 MiB 00:05:46.257 element at address: 0x20002826dbc0 with size: 0.000183 MiB 00:05:46.257 element at address: 0x20002826dc80 with size: 0.000183 MiB 00:05:46.257 element at address: 0x20002826dd40 with size: 0.000183 MiB 00:05:46.257 element at address: 0x20002826de00 with size: 0.000183 MiB 00:05:46.257 element at address: 0x20002826dec0 with size: 0.000183 MiB 00:05:46.257 element at address: 0x20002826df80 with size: 0.000183 MiB 00:05:46.257 element at address: 0x20002826e040 with size: 0.000183 MiB 00:05:46.257 element at address: 0x20002826e100 with size: 0.000183 MiB 00:05:46.257 element at address: 0x20002826e1c0 with size: 0.000183 MiB 00:05:46.257 element at address: 0x20002826e280 with size: 0.000183 MiB 00:05:46.257 element at address: 0x20002826e340 
with size: 0.000183 MiB 00:05:46.257 element at address: 0x20002826e400 with size: 0.000183 MiB 00:05:46.257 element at address: 0x20002826e4c0 with size: 0.000183 MiB 00:05:46.257 element at address: 0x20002826e580 with size: 0.000183 MiB 00:05:46.257 element at address: 0x20002826e640 with size: 0.000183 MiB 00:05:46.257 element at address: 0x20002826e700 with size: 0.000183 MiB 00:05:46.257 element at address: 0x20002826e7c0 with size: 0.000183 MiB 00:05:46.257 element at address: 0x20002826e880 with size: 0.000183 MiB 00:05:46.257 element at address: 0x20002826e940 with size: 0.000183 MiB 00:05:46.257 element at address: 0x20002826ea00 with size: 0.000183 MiB 00:05:46.257 element at address: 0x20002826eac0 with size: 0.000183 MiB 00:05:46.257 element at address: 0x20002826eb80 with size: 0.000183 MiB 00:05:46.257 element at address: 0x20002826ec40 with size: 0.000183 MiB 00:05:46.257 element at address: 0x20002826ed00 with size: 0.000183 MiB 00:05:46.257 element at address: 0x20002826edc0 with size: 0.000183 MiB 00:05:46.257 element at address: 0x20002826ee80 with size: 0.000183 MiB 00:05:46.257 element at address: 0x20002826ef40 with size: 0.000183 MiB 00:05:46.257 element at address: 0x20002826f000 with size: 0.000183 MiB 00:05:46.257 element at address: 0x20002826f0c0 with size: 0.000183 MiB 00:05:46.257 element at address: 0x20002826f180 with size: 0.000183 MiB 00:05:46.257 element at address: 0x20002826f240 with size: 0.000183 MiB 00:05:46.257 element at address: 0x20002826f300 with size: 0.000183 MiB 00:05:46.257 element at address: 0x20002826f3c0 with size: 0.000183 MiB 00:05:46.257 element at address: 0x20002826f480 with size: 0.000183 MiB 00:05:46.257 element at address: 0x20002826f540 with size: 0.000183 MiB 00:05:46.257 element at address: 0x20002826f600 with size: 0.000183 MiB 00:05:46.257 element at address: 0x20002826f6c0 with size: 0.000183 MiB 00:05:46.257 element at address: 0x20002826f780 with size: 0.000183 MiB 00:05:46.257 element at address: 0x20002826f840 with size: 0.000183 MiB 00:05:46.257 element at address: 0x20002826f900 with size: 0.000183 MiB 00:05:46.257 element at address: 0x20002826f9c0 with size: 0.000183 MiB 00:05:46.257 element at address: 0x20002826fa80 with size: 0.000183 MiB 00:05:46.257 element at address: 0x20002826fb40 with size: 0.000183 MiB 00:05:46.257 element at address: 0x20002826fc00 with size: 0.000183 MiB 00:05:46.257 element at address: 0x20002826fcc0 with size: 0.000183 MiB 00:05:46.257 element at address: 0x20002826fd80 with size: 0.000183 MiB 00:05:46.257 element at address: 0x20002826fe40 with size: 0.000183 MiB 00:05:46.257 element at address: 0x20002826ff00 with size: 0.000183 MiB 00:05:46.257 list of memzone associated elements. 
size: 607.928894 MiB 00:05:46.257 element at address: 0x20001ae95500 with size: 211.416748 MiB 00:05:46.257 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:05:46.257 element at address: 0x20002826ffc0 with size: 157.562561 MiB 00:05:46.257 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:05:46.257 element at address: 0x200012df1e80 with size: 100.055054 MiB 00:05:46.257 associated memzone info: size: 100.054932 MiB name: MP_bdev_io_69925_0 00:05:46.257 element at address: 0x200000dff380 with size: 48.003052 MiB 00:05:46.257 associated memzone info: size: 48.002930 MiB name: MP_msgpool_69925_0 00:05:46.257 element at address: 0x200003ffdb80 with size: 36.008911 MiB 00:05:46.257 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_69925_0 00:05:46.257 element at address: 0x2000199be940 with size: 20.255554 MiB 00:05:46.257 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:05:46.257 element at address: 0x2000321feb40 with size: 18.005066 MiB 00:05:46.257 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:05:46.257 element at address: 0x2000004fff00 with size: 3.000244 MiB 00:05:46.257 associated memzone info: size: 3.000122 MiB name: MP_evtpool_69925_0 00:05:46.257 element at address: 0x2000009ffe00 with size: 2.000488 MiB 00:05:46.257 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_69925 00:05:46.257 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:05:46.257 associated memzone info: size: 1.007996 MiB name: MP_evtpool_69925 00:05:46.257 element at address: 0x20000a6fde40 with size: 1.008118 MiB 00:05:46.257 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:05:46.257 element at address: 0x2000198bc800 with size: 1.008118 MiB 00:05:46.257 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:05:46.257 element at address: 0x2000064fde40 with size: 1.008118 MiB 00:05:46.257 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:05:46.257 element at address: 0x200003efba40 with size: 1.008118 MiB 00:05:46.257 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:05:46.257 element at address: 0x200000cff180 with size: 1.000488 MiB 00:05:46.257 associated memzone info: size: 1.000366 MiB name: RG_ring_0_69925 00:05:46.257 element at address: 0x2000008ffc00 with size: 1.000488 MiB 00:05:46.257 associated memzone info: size: 1.000366 MiB name: RG_ring_1_69925 00:05:46.257 element at address: 0x200012cf1c80 with size: 1.000488 MiB 00:05:46.257 associated memzone info: size: 1.000366 MiB name: RG_ring_4_69925 00:05:46.257 element at address: 0x2000320fe940 with size: 1.000488 MiB 00:05:46.257 associated memzone info: size: 1.000366 MiB name: RG_ring_5_69925 00:05:46.257 element at address: 0x20000087f740 with size: 0.500488 MiB 00:05:46.257 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_69925 00:05:46.257 element at address: 0x200000c7ee00 with size: 0.500488 MiB 00:05:46.257 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_69925 00:05:46.257 element at address: 0x20000a67db80 with size: 0.500488 MiB 00:05:46.257 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:05:46.257 element at address: 0x200003e7b780 with size: 0.500488 MiB 00:05:46.257 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:05:46.257 element at address: 0x20001987c540 with size: 0.250488 MiB 00:05:46.257 associated memzone info: size: 0.250366 
MiB name: RG_MP_PDU_immediate_data_Pool 00:05:46.257 element at address: 0x2000002b7a40 with size: 0.125488 MiB 00:05:46.257 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_69925 00:05:46.257 element at address: 0x20000085e640 with size: 0.125488 MiB 00:05:46.257 associated memzone info: size: 0.125366 MiB name: RG_ring_2_69925 00:05:46.257 element at address: 0x2000064f5b80 with size: 0.031738 MiB 00:05:46.257 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:05:46.257 element at address: 0x200028265680 with size: 0.023743 MiB 00:05:46.257 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:05:46.257 element at address: 0x20000085a380 with size: 0.016113 MiB 00:05:46.257 associated memzone info: size: 0.015991 MiB name: RG_ring_3_69925 00:05:46.258 element at address: 0x20002826b7c0 with size: 0.002441 MiB 00:05:46.258 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:05:46.258 element at address: 0x2000004ffb80 with size: 0.000305 MiB 00:05:46.258 associated memzone info: size: 0.000183 MiB name: MP_msgpool_69925 00:05:46.258 element at address: 0x2000008ffa00 with size: 0.000305 MiB 00:05:46.258 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_69925 00:05:46.258 element at address: 0x20000085a180 with size: 0.000305 MiB 00:05:46.258 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_69925 00:05:46.258 element at address: 0x20002826c280 with size: 0.000305 MiB 00:05:46.258 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:05:46.258 18:56:03 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:05:46.258 18:56:03 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 69925 00:05:46.258 18:56:03 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 69925 ']' 00:05:46.258 18:56:03 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 69925 00:05:46.258 18:56:03 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname 00:05:46.258 18:56:03 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:46.258 18:56:03 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69925 00:05:46.258 18:56:03 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:46.258 18:56:03 dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:46.258 18:56:03 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69925' 00:05:46.258 killing process with pid 69925 00:05:46.258 18:56:03 dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 69925 00:05:46.258 18:56:03 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 69925 00:05:46.516 00:05:46.516 real 0m1.434s 00:05:46.516 user 0m1.507s 00:05:46.516 sys 0m0.336s 00:05:46.516 18:56:04 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:46.516 18:56:04 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:46.516 ************************************ 00:05:46.516 END TEST dpdk_mem_utility 00:05:46.516 ************************************ 00:05:46.516 18:56:04 -- spdk/autotest.sh@168 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:05:46.775 18:56:04 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:46.775 18:56:04 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:46.775 18:56:04 -- common/autotest_common.sh@10 -- # set +x 
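For reference, the dpdk_mem_utility flow traced above can be reproduced by hand. This is a minimal sketch, assuming the same checkout under /home/vagrant/spdk_repo and the default RPC socket, with scripts/rpc.py standing in for the test's rpc_cmd wrapper and a plain sleep standing in for its waitforlisten helper:

    # Start the target; the log above shows it coming up with one reactor on core 0
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt &
    spdk_pid=$!
    sleep 1   # crude stand-in for waitforlisten on /var/tmp/spdk.sock

    # Ask the running target to write a DPDK memory dump; the RPC replies with
    # the file it wrote ({"filename": "/tmp/spdk_mem_dump.txt"} in the trace)
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats

    # Summarize the dump: heap, mempool, and memzone totals first, then the
    # per-element detail; -m 0 restricts the detail to heap id 0, which is
    # what produces the long free-element and memzone listings above
    /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py
    /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0

    kill "$spdk_pid"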
00:05:46.775 ************************************ 00:05:46.775 START TEST event 00:05:46.775 ************************************ 00:05:46.775 18:56:04 event -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:05:46.775 * Looking for test storage... 00:05:46.775 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:05:46.776 18:56:04 event -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:46.776 18:56:04 event -- common/autotest_common.sh@1711 -- # lcov --version 00:05:46.776 18:56:04 event -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:46.776 18:56:04 event -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:46.776 18:56:04 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:46.776 18:56:04 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:46.776 18:56:04 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:46.776 18:56:04 event -- scripts/common.sh@336 -- # IFS=.-: 00:05:46.776 18:56:04 event -- scripts/common.sh@336 -- # read -ra ver1 00:05:46.776 18:56:04 event -- scripts/common.sh@337 -- # IFS=.-: 00:05:46.776 18:56:04 event -- scripts/common.sh@337 -- # read -ra ver2 00:05:46.776 18:56:04 event -- scripts/common.sh@338 -- # local 'op=<' 00:05:46.776 18:56:04 event -- scripts/common.sh@340 -- # ver1_l=2 00:05:46.776 18:56:04 event -- scripts/common.sh@341 -- # ver2_l=1 00:05:46.776 18:56:04 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:46.776 18:56:04 event -- scripts/common.sh@344 -- # case "$op" in 00:05:46.776 18:56:04 event -- scripts/common.sh@345 -- # : 1 00:05:46.776 18:56:04 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:46.776 18:56:04 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:46.776 18:56:04 event -- scripts/common.sh@365 -- # decimal 1 00:05:46.776 18:56:04 event -- scripts/common.sh@353 -- # local d=1 00:05:46.776 18:56:04 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:46.776 18:56:04 event -- scripts/common.sh@355 -- # echo 1 00:05:46.776 18:56:04 event -- scripts/common.sh@365 -- # ver1[v]=1 00:05:46.776 18:56:04 event -- scripts/common.sh@366 -- # decimal 2 00:05:46.776 18:56:04 event -- scripts/common.sh@353 -- # local d=2 00:05:46.776 18:56:04 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:46.776 18:56:04 event -- scripts/common.sh@355 -- # echo 2 00:05:46.776 18:56:04 event -- scripts/common.sh@366 -- # ver2[v]=2 00:05:46.776 18:56:04 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:46.776 18:56:04 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:46.776 18:56:04 event -- scripts/common.sh@368 -- # return 0 00:05:46.776 18:56:04 event -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:46.776 18:56:04 event -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:46.776 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:46.776 --rc genhtml_branch_coverage=1 00:05:46.776 --rc genhtml_function_coverage=1 00:05:46.776 --rc genhtml_legend=1 00:05:46.776 --rc geninfo_all_blocks=1 00:05:46.776 --rc geninfo_unexecuted_blocks=1 00:05:46.776 00:05:46.776 ' 00:05:46.776 18:56:04 event -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:46.776 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:46.776 --rc genhtml_branch_coverage=1 00:05:46.776 --rc genhtml_function_coverage=1 00:05:46.776 --rc genhtml_legend=1 00:05:46.776 --rc 
geninfo_all_blocks=1 00:05:46.776 --rc geninfo_unexecuted_blocks=1 00:05:46.776 00:05:46.776 ' 00:05:46.776 18:56:04 event -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:46.776 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:46.776 --rc genhtml_branch_coverage=1 00:05:46.776 --rc genhtml_function_coverage=1 00:05:46.776 --rc genhtml_legend=1 00:05:46.776 --rc geninfo_all_blocks=1 00:05:46.776 --rc geninfo_unexecuted_blocks=1 00:05:46.776 00:05:46.776 ' 00:05:46.776 18:56:04 event -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:46.776 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:46.776 --rc genhtml_branch_coverage=1 00:05:46.776 --rc genhtml_function_coverage=1 00:05:46.776 --rc genhtml_legend=1 00:05:46.776 --rc geninfo_all_blocks=1 00:05:46.776 --rc geninfo_unexecuted_blocks=1 00:05:46.776 00:05:46.776 ' 00:05:46.776 18:56:04 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:05:46.776 18:56:04 event -- bdev/nbd_common.sh@6 -- # set -e 00:05:46.776 18:56:04 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:46.776 18:56:04 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:05:46.776 18:56:04 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:46.776 18:56:04 event -- common/autotest_common.sh@10 -- # set +x 00:05:46.776 ************************************ 00:05:46.776 START TEST event_perf 00:05:46.776 ************************************ 00:05:46.776 18:56:04 event.event_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:46.776 Running I/O for 1 seconds...[2024-12-05 18:56:04.261718] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 22.11.4 initialization... 00:05:46.776 [2024-12-05 18:56:04.262554] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70006 ] 00:05:47.034 [2024-12-05 18:56:04.407212] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:47.034 [2024-12-05 18:56:04.429694] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:47.034 [2024-12-05 18:56:04.429886] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:47.034 [2024-12-05 18:56:04.430024] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:47.034 Running I/O for 1 seconds...[2024-12-05 18:56:04.430103] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:47.987 00:05:47.987 lcore 0: 198868 00:05:47.987 lcore 1: 198868 00:05:47.987 lcore 2: 198868 00:05:47.987 lcore 3: 198868 00:05:47.987 done. 
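The event_perf invocation above is self-contained; a sketch of the same run, with the flag meanings as they appear in this trace (-m is the reactor core mask, -t the run time in seconds), is:

    # Four reactors (mask 0xF) each count the events they process in one
    # second; the per-core totals are the "lcore N: ..." lines printed above
    /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1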
00:05:47.987 00:05:47.987 real 0m1.241s 00:05:47.987 user 0m4.049s 00:05:47.987 sys 0m0.064s 00:05:47.987 18:56:05 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:47.987 ************************************ 00:05:47.987 END TEST event_perf 00:05:47.987 ************************************ 00:05:47.987 18:56:05 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:05:47.987 18:56:05 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:05:47.987 18:56:05 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:05:47.987 18:56:05 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:47.987 18:56:05 event -- common/autotest_common.sh@10 -- # set +x 00:05:47.987 ************************************ 00:05:47.987 START TEST event_reactor 00:05:47.987 ************************************ 00:05:47.987 18:56:05 event.event_reactor -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:05:47.987 [2024-12-05 18:56:05.539302] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 22.11.4 initialization... 00:05:47.987 [2024-12-05 18:56:05.540097] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70044 ] 00:05:48.245 [2024-12-05 18:56:05.690726] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:48.245 [2024-12-05 18:56:05.709553] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:49.622 test_start 00:05:49.622 oneshot 00:05:49.622 tick 100 00:05:49.622 tick 100 00:05:49.622 tick 250 00:05:49.622 tick 100 00:05:49.622 tick 100 00:05:49.622 tick 100 00:05:49.622 tick 250 00:05:49.622 tick 500 00:05:49.622 tick 100 00:05:49.622 tick 100 00:05:49.622 tick 250 00:05:49.622 tick 100 00:05:49.622 tick 100 00:05:49.622 test_end 00:05:49.622 00:05:49.622 real 0m1.231s 00:05:49.622 user 0m1.077s 00:05:49.622 sys 0m0.045s 00:05:49.622 18:56:06 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:49.622 ************************************ 00:05:49.622 END TEST event_reactor 00:05:49.622 ************************************ 00:05:49.622 18:56:06 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:05:49.622 18:56:06 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:49.622 18:56:06 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:05:49.622 18:56:06 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:49.622 18:56:06 event -- common/autotest_common.sh@10 -- # set +x 00:05:49.622 ************************************ 00:05:49.622 START TEST event_reactor_perf 00:05:49.622 ************************************ 00:05:49.622 18:56:06 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:49.622 [2024-12-05 18:56:06.809234] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 22.11.4 initialization... 
00:05:49.622 [2024-12-05 18:56:06.809549] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70076 ] 00:05:49.622 [2024-12-05 18:56:06.954405] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:49.622 [2024-12-05 18:56:06.973766] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:50.552 test_start 00:05:50.552 test_end 00:05:50.552 Performance: 314756 events per second 00:05:50.552 ************************************ 00:05:50.552 00:05:50.552 real 0m1.230s 00:05:50.552 user 0m1.071s 00:05:50.552 sys 0m0.051s 00:05:50.552 18:56:08 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:50.552 18:56:08 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:05:50.552 END TEST event_reactor_perf 00:05:50.552 ************************************ 00:05:50.552 18:56:08 event -- event/event.sh@49 -- # uname -s 00:05:50.552 18:56:08 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:05:50.552 18:56:08 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:05:50.552 18:56:08 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:50.552 18:56:08 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:50.552 18:56:08 event -- common/autotest_common.sh@10 -- # set +x 00:05:50.552 ************************************ 00:05:50.552 START TEST event_scheduler 00:05:50.552 ************************************ 00:05:50.552 18:56:08 event.event_scheduler -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:05:50.810 * Looking for test storage... 
00:05:50.810 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:05:50.810 18:56:08 event.event_scheduler -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:50.810 18:56:08 event.event_scheduler -- common/autotest_common.sh@1711 -- # lcov --version 00:05:50.810 18:56:08 event.event_scheduler -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:50.810 18:56:08 event.event_scheduler -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:50.810 18:56:08 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:50.810 18:56:08 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:50.810 18:56:08 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:50.810 18:56:08 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:05:50.810 18:56:08 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:05:50.810 18:56:08 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:05:50.810 18:56:08 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:05:50.810 18:56:08 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:05:50.810 18:56:08 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:05:50.810 18:56:08 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:05:50.810 18:56:08 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:50.810 18:56:08 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:05:50.810 18:56:08 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:05:50.810 18:56:08 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:50.810 18:56:08 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:50.810 18:56:08 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:05:50.810 18:56:08 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:05:50.810 18:56:08 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:50.810 18:56:08 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:05:50.810 18:56:08 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:05:50.810 18:56:08 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:05:50.810 18:56:08 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:05:50.810 18:56:08 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:50.810 18:56:08 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:05:50.810 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:50.810 18:56:08 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:05:50.810 18:56:08 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:50.810 18:56:08 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:50.810 18:56:08 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:05:50.810 18:56:08 event.event_scheduler -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:50.810 18:56:08 event.event_scheduler -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:50.810 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:50.810 --rc genhtml_branch_coverage=1 00:05:50.810 --rc genhtml_function_coverage=1 00:05:50.810 --rc genhtml_legend=1 00:05:50.810 --rc geninfo_all_blocks=1 00:05:50.810 --rc geninfo_unexecuted_blocks=1 00:05:50.810 00:05:50.810 ' 00:05:50.810 18:56:08 event.event_scheduler -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:50.810 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:50.810 --rc genhtml_branch_coverage=1 00:05:50.810 --rc genhtml_function_coverage=1 00:05:50.810 --rc genhtml_legend=1 00:05:50.810 --rc geninfo_all_blocks=1 00:05:50.810 --rc geninfo_unexecuted_blocks=1 00:05:50.810 00:05:50.810 ' 00:05:50.810 18:56:08 event.event_scheduler -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:50.810 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:50.810 --rc genhtml_branch_coverage=1 00:05:50.810 --rc genhtml_function_coverage=1 00:05:50.810 --rc genhtml_legend=1 00:05:50.810 --rc geninfo_all_blocks=1 00:05:50.810 --rc geninfo_unexecuted_blocks=1 00:05:50.810 00:05:50.810 ' 00:05:50.810 18:56:08 event.event_scheduler -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:50.811 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:50.811 --rc genhtml_branch_coverage=1 00:05:50.811 --rc genhtml_function_coverage=1 00:05:50.811 --rc genhtml_legend=1 00:05:50.811 --rc geninfo_all_blocks=1 00:05:50.811 --rc geninfo_unexecuted_blocks=1 00:05:50.811 00:05:50.811 ' 00:05:50.811 18:56:08 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:50.811 18:56:08 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=70147 00:05:50.811 18:56:08 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:50.811 18:56:08 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 70147 00:05:50.811 18:56:08 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:05:50.811 18:56:08 event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 70147 ']' 00:05:50.811 18:56:08 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:50.811 18:56:08 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:50.811 18:56:08 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:50.811 18:56:08 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:50.811 18:56:08 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:50.811 [2024-12-05 18:56:08.294048] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 22.11.4 initialization... 
00:05:50.811 [2024-12-05 18:56:08.294167] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70147 ] 00:05:51.069 [2024-12-05 18:56:08.442542] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:51.069 [2024-12-05 18:56:08.465169] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:51.069 [2024-12-05 18:56:08.465413] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:51.069 [2024-12-05 18:56:08.465579] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:51.069 [2024-12-05 18:56:08.465794] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:51.636 18:56:09 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:51.636 18:56:09 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0 00:05:51.636 18:56:09 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:05:51.636 18:56:09 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:51.636 18:56:09 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:51.636 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:51.636 POWER: Cannot set governor of lcore 0 to userspace 00:05:51.636 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:51.636 POWER: Cannot set governor of lcore 0 to performance 00:05:51.636 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:51.636 POWER: Cannot set governor of lcore 0 to userspace 00:05:51.636 GUEST_CHANNEL: Unable to to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:05:51.636 POWER: Unable to set Power Management Environment for lcore 0 00:05:51.636 [2024-12-05 18:56:09.127099] dpdk_governor.c: 135:_init_core: *ERROR*: Failed to initialize on core0 00:05:51.636 [2024-12-05 18:56:09.127117] dpdk_governor.c: 196:_init: *ERROR*: Failed to initialize on core0 00:05:51.636 [2024-12-05 18:56:09.127135] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:05:51.636 [2024-12-05 18:56:09.127149] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:05:51.636 [2024-12-05 18:56:09.127157] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:05:51.636 [2024-12-05 18:56:09.127175] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:05:51.636 18:56:09 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:51.636 18:56:09 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:05:51.636 18:56:09 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:51.636 18:56:09 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:51.636 [2024-12-05 18:56:09.185006] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
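The RPC sequence behind the trace above is small; a rough equivalent, assuming the scheduler app was started with --wait-for-rpc as shown (so the scheduler can be selected before subsystem init completes), is:

    # Select the dynamic scheduler, then finish initialization
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_set_scheduler dynamic
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init

On this host the cpufreq sysfs governor files are not writable, so the dpdk governor fails to initialize (the POWER errors above) and the dynamic scheduler falls back to its built-in limits: load limit 20, core limit 80, core busy 95, as the set_opts notices report.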
00:05:51.636 18:56:09 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:51.636 18:56:09 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:05:51.636 18:56:09 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:51.636 18:56:09 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:51.636 18:56:09 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:51.636 ************************************ 00:05:51.636 START TEST scheduler_create_thread 00:05:51.636 ************************************ 00:05:51.636 18:56:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread 00:05:51.636 18:56:09 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:05:51.636 18:56:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:51.895 18:56:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:51.895 2 00:05:51.895 18:56:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:51.895 18:56:09 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:05:51.895 18:56:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:51.895 18:56:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:51.895 3 00:05:51.895 18:56:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:51.895 18:56:09 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:05:51.895 18:56:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:51.895 18:56:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:51.895 4 00:05:51.895 18:56:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:51.895 18:56:09 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:05:51.895 18:56:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:51.895 18:56:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:51.895 5 00:05:51.895 18:56:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:51.895 18:56:09 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:05:51.895 18:56:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:51.895 18:56:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:51.895 6 00:05:51.895 18:56:09 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:51.895 18:56:09 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:05:51.895 18:56:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:51.895 18:56:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:51.895 7 00:05:51.895 18:56:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:51.895 18:56:09 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:05:51.895 18:56:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:51.895 18:56:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:51.895 8 00:05:51.895 18:56:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:51.895 18:56:09 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:05:51.895 18:56:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:51.895 18:56:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:51.895 9 00:05:51.895 18:56:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:51.895 18:56:09 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:05:51.895 18:56:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:51.895 18:56:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:51.895 10 00:05:51.895 18:56:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:51.895 18:56:09 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:05:51.895 18:56:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:51.895 18:56:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:51.895 18:56:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:51.895 18:56:09 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:05:51.895 18:56:09 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:05:51.895 18:56:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:51.895 18:56:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:51.895 18:56:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:51.895 18:56:09 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:05:51.895 18:56:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:51.895 18:56:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:51.895 18:56:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:51.895 18:56:09 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:05:51.895 18:56:09 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:05:51.895 18:56:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:51.895 18:56:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:52.830 18:56:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:52.830 00:05:52.830 real 0m1.169s 00:05:52.830 user 0m0.014s 00:05:52.830 sys 0m0.004s 00:05:52.830 18:56:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:52.830 18:56:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:52.830 ************************************ 00:05:52.830 END TEST scheduler_create_thread 00:05:52.830 ************************************ 00:05:53.088 18:56:10 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:05:53.088 18:56:10 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 70147 00:05:53.088 18:56:10 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 70147 ']' 00:05:53.088 18:56:10 event.event_scheduler -- common/autotest_common.sh@958 -- # kill -0 70147 00:05:53.088 18:56:10 event.event_scheduler -- common/autotest_common.sh@959 -- # uname 00:05:53.088 18:56:10 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:53.088 18:56:10 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70147 00:05:53.088 killing process with pid 70147 00:05:53.088 18:56:10 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:05:53.088 18:56:10 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:05:53.088 18:56:10 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70147' 00:05:53.088 18:56:10 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 70147 00:05:53.088 18:56:10 event.event_scheduler -- common/autotest_common.sh@978 -- # wait 70147 00:05:53.346 [2024-12-05 18:56:10.842892] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
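The scheduler_create_thread test traced above exercises the test-only scheduler_plugin entirely over SPDK's RPC socket: threads are created pinned or unpinned with a given active percentage, one is retuned with scheduler_thread_set_active, and one is deleted again. A minimal sketch of that same sequence, assuming scripts/rpc.py can import the plugin and that scheduler_thread_create prints the new thread id (as the thread_id=11 capture above suggests):

    # Sketch only -- drive the test's scheduler_plugin over the default RPC socket.
    RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin scheduler_plugin"

    $RPC scheduler_thread_create -n active_pinned -m 0x1 -a 100   # 100% active, pinned to core 0
    tid=$($RPC scheduler_thread_create -n half_active -a 0)       # starts idle; capture its id
    $RPC scheduler_thread_set_active "$tid" 50                    # bump it to 50% active
    $RPC scheduler_thread_delete "$tid"                           # and remove it again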
00:05:53.606 00:05:53.606 real 0m2.915s 00:05:53.606 user 0m5.074s 00:05:53.606 sys 0m0.302s 00:05:53.606 18:56:10 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:53.606 ************************************ 00:05:53.606 END TEST event_scheduler 00:05:53.606 ************************************ 00:05:53.606 18:56:10 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:53.606 18:56:11 event -- event/event.sh@51 -- # modprobe -n nbd 00:05:53.606 18:56:11 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:05:53.606 18:56:11 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:53.606 18:56:11 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:53.606 18:56:11 event -- common/autotest_common.sh@10 -- # set +x 00:05:53.606 ************************************ 00:05:53.606 START TEST app_repeat 00:05:53.606 ************************************ 00:05:53.606 18:56:11 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test 00:05:53.606 18:56:11 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:53.606 18:56:11 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:53.606 18:56:11 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:05:53.606 18:56:11 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:53.606 18:56:11 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:05:53.606 18:56:11 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:05:53.606 18:56:11 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:05:53.606 Process app_repeat pid: 70225 00:05:53.606 spdk_app_start Round 0 00:05:53.606 18:56:11 event.app_repeat -- event/event.sh@19 -- # repeat_pid=70225 00:05:53.606 18:56:11 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:05:53.606 18:56:11 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 70225' 00:05:53.606 18:56:11 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:53.606 18:56:11 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:05:53.606 18:56:11 event.app_repeat -- event/event.sh@25 -- # waitforlisten 70225 /var/tmp/spdk-nbd.sock 00:05:53.606 18:56:11 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 70225 ']' 00:05:53.606 18:56:11 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:53.606 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:53.606 18:56:11 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:53.606 18:56:11 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:05:53.606 18:56:11 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:53.606 18:56:11 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:53.606 18:56:11 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:53.606 [2024-12-05 18:56:11.052365] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 22.11.4 initialization... 
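Before app_repeat starts, event.sh probes for the kernel nbd module with a dry-run modprobe (-n) and only loads it for real inside the test itself. A condensed sketch of that guard (the echo message is illustrative):

    # Condensed sketch of the nbd-module guard around the app_repeat test.
    if modprobe -n nbd; then          # dry run: would the module load?
        modprobe nbd                  # load it for real before exporting nbd devices
    else
        echo "nbd module unavailable, skipping app_repeat" >&2
    fi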
00:05:53.606 [2024-12-05 18:56:11.052479] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70225 ] 00:05:53.865 [2024-12-05 18:56:11.198419] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:53.865 [2024-12-05 18:56:11.219137] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:53.865 [2024-12-05 18:56:11.219175] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:54.528 18:56:11 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:54.528 18:56:11 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:54.528 18:56:11 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:54.528 Malloc0 00:05:54.788 18:56:12 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:54.788 Malloc1 00:05:54.788 18:56:12 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:54.788 18:56:12 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:54.788 18:56:12 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:54.788 18:56:12 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:54.788 18:56:12 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:54.788 18:56:12 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:54.788 18:56:12 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:54.788 18:56:12 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:54.788 18:56:12 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:54.788 18:56:12 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:54.788 18:56:12 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:54.788 18:56:12 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:54.788 18:56:12 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:54.788 18:56:12 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:54.788 18:56:12 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:54.788 18:56:12 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:55.047 /dev/nbd0 00:05:55.047 18:56:12 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:55.047 18:56:12 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:55.047 18:56:12 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:05:55.047 18:56:12 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:55.047 18:56:12 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:55.047 18:56:12 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:55.047 18:56:12 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:05:55.047 18:56:12 event.app_repeat -- 
common/autotest_common.sh@877 -- # break 00:05:55.047 18:56:12 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:55.047 18:56:12 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:55.047 18:56:12 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:55.047 1+0 records in 00:05:55.047 1+0 records out 00:05:55.047 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000357672 s, 11.5 MB/s 00:05:55.047 18:56:12 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:55.047 18:56:12 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:55.047 18:56:12 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:55.047 18:56:12 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:55.048 18:56:12 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:55.048 18:56:12 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:55.048 18:56:12 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:55.048 18:56:12 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:55.308 /dev/nbd1 00:05:55.308 18:56:12 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:55.308 18:56:12 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:55.308 18:56:12 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:55.308 18:56:12 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:55.308 18:56:12 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:55.308 18:56:12 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:55.308 18:56:12 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:55.308 18:56:12 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:55.308 18:56:12 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:55.308 18:56:12 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:55.308 18:56:12 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:55.308 1+0 records in 00:05:55.308 1+0 records out 00:05:55.308 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000206151 s, 19.9 MB/s 00:05:55.308 18:56:12 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:55.308 18:56:12 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:55.308 18:56:12 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:55.308 18:56:12 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:55.308 18:56:12 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:55.308 18:56:12 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:55.308 18:56:12 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:55.308 18:56:12 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:55.308 18:56:12 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 
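The waitfornbd helper traced above first polls /proc/partitions for the new device name, then reads a single 4 KiB block with O_DIRECT and checks that the copied file is non-empty before declaring the export usable. A condensed sketch of that readiness check (the retry count, sleep interval and temp path are illustrative):

    # Condensed sketch of a waitfornbd-style readiness check.
    wait_for_nbd() {
        local name=$1 tmp=/tmp/nbdtest i rc
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$name" /proc/partitions && break    # device registered yet?
            sleep 0.1
        done
        dd if=/dev/"$name" of="$tmp" bs=4096 count=1 iflag=direct   # one direct-I/O block
        [ "$(stat -c %s "$tmp")" != 0 ]                             # did anything come back?
        rc=$?
        rm -f "$tmp"
        return $rc
    }
    wait_for_nbd nbd0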
00:05:55.308 18:56:12 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:55.568 18:56:12 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:55.568 { 00:05:55.568 "nbd_device": "/dev/nbd0", 00:05:55.568 "bdev_name": "Malloc0" 00:05:55.568 }, 00:05:55.568 { 00:05:55.568 "nbd_device": "/dev/nbd1", 00:05:55.568 "bdev_name": "Malloc1" 00:05:55.568 } 00:05:55.568 ]' 00:05:55.568 18:56:12 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:55.568 { 00:05:55.568 "nbd_device": "/dev/nbd0", 00:05:55.568 "bdev_name": "Malloc0" 00:05:55.568 }, 00:05:55.568 { 00:05:55.568 "nbd_device": "/dev/nbd1", 00:05:55.568 "bdev_name": "Malloc1" 00:05:55.568 } 00:05:55.568 ]' 00:05:55.568 18:56:12 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:55.568 18:56:12 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:55.568 /dev/nbd1' 00:05:55.568 18:56:12 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:55.568 18:56:12 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:55.568 /dev/nbd1' 00:05:55.568 18:56:12 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:55.568 18:56:12 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:55.568 18:56:12 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:55.568 18:56:12 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:55.568 18:56:12 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:55.568 18:56:12 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:55.568 18:56:12 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:55.568 18:56:12 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:55.568 18:56:12 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:55.568 18:56:12 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:55.568 18:56:12 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:55.568 256+0 records in 00:05:55.568 256+0 records out 00:05:55.568 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00773109 s, 136 MB/s 00:05:55.568 18:56:12 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:55.568 18:56:12 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:55.568 256+0 records in 00:05:55.568 256+0 records out 00:05:55.568 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0210685 s, 49.8 MB/s 00:05:55.568 18:56:13 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:55.568 18:56:13 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:55.568 256+0 records in 00:05:55.568 256+0 records out 00:05:55.568 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0180092 s, 58.2 MB/s 00:05:55.568 18:56:13 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:55.568 18:56:13 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:55.568 18:56:13 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:55.568 18:56:13 
event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:55.568 18:56:13 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:55.568 18:56:13 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:55.568 18:56:13 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:55.568 18:56:13 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:55.568 18:56:13 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:55.568 18:56:13 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:55.568 18:56:13 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:55.568 18:56:13 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:55.568 18:56:13 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:55.568 18:56:13 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:55.568 18:56:13 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:55.568 18:56:13 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:55.568 18:56:13 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:55.568 18:56:13 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:55.568 18:56:13 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:55.830 18:56:13 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:55.830 18:56:13 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:55.830 18:56:13 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:55.830 18:56:13 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:55.830 18:56:13 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:55.830 18:56:13 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:55.830 18:56:13 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:55.830 18:56:13 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:55.830 18:56:13 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:55.830 18:56:13 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:55.830 18:56:13 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:55.830 18:56:13 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:55.830 18:56:13 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:55.830 18:56:13 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:55.830 18:56:13 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:55.830 18:56:13 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:55.830 18:56:13 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:55.830 18:56:13 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:56.091 18:56:13 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:56.091 18:56:13 event.app_repeat -- 
bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:56.091 18:56:13 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:56.091 18:56:13 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:56.091 18:56:13 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:56.091 18:56:13 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:56.091 18:56:13 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:56.091 18:56:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:56.091 18:56:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:56.091 18:56:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:56.091 18:56:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:56.091 18:56:13 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:56.091 18:56:13 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:56.091 18:56:13 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:56.091 18:56:13 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:56.091 18:56:13 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:56.352 18:56:13 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:56.613 [2024-12-05 18:56:13.951723] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:56.613 [2024-12-05 18:56:13.972115] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:56.613 [2024-12-05 18:56:13.972197] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:56.613 [2024-12-05 18:56:14.008215] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:56.613 [2024-12-05 18:56:14.008292] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:59.896 spdk_app_start Round 1 00:05:59.896 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:59.896 18:56:16 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:59.896 18:56:16 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:05:59.896 18:56:16 event.app_repeat -- event/event.sh@25 -- # waitforlisten 70225 /var/tmp/spdk-nbd.sock 00:05:59.896 18:56:16 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 70225 ']' 00:05:59.896 18:56:16 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:59.896 18:56:16 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:59.896 18:56:16 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
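Each round ends the same way: both nbd exports are stopped over the RPC socket, nbd_get_disks is checked to come back empty, and the target is told to exit with spdk_kill_instance before the next spdk_app_start round begins. Roughly, using the same socket path as above:

    # Rough sketch of the per-round teardown over /var/tmp/spdk-nbd.sock.
    RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"

    for dev in /dev/nbd0 /dev/nbd1; do
        $RPC nbd_stop_disk "$dev"                        # detach the export
    done

    count=$($RPC nbd_get_disks | grep -c /dev/nbd)       # '[]' -> count 0
    [ "$count" -eq 0 ]                                   # nothing should be left exported

    $RPC spdk_kill_instance SIGTERM                      # ask the app_repeat target to exit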
00:05:59.896 18:56:16 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:59.896 18:56:16 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:59.896 18:56:17 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:59.896 18:56:17 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:59.897 18:56:17 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:59.897 Malloc0 00:05:59.897 18:56:17 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:00.156 Malloc1 00:06:00.156 18:56:17 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:00.156 18:56:17 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:00.156 18:56:17 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:00.156 18:56:17 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:00.156 18:56:17 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:00.156 18:56:17 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:00.156 18:56:17 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:00.156 18:56:17 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:00.156 18:56:17 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:00.156 18:56:17 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:00.156 18:56:17 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:00.156 18:56:17 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:00.156 18:56:17 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:00.156 18:56:17 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:00.156 18:56:17 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:00.156 18:56:17 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:00.156 /dev/nbd0 00:06:00.156 18:56:17 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:00.156 18:56:17 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:00.156 18:56:17 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:06:00.415 18:56:17 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:00.415 18:56:17 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:00.415 18:56:17 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:00.415 18:56:17 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:06:00.415 18:56:17 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:00.415 18:56:17 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:00.415 18:56:17 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:00.415 18:56:17 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:00.415 1+0 records in 00:06:00.415 1+0 records out 
00:06:00.415 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000434924 s, 9.4 MB/s 00:06:00.415 18:56:17 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:00.415 18:56:17 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:00.415 18:56:17 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:00.415 18:56:17 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:00.415 18:56:17 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:00.415 18:56:17 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:00.415 18:56:17 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:00.415 18:56:17 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:00.415 /dev/nbd1 00:06:00.416 18:56:17 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:00.416 18:56:17 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:00.416 18:56:17 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:06:00.416 18:56:17 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:00.416 18:56:17 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:00.416 18:56:17 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:00.416 18:56:17 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:06:00.416 18:56:17 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:00.416 18:56:17 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:00.416 18:56:17 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:00.416 18:56:17 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:00.416 1+0 records in 00:06:00.416 1+0 records out 00:06:00.416 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000366925 s, 11.2 MB/s 00:06:00.416 18:56:17 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:00.416 18:56:17 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:00.416 18:56:17 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:00.416 18:56:17 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:00.416 18:56:17 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:00.416 18:56:17 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:00.416 18:56:17 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:00.416 18:56:17 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:00.416 18:56:17 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:00.416 18:56:17 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:00.674 18:56:18 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:00.674 { 00:06:00.674 "nbd_device": "/dev/nbd0", 00:06:00.674 "bdev_name": "Malloc0" 00:06:00.674 }, 00:06:00.674 { 00:06:00.674 "nbd_device": "/dev/nbd1", 00:06:00.674 "bdev_name": "Malloc1" 00:06:00.674 } 
00:06:00.674 ]' 00:06:00.674 18:56:18 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:00.674 { 00:06:00.674 "nbd_device": "/dev/nbd0", 00:06:00.674 "bdev_name": "Malloc0" 00:06:00.674 }, 00:06:00.674 { 00:06:00.674 "nbd_device": "/dev/nbd1", 00:06:00.674 "bdev_name": "Malloc1" 00:06:00.674 } 00:06:00.674 ]' 00:06:00.674 18:56:18 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:00.674 18:56:18 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:00.674 /dev/nbd1' 00:06:00.674 18:56:18 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:00.674 /dev/nbd1' 00:06:00.674 18:56:18 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:00.674 18:56:18 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:00.674 18:56:18 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:00.674 18:56:18 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:00.674 18:56:18 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:00.674 18:56:18 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:00.675 18:56:18 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:00.675 18:56:18 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:00.675 18:56:18 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:00.675 18:56:18 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:00.675 18:56:18 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:00.675 18:56:18 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:00.675 256+0 records in 00:06:00.675 256+0 records out 00:06:00.675 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00431936 s, 243 MB/s 00:06:00.675 18:56:18 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:00.675 18:56:18 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:00.933 256+0 records in 00:06:00.933 256+0 records out 00:06:00.933 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0175306 s, 59.8 MB/s 00:06:00.933 18:56:18 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:00.933 18:56:18 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:00.933 256+0 records in 00:06:00.933 256+0 records out 00:06:00.933 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0198957 s, 52.7 MB/s 00:06:00.933 18:56:18 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:00.933 18:56:18 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:00.933 18:56:18 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:00.933 18:56:18 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:00.933 18:56:18 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:00.933 18:56:18 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:00.933 18:56:18 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:00.933 18:56:18 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:00.933 18:56:18 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:00.933 18:56:18 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:00.933 18:56:18 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:00.933 18:56:18 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:00.933 18:56:18 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:00.933 18:56:18 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:00.933 18:56:18 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:00.933 18:56:18 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:00.933 18:56:18 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:00.933 18:56:18 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:00.933 18:56:18 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:00.933 18:56:18 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:00.933 18:56:18 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:00.933 18:56:18 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:00.933 18:56:18 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:00.933 18:56:18 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:00.933 18:56:18 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:00.933 18:56:18 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:00.933 18:56:18 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:00.933 18:56:18 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:00.933 18:56:18 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:01.192 18:56:18 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:01.192 18:56:18 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:01.192 18:56:18 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:01.192 18:56:18 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:01.192 18:56:18 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:01.192 18:56:18 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:01.192 18:56:18 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:01.192 18:56:18 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:01.192 18:56:18 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:01.192 18:56:18 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:01.192 18:56:18 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:01.450 18:56:18 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:01.450 18:56:18 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:01.450 18:56:18 event.app_repeat -- bdev/nbd_common.sh@64 
-- # jq -r '.[] | .nbd_device' 00:06:01.450 18:56:18 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:01.450 18:56:18 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:01.450 18:56:18 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:01.450 18:56:18 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:01.450 18:56:18 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:01.450 18:56:18 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:01.450 18:56:18 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:01.450 18:56:18 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:01.450 18:56:18 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:01.450 18:56:18 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:01.709 18:56:19 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:01.709 [2024-12-05 18:56:19.218594] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:01.709 [2024-12-05 18:56:19.234311] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:01.709 [2024-12-05 18:56:19.234315] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:01.709 [2024-12-05 18:56:19.263985] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:01.709 [2024-12-05 18:56:19.264019] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:04.997 18:56:22 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:04.997 spdk_app_start Round 2 00:06:04.997 18:56:22 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:06:04.997 18:56:22 event.app_repeat -- event/event.sh@25 -- # waitforlisten 70225 /var/tmp/spdk-nbd.sock 00:06:04.997 18:56:22 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 70225 ']' 00:06:04.997 18:56:22 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:04.997 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:04.997 18:56:22 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:04.997 18:56:22 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
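The actual data check inside nbd_rpc_data_verify is the dd/cmp pattern repeated above: 1 MiB of /dev/urandom goes to a temp file, the file is written to each nbd device with O_DIRECT, and cmp then compares the first 1 MiB of each device back against the file. A stripped-down sketch (the temp path is illustrative):

    # Stripped-down sketch of the write-then-verify pass over both nbd devices.
    tmp=/tmp/nbdrandtest
    dd if=/dev/urandom of="$tmp" bs=4096 count=256              # 1 MiB of random data

    for dev in /dev/nbd0 /dev/nbd1; do
        dd if="$tmp" of="$dev" bs=4096 count=256 oflag=direct   # write it out
    done
    for dev in /dev/nbd0 /dev/nbd1; do
        cmp -b -n 1M "$tmp" "$dev"                              # read back and compare
    done
    rm "$tmp"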
00:06:04.997 18:56:22 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:04.997 18:56:22 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:04.997 18:56:22 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:04.997 18:56:22 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:06:04.997 18:56:22 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:04.997 Malloc0 00:06:04.997 18:56:22 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:05.255 Malloc1 00:06:05.255 18:56:22 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:05.255 18:56:22 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:05.255 18:56:22 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:05.255 18:56:22 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:05.255 18:56:22 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:05.255 18:56:22 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:05.255 18:56:22 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:05.255 18:56:22 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:05.255 18:56:22 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:05.255 18:56:22 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:05.255 18:56:22 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:05.255 18:56:22 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:05.255 18:56:22 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:05.255 18:56:22 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:05.255 18:56:22 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:05.255 18:56:22 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:05.514 /dev/nbd0 00:06:05.514 18:56:22 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:05.514 18:56:22 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:05.514 18:56:22 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:06:05.514 18:56:22 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:05.514 18:56:22 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:05.514 18:56:22 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:05.514 18:56:22 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:06:05.514 18:56:22 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:05.514 18:56:22 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:05.514 18:56:22 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:05.514 18:56:22 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:05.514 1+0 records in 00:06:05.514 1+0 records out 
00:06:05.514 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000298844 s, 13.7 MB/s 00:06:05.514 18:56:22 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:05.514 18:56:22 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:05.514 18:56:22 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:05.514 18:56:22 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:05.514 18:56:22 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:05.514 18:56:22 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:05.514 18:56:22 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:05.514 18:56:22 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:05.772 /dev/nbd1 00:06:05.772 18:56:23 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:05.772 18:56:23 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:05.772 18:56:23 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:06:05.772 18:56:23 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:05.772 18:56:23 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:05.772 18:56:23 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:05.772 18:56:23 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:06:05.772 18:56:23 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:05.772 18:56:23 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:05.772 18:56:23 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:05.772 18:56:23 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:05.772 1+0 records in 00:06:05.772 1+0 records out 00:06:05.772 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000287749 s, 14.2 MB/s 00:06:05.772 18:56:23 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:05.772 18:56:23 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:05.772 18:56:23 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:05.772 18:56:23 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:05.772 18:56:23 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:05.772 18:56:23 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:05.772 18:56:23 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:05.772 18:56:23 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:05.772 18:56:23 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:05.772 18:56:23 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:06.031 18:56:23 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:06.031 { 00:06:06.031 "nbd_device": "/dev/nbd0", 00:06:06.031 "bdev_name": "Malloc0" 00:06:06.031 }, 00:06:06.031 { 00:06:06.031 "nbd_device": "/dev/nbd1", 00:06:06.031 "bdev_name": "Malloc1" 00:06:06.031 } 
00:06:06.031 ]' 00:06:06.031 18:56:23 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:06.031 { 00:06:06.031 "nbd_device": "/dev/nbd0", 00:06:06.031 "bdev_name": "Malloc0" 00:06:06.031 }, 00:06:06.031 { 00:06:06.031 "nbd_device": "/dev/nbd1", 00:06:06.031 "bdev_name": "Malloc1" 00:06:06.031 } 00:06:06.031 ]' 00:06:06.031 18:56:23 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:06.031 18:56:23 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:06.031 /dev/nbd1' 00:06:06.031 18:56:23 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:06.031 /dev/nbd1' 00:06:06.031 18:56:23 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:06.031 18:56:23 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:06.031 18:56:23 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:06.031 18:56:23 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:06.031 18:56:23 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:06.031 18:56:23 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:06.031 18:56:23 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:06.031 18:56:23 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:06.031 18:56:23 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:06.031 18:56:23 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:06.031 18:56:23 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:06.031 18:56:23 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:06.031 256+0 records in 00:06:06.031 256+0 records out 00:06:06.031 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00826341 s, 127 MB/s 00:06:06.031 18:56:23 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:06.031 18:56:23 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:06.031 256+0 records in 00:06:06.031 256+0 records out 00:06:06.031 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0172606 s, 60.7 MB/s 00:06:06.031 18:56:23 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:06.031 18:56:23 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:06.031 256+0 records in 00:06:06.031 256+0 records out 00:06:06.031 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0156117 s, 67.2 MB/s 00:06:06.031 18:56:23 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:06.031 18:56:23 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:06.031 18:56:23 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:06.031 18:56:23 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:06.031 18:56:23 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:06.031 18:56:23 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:06.031 18:56:23 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:06.031 18:56:23 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:06.031 18:56:23 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:06.031 18:56:23 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:06.031 18:56:23 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:06.031 18:56:23 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:06.031 18:56:23 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:06.031 18:56:23 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:06.031 18:56:23 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:06.031 18:56:23 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:06.031 18:56:23 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:06.031 18:56:23 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:06.031 18:56:23 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:06.290 18:56:23 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:06.290 18:56:23 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:06.290 18:56:23 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:06.290 18:56:23 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:06.290 18:56:23 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:06.290 18:56:23 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:06.290 18:56:23 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:06.290 18:56:23 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:06.290 18:56:23 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:06.290 18:56:23 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:06.548 18:56:23 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:06.548 18:56:23 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:06.548 18:56:23 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:06.548 18:56:23 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:06.548 18:56:23 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:06.548 18:56:23 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:06.548 18:56:23 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:06.548 18:56:23 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:06.548 18:56:23 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:06.548 18:56:23 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:06.548 18:56:23 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:06.807 18:56:24 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:06.807 18:56:24 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:06.807 18:56:24 event.app_repeat -- bdev/nbd_common.sh@64 
-- # jq -r '.[] | .nbd_device' 00:06:06.807 18:56:24 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:06.807 18:56:24 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:06.807 18:56:24 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:06.807 18:56:24 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:06.807 18:56:24 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:06.807 18:56:24 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:06.807 18:56:24 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:06.807 18:56:24 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:06.807 18:56:24 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:06.807 18:56:24 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:07.065 18:56:24 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:07.065 [2024-12-05 18:56:24.444514] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:07.065 [2024-12-05 18:56:24.460395] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:07.065 [2024-12-05 18:56:24.460566] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:07.065 [2024-12-05 18:56:24.489693] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:07.065 [2024-12-05 18:56:24.489734] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:10.348 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:10.348 18:56:27 event.app_repeat -- event/event.sh@38 -- # waitforlisten 70225 /var/tmp/spdk-nbd.sock 00:06:10.348 18:56:27 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 70225 ']' 00:06:10.348 18:56:27 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:10.348 18:56:27 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:10.348 18:56:27 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
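killprocess, which already shut down the scheduler app earlier and is about to do the same for the app_repeat target (pid 70225), probes the pid with kill -0, reads the process name with ps, checks it against sudo, then kills and reaps it. A trimmed-down sketch of that logic:

    # Trimmed-down sketch of the killprocess helper.
    killprocess_sketch() {
        local pid=$1
        kill -0 "$pid" || return 1                   # still alive?
        local name
        name=$(ps --no-headers -o comm= "$pid")      # e.g. reactor_0
        # the real helper special-cases name == sudo; the runs traced here are plain reactors
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"   # reap it; wait only applies when $pid is a child of this shell
    }
    killprocess_sketch 70225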
00:06:10.348 18:56:27 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:10.348 18:56:27 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:10.348 18:56:27 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:10.348 18:56:27 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:06:10.348 18:56:27 event.app_repeat -- event/event.sh@39 -- # killprocess 70225 00:06:10.348 18:56:27 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 70225 ']' 00:06:10.348 18:56:27 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 70225 00:06:10.348 18:56:27 event.app_repeat -- common/autotest_common.sh@959 -- # uname 00:06:10.348 18:56:27 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:10.348 18:56:27 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70225 00:06:10.348 killing process with pid 70225 00:06:10.348 18:56:27 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:10.348 18:56:27 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:10.348 18:56:27 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70225' 00:06:10.348 18:56:27 event.app_repeat -- common/autotest_common.sh@973 -- # kill 70225 00:06:10.348 18:56:27 event.app_repeat -- common/autotest_common.sh@978 -- # wait 70225 00:06:10.348 spdk_app_start is called in Round 0. 00:06:10.348 Shutdown signal received, stop current app iteration 00:06:10.348 Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 22.11.4 reinitialization... 00:06:10.348 spdk_app_start is called in Round 1. 00:06:10.348 Shutdown signal received, stop current app iteration 00:06:10.348 Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 22.11.4 reinitialization... 00:06:10.348 spdk_app_start is called in Round 2. 00:06:10.348 Shutdown signal received, stop current app iteration 00:06:10.348 Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 22.11.4 reinitialization... 00:06:10.348 spdk_app_start is called in Round 3. 00:06:10.348 Shutdown signal received, stop current app iteration 00:06:10.348 18:56:27 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:06:10.348 18:56:27 event.app_repeat -- event/event.sh@42 -- # return 0 00:06:10.348 00:06:10.348 real 0m16.704s 00:06:10.348 user 0m37.403s 00:06:10.348 sys 0m1.914s 00:06:10.348 18:56:27 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:10.348 ************************************ 00:06:10.348 END TEST app_repeat 00:06:10.348 ************************************ 00:06:10.348 18:56:27 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:10.348 18:56:27 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:06:10.348 18:56:27 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:06:10.348 18:56:27 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:10.348 18:56:27 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:10.348 18:56:27 event -- common/autotest_common.sh@10 -- # set +x 00:06:10.348 ************************************ 00:06:10.348 START TEST cpu_locks 00:06:10.348 ************************************ 00:06:10.348 18:56:27 event.cpu_locks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:06:10.348 * Looking for test storage... 
00:06:10.348 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:06:10.348 18:56:27 event.cpu_locks -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:10.348 18:56:27 event.cpu_locks -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:10.348 18:56:27 event.cpu_locks -- common/autotest_common.sh@1711 -- # lcov --version 00:06:10.348 18:56:27 event.cpu_locks -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:10.348 18:56:27 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:10.348 18:56:27 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:10.348 18:56:27 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:10.348 18:56:27 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:06:10.348 18:56:27 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:06:10.348 18:56:27 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:06:10.348 18:56:27 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:06:10.348 18:56:27 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:06:10.348 18:56:27 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:06:10.348 18:56:27 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:06:10.348 18:56:27 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:10.610 18:56:27 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:06:10.610 18:56:27 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:06:10.610 18:56:27 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:10.610 18:56:27 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:10.610 18:56:27 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:06:10.610 18:56:27 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:06:10.610 18:56:27 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:10.610 18:56:27 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:06:10.610 18:56:27 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:06:10.610 18:56:27 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:06:10.610 18:56:27 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:06:10.610 18:56:27 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:10.610 18:56:27 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:06:10.610 18:56:27 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:06:10.610 18:56:27 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:10.610 18:56:27 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:10.610 18:56:27 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:06:10.610 18:56:27 event.cpu_locks -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:10.610 18:56:27 event.cpu_locks -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:10.610 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:10.610 --rc genhtml_branch_coverage=1 00:06:10.610 --rc genhtml_function_coverage=1 00:06:10.610 --rc genhtml_legend=1 00:06:10.610 --rc geninfo_all_blocks=1 00:06:10.610 --rc geninfo_unexecuted_blocks=1 00:06:10.610 00:06:10.610 ' 00:06:10.610 18:56:27 event.cpu_locks -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:10.610 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:10.610 --rc genhtml_branch_coverage=1 00:06:10.610 --rc genhtml_function_coverage=1 
00:06:10.610 --rc genhtml_legend=1 00:06:10.610 --rc geninfo_all_blocks=1 00:06:10.610 --rc geninfo_unexecuted_blocks=1 00:06:10.610 00:06:10.610 ' 00:06:10.610 18:56:27 event.cpu_locks -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:10.610 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:10.610 --rc genhtml_branch_coverage=1 00:06:10.610 --rc genhtml_function_coverage=1 00:06:10.610 --rc genhtml_legend=1 00:06:10.610 --rc geninfo_all_blocks=1 00:06:10.610 --rc geninfo_unexecuted_blocks=1 00:06:10.610 00:06:10.610 ' 00:06:10.610 18:56:27 event.cpu_locks -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:10.610 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:10.610 --rc genhtml_branch_coverage=1 00:06:10.610 --rc genhtml_function_coverage=1 00:06:10.610 --rc genhtml_legend=1 00:06:10.610 --rc geninfo_all_blocks=1 00:06:10.610 --rc geninfo_unexecuted_blocks=1 00:06:10.610 00:06:10.610 ' 00:06:10.610 18:56:27 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:06:10.610 18:56:27 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:06:10.610 18:56:27 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:06:10.610 18:56:27 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:06:10.610 18:56:27 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:10.610 18:56:27 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:10.610 18:56:27 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:10.610 ************************************ 00:06:10.610 START TEST default_locks 00:06:10.610 ************************************ 00:06:10.610 18:56:27 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:06:10.610 18:56:27 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=70645 00:06:10.610 18:56:27 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 70645 00:06:10.610 18:56:27 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 70645 ']' 00:06:10.610 18:56:27 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:10.610 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:10.610 18:56:27 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:10.610 18:56:27 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:10.610 18:56:27 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:10.610 18:56:27 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:10.610 18:56:27 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:10.610 [2024-12-05 18:56:28.003614] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 22.11.4 initialization... 
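The lt 1.15 2 trace above is the suite deciding whether the installed lcov is older than 2.x before choosing coverage flags: cmp_versions splits both version strings on '.', '-' and ':' and compares them field by field. A reduced sketch of that comparison, assuming purely numeric fields:

    # Sketch of scripts/common.sh's version compare: succeed when $1 < $2.
    lt() {
        local -a ver1 ver2
        IFS=.-: read -ra ver1 <<< "$1"
        IFS=.-: read -ra ver2 <<< "$2"
        local v len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < len; v++ )); do
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
        done
        return 1    # equal versions are not "less than"
    }
    lt 1.15 2 && echo "lcov < 2: use the legacy LCOV_OPTS"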
00:06:10.610 [2024-12-05 18:56:28.003745] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70645 ] 00:06:10.610 [2024-12-05 18:56:28.147475] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:10.872 [2024-12-05 18:56:28.175151] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:11.445 18:56:28 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:11.445 18:56:28 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:06:11.445 18:56:28 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 70645 00:06:11.445 18:56:28 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 70645 00:06:11.445 18:56:28 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:11.708 18:56:29 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 70645 00:06:11.708 18:56:29 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 70645 ']' 00:06:11.708 18:56:29 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 70645 00:06:11.708 18:56:29 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:06:11.708 18:56:29 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:11.708 18:56:29 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70645 00:06:11.708 18:56:29 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:11.708 killing process with pid 70645 00:06:11.708 18:56:29 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:11.708 18:56:29 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70645' 00:06:11.708 18:56:29 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 70645 00:06:11.708 18:56:29 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 70645 00:06:11.969 18:56:29 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 70645 00:06:11.969 18:56:29 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:06:11.969 18:56:29 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 70645 00:06:11.969 18:56:29 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:06:11.969 18:56:29 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:11.969 18:56:29 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:06:11.969 18:56:29 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:11.969 18:56:29 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 70645 00:06:11.969 18:56:29 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 70645 ']' 00:06:11.969 18:56:29 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:11.969 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
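locks_exist is the central assertion of cpu_locks.sh, traced right above: a target started with -m 0x1 takes a POSIX file lock (/var/tmp/spdk_cpu_lock_000 for core 0), and lslocks run against its pid must report it. The check is small enough to quote as a sketch:

    # A pid holds its CPU core locks iff lslocks reports an spdk_cpu_lock file.
    locks_exist() {
        lslocks -p "$1" | grep -q spdk_cpu_lock
    }
    locks_exist 70645 && echo "target still holds its core lock"   # pid from this run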
00:06:11.969 18:56:29 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:11.969 18:56:29 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:11.969 18:56:29 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:11.969 18:56:29 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:11.969 ERROR: process (pid: 70645) is no longer running 00:06:11.969 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (70645) - No such process 00:06:11.969 18:56:29 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:11.969 18:56:29 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1 00:06:11.969 18:56:29 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 00:06:11.969 18:56:29 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:11.969 18:56:29 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:11.969 18:56:29 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:11.969 18:56:29 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:06:11.969 18:56:29 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:11.969 18:56:29 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:06:11.969 ************************************ 00:06:11.969 END TEST default_locks 00:06:11.969 ************************************ 00:06:11.969 18:56:29 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:11.969 00:06:11.969 real 0m1.399s 00:06:11.969 user 0m1.389s 00:06:11.969 sys 0m0.467s 00:06:11.969 18:56:29 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:11.969 18:56:29 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:11.969 18:56:29 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:06:11.969 18:56:29 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:11.969 18:56:29 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:11.969 18:56:29 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:11.969 ************************************ 00:06:11.969 START TEST default_locks_via_rpc 00:06:11.969 ************************************ 00:06:11.969 18:56:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc 00:06:11.969 18:56:29 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=70692 00:06:11.969 18:56:29 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 70692 00:06:11.969 18:56:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 70692 ']' 00:06:11.969 18:56:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:11.969 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
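After killprocess succeeds, the suite asserts the negative: NOT waitforlisten 70645 must fail, which is what produces the "No such process" output above. NOT is essentially an exit-status inverter; the real helper additionally screens out signal deaths (es > 128), which this sketch omits:

    # Minimal NOT: succeed only when the wrapped command fails.
    NOT() {
        local es=0
        "$@" || es=$?
        (( es != 0 ))
    }
    NOT kill -0 70645 && echo "pid 70645 is gone, as expected"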
00:06:11.969 18:56:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:11.969 18:56:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:11.969 18:56:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:11.969 18:56:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:11.969 18:56:29 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:11.969 [2024-12-05 18:56:29.465890] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 22.11.4 initialization... 00:06:11.969 [2024-12-05 18:56:29.466024] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70692 ] 00:06:12.232 [2024-12-05 18:56:29.611015] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:12.232 [2024-12-05 18:56:29.630087] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:12.856 18:56:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:12.856 18:56:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:12.856 18:56:30 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:06:12.856 18:56:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:12.856 18:56:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:12.856 18:56:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:12.856 18:56:30 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:06:12.856 18:56:30 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:12.856 18:56:30 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:06:12.856 18:56:30 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:12.856 18:56:30 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:06:12.856 18:56:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:12.856 18:56:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:12.856 18:56:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:12.856 18:56:30 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 70692 00:06:12.856 18:56:30 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 70692 00:06:12.856 18:56:30 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:13.119 18:56:30 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 70692 00:06:13.119 18:56:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 70692 ']' 00:06:13.119 18:56:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 70692 00:06:13.119 18:56:30 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname 00:06:13.119 18:56:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:13.119 18:56:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70692 00:06:13.119 18:56:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:13.119 18:56:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:13.119 killing process with pid 70692 00:06:13.119 18:56:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70692' 00:06:13.119 18:56:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 70692 00:06:13.119 18:56:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 70692 00:06:13.381 00:06:13.381 real 0m1.377s 00:06:13.381 user 0m1.438s 00:06:13.381 sys 0m0.382s 00:06:13.381 18:56:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:13.381 ************************************ 00:06:13.381 END TEST default_locks_via_rpc 00:06:13.381 ************************************ 00:06:13.381 18:56:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:13.381 18:56:30 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:06:13.381 18:56:30 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:13.381 18:56:30 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:13.381 18:56:30 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:13.381 ************************************ 00:06:13.381 START TEST non_locking_app_on_locked_coremask 00:06:13.381 ************************************ 00:06:13.381 18:56:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask 00:06:13.381 18:56:30 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=70739 00:06:13.381 18:56:30 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 70739 /var/tmp/spdk.sock 00:06:13.381 18:56:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 70739 ']' 00:06:13.381 18:56:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:13.381 18:56:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:13.381 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:13.381 18:56:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
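default_locks_via_rpc, which just finished above, toggles the same lock at runtime instead of at startup: framework_disable_cpumask_locks releases the core lock files of a live target and framework_enable_cpumask_locks re-acquires them, after which lslocks sees the lock again. A sketch of that round trip, assuming spdk_tgt_pid holds the target's pid:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    "$rpc" framework_disable_cpumask_locks                       # drop spdk_cpu_lock_* at runtime
    lslocks -p "$spdk_tgt_pid" | grep -c spdk_cpu_lock || true   # expect 0
    "$rpc" framework_enable_cpumask_locks                        # re-claim the cores
    lslocks -p "$spdk_tgt_pid" | grep -q spdk_cpu_lock && echo "lock re-acquired"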
00:06:13.381 18:56:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:13.381 18:56:30 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:13.381 18:56:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:13.381 [2024-12-05 18:56:30.895016] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 22.11.4 initialization... 00:06:13.381 [2024-12-05 18:56:30.895135] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70739 ] 00:06:13.643 [2024-12-05 18:56:31.040133] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:13.643 [2024-12-05 18:56:31.059361] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:14.216 18:56:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:14.216 18:56:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:14.216 18:56:31 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=70755 00:06:14.216 18:56:31 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 70755 /var/tmp/spdk2.sock 00:06:14.216 18:56:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 70755 ']' 00:06:14.216 18:56:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:14.216 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:14.216 18:56:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:14.216 18:56:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:14.216 18:56:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:14.216 18:56:31 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:06:14.216 18:56:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:14.478 [2024-12-05 18:56:31.808305] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 22.11.4 initialization... 00:06:14.478 [2024-12-05 18:56:31.808425] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70755 ] 00:06:14.478 [2024-12-05 18:56:31.965244] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
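The "CPU core locks deactivated" notice above is the escape hatch under test in non_locking_app_on_locked_coremask: the first target locks core 0, and a second target on the same mask still starts because it opts out with --disable-cpumask-locks and talks on a separate RPC socket. A sketch of the launch pair, with waitforlisten from autotest_common.sh assumed available:

    spdk_tgt=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
    "$spdk_tgt" -m 0x1 &                             # claims /var/tmp/spdk_cpu_lock_000
    pid1=$!
    "$spdk_tgt" -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &
    pid2=$!                                          # same core, takes no lock
    waitforlisten "$pid1" && waitforlisten "$pid2" /var/tmp/spdk2.sock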
00:06:14.478 [2024-12-05 18:56:31.965300] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:14.478 [2024-12-05 18:56:32.004552] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:15.423 18:56:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:15.423 18:56:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:15.423 18:56:32 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 70739 00:06:15.423 18:56:32 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 70739 00:06:15.423 18:56:32 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:15.423 18:56:32 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 70739 00:06:15.423 18:56:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 70739 ']' 00:06:15.423 18:56:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 70739 00:06:15.423 18:56:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:15.423 18:56:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:15.423 18:56:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70739 00:06:15.423 18:56:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:15.423 killing process with pid 70739 00:06:15.423 18:56:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:15.423 18:56:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70739' 00:06:15.423 18:56:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 70739 00:06:15.423 18:56:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 70739 00:06:15.992 18:56:33 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 70755 00:06:15.992 18:56:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 70755 ']' 00:06:15.992 18:56:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 70755 00:06:15.992 18:56:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:15.992 18:56:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:15.992 18:56:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70755 00:06:15.992 18:56:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:15.992 killing process with pid 70755 00:06:15.992 18:56:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:15.992 18:56:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70755' 00:06:15.992 18:56:33 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 70755 00:06:15.992 18:56:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 70755 00:06:16.250 00:06:16.250 real 0m2.859s 00:06:16.250 user 0m3.182s 00:06:16.250 sys 0m0.737s 00:06:16.250 18:56:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:16.250 ************************************ 00:06:16.250 END TEST non_locking_app_on_locked_coremask 00:06:16.250 ************************************ 00:06:16.250 18:56:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:16.250 18:56:33 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:06:16.250 18:56:33 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:16.250 18:56:33 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:16.250 18:56:33 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:16.250 ************************************ 00:06:16.250 START TEST locking_app_on_unlocked_coremask 00:06:16.250 ************************************ 00:06:16.250 18:56:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask 00:06:16.250 18:56:33 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=70813 00:06:16.250 18:56:33 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 70813 /var/tmp/spdk.sock 00:06:16.250 18:56:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 70813 ']' 00:06:16.250 18:56:33 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:06:16.250 18:56:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:16.250 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:16.250 18:56:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:16.250 18:56:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:16.250 18:56:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:16.250 18:56:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:16.508 [2024-12-05 18:56:33.811652] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 22.11.4 initialization... 00:06:16.508 [2024-12-05 18:56:33.811776] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70813 ] 00:06:16.508 [2024-12-05 18:56:33.953233] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:16.508 [2024-12-05 18:56:33.953279] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:16.508 [2024-12-05 18:56:33.970588] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:17.075 18:56:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:17.075 18:56:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:17.075 18:56:34 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=70829 00:06:17.075 18:56:34 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 70829 /var/tmp/spdk2.sock 00:06:17.075 18:56:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 70829 ']' 00:06:17.075 18:56:34 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:17.075 18:56:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:17.075 18:56:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:17.075 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:17.075 18:56:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:17.075 18:56:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:17.075 18:56:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:17.334 [2024-12-05 18:56:34.672591] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 22.11.4 initialization... 
00:06:17.334 [2024-12-05 18:56:34.672728] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70829 ] 00:06:17.334 [2024-12-05 18:56:34.821302] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:17.334 [2024-12-05 18:56:34.863433] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:18.266 18:56:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:18.266 18:56:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:18.266 18:56:35 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 70829 00:06:18.266 18:56:35 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 70829 00:06:18.266 18:56:35 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:18.266 18:56:35 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 70813 00:06:18.266 18:56:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 70813 ']' 00:06:18.266 18:56:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 70813 00:06:18.266 18:56:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:18.266 18:56:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:18.266 18:56:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70813 00:06:18.524 18:56:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:18.524 18:56:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:18.524 killing process with pid 70813 00:06:18.524 18:56:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70813' 00:06:18.524 18:56:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 70813 00:06:18.524 18:56:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 70813 00:06:18.782 18:56:36 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 70829 00:06:18.782 18:56:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 70829 ']' 00:06:18.782 18:56:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 70829 00:06:18.782 18:56:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:18.782 18:56:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:18.782 18:56:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70829 00:06:18.782 18:56:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:18.782 18:56:36 event.cpu_locks.locking_app_on_unlocked_coremask -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:18.782 killing process with pid 70829 00:06:18.782 18:56:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70829' 00:06:18.782 18:56:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 70829 00:06:18.782 18:56:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 70829 00:06:19.043 00:06:19.043 real 0m2.822s 00:06:19.043 user 0m3.131s 00:06:19.043 sys 0m0.725s 00:06:19.043 18:56:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:19.043 ************************************ 00:06:19.043 END TEST locking_app_on_unlocked_coremask 00:06:19.043 18:56:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:19.043 ************************************ 00:06:19.304 18:56:36 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:06:19.304 18:56:36 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:19.304 18:56:36 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:19.304 18:56:36 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:19.304 ************************************ 00:06:19.304 START TEST locking_app_on_locked_coremask 00:06:19.304 ************************************ 00:06:19.304 18:56:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask 00:06:19.304 18:56:36 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=70886 00:06:19.304 18:56:36 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 70886 /var/tmp/spdk.sock 00:06:19.304 18:56:36 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:19.304 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:19.304 18:56:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 70886 ']' 00:06:19.304 18:56:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:19.304 18:56:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:19.304 18:56:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:19.304 18:56:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:19.304 18:56:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:19.304 [2024-12-05 18:56:36.703283] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 22.11.4 initialization... 
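locking_app_on_unlocked_coremask, wrapped up above, inverts that arrangement: the first target starts with --disable-cpumask-locks, leaving the core 0 lock free, so it is the second, normally locking target (pid 70829 in this run) that locks_exist is pointed at. A sketch of the inverted pair:

    "$spdk_tgt" -m 0x1 --disable-cpumask-locks &     # takes no file lock
    pid1=$!
    "$spdk_tgt" -m 0x1 -r /var/tmp/spdk2.sock &      # claims spdk_cpu_lock_000 instead
    pid2=$!
    lslocks -p "$pid2" | grep -q spdk_cpu_lock       # the lock belongs to the second target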
00:06:19.304 [2024-12-05 18:56:36.703399] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70886 ] 00:06:19.304 [2024-12-05 18:56:36.840314] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:19.304 [2024-12-05 18:56:36.861103] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:20.295 18:56:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:20.295 18:56:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:20.295 18:56:37 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=70896 00:06:20.295 18:56:37 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 70896 /var/tmp/spdk2.sock 00:06:20.295 18:56:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0 00:06:20.295 18:56:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 70896 /var/tmp/spdk2.sock 00:06:20.295 18:56:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:06:20.295 18:56:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:20.295 18:56:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:06:20.295 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:20.295 18:56:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:20.295 18:56:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 70896 /var/tmp/spdk2.sock 00:06:20.295 18:56:37 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:20.295 18:56:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 70896 ']' 00:06:20.295 18:56:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:20.295 18:56:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:20.296 18:56:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:20.296 18:56:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:20.296 18:56:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:20.296 [2024-12-05 18:56:37.625397] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 22.11.4 initialization... 
00:06:20.296 [2024-12-05 18:56:37.625509] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70896 ] 00:06:20.296 [2024-12-05 18:56:37.785386] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 70886 has claimed it. 00:06:20.296 [2024-12-05 18:56:37.785455] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:20.864 ERROR: process (pid: 70896) is no longer running 00:06:20.864 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (70896) - No such process 00:06:20.864 18:56:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:20.864 18:56:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1 00:06:20.864 18:56:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1 00:06:20.864 18:56:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:20.864 18:56:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:20.864 18:56:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:20.864 18:56:38 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 70886 00:06:20.864 18:56:38 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 70886 00:06:20.864 18:56:38 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:21.125 18:56:38 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 70886 00:06:21.125 18:56:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 70886 ']' 00:06:21.125 18:56:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 70886 00:06:21.125 18:56:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:21.125 18:56:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:21.125 18:56:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70886 00:06:21.125 killing process with pid 70886 00:06:21.125 18:56:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:21.125 18:56:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:21.125 18:56:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70886' 00:06:21.125 18:56:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 70886 00:06:21.125 18:56:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 70886 00:06:21.386 ************************************ 00:06:21.386 END TEST locking_app_on_locked_coremask 00:06:21.386 ************************************ 00:06:21.386 00:06:21.386 real 0m2.114s 00:06:21.386 user 0m2.361s 00:06:21.386 sys 0m0.500s 00:06:21.386 18:56:38 
event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:21.386 18:56:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:21.386 18:56:38 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:06:21.386 18:56:38 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:21.386 18:56:38 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:21.386 18:56:38 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:21.386 ************************************ 00:06:21.386 START TEST locking_overlapped_coremask 00:06:21.386 ************************************ 00:06:21.386 18:56:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask 00:06:21.386 18:56:38 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=70945 00:06:21.386 18:56:38 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 70945 /var/tmp/spdk.sock 00:06:21.386 18:56:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 70945 ']' 00:06:21.386 18:56:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:21.386 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:21.386 18:56:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:21.386 18:56:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:21.386 18:56:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:21.386 18:56:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:21.386 18:56:38 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:06:21.386 [2024-12-05 18:56:38.875133] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 22.11.4 initialization... 
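locking_app_on_locked_coremask, concluded above, is the pure conflict case: with locks left enabled on both sides, the second spdk_tgt -m 0x1 aborts with "Cannot create lock on core 0, probably process 70886 has claimed it", so the test wraps the startup wait in NOT and expects failure. A sketch of the expected collision:

    "$spdk_tgt" -m 0x1 &                             # first target claims core 0
    pid1=$!
    "$spdk_tgt" -m 0x1 -r /var/tmp/spdk2.sock &      # second one must fail to start
    pid2=$!
    NOT waitforlisten "$pid2" /var/tmp/spdk2.sock && echo "core 0 already claimed"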
00:06:21.386 [2024-12-05 18:56:38.875248] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70945 ] 00:06:21.648 [2024-12-05 18:56:39.022154] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:21.648 [2024-12-05 18:56:39.044493] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:21.648 [2024-12-05 18:56:39.044916] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:21.648 [2024-12-05 18:56:39.045127] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:22.219 18:56:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:22.219 18:56:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:22.219 18:56:39 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:06:22.219 18:56:39 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=70963 00:06:22.219 18:56:39 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 70963 /var/tmp/spdk2.sock 00:06:22.219 18:56:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0 00:06:22.219 18:56:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 70963 /var/tmp/spdk2.sock 00:06:22.219 18:56:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:06:22.219 18:56:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:22.219 18:56:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:06:22.219 18:56:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:22.219 18:56:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 70963 /var/tmp/spdk2.sock 00:06:22.219 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:22.219 18:56:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 70963 ']' 00:06:22.219 18:56:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:22.219 18:56:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:22.219 18:56:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:22.219 18:56:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:22.219 18:56:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:22.505 [2024-12-05 18:56:39.843316] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 22.11.4 initialization... 
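locking_overlapped_coremask narrows the collision to a single core: -m 0x7 is cores 0-2 and -m 0x1c is cores 2-4, so only core 2 is contested, which is exactly the core named in the claim error that follows. The overlap is a plain bitwise AND:

    # 0x07 = 0b00111 (cores 0,1,2); 0x1c = 0b11100 (cores 2,3,4)
    printf 'overlap mask: 0x%x\n' $(( 0x7 & 0x1c ))   # prints 0x4, i.e. core 2 only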
00:06:22.505 [2024-12-05 18:56:39.843477] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70963 ] 00:06:22.505 [2024-12-05 18:56:40.003998] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 70945 has claimed it. 00:06:22.505 [2024-12-05 18:56:40.004065] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:23.076 ERROR: process (pid: 70963) is no longer running 00:06:23.076 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (70963) - No such process 00:06:23.076 18:56:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:23.076 18:56:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1 00:06:23.076 18:56:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1 00:06:23.076 18:56:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:23.076 18:56:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:23.076 18:56:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:23.076 18:56:40 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:06:23.076 18:56:40 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:23.076 18:56:40 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:23.077 18:56:40 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:23.077 18:56:40 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 70945 00:06:23.077 18:56:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 70945 ']' 00:06:23.077 18:56:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 70945 00:06:23.077 18:56:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname 00:06:23.077 18:56:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:23.077 18:56:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70945 00:06:23.077 18:56:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:23.077 18:56:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:23.077 18:56:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70945' 00:06:23.077 killing process with pid 70945 00:06:23.077 18:56:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 70945 00:06:23.077 18:56:40 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 70945 00:06:23.648 00:06:23.648 real 0m2.138s 00:06:23.648 user 0m5.944s 00:06:23.648 sys 0m0.437s 00:06:23.648 18:56:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:23.648 18:56:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:23.648 ************************************ 00:06:23.648 END TEST locking_overlapped_coremask 00:06:23.648 ************************************ 00:06:23.648 18:56:40 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:06:23.648 18:56:41 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:23.648 18:56:41 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:23.648 18:56:41 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:23.648 ************************************ 00:06:23.648 START TEST locking_overlapped_coremask_via_rpc 00:06:23.648 ************************************ 00:06:23.648 18:56:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc 00:06:23.648 18:56:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=71005 00:06:23.648 18:56:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 71005 /var/tmp/spdk.sock 00:06:23.648 18:56:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 71005 ']' 00:06:23.648 18:56:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:23.648 18:56:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:23.648 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:23.648 18:56:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:23.648 18:56:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:23.648 18:56:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:23.648 18:56:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:06:23.648 [2024-12-05 18:56:41.105244] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 22.11.4 initialization... 00:06:23.648 [2024-12-05 18:56:41.105433] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71005 ] 00:06:23.909 [2024-12-05 18:56:41.251054] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
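check_remaining_locks, traced just before that teardown, pins down the post-conflict state: the surviving -m 0x7 target must hold exactly the lock files for its three cores and nothing more. The trace's glob-versus-expansion comparison reads, as a sketch:

    # Lock files on disk must match cores 0-2 of the surviving 0x7 target.
    locks=(/var/tmp/spdk_cpu_lock_*)
    locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})
    [[ "${locks[*]}" == "${locks_expected[*]}" ]] && echo "only cores 0-2 locked"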
00:06:23.909 [2024-12-05 18:56:41.251130] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:23.909 [2024-12-05 18:56:41.286669] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:23.909 [2024-12-05 18:56:41.287048] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:23.909 [2024-12-05 18:56:41.287061] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:24.481 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:24.481 18:56:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:24.481 18:56:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:24.481 18:56:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=71023 00:06:24.481 18:56:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 71023 /var/tmp/spdk2.sock 00:06:24.481 18:56:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 71023 ']' 00:06:24.481 18:56:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:24.481 18:56:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:24.481 18:56:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:24.481 18:56:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:06:24.481 18:56:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:24.481 18:56:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:24.481 [2024-12-05 18:56:42.021946] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 22.11.4 initialization... 00:06:24.481 [2024-12-05 18:56:42.022316] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71023 ] 00:06:24.742 [2024-12-05 18:56:42.182038] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
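Note: in this via_rpc variant both targets start with --disable-cpumask-locks, which is why the overlapping masks (0x7 = cores 0-2, 0x1c = cores 2-4, shared core 2) can boot side by side; no lock files exist yet, and the locks are only taken later over JSON-RPC. Startup sketch with the flags and socket path from the trace:
  spdk_tgt -m 0x7 --disable-cpumask-locks &                          # cores 0-2, no lock files yet
  spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks &  # cores 2-4; overlap tolerated at startup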
00:06:24.742 [2024-12-05 18:56:42.182118] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:24.742 [2024-12-05 18:56:42.262002] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:24.742 [2024-12-05 18:56:42.262153] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:24.742 [2024-12-05 18:56:42.262227] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:06:25.682 18:56:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:25.682 18:56:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:25.682 18:56:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:25.682 18:56:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:25.682 18:56:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:25.683 18:56:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:25.683 18:56:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:25.683 18:56:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:06:25.683 18:56:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:25.683 18:56:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:06:25.683 18:56:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:25.683 18:56:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:06:25.683 18:56:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:25.683 18:56:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:25.683 18:56:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:25.683 18:56:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:25.683 [2024-12-05 18:56:42.905449] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 71005 has claimed it. 00:06:25.683 request: 00:06:25.683 { 00:06:25.683 "method": "framework_enable_cpumask_locks", 00:06:25.683 "req_id": 1 00:06:25.683 } 00:06:25.683 Got JSON-RPC error response 00:06:25.683 response: 00:06:25.683 { 00:06:25.683 "code": -32603, 00:06:25.683 "message": "Failed to claim CPU core: 2" 00:06:25.683 } 00:06:25.683 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
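With both targets up, framework_enable_cpumask_locks succeeds against the first target and claims cores 0-2; the same RPC against the second target's socket must then fail on core 2, which is exactly the -32603 "Failed to claim CPU core: 2" response traced just above. A sketch of the two calls, assuming rpc.py from the SPDK scripts directory:
  rpc.py framework_enable_cpumask_locks                        # first target: locks cores 0-2
  rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks \
      || echo "denied as expected"                             # JSON-RPC -32603: core 2 already claimed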
00:06:25.683 18:56:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:06:25.683 18:56:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:06:25.683 18:56:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:25.683 18:56:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:25.683 18:56:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:25.683 18:56:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 71005 /var/tmp/spdk.sock 00:06:25.683 18:56:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 71005 ']' 00:06:25.683 18:56:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:25.683 18:56:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:25.683 18:56:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:25.683 18:56:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:25.683 18:56:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:25.683 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:25.683 18:56:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:25.683 18:56:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:25.683 18:56:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 71023 /var/tmp/spdk2.sock 00:06:25.683 18:56:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 71023 ']' 00:06:25.683 18:56:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:25.683 18:56:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:25.683 18:56:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
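After the failed claim, the suite verifies that exactly the three lock files owned by the first target remain, via the check_remaining_locks helper traced just below (reconstructed from the cpu_locks.sh xtrace in this log):
  # event/cpu_locks.sh: check_remaining_locks
  locks=(/var/tmp/spdk_cpu_lock_*)
  locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})
  [[ "${locks[*]}" == "${locks_expected[*]}" ]]   # fails if any lock file is missing or extra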
00:06:25.683 18:56:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:25.683 18:56:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:25.944 ************************************ 00:06:25.944 END TEST locking_overlapped_coremask_via_rpc 00:06:25.944 ************************************ 00:06:25.944 18:56:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:25.944 18:56:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:25.944 18:56:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:25.944 18:56:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:25.944 18:56:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:25.944 18:56:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:25.944 00:06:25.944 real 0m2.337s 00:06:25.944 user 0m1.109s 00:06:25.944 sys 0m0.164s 00:06:25.944 18:56:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:25.944 18:56:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:25.944 18:56:43 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:06:25.944 18:56:43 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 71005 ]] 00:06:25.944 18:56:43 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 71005 00:06:25.944 18:56:43 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 71005 ']' 00:06:25.944 18:56:43 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 71005 00:06:25.944 18:56:43 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:06:25.944 18:56:43 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:25.944 18:56:43 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71005 00:06:25.944 killing process with pid 71005 00:06:25.944 18:56:43 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:25.944 18:56:43 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:25.944 18:56:43 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71005' 00:06:25.944 18:56:43 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 71005 00:06:25.944 18:56:43 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 71005 00:06:26.205 18:56:43 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 71023 ]] 00:06:26.205 18:56:43 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 71023 00:06:26.205 18:56:43 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 71023 ']' 00:06:26.205 18:56:43 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 71023 00:06:26.205 18:56:43 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:06:26.205 18:56:43 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:26.205 
18:56:43 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71023 00:06:26.205 killing process with pid 71023 00:06:26.205 18:56:43 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:06:26.205 18:56:43 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:06:26.205 18:56:43 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71023' 00:06:26.205 18:56:43 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 71023 00:06:26.205 18:56:43 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 71023 00:06:26.465 18:56:44 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:26.465 Process with pid 71005 is not found 00:06:26.465 18:56:44 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:06:26.465 18:56:44 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 71005 ]] 00:06:26.465 18:56:44 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 71005 00:06:26.465 18:56:44 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 71005 ']' 00:06:26.465 18:56:44 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 71005 00:06:26.465 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (71005) - No such process 00:06:26.465 18:56:44 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 71005 is not found' 00:06:26.465 18:56:44 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 71023 ]] 00:06:26.465 18:56:44 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 71023 00:06:26.465 18:56:44 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 71023 ']' 00:06:26.465 Process with pid 71023 is not found 00:06:26.465 18:56:44 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 71023 00:06:26.465 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (71023) - No such process 00:06:26.465 18:56:44 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 71023 is not found' 00:06:26.465 18:56:44 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:26.465 00:06:26.465 real 0m16.249s 00:06:26.465 user 0m29.043s 00:06:26.465 sys 0m4.379s 00:06:26.465 18:56:44 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:26.465 ************************************ 00:06:26.465 END TEST cpu_locks 00:06:26.466 ************************************ 00:06:26.466 18:56:44 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:26.727 ************************************ 00:06:26.727 END TEST event 00:06:26.727 ************************************ 00:06:26.727 00:06:26.727 real 0m39.982s 00:06:26.727 user 1m17.878s 00:06:26.727 sys 0m6.982s 00:06:26.727 18:56:44 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:26.727 18:56:44 event -- common/autotest_common.sh@10 -- # set +x 00:06:26.727 18:56:44 -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:26.727 18:56:44 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:26.727 18:56:44 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:26.727 18:56:44 -- common/autotest_common.sh@10 -- # set +x 00:06:26.727 ************************************ 00:06:26.727 START TEST thread 00:06:26.727 ************************************ 00:06:26.727 18:56:44 thread -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:26.727 * Looking for test storage... 
00:06:26.727 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:06:26.727 18:56:44 thread -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:26.727 18:56:44 thread -- common/autotest_common.sh@1711 -- # lcov --version 00:06:26.727 18:56:44 thread -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:26.727 18:56:44 thread -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:26.727 18:56:44 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:26.727 18:56:44 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:26.727 18:56:44 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:26.727 18:56:44 thread -- scripts/common.sh@336 -- # IFS=.-: 00:06:26.727 18:56:44 thread -- scripts/common.sh@336 -- # read -ra ver1 00:06:26.727 18:56:44 thread -- scripts/common.sh@337 -- # IFS=.-: 00:06:26.727 18:56:44 thread -- scripts/common.sh@337 -- # read -ra ver2 00:06:26.727 18:56:44 thread -- scripts/common.sh@338 -- # local 'op=<' 00:06:26.727 18:56:44 thread -- scripts/common.sh@340 -- # ver1_l=2 00:06:26.727 18:56:44 thread -- scripts/common.sh@341 -- # ver2_l=1 00:06:26.727 18:56:44 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:26.727 18:56:44 thread -- scripts/common.sh@344 -- # case "$op" in 00:06:26.727 18:56:44 thread -- scripts/common.sh@345 -- # : 1 00:06:26.727 18:56:44 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:26.727 18:56:44 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:26.727 18:56:44 thread -- scripts/common.sh@365 -- # decimal 1 00:06:26.727 18:56:44 thread -- scripts/common.sh@353 -- # local d=1 00:06:26.727 18:56:44 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:26.727 18:56:44 thread -- scripts/common.sh@355 -- # echo 1 00:06:26.727 18:56:44 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:06:26.727 18:56:44 thread -- scripts/common.sh@366 -- # decimal 2 00:06:26.727 18:56:44 thread -- scripts/common.sh@353 -- # local d=2 00:06:26.727 18:56:44 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:26.727 18:56:44 thread -- scripts/common.sh@355 -- # echo 2 00:06:26.727 18:56:44 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:06:26.727 18:56:44 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:26.727 18:56:44 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:26.727 18:56:44 thread -- scripts/common.sh@368 -- # return 0 00:06:26.727 18:56:44 thread -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:26.727 18:56:44 thread -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:26.727 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:26.727 --rc genhtml_branch_coverage=1 00:06:26.727 --rc genhtml_function_coverage=1 00:06:26.727 --rc genhtml_legend=1 00:06:26.727 --rc geninfo_all_blocks=1 00:06:26.727 --rc geninfo_unexecuted_blocks=1 00:06:26.727 00:06:26.727 ' 00:06:26.727 18:56:44 thread -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:26.727 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:26.727 --rc genhtml_branch_coverage=1 00:06:26.727 --rc genhtml_function_coverage=1 00:06:26.727 --rc genhtml_legend=1 00:06:26.727 --rc geninfo_all_blocks=1 00:06:26.727 --rc geninfo_unexecuted_blocks=1 00:06:26.727 00:06:26.727 ' 00:06:26.727 18:56:44 thread -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:26.727 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:06:26.727 --rc genhtml_branch_coverage=1 00:06:26.727 --rc genhtml_function_coverage=1 00:06:26.727 --rc genhtml_legend=1 00:06:26.727 --rc geninfo_all_blocks=1 00:06:26.727 --rc geninfo_unexecuted_blocks=1 00:06:26.727 00:06:26.727 ' 00:06:26.727 18:56:44 thread -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:26.727 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:26.727 --rc genhtml_branch_coverage=1 00:06:26.727 --rc genhtml_function_coverage=1 00:06:26.727 --rc genhtml_legend=1 00:06:26.727 --rc geninfo_all_blocks=1 00:06:26.727 --rc geninfo_unexecuted_blocks=1 00:06:26.727 00:06:26.727 ' 00:06:26.727 18:56:44 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:26.727 18:56:44 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:06:26.727 18:56:44 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:26.727 18:56:44 thread -- common/autotest_common.sh@10 -- # set +x 00:06:26.989 ************************************ 00:06:26.989 START TEST thread_poller_perf 00:06:26.989 ************************************ 00:06:26.989 18:56:44 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:26.989 [2024-12-05 18:56:44.325304] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 22.11.4 initialization... 00:06:26.989 [2024-12-05 18:56:44.325445] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71150 ] 00:06:26.989 [2024-12-05 18:56:44.474287] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:26.989 [2024-12-05 18:56:44.505083] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:26.989 Running 1000 pollers for 1 seconds with 1 microseconds period. 
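Stepping back to the lcov gate traced above (lt 1.15 2 via cmp_versions in scripts/common.sh): it is a plain component-wise version compare that splits both strings on '.', '-' and ':' and walks the parts numerically. A condensed, numeric-only sketch of that logic (the real helper handles a few more cases):
  lt() {                                    # returns 0 when $1 < $2, component-wise
      local -a v1 v2; local i
      IFS='.-:' read -ra v1 <<< "$1"
      IFS='.-:' read -ra v2 <<< "$2"
      for (( i = 0; i < ${#v1[@]} || i < ${#v2[@]}; i++ )); do
          (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0
          (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
      done
      return 1                              # equal is not less-than
  }
  lt 1.15 2 && echo "lcov < 2: use the old option set"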
00:06:28.374 [2024-12-05T18:56:45.933Z] ====================================== 00:06:28.374 [2024-12-05T18:56:45.933Z] busy:2615184038 (cyc) 00:06:28.374 [2024-12-05T18:56:45.933Z] total_run_count: 304000 00:06:28.374 [2024-12-05T18:56:45.933Z] tsc_hz: 2600000000 (cyc) 00:06:28.374 [2024-12-05T18:56:45.933Z] ====================================== 00:06:28.374 [2024-12-05T18:56:45.933Z] poller_cost: 8602 (cyc), 3308 (nsec) 00:06:28.374 00:06:28.374 real 0m1.279s 00:06:28.374 user 0m1.102s 00:06:28.374 sys 0m0.068s 00:06:28.374 18:56:45 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:28.374 18:56:45 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:28.374 ************************************ 00:06:28.374 END TEST thread_poller_perf 00:06:28.374 ************************************ 00:06:28.374 18:56:45 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:28.374 18:56:45 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:06:28.374 18:56:45 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:28.374 18:56:45 thread -- common/autotest_common.sh@10 -- # set +x 00:06:28.374 ************************************ 00:06:28.374 START TEST thread_poller_perf 00:06:28.374 ************************************ 00:06:28.374 18:56:45 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:28.374 [2024-12-05 18:56:45.678323] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 22.11.4 initialization... 00:06:28.374 [2024-12-05 18:56:45.678639] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71181 ] 00:06:28.374 [2024-12-05 18:56:45.825749] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:28.374 [2024-12-05 18:56:45.856505] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:28.374 Running 1000 pollers for 1 seconds with 0 microseconds period. 
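The poller_cost in the report above is just busy cycles divided by total_run_count, converted to nanoseconds via tsc_hz: 2615184038 / 304000 ≈ 8602 cycles per poller invocation, and 8602 / 2.6 cycles-per-nanosecond ≈ 3308 nsec. The same arithmetic in shell:
  busy=2615184038 runs=304000 tsc_hz=2600000000
  echo "cyc=$(( busy / runs )) nsec=$(( busy / runs * 1000000000 / tsc_hz ))"
  # -> cyc=8602 nsec=3308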
00:06:29.755 [2024-12-05T18:56:47.314Z] ====================================== 00:06:29.755 [2024-12-05T18:56:47.314Z] busy:2603751648 (cyc) 00:06:29.755 [2024-12-05T18:56:47.314Z] total_run_count: 3956000 00:06:29.755 [2024-12-05T18:56:47.314Z] tsc_hz: 2600000000 (cyc) 00:06:29.755 [2024-12-05T18:56:47.314Z] ====================================== 00:06:29.755 [2024-12-05T18:56:47.314Z] poller_cost: 658 (cyc), 253 (nsec) 00:06:29.755 00:06:29.755 real 0m1.252s 00:06:29.755 user 0m1.083s 00:06:29.755 sys 0m0.060s 00:06:29.755 18:56:46 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:29.755 ************************************ 00:06:29.755 END TEST thread_poller_perf 00:06:29.755 ************************************ 00:06:29.755 18:56:46 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:29.755 18:56:46 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:06:29.755 ************************************ 00:06:29.755 END TEST thread 00:06:29.755 ************************************ 00:06:29.755 00:06:29.755 real 0m2.814s 00:06:29.755 user 0m2.291s 00:06:29.755 sys 0m0.273s 00:06:29.755 18:56:46 thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:29.755 18:56:46 thread -- common/autotest_common.sh@10 -- # set +x 00:06:29.755 18:56:46 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:06:29.755 18:56:46 -- spdk/autotest.sh@176 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:06:29.755 18:56:46 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:29.755 18:56:46 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:29.755 18:56:46 -- common/autotest_common.sh@10 -- # set +x 00:06:29.755 ************************************ 00:06:29.755 START TEST app_cmdline 00:06:29.755 ************************************ 00:06:29.755 18:56:46 app_cmdline -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:06:29.755 * Looking for test storage... 
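The second run above uses a 0-microsecond period, i.e. busy pollers rather than timed ones: 2603751648 / 3956000 ≈ 658 cycles (253 nsec) per invocation, against 8602 cycles for the 1-microsecond timed pollers in the first run, suggesting the timed-poller bookkeeping costs roughly 13x more per call on this machine:
  # 8602 cyc (1 us timed pollers) / 658 cyc (busy pollers) ≈ 13.1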
00:06:29.755 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:06:29.755 18:56:47 app_cmdline -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:29.755 18:56:47 app_cmdline -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:29.755 18:56:47 app_cmdline -- common/autotest_common.sh@1711 -- # lcov --version 00:06:29.755 18:56:47 app_cmdline -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:29.755 18:56:47 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:29.755 18:56:47 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:29.755 18:56:47 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:29.755 18:56:47 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:06:29.755 18:56:47 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:06:29.755 18:56:47 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:06:29.755 18:56:47 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:06:29.755 18:56:47 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:06:29.755 18:56:47 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:06:29.755 18:56:47 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:06:29.755 18:56:47 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:29.755 18:56:47 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:06:29.755 18:56:47 app_cmdline -- scripts/common.sh@345 -- # : 1 00:06:29.756 18:56:47 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:29.756 18:56:47 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:29.756 18:56:47 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:06:29.756 18:56:47 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:06:29.756 18:56:47 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:29.756 18:56:47 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:06:29.756 18:56:47 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:06:29.756 18:56:47 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:06:29.756 18:56:47 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:06:29.756 18:56:47 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:29.756 18:56:47 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:06:29.756 18:56:47 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:06:29.756 18:56:47 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:29.756 18:56:47 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:29.756 18:56:47 app_cmdline -- scripts/common.sh@368 -- # return 0 00:06:29.756 18:56:47 app_cmdline -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:29.756 18:56:47 app_cmdline -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:29.756 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:29.756 --rc genhtml_branch_coverage=1 00:06:29.756 --rc genhtml_function_coverage=1 00:06:29.756 --rc genhtml_legend=1 00:06:29.756 --rc geninfo_all_blocks=1 00:06:29.756 --rc geninfo_unexecuted_blocks=1 00:06:29.756 00:06:29.756 ' 00:06:29.756 18:56:47 app_cmdline -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:29.756 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:29.756 --rc genhtml_branch_coverage=1 00:06:29.756 --rc genhtml_function_coverage=1 00:06:29.756 --rc genhtml_legend=1 00:06:29.756 --rc geninfo_all_blocks=1 00:06:29.756 --rc geninfo_unexecuted_blocks=1 00:06:29.756 
00:06:29.756 ' 00:06:29.756 18:56:47 app_cmdline -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:29.756 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:29.756 --rc genhtml_branch_coverage=1 00:06:29.756 --rc genhtml_function_coverage=1 00:06:29.756 --rc genhtml_legend=1 00:06:29.756 --rc geninfo_all_blocks=1 00:06:29.756 --rc geninfo_unexecuted_blocks=1 00:06:29.756 00:06:29.756 ' 00:06:29.756 18:56:47 app_cmdline -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:29.756 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:29.756 --rc genhtml_branch_coverage=1 00:06:29.756 --rc genhtml_function_coverage=1 00:06:29.756 --rc genhtml_legend=1 00:06:29.756 --rc geninfo_all_blocks=1 00:06:29.756 --rc geninfo_unexecuted_blocks=1 00:06:29.756 00:06:29.756 ' 00:06:29.756 18:56:47 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:06:29.756 18:56:47 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=71270 00:06:29.756 18:56:47 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 71270 00:06:29.756 18:56:47 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:06:29.756 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:29.756 18:56:47 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 71270 ']' 00:06:29.756 18:56:47 app_cmdline -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:29.756 18:56:47 app_cmdline -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:29.756 18:56:47 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:29.756 18:56:47 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:29.756 18:56:47 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:29.756 [2024-12-05 18:56:47.216296] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 22.11.4 initialization... 
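cmdline.sh starts the target above with --rpcs-allowed spdk_get_version,rpc_get_methods, so only those two methods may be invoked; anything else must come back as JSON-RPC -32601 "Method not found", which the env_dpdk_get_mem_stats probe further below verifies. A sketch of the expected behavior, assuming spdk_tgt and rpc.py on PATH:
  spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods &
  sleep 2
  rpc.py spdk_get_version          # allowed: returns the version JSON shown below
  rpc.py rpc_get_methods           # allowed: must list exactly these two methods
  rpc.py env_dpdk_get_mem_stats    # denied: JSON-RPC -32601 "Method not found"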
00:06:29.756 [2024-12-05 18:56:47.217043] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71270 ] 00:06:30.015 [2024-12-05 18:56:47.361082] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:30.015 [2024-12-05 18:56:47.393107] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:30.588 18:56:48 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:30.588 18:56:48 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:06:30.588 18:56:48 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:06:30.919 { 00:06:30.919 "version": "SPDK v25.01-pre git sha1 8d3947977", 00:06:30.919 "fields": { 00:06:30.919 "major": 25, 00:06:30.919 "minor": 1, 00:06:30.919 "patch": 0, 00:06:30.919 "suffix": "-pre", 00:06:30.919 "commit": "8d3947977" 00:06:30.919 } 00:06:30.919 } 00:06:30.919 18:56:48 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:06:30.919 18:56:48 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:06:30.919 18:56:48 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:06:30.919 18:56:48 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:06:30.919 18:56:48 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:06:30.919 18:56:48 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:06:30.919 18:56:48 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:30.919 18:56:48 app_cmdline -- app/cmdline.sh@26 -- # sort 00:06:30.919 18:56:48 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:30.919 18:56:48 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:30.919 18:56:48 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:06:30.919 18:56:48 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:06:30.919 18:56:48 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:30.919 18:56:48 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:06:30.919 18:56:48 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:30.919 18:56:48 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:30.919 18:56:48 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:30.919 18:56:48 app_cmdline -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:30.920 18:56:48 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:30.920 18:56:48 app_cmdline -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:30.920 18:56:48 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:30.920 18:56:48 app_cmdline -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:30.920 18:56:48 app_cmdline -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:06:30.920 18:56:48 app_cmdline -- common/autotest_common.sh@655 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:31.181 request: 00:06:31.181 { 00:06:31.181 "method": "env_dpdk_get_mem_stats", 00:06:31.181 "req_id": 1 00:06:31.181 } 00:06:31.181 Got JSON-RPC error response 00:06:31.181 response: 00:06:31.181 { 00:06:31.181 "code": -32601, 00:06:31.181 "message": "Method not found" 00:06:31.181 } 00:06:31.181 18:56:48 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:06:31.181 18:56:48 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:31.181 18:56:48 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:31.181 18:56:48 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:31.181 18:56:48 app_cmdline -- app/cmdline.sh@1 -- # killprocess 71270 00:06:31.181 18:56:48 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 71270 ']' 00:06:31.181 18:56:48 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 71270 00:06:31.181 18:56:48 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:06:31.181 18:56:48 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:31.181 18:56:48 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71270 00:06:31.181 killing process with pid 71270 00:06:31.181 18:56:48 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:31.181 18:56:48 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:31.181 18:56:48 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71270' 00:06:31.181 18:56:48 app_cmdline -- common/autotest_common.sh@973 -- # kill 71270 00:06:31.181 18:56:48 app_cmdline -- common/autotest_common.sh@978 -- # wait 71270 00:06:31.442 00:06:31.442 real 0m2.008s 00:06:31.442 user 0m2.313s 00:06:31.442 sys 0m0.523s 00:06:31.442 18:56:48 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:31.704 ************************************ 00:06:31.704 END TEST app_cmdline 00:06:31.704 ************************************ 00:06:31.704 18:56:49 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:31.704 18:56:49 -- spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:06:31.704 18:56:49 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:31.704 18:56:49 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:31.704 18:56:49 -- common/autotest_common.sh@10 -- # set +x 00:06:31.704 ************************************ 00:06:31.704 START TEST version 00:06:31.704 ************************************ 00:06:31.704 18:56:49 version -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:06:31.704 * Looking for test storage... 
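The version suite that starts here derives each version component the way app/version.sh's get_header_version does: grep the #define out of include/spdk/version.h, take the second tab-separated field, and strip the quotes; the header-derived string is then normalized (25.1 plus the -pre suffix becomes 25.1rc0) and compared against python3 -c 'import spdk; print(spdk.__version__)'. Condensed sketch of the helper:
  get_header_version() {    # field: MAJOR, MINOR, PATCH or SUFFIX
      grep -E "^#define SPDK_VERSION_${1}[[:space:]]+" include/spdk/version.h |
          cut -f2 | tr -d '"'
  }
  get_header_version MAJOR    # -> 25
  get_header_version SUFFIX   # -> -pre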
00:06:31.704 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:06:31.704 18:56:49 version -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:31.704 18:56:49 version -- common/autotest_common.sh@1711 -- # lcov --version 00:06:31.704 18:56:49 version -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:31.704 18:56:49 version -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:31.704 18:56:49 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:31.704 18:56:49 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:31.704 18:56:49 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:31.704 18:56:49 version -- scripts/common.sh@336 -- # IFS=.-: 00:06:31.704 18:56:49 version -- scripts/common.sh@336 -- # read -ra ver1 00:06:31.705 18:56:49 version -- scripts/common.sh@337 -- # IFS=.-: 00:06:31.705 18:56:49 version -- scripts/common.sh@337 -- # read -ra ver2 00:06:31.705 18:56:49 version -- scripts/common.sh@338 -- # local 'op=<' 00:06:31.705 18:56:49 version -- scripts/common.sh@340 -- # ver1_l=2 00:06:31.705 18:56:49 version -- scripts/common.sh@341 -- # ver2_l=1 00:06:31.705 18:56:49 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:31.705 18:56:49 version -- scripts/common.sh@344 -- # case "$op" in 00:06:31.705 18:56:49 version -- scripts/common.sh@345 -- # : 1 00:06:31.705 18:56:49 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:31.705 18:56:49 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:31.705 18:56:49 version -- scripts/common.sh@365 -- # decimal 1 00:06:31.705 18:56:49 version -- scripts/common.sh@353 -- # local d=1 00:06:31.705 18:56:49 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:31.705 18:56:49 version -- scripts/common.sh@355 -- # echo 1 00:06:31.705 18:56:49 version -- scripts/common.sh@365 -- # ver1[v]=1 00:06:31.705 18:56:49 version -- scripts/common.sh@366 -- # decimal 2 00:06:31.705 18:56:49 version -- scripts/common.sh@353 -- # local d=2 00:06:31.705 18:56:49 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:31.705 18:56:49 version -- scripts/common.sh@355 -- # echo 2 00:06:31.705 18:56:49 version -- scripts/common.sh@366 -- # ver2[v]=2 00:06:31.705 18:56:49 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:31.705 18:56:49 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:31.705 18:56:49 version -- scripts/common.sh@368 -- # return 0 00:06:31.705 18:56:49 version -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:31.705 18:56:49 version -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:31.705 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:31.705 --rc genhtml_branch_coverage=1 00:06:31.705 --rc genhtml_function_coverage=1 00:06:31.705 --rc genhtml_legend=1 00:06:31.705 --rc geninfo_all_blocks=1 00:06:31.705 --rc geninfo_unexecuted_blocks=1 00:06:31.705 00:06:31.705 ' 00:06:31.705 18:56:49 version -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:31.705 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:31.705 --rc genhtml_branch_coverage=1 00:06:31.705 --rc genhtml_function_coverage=1 00:06:31.705 --rc genhtml_legend=1 00:06:31.705 --rc geninfo_all_blocks=1 00:06:31.705 --rc geninfo_unexecuted_blocks=1 00:06:31.705 00:06:31.705 ' 00:06:31.705 18:56:49 version -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:31.705 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:06:31.705 --rc genhtml_branch_coverage=1 00:06:31.705 --rc genhtml_function_coverage=1 00:06:31.705 --rc genhtml_legend=1 00:06:31.705 --rc geninfo_all_blocks=1 00:06:31.705 --rc geninfo_unexecuted_blocks=1 00:06:31.705 00:06:31.705 ' 00:06:31.705 18:56:49 version -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:31.705 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:31.705 --rc genhtml_branch_coverage=1 00:06:31.705 --rc genhtml_function_coverage=1 00:06:31.705 --rc genhtml_legend=1 00:06:31.705 --rc geninfo_all_blocks=1 00:06:31.705 --rc geninfo_unexecuted_blocks=1 00:06:31.705 00:06:31.705 ' 00:06:31.705 18:56:49 version -- app/version.sh@17 -- # get_header_version major 00:06:31.705 18:56:49 version -- app/version.sh@14 -- # cut -f2 00:06:31.705 18:56:49 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:31.705 18:56:49 version -- app/version.sh@14 -- # tr -d '"' 00:06:31.705 18:56:49 version -- app/version.sh@17 -- # major=25 00:06:31.705 18:56:49 version -- app/version.sh@18 -- # get_header_version minor 00:06:31.705 18:56:49 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:31.705 18:56:49 version -- app/version.sh@14 -- # tr -d '"' 00:06:31.705 18:56:49 version -- app/version.sh@14 -- # cut -f2 00:06:31.705 18:56:49 version -- app/version.sh@18 -- # minor=1 00:06:31.705 18:56:49 version -- app/version.sh@19 -- # get_header_version patch 00:06:31.705 18:56:49 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:31.705 18:56:49 version -- app/version.sh@14 -- # cut -f2 00:06:31.705 18:56:49 version -- app/version.sh@14 -- # tr -d '"' 00:06:31.705 18:56:49 version -- app/version.sh@19 -- # patch=0 00:06:31.705 18:56:49 version -- app/version.sh@20 -- # get_header_version suffix 00:06:31.705 18:56:49 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:31.705 18:56:49 version -- app/version.sh@14 -- # cut -f2 00:06:31.705 18:56:49 version -- app/version.sh@14 -- # tr -d '"' 00:06:31.705 18:56:49 version -- app/version.sh@20 -- # suffix=-pre 00:06:31.705 18:56:49 version -- app/version.sh@22 -- # version=25.1 00:06:31.705 18:56:49 version -- app/version.sh@25 -- # (( patch != 0 )) 00:06:31.705 18:56:49 version -- app/version.sh@28 -- # version=25.1rc0 00:06:31.705 18:56:49 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:06:31.705 18:56:49 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:06:31.967 18:56:49 version -- app/version.sh@30 -- # py_version=25.1rc0 00:06:31.967 18:56:49 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:06:31.967 ************************************ 00:06:31.967 END TEST version 00:06:31.967 ************************************ 00:06:31.967 00:06:31.967 real 0m0.222s 00:06:31.967 user 0m0.126s 00:06:31.967 sys 0m0.128s 00:06:31.967 18:56:49 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:31.967 18:56:49 version -- common/autotest_common.sh@10 -- # set +x 00:06:31.967 18:56:49 -- 
spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:06:31.967 18:56:49 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:06:31.967 18:56:49 -- spdk/autotest.sh@194 -- # uname -s 00:06:31.967 18:56:49 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:06:31.967 18:56:49 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:06:31.967 18:56:49 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:06:31.967 18:56:49 -- spdk/autotest.sh@207 -- # '[' 1 -eq 1 ']' 00:06:31.967 18:56:49 -- spdk/autotest.sh@208 -- # run_test blockdev_nvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:06:31.967 18:56:49 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:31.967 18:56:49 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:31.967 18:56:49 -- common/autotest_common.sh@10 -- # set +x 00:06:31.967 ************************************ 00:06:31.967 START TEST blockdev_nvme 00:06:31.967 ************************************ 00:06:31.967 18:56:49 blockdev_nvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:06:31.967 * Looking for test storage... 00:06:31.967 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:06:31.967 18:56:49 blockdev_nvme -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:31.967 18:56:49 blockdev_nvme -- common/autotest_common.sh@1711 -- # lcov --version 00:06:31.967 18:56:49 blockdev_nvme -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:31.967 18:56:49 blockdev_nvme -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:31.967 18:56:49 blockdev_nvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:31.967 18:56:49 blockdev_nvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:31.967 18:56:49 blockdev_nvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:31.967 18:56:49 blockdev_nvme -- scripts/common.sh@336 -- # IFS=.-: 00:06:31.967 18:56:49 blockdev_nvme -- scripts/common.sh@336 -- # read -ra ver1 00:06:31.967 18:56:49 blockdev_nvme -- scripts/common.sh@337 -- # IFS=.-: 00:06:31.967 18:56:49 blockdev_nvme -- scripts/common.sh@337 -- # read -ra ver2 00:06:31.967 18:56:49 blockdev_nvme -- scripts/common.sh@338 -- # local 'op=<' 00:06:31.967 18:56:49 blockdev_nvme -- scripts/common.sh@340 -- # ver1_l=2 00:06:31.967 18:56:49 blockdev_nvme -- scripts/common.sh@341 -- # ver2_l=1 00:06:31.967 18:56:49 blockdev_nvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:31.967 18:56:49 blockdev_nvme -- scripts/common.sh@344 -- # case "$op" in 00:06:31.967 18:56:49 blockdev_nvme -- scripts/common.sh@345 -- # : 1 00:06:31.967 18:56:49 blockdev_nvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:31.967 18:56:49 blockdev_nvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:31.967 18:56:49 blockdev_nvme -- scripts/common.sh@365 -- # decimal 1 00:06:31.967 18:56:49 blockdev_nvme -- scripts/common.sh@353 -- # local d=1 00:06:31.967 18:56:49 blockdev_nvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:31.967 18:56:49 blockdev_nvme -- scripts/common.sh@355 -- # echo 1 00:06:31.967 18:56:49 blockdev_nvme -- scripts/common.sh@365 -- # ver1[v]=1 00:06:31.967 18:56:49 blockdev_nvme -- scripts/common.sh@366 -- # decimal 2 00:06:31.967 18:56:49 blockdev_nvme -- scripts/common.sh@353 -- # local d=2 00:06:31.967 18:56:49 blockdev_nvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:31.967 18:56:49 blockdev_nvme -- scripts/common.sh@355 -- # echo 2 00:06:31.967 18:56:49 blockdev_nvme -- scripts/common.sh@366 -- # ver2[v]=2 00:06:31.967 18:56:49 blockdev_nvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:31.967 18:56:49 blockdev_nvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:31.967 18:56:49 blockdev_nvme -- scripts/common.sh@368 -- # return 0 00:06:31.967 18:56:49 blockdev_nvme -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:31.967 18:56:49 blockdev_nvme -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:31.967 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:31.967 --rc genhtml_branch_coverage=1 00:06:31.967 --rc genhtml_function_coverage=1 00:06:31.967 --rc genhtml_legend=1 00:06:31.967 --rc geninfo_all_blocks=1 00:06:31.967 --rc geninfo_unexecuted_blocks=1 00:06:31.967 00:06:31.967 ' 00:06:31.967 18:56:49 blockdev_nvme -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:31.967 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:31.967 --rc genhtml_branch_coverage=1 00:06:31.967 --rc genhtml_function_coverage=1 00:06:31.967 --rc genhtml_legend=1 00:06:31.967 --rc geninfo_all_blocks=1 00:06:31.967 --rc geninfo_unexecuted_blocks=1 00:06:31.967 00:06:31.967 ' 00:06:31.967 18:56:49 blockdev_nvme -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:31.967 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:31.967 --rc genhtml_branch_coverage=1 00:06:31.967 --rc genhtml_function_coverage=1 00:06:31.967 --rc genhtml_legend=1 00:06:31.967 --rc geninfo_all_blocks=1 00:06:31.967 --rc geninfo_unexecuted_blocks=1 00:06:31.967 00:06:31.967 ' 00:06:31.967 18:56:49 blockdev_nvme -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:31.967 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:31.967 --rc genhtml_branch_coverage=1 00:06:31.967 --rc genhtml_function_coverage=1 00:06:31.967 --rc genhtml_legend=1 00:06:31.967 --rc geninfo_all_blocks=1 00:06:31.967 --rc geninfo_unexecuted_blocks=1 00:06:31.967 00:06:31.967 ' 00:06:31.967 18:56:49 blockdev_nvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:06:31.967 18:56:49 blockdev_nvme -- bdev/nbd_common.sh@6 -- # set -e 00:06:31.967 18:56:49 blockdev_nvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:06:31.967 18:56:49 blockdev_nvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:06:31.967 18:56:49 blockdev_nvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:06:31.967 18:56:49 blockdev_nvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:06:31.967 18:56:49 blockdev_nvme -- bdev/blockdev.sh@17 -- # export 
RPC_PIPE_TIMEOUT=30 00:06:31.967 18:56:49 blockdev_nvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:06:31.967 18:56:49 blockdev_nvme -- bdev/blockdev.sh@20 -- # : 00:06:31.967 18:56:49 blockdev_nvme -- bdev/blockdev.sh@707 -- # QOS_DEV_1=Malloc_0 00:06:31.967 18:56:49 blockdev_nvme -- bdev/blockdev.sh@708 -- # QOS_DEV_2=Null_1 00:06:31.967 18:56:49 blockdev_nvme -- bdev/blockdev.sh@709 -- # QOS_RUN_TIME=5 00:06:31.967 18:56:49 blockdev_nvme -- bdev/blockdev.sh@711 -- # uname -s 00:06:32.229 18:56:49 blockdev_nvme -- bdev/blockdev.sh@711 -- # '[' Linux = Linux ']' 00:06:32.229 18:56:49 blockdev_nvme -- bdev/blockdev.sh@713 -- # PRE_RESERVED_MEM=0 00:06:32.229 18:56:49 blockdev_nvme -- bdev/blockdev.sh@719 -- # test_type=nvme 00:06:32.229 18:56:49 blockdev_nvme -- bdev/blockdev.sh@720 -- # crypto_device= 00:06:32.229 18:56:49 blockdev_nvme -- bdev/blockdev.sh@721 -- # dek= 00:06:32.229 18:56:49 blockdev_nvme -- bdev/blockdev.sh@722 -- # env_ctx= 00:06:32.229 18:56:49 blockdev_nvme -- bdev/blockdev.sh@723 -- # wait_for_rpc= 00:06:32.229 18:56:49 blockdev_nvme -- bdev/blockdev.sh@724 -- # '[' -n '' ']' 00:06:32.229 18:56:49 blockdev_nvme -- bdev/blockdev.sh@727 -- # [[ nvme == bdev ]] 00:06:32.229 18:56:49 blockdev_nvme -- bdev/blockdev.sh@727 -- # [[ nvme == crypto_* ]] 00:06:32.229 18:56:49 blockdev_nvme -- bdev/blockdev.sh@730 -- # start_spdk_tgt 00:06:32.229 18:56:49 blockdev_nvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=71431 00:06:32.229 18:56:49 blockdev_nvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:06:32.229 18:56:49 blockdev_nvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:06:32.229 18:56:49 blockdev_nvme -- bdev/blockdev.sh@49 -- # waitforlisten 71431 00:06:32.229 18:56:49 blockdev_nvme -- common/autotest_common.sh@835 -- # '[' -z 71431 ']' 00:06:32.229 18:56:49 blockdev_nvme -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:32.229 18:56:49 blockdev_nvme -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:32.229 18:56:49 blockdev_nvme -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:32.229 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:32.229 18:56:49 blockdev_nvme -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:32.229 18:56:49 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:32.229 [2024-12-05 18:56:49.618187] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 22.11.4 initialization... 
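blockdev.sh in nvme mode builds its bdev layer from scripts/gen_nvme.sh, which emits one bdev_nvme_attach_controller entry per NVMe PCIe function. The config loaded via load_subsystem_config below (four QEMU controllers at 0000:00:10.0 through 0000:00:13.0) pretty-prints to:
  {
    "subsystem": "bdev",
    "config": [
      { "method": "bdev_nvme_attach_controller",
        "params": { "trtype": "PCIe", "name": "Nvme0", "traddr": "0000:00:10.0" } },
      { "method": "bdev_nvme_attach_controller",
        "params": { "trtype": "PCIe", "name": "Nvme1", "traddr": "0000:00:11.0" } },
      { "method": "bdev_nvme_attach_controller",
        "params": { "trtype": "PCIe", "name": "Nvme2", "traddr": "0000:00:12.0" } },
      { "method": "bdev_nvme_attach_controller",
        "params": { "trtype": "PCIe", "name": "Nvme3", "traddr": "0000:00:13.0" } }
    ]
  }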
00:06:32.229 [2024-12-05 18:56:49.618580] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71431 ] 00:06:32.229 [2024-12-05 18:56:49.766505] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:32.492 [2024-12-05 18:56:49.797596] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:33.065 18:56:50 blockdev_nvme -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:33.065 18:56:50 blockdev_nvme -- common/autotest_common.sh@868 -- # return 0 00:06:33.065 18:56:50 blockdev_nvme -- bdev/blockdev.sh@731 -- # case "$test_type" in 00:06:33.065 18:56:50 blockdev_nvme -- bdev/blockdev.sh@736 -- # setup_nvme_conf 00:06:33.065 18:56:50 blockdev_nvme -- bdev/blockdev.sh@81 -- # local json 00:06:33.065 18:56:50 blockdev_nvme -- bdev/blockdev.sh@82 -- # mapfile -t json 00:06:33.065 18:56:50 blockdev_nvme -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:06:33.065 18:56:50 blockdev_nvme -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\''' 00:06:33.065 18:56:50 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:33.065 18:56:50 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:33.327 18:56:50 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:33.327 18:56:50 blockdev_nvme -- bdev/blockdev.sh@774 -- # rpc_cmd bdev_wait_for_examine 00:06:33.327 18:56:50 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:33.327 18:56:50 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:33.327 18:56:50 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:33.327 18:56:50 blockdev_nvme -- bdev/blockdev.sh@777 -- # cat 00:06:33.327 18:56:50 blockdev_nvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n accel 00:06:33.327 18:56:50 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:33.327 18:56:50 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:33.327 18:56:50 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:33.327 18:56:50 blockdev_nvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n bdev 00:06:33.327 18:56:50 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:33.327 18:56:50 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:33.327 18:56:50 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:33.327 18:56:50 blockdev_nvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n iobuf 00:06:33.327 18:56:50 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:33.327 18:56:50 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:33.327 18:56:50 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:33.589 18:56:50 blockdev_nvme -- 
bdev/blockdev.sh@785 -- # mapfile -t bdevs 00:06:33.590 18:56:50 blockdev_nvme -- bdev/blockdev.sh@785 -- # rpc_cmd bdev_get_bdevs 00:06:33.590 18:56:50 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:33.590 18:56:50 blockdev_nvme -- bdev/blockdev.sh@785 -- # jq -r '.[] | select(.claimed == false)' 00:06:33.590 18:56:50 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:33.590 18:56:50 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:33.590 18:56:50 blockdev_nvme -- bdev/blockdev.sh@786 -- # mapfile -t bdevs_name 00:06:33.590 18:56:50 blockdev_nvme -- bdev/blockdev.sh@786 -- # jq -r .name 00:06:33.590 18:56:50 blockdev_nvme -- bdev/blockdev.sh@786 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "e50cfa84-5498-4051-bc69-a962f3fe4cc8"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "e50cfa84-5498-4051-bc69-a962f3fe4cc8",' ' "numa_id": -1,' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": true,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme1n1",' ' "aliases": [' ' "7f79459b-6002-4c23-a20d-00e8e0565249"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "7f79459b-6002-4c23-a20d-00e8e0565249",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:11.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:11.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12341",' ' "firmware_revision": "8.0.0",' ' "subnqn": 
"nqn.2019-08.org.qemu:12341",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "cb17cc65-48c2-48d9-8ec7-856f1df8ad38"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "cb17cc65-48c2-48d9-8ec7-856f1df8ad38",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "37796d53-c0f8-4bc4-bb0b-3092baf69a22"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "37796d53-c0f8-4bc4-bb0b-3092baf69a22",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "e2e330fe-987b-418b-9f0a-d509fa5616d8"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 
1048576,' ' "uuid": "e2e330fe-987b-418b-9f0a-d509fa5616d8",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "58739084-5eea-469b-86c9-d45eff72e9f8"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "58739084-5eea-469b-86c9-d45eff72e9f8",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:13.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:13.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:06:33.590 18:56:50 blockdev_nvme -- bdev/blockdev.sh@787 -- # bdev_list=("${bdevs_name[@]}") 00:06:33.590 18:56:50 blockdev_nvme -- bdev/blockdev.sh@789 -- # hello_world_bdev=Nvme0n1 00:06:33.590 18:56:50 blockdev_nvme -- bdev/blockdev.sh@790 -- # trap - SIGINT SIGTERM EXIT 00:06:33.590 18:56:50 blockdev_nvme -- bdev/blockdev.sh@791 -- # killprocess 71431 00:06:33.590 18:56:50 blockdev_nvme -- common/autotest_common.sh@954 -- # '[' -z 71431 ']' 00:06:33.590 18:56:50 blockdev_nvme -- common/autotest_common.sh@958 -- # kill -0 71431 00:06:33.590 18:56:50 blockdev_nvme -- common/autotest_common.sh@959 -- # uname 00:06:33.590 18:56:50 
blockdev_nvme -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:33.590 18:56:50 blockdev_nvme -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71431 00:06:33.590 killing process with pid 71431 00:06:33.590 18:56:51 blockdev_nvme -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:33.590 18:56:51 blockdev_nvme -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:33.590 18:56:51 blockdev_nvme -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71431' 00:06:33.590 18:56:51 blockdev_nvme -- common/autotest_common.sh@973 -- # kill 71431 00:06:33.590 18:56:51 blockdev_nvme -- common/autotest_common.sh@978 -- # wait 71431 00:06:33.852 18:56:51 blockdev_nvme -- bdev/blockdev.sh@795 -- # trap cleanup SIGINT SIGTERM EXIT 00:06:33.852 18:56:51 blockdev_nvme -- bdev/blockdev.sh@797 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:06:33.852 18:56:51 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:06:33.852 18:56:51 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:33.852 18:56:51 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:33.852 ************************************ 00:06:33.852 START TEST bdev_hello_world 00:06:33.852 ************************************ 00:06:33.852 18:56:51 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:06:34.114 [2024-12-05 18:56:51.473026] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 22.11.4 initialization... 00:06:34.114 [2024-12-05 18:56:51.473371] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71504 ] 00:06:34.114 [2024-12-05 18:56:51.621968] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:34.114 [2024-12-05 18:56:51.642314] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:34.687 [2024-12-05 18:56:52.043215] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:06:34.687 [2024-12-05 18:56:52.043297] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:06:34.687 [2024-12-05 18:56:52.043329] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:06:34.687 [2024-12-05 18:56:52.045972] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:06:34.687 [2024-12-05 18:56:52.046945] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:06:34.687 [2024-12-05 18:56:52.046987] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:06:34.687 [2024-12-05 18:56:52.047814] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
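The hello_world pass above takes only two inputs, the shared bdev JSON config and a bdev name, so it is easy to replay by hand. A sketch using the same paths as this run; on success the final notices match the "Read string from bdev : Hello World!" line in the trace:

    SPDK=/home/vagrant/spdk_repo/spdk

    # Open Nvme0n1 from the config, write a buffer, read it back, stop the app.
    "$SPDK/build/examples/hello_bdev" --json "$SPDK/test/bdev/bdev.json" -b Nvme0n1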
00:06:34.687 00:06:34.687 [2024-12-05 18:56:52.047868] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:06:34.949 00:06:34.949 real 0m0.864s 00:06:34.949 user 0m0.559s 00:06:34.949 sys 0m0.198s 00:06:34.949 ************************************ 00:06:34.949 END TEST bdev_hello_world 00:06:34.949 ************************************ 00:06:34.949 18:56:52 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:34.949 18:56:52 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:06:34.949 18:56:52 blockdev_nvme -- bdev/blockdev.sh@798 -- # run_test bdev_bounds bdev_bounds '' 00:06:34.949 18:56:52 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:34.949 18:56:52 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:34.949 18:56:52 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:34.949 ************************************ 00:06:34.949 START TEST bdev_bounds 00:06:34.949 ************************************ 00:06:34.949 18:56:52 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:06:34.949 18:56:52 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=71535 00:06:34.949 Process bdevio pid: 71535 00:06:34.949 18:56:52 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:06:34.949 18:56:52 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 71535' 00:06:34.949 18:56:52 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 71535 00:06:34.949 18:56:52 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 71535 ']' 00:06:34.949 18:56:52 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:06:34.949 18:56:52 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:34.949 18:56:52 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:34.949 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:34.949 18:56:52 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:34.949 18:56:52 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:34.949 18:56:52 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:06:34.949 [2024-12-05 18:56:52.414539] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 22.11.4 initialization... 
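bdev_bounds pairs two cooperating processes: a bdevio server started with -w, so it sets the bdevs up and then idles, and a tests.py client that triggers every suite over RPC. A condensed sketch of the two commands traced here, assuming the same tree layout; waiting for the socket works like the polling sketch earlier:

    SPDK=/home/vagrant/spdk_repo/spdk

    # -w: wait for a perform_tests RPC instead of running immediately;
    # -s 0 mirrors PRE_RESERVED_MEM=0 from the trace above.
    "$SPDK/test/bdev/bdevio/bdevio" -w -s 0 --json "$SPDK/test/bdev/bdev.json" &
    bdevio_pid=$!

    # Once the RPC socket answers, run all suites; the CUnit summary is
    # printed by the bdevio process itself.
    "$SPDK/test/bdev/bdevio/tests.py" perform_tests
    kill "$bdevio_pid"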
00:06:34.949 [2024-12-05 18:56:52.414972] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71535 ] 00:06:35.211 [2024-12-05 18:56:52.563824] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:35.211 [2024-12-05 18:56:52.597473] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:35.211 [2024-12-05 18:56:52.597892] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:35.211 [2024-12-05 18:56:52.597913] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:35.783 18:56:53 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:35.783 18:56:53 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@868 -- # return 0 00:06:35.783 18:56:53 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:06:36.045 I/O targets: 00:06:36.045 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:06:36.045 Nvme1n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:06:36.045 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:06:36.045 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:06:36.045 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:06:36.045 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:06:36.045 00:06:36.045 00:06:36.045 CUnit - A unit testing framework for C - Version 2.1-3 00:06:36.045 http://cunit.sourceforge.net/ 00:06:36.045 00:06:36.045 00:06:36.045 Suite: bdevio tests on: Nvme3n1 00:06:36.045 Test: blockdev write read block ...passed 00:06:36.045 Test: blockdev write zeroes read block ...passed 00:06:36.045 Test: blockdev write zeroes read no split ...passed 00:06:36.045 Test: blockdev write zeroes read split ...passed 00:06:36.045 Test: blockdev write zeroes read split partial ...passed 00:06:36.045 Test: blockdev reset ...[2024-12-05 18:56:53.407846] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0, 0] resetting controller 00:06:36.045 passed 00:06:36.045 Test: blockdev write read 8 blocks ...[2024-12-05 18:56:53.412523] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:13.0, 0] Resetting controller successful. 
00:06:36.045 passed 00:06:36.045 Test: blockdev write read size > 128k ...passed 00:06:36.045 Test: blockdev write read invalid size ...passed 00:06:36.045 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:36.045 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:36.045 Test: blockdev write read max offset ...passed 00:06:36.045 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:36.045 Test: blockdev writev readv 8 blocks ...passed 00:06:36.045 Test: blockdev writev readv 30 x 1block ...passed 00:06:36.045 Test: blockdev writev readv block ...passed 00:06:36.045 Test: blockdev writev readv size > 128k ...passed 00:06:36.045 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:36.045 Test: blockdev comparev and writev ...[2024-12-05 18:56:53.431365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2c2c06000 len:0x1000 00:06:36.045 [2024-12-05 18:56:53.431442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:36.045 passed 00:06:36.045 Test: blockdev nvme passthru rw ...passed 00:06:36.045 Test: blockdev nvme passthru vendor specific ...passed 00:06:36.045 Test: blockdev nvme admin passthru ...[2024-12-05 18:56:53.433957] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:36.045 [2024-12-05 18:56:53.434007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:36.045 passed 00:06:36.045 Test: blockdev copy ...passed 00:06:36.045 Suite: bdevio tests on: Nvme2n3 00:06:36.045 Test: blockdev write read block ...passed 00:06:36.045 Test: blockdev write zeroes read block ...passed 00:06:36.045 Test: blockdev write zeroes read no split ...passed 00:06:36.045 Test: blockdev write zeroes read split ...passed 00:06:36.045 Test: blockdev write zeroes read split partial ...passed 00:06:36.045 Test: blockdev reset ...[2024-12-05 18:56:53.464074] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:06:36.045 [2024-12-05 18:56:53.470496] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 00:06:36.045 passed 00:06:36.045 Test: blockdev write read 8 blocks ...
00:06:36.045 passed 00:06:36.045 Test: blockdev write read size > 128k ...passed 00:06:36.045 Test: blockdev write read invalid size ...passed 00:06:36.045 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:36.045 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:36.045 Test: blockdev write read max offset ...passed 00:06:36.045 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:36.045 Test: blockdev writev readv 8 blocks ...passed 00:06:36.045 Test: blockdev writev readv 30 x 1block ...passed 00:06:36.045 Test: blockdev writev readv block ...passed 00:06:36.045 Test: blockdev writev readv size > 128k ...passed 00:06:36.046 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:36.046 Test: blockdev comparev and writev ...[2024-12-05 18:56:53.488394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x29d202000 len:0x1000 00:06:36.046 [2024-12-05 18:56:53.488579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:36.046 passed 00:06:36.046 Test: blockdev nvme passthru rw ...passed 00:06:36.046 Test: blockdev nvme passthru vendor specific ...passed 00:06:36.046 Test: blockdev nvme admin passthru ...[2024-12-05 18:56:53.491371] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:36.046 [2024-12-05 18:56:53.491421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:36.046 passed 00:06:36.046 Test: blockdev copy ...passed 00:06:36.046 Suite: bdevio tests on: Nvme2n2 00:06:36.045 Test: blockdev write read block ...passed 00:06:36.045 Test: blockdev write zeroes read block ...passed 00:06:36.045 Test: blockdev write zeroes read no split ...passed 00:06:36.045 Test: blockdev write zeroes read split ...passed 00:06:36.045 Test: blockdev write zeroes read split partial ...passed 00:06:36.045 Test: blockdev reset ...[2024-12-05 18:56:53.520244] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:06:36.045 [2024-12-05 18:56:53.525421] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful.
00:06:36.045 passed 00:06:36.045 Test: blockdev write read 8 blocks ...passed 00:06:36.045 Test: blockdev write read size > 128k ...passed 00:06:36.045 Test: blockdev write read invalid size ...passed 00:06:36.045 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:36.045 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:36.045 Test: blockdev write read max offset ...passed 00:06:36.045 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:36.045 Test: blockdev writev readv 8 blocks ...passed 00:06:36.045 Test: blockdev writev readv 30 x 1block ...passed 00:06:36.045 Test: blockdev writev readv block ...passed 00:06:36.045 Test: blockdev writev readv size > 128k ...passed 00:06:36.046 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:36.046 Test: blockdev comparev and writev ...[2024-12-05 18:56:53.543012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2d3e3b000 len:0x1000 00:06:36.046 [2024-12-05 18:56:53.543176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:36.046 passed 00:06:36.046 Test: blockdev nvme passthru rw ...passed 00:06:36.046 Test: blockdev nvme passthru vendor specific ...passed 00:06:36.046 Test: blockdev nvme admin passthru ...[2024-12-05 18:56:53.545648] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:36.046 [2024-12-05 18:56:53.545693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:36.046 passed 00:06:36.046 Test: blockdev copy ...passed 00:06:36.046 Suite: bdevio tests on: Nvme2n1 00:06:36.046 Test: blockdev write read block ...passed 00:06:36.046 Test: blockdev write zeroes read block ...passed 00:06:36.046 Test: blockdev write zeroes read no split ...passed 00:06:36.046 Test: blockdev write zeroes read split ...passed 00:06:36.046 Test: blockdev write zeroes read split partial ...passed 00:06:36.046 Test: blockdev reset ...[2024-12-05 18:56:53.576450] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:06:36.046 passed 00:06:36.046 Test: blockdev write read 8 blocks ...[2024-12-05 18:56:53.581236] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful.
00:06:36.046 passed 00:06:36.046 Test: blockdev write read size > 128k ...passed 00:06:36.046 Test: blockdev write read invalid size ...passed 00:06:36.046 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:36.046 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:36.046 Test: blockdev write read max offset ...passed 00:06:36.046 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:36.046 Test: blockdev writev readv 8 blocks ...passed 00:06:36.046 Test: blockdev writev readv 30 x 1block ...passed 00:06:36.046 Test: blockdev writev readv block ...passed 00:06:36.046 Test: blockdev writev readv size > 128k ...passed 00:06:36.046 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:36.046 Test: blockdev comparev and writev ...[2024-12-05 18:56:53.601270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2d3e37000 len:0x1000 00:06:36.046 [2024-12-05 18:56:53.601326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:36.046 passed 00:06:36.307 Test: blockdev nvme passthru rw ...passed 00:06:36.307 Test: blockdev nvme passthru vendor specific ...passed 00:06:36.307 Test: blockdev nvme admin passthru ...[2024-12-05 18:56:53.604522] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:36.307 [2024-12-05 18:56:53.604568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:36.307 passed 00:06:36.307 Test: blockdev copy ...passed 00:06:36.307 Suite: bdevio tests on: Nvme1n1 00:06:36.307 Test: blockdev write read block ...passed 00:06:36.307 Test: blockdev write zeroes read block ...passed 00:06:36.307 Test: blockdev write zeroes read no split ...passed 00:06:36.307 Test: blockdev write zeroes read split ...passed 00:06:36.307 Test: blockdev write zeroes read split partial ...passed 00:06:36.307 Test: blockdev reset ...[2024-12-05 18:56:53.632486] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller 00:06:36.307 [2024-12-05 18:56:53.637320] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller successful. 00:06:36.307 passed 00:06:36.307 Test: blockdev write read 8 blocks ...
00:06:36.307 passed 00:06:36.307 Test: blockdev write read size > 128k ...passed 00:06:36.307 Test: blockdev write read invalid size ...passed 00:06:36.307 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:36.307 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:36.307 Test: blockdev write read max offset ...passed 00:06:36.307 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:36.307 Test: blockdev writev readv 8 blocks ...passed 00:06:36.307 Test: blockdev writev readv 30 x 1block ...passed 00:06:36.307 Test: blockdev writev readv block ...passed 00:06:36.307 Test: blockdev writev readv size > 128k ...passed 00:06:36.307 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:36.307 Test: blockdev comparev and writev ...[2024-12-05 18:56:53.651623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2d3e33000 len:0x1000 00:06:36.307 [2024-12-05 18:56:53.651804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:36.307 passed 00:06:36.307 Test: blockdev nvme passthru rw ...passed 00:06:36.307 Test: blockdev nvme passthru vendor specific ...passed 00:06:36.307 Test: blockdev nvme admin passthru ...[2024-12-05 18:56:53.652912] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:36.307 [2024-12-05 18:56:53.652953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:36.307 passed 00:06:36.307 Test: blockdev copy ...passed 00:06:36.307 Suite: bdevio tests on: Nvme0n1 00:06:36.307 Test: blockdev write read block ...passed 00:06:36.307 Test: blockdev write zeroes read block ...passed 00:06:36.307 Test: blockdev write zeroes read no split ...passed 00:06:36.307 Test: blockdev write zeroes read split ...passed 00:06:36.307 Test: blockdev write zeroes read split partial ...passed 00:06:36.307 Test: blockdev reset ...[2024-12-05 18:56:53.679689] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller 00:06:36.307 [2024-12-05 18:56:53.683819] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful. 00:06:36.307 passed 00:06:36.307 Test: blockdev write read 8 blocks ...passed 00:06:36.307 Test: blockdev write read size > 128k ...passed 00:06:36.307 Test: blockdev write read invalid size ...passed 00:06:36.307 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:36.307 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:36.307 Test: blockdev write read max offset ...passed 00:06:36.307 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:36.307 Test: blockdev writev readv 8 blocks ...passed 00:06:36.307 Test: blockdev writev readv 30 x 1block ...passed 00:06:36.307 Test: blockdev writev readv block ...passed 00:06:36.307 Test: blockdev writev readv size > 128k ...passed 00:06:36.307 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:36.307 Test: blockdev comparev and writev ...passed 00:06:36.307 Test: blockdev nvme passthru rw ...[2024-12-05 18:56:53.694943] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has
00:06:36.307 passed 00:06:36.307 Test: blockdev nvme passthru vendor specific ...[2024-12-05 18:56:53.695836] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 PRP2 0x0 00:06:36.307 [2024-12-05 18:56:53.695884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0 sqhd:0017 p:1 m:0 dnr:1 00:06:36.307 passed 00:06:36.307 Test: blockdev nvme admin passthru ...passed 00:06:36.307 Test: blockdev copy ...passed 00:06:36.307 00:06:36.307 Run Summary: Type Total Ran Passed Failed Inactive 00:06:36.307 suites 6 6 n/a 0 0 00:06:36.307 tests 138 138 138 0 0 00:06:36.307 asserts 893 893 893 0 n/a 00:06:36.307 00:06:36.307 Elapsed time = 0.720 seconds 00:06:36.307 0 00:06:36.307 18:56:53 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 71535 00:06:36.308 18:56:53 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 71535 ']' 00:06:36.308 18:56:53 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 71535 00:06:36.308 18:56:53 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@959 -- # uname 00:06:36.308 18:56:53 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:36.308 18:56:53 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71535 00:06:36.308 18:56:53 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:36.308 18:56:53 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:36.308 18:56:53 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71535' 00:06:36.308 killing process with pid 71535 00:06:36.308 18:56:53 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@973 -- # kill 71535 00:06:36.308 18:56:53 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@978 -- # wait 71535 00:06:36.568 18:56:53 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:06:36.568 00:06:36.568 real 0m1.591s 00:06:36.568 user 0m3.902s 00:06:36.568 sys 0m0.370s 00:06:36.568 18:56:53 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:36.568 ************************************ 00:06:36.568 END TEST bdev_bounds 00:06:36.568 ************************************ 00:06:36.568 18:56:53 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:06:36.568 18:56:53 blockdev_nvme -- bdev/blockdev.sh@799 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:06:36.568 18:56:53 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:06:36.568 18:56:53 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:36.568 18:56:53 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:36.568 ************************************ 00:06:36.568 START TEST bdev_nbd 00:06:36.568 ************************************ 00:06:36.568 18:56:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:06:36.568 18:56:53 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:06:36.568 18:56:54 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:06:36.568 18:56:54 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@301 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:06:36.569 18:56:54 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:06:36.569 18:56:54 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:36.569 18:56:54 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:06:36.569 18:56:54 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=6 00:06:36.569 18:56:54 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:06:36.569 18:56:54 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:06:36.569 18:56:54 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:06:36.569 18:56:54 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=6 00:06:36.569 18:56:54 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:06:36.569 18:56:54 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:06:36.569 18:56:54 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:36.569 18:56:54 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:06:36.569 18:56:54 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=71578 00:06:36.569 18:56:54 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:06:36.569 18:56:54 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 71578 /var/tmp/spdk-nbd.sock 00:06:36.569 18:56:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 71578 ']' 00:06:36.569 18:56:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:36.569 18:56:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:36.569 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:36.569 18:56:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:36.569 18:56:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:36.569 18:56:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:06:36.569 18:56:54 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:06:36.569 [2024-12-05 18:56:54.080382] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 22.11.4 initialization... 
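For the nbd flavor the harness deliberately does not reuse the main target: it launches the lightweight bdev_svc app on a private RPC socket so that nbd start/stop traffic cannot collide with anything on /var/tmp/spdk.sock. The launch reduces to roughly the following sketch, where -r names the private socket and -i the shared-memory instance id, with paths as in this run:

    SPDK=/home/vagrant/spdk_repo/spdk

    # Every rpc.py call aimed at this instance must pass -s /var/tmp/spdk-nbd.sock.
    "$SPDK/test/app/bdev_svc/bdev_svc" -r /var/tmp/spdk-nbd.sock -i 0 \
        --json "$SPDK/test/bdev/bdev.json" &
    nbd_svc_pid=$!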
00:06:36.569 [2024-12-05 18:56:54.080536] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:36.829 [2024-12-05 18:56:54.231085] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:36.829 [2024-12-05 18:56:54.261014] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:37.768 18:56:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:37.768 18:56:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:06:37.768 18:56:54 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:06:37.768 18:56:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:37.768 18:56:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:37.768 18:56:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:06:37.768 18:56:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:06:37.768 18:56:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:37.768 18:56:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:37.768 18:56:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:06:37.768 18:56:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:06:37.768 18:56:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:06:37.768 18:56:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:06:37.768 18:56:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:37.768 18:56:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:06:37.768 18:56:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:06:37.768 18:56:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:06:37.768 18:56:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:06:37.768 18:56:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:06:37.768 18:56:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:37.768 18:56:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:37.768 18:56:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:37.768 18:56:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:06:37.768 18:56:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:37.768 18:56:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:37.768 18:56:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:37.768 18:56:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:37.768 1+0 records in 
00:06:37.768 1+0 records out 00:06:37.768 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00207633 s, 2.0 MB/s 00:06:37.768 18:56:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:37.768 18:56:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:37.768 18:56:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:37.768 18:56:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:37.768 18:56:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:37.768 18:56:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:37.768 18:56:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:37.768 18:56:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 00:06:38.029 18:56:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:06:38.029 18:56:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:06:38.029 18:56:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:06:38.029 18:56:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:06:38.029 18:56:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:38.029 18:56:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:38.029 18:56:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:38.029 18:56:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:06:38.029 18:56:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:38.029 18:56:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:38.029 18:56:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:38.029 18:56:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:38.029 1+0 records in 00:06:38.029 1+0 records out 00:06:38.029 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00120137 s, 3.4 MB/s 00:06:38.029 18:56:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:38.029 18:56:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:38.029 18:56:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:38.029 18:56:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:38.029 18:56:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:38.029 18:56:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:38.029 18:56:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:38.029 18:56:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:06:38.289 18:56:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:06:38.289 18:56:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:06:38.289 18:56:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # 
waitfornbd nbd2 00:06:38.289 18:56:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd2 00:06:38.289 18:56:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:38.289 18:56:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:38.289 18:56:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:38.289 18:56:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd2 /proc/partitions 00:06:38.289 18:56:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:38.289 18:56:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:38.289 18:56:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:38.289 18:56:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:38.289 1+0 records in 00:06:38.289 1+0 records out 00:06:38.289 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00109618 s, 3.7 MB/s 00:06:38.289 18:56:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:38.289 18:56:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:38.289 18:56:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:38.289 18:56:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:38.289 18:56:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:38.289 18:56:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:38.289 18:56:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:38.289 18:56:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:06:38.550 18:56:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:06:38.550 18:56:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:06:38.550 18:56:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:06:38.550 18:56:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd3 00:06:38.550 18:56:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:38.550 18:56:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:38.550 18:56:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:38.550 18:56:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd3 /proc/partitions 00:06:38.550 18:56:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:38.550 18:56:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:38.550 18:56:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:38.550 18:56:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:38.550 1+0 records in 00:06:38.550 1+0 records out 00:06:38.550 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00135096 s, 3.0 MB/s 00:06:38.550 18:56:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:38.550 18:56:55 
blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:38.550 18:56:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:38.550 18:56:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:38.550 18:56:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:38.550 18:56:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:38.550 18:56:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:38.550 18:56:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 00:06:38.810 18:56:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:06:38.810 18:56:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:06:38.810 18:56:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:06:38.810 18:56:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd4 00:06:38.810 18:56:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:38.810 18:56:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:38.810 18:56:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:38.810 18:56:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd4 /proc/partitions 00:06:38.810 18:56:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:38.810 18:56:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:38.810 18:56:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:38.810 18:56:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:38.810 1+0 records in 00:06:38.810 1+0 records out 00:06:38.810 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000861217 s, 4.8 MB/s 00:06:38.810 18:56:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:38.810 18:56:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:38.810 18:56:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:38.810 18:56:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:38.810 18:56:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:38.810 18:56:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:38.810 18:56:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:38.810 18:56:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:06:39.070 18:56:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:06:39.070 18:56:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:06:39.071 18:56:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:06:39.071 18:56:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd5 00:06:39.071 18:56:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:39.071 18:56:56 blockdev_nvme.bdev_nbd -- 
common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:39.071 18:56:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:39.071 18:56:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd5 /proc/partitions 00:06:39.071 18:56:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:39.071 18:56:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:39.071 18:56:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:39.071 18:56:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:39.071 1+0 records in 00:06:39.071 1+0 records out 00:06:39.071 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00116123 s, 3.5 MB/s 00:06:39.071 18:56:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:39.071 18:56:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:39.071 18:56:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:39.071 18:56:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:39.071 18:56:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:39.071 18:56:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:39.071 18:56:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:39.071 18:56:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:39.332 18:56:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:06:39.332 { 00:06:39.332 "nbd_device": "/dev/nbd0", 00:06:39.332 "bdev_name": "Nvme0n1" 00:06:39.332 }, 00:06:39.332 { 00:06:39.332 "nbd_device": "/dev/nbd1", 00:06:39.332 "bdev_name": "Nvme1n1" 00:06:39.332 }, 00:06:39.332 { 00:06:39.332 "nbd_device": "/dev/nbd2", 00:06:39.332 "bdev_name": "Nvme2n1" 00:06:39.332 }, 00:06:39.332 { 00:06:39.332 "nbd_device": "/dev/nbd3", 00:06:39.332 "bdev_name": "Nvme2n2" 00:06:39.332 }, 00:06:39.332 { 00:06:39.332 "nbd_device": "/dev/nbd4", 00:06:39.332 "bdev_name": "Nvme2n3" 00:06:39.332 }, 00:06:39.332 { 00:06:39.332 "nbd_device": "/dev/nbd5", 00:06:39.332 "bdev_name": "Nvme3n1" 00:06:39.332 } 00:06:39.332 ]' 00:06:39.332 18:56:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:06:39.332 18:56:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:06:39.332 { 00:06:39.332 "nbd_device": "/dev/nbd0", 00:06:39.332 "bdev_name": "Nvme0n1" 00:06:39.332 }, 00:06:39.332 { 00:06:39.332 "nbd_device": "/dev/nbd1", 00:06:39.332 "bdev_name": "Nvme1n1" 00:06:39.332 }, 00:06:39.332 { 00:06:39.332 "nbd_device": "/dev/nbd2", 00:06:39.332 "bdev_name": "Nvme2n1" 00:06:39.332 }, 00:06:39.332 { 00:06:39.332 "nbd_device": "/dev/nbd3", 00:06:39.332 "bdev_name": "Nvme2n2" 00:06:39.332 }, 00:06:39.332 { 00:06:39.332 "nbd_device": "/dev/nbd4", 00:06:39.332 "bdev_name": "Nvme2n3" 00:06:39.332 }, 00:06:39.332 { 00:06:39.332 "nbd_device": "/dev/nbd5", 00:06:39.332 "bdev_name": "Nvme3n1" 00:06:39.332 } 00:06:39.332 ]' 00:06:39.332 18:56:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:06:39.332 18:56:56 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:06:39.332 18:56:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:39.332 18:56:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:06:39.332 18:56:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:39.332 18:56:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:06:39.332 18:56:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:39.332 18:56:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:39.594 18:56:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:39.594 18:56:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:39.594 18:56:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:39.594 18:56:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:39.594 18:56:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:39.594 18:56:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:39.594 18:56:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:39.594 18:56:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:39.594 18:56:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:39.594 18:56:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:39.855 18:56:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:39.855 18:56:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:39.855 18:56:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:39.855 18:56:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:39.855 18:56:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:39.855 18:56:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:39.855 18:56:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:39.855 18:56:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:39.855 18:56:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:39.855 18:56:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:06:40.116 18:56:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:06:40.116 18:56:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:06:40.116 18:56:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:06:40.116 18:56:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:40.116 18:56:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:40.116 18:56:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:06:40.116 18:56:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:40.116 18:56:57 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@45 -- # return 0 00:06:40.116 18:56:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:40.116 18:56:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:06:40.377 18:56:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:06:40.377 18:56:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:06:40.377 18:56:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:06:40.377 18:56:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:40.377 18:56:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:40.377 18:56:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:06:40.377 18:56:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:40.377 18:56:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:40.377 18:56:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:40.377 18:56:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:06:40.377 18:56:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:06:40.377 18:56:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:06:40.377 18:56:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:06:40.377 18:56:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:40.377 18:56:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:40.377 18:56:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:06:40.377 18:56:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:40.377 18:56:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:40.377 18:56:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:40.377 18:56:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:06:40.638 18:56:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:06:40.638 18:56:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:06:40.638 18:56:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:06:40.638 18:56:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:40.638 18:56:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:40.638 18:56:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:06:40.638 18:56:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:40.638 18:56:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:40.638 18:56:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:40.638 18:56:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:40.638 18:56:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:40.899 18:56:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:40.899 18:56:58 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:40.899 18:56:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:40.899 18:56:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:40.899 18:56:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:06:40.899 18:56:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:40.899 18:56:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:06:40.899 18:56:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:06:40.899 18:56:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:06:40.899 18:56:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:06:40.899 18:56:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:06:40.899 18:56:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:06:40.899 18:56:58 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:06:40.899 18:56:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:40.899 18:56:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:40.899 18:56:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:40.899 18:56:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:06:40.899 18:56:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:40.899 18:56:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:06:40.899 18:56:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:40.899 18:56:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:40.899 18:56:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:40.899 18:56:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:06:40.899 18:56:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:40.899 18:56:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:06:40.899 18:56:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:40.899 18:56:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:40.899 18:56:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:06:41.161 /dev/nbd0 00:06:41.161 18:56:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:41.161 18:56:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:41.161 18:56:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:06:41.161 18:56:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:41.161 18:56:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:41.161 
18:56:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:41.161 18:56:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:06:41.161 18:56:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:41.161 18:56:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:41.161 18:56:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:41.161 18:56:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:41.161 1+0 records in 00:06:41.161 1+0 records out 00:06:41.161 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00130934 s, 3.1 MB/s 00:06:41.161 18:56:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:41.161 18:56:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:41.161 18:56:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:41.161 18:56:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:41.161 18:56:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:41.161 18:56:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:41.161 18:56:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:41.161 18:56:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 /dev/nbd1 00:06:41.423 /dev/nbd1 00:06:41.423 18:56:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:41.423 18:56:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:41.423 18:56:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:06:41.423 18:56:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:41.423 18:56:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:41.423 18:56:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:41.423 18:56:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:06:41.423 18:56:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:41.423 18:56:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:41.423 18:56:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:41.423 18:56:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:41.423 1+0 records in 00:06:41.423 1+0 records out 00:06:41.423 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00109509 s, 3.7 MB/s 00:06:41.423 18:56:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:41.423 18:56:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:41.423 18:56:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:41.423 18:56:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:41.423 18:56:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 
-- # return 0 00:06:41.423 18:56:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:41.423 18:56:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:41.423 18:56:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd10 00:06:41.686 /dev/nbd10 00:06:41.686 18:56:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:06:41.686 18:56:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:06:41.686 18:56:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd10 00:06:41.686 18:56:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:41.686 18:56:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:41.686 18:56:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:41.686 18:56:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd10 /proc/partitions 00:06:41.686 18:56:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:41.686 18:56:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:41.686 18:56:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:41.686 18:56:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:41.686 1+0 records in 00:06:41.686 1+0 records out 00:06:41.686 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00147349 s, 2.8 MB/s 00:06:41.686 18:56:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:41.686 18:56:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:41.686 18:56:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:41.686 18:56:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:41.686 18:56:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:41.686 18:56:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:41.686 18:56:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:41.686 18:56:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd11 00:06:41.946 /dev/nbd11 00:06:41.946 18:56:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:06:41.946 18:56:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:06:41.946 18:56:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd11 00:06:41.946 18:56:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:41.946 18:56:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:41.946 18:56:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:41.946 18:56:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd11 /proc/partitions 00:06:41.946 18:56:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:41.946 18:56:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:41.946 18:56:59 blockdev_nvme.bdev_nbd -- 
common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:41.946 18:56:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:41.946 1+0 records in 00:06:41.946 1+0 records out 00:06:41.946 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000913785 s, 4.5 MB/s 00:06:41.946 18:56:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:41.946 18:56:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:41.946 18:56:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:41.946 18:56:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:41.946 18:56:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:41.946 18:56:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:41.946 18:56:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:41.946 18:56:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd12 00:06:42.207 /dev/nbd12 00:06:42.207 18:56:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:06:42.207 18:56:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:06:42.207 18:56:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd12 00:06:42.207 18:56:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:42.207 18:56:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:42.207 18:56:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:42.207 18:56:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd12 /proc/partitions 00:06:42.207 18:56:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:42.207 18:56:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:42.207 18:56:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:42.207 18:56:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:42.207 1+0 records in 00:06:42.207 1+0 records out 00:06:42.207 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000684356 s, 6.0 MB/s 00:06:42.207 18:56:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:42.207 18:56:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:42.207 18:56:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:42.207 18:56:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:42.207 18:56:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:42.207 18:56:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:42.207 18:56:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:42.207 18:56:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd13 00:06:42.468 /dev/nbd13 00:06:42.468 18:56:59 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:06:42.468 18:56:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:06:42.468 18:56:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd13 00:06:42.468 18:56:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:42.468 18:56:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:42.468 18:56:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:42.468 18:56:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd13 /proc/partitions 00:06:42.468 18:56:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:42.468 18:56:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:42.468 18:56:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:42.468 18:56:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:42.468 1+0 records in 00:06:42.468 1+0 records out 00:06:42.468 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000826121 s, 5.0 MB/s 00:06:42.468 18:56:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:42.468 18:56:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:42.468 18:56:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:42.468 18:56:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:42.468 18:56:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:42.468 18:56:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:42.468 18:56:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:42.468 18:56:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:42.468 18:56:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:42.468 18:56:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:42.729 18:57:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:42.729 { 00:06:42.729 "nbd_device": "/dev/nbd0", 00:06:42.729 "bdev_name": "Nvme0n1" 00:06:42.729 }, 00:06:42.729 { 00:06:42.729 "nbd_device": "/dev/nbd1", 00:06:42.729 "bdev_name": "Nvme1n1" 00:06:42.729 }, 00:06:42.729 { 00:06:42.729 "nbd_device": "/dev/nbd10", 00:06:42.729 "bdev_name": "Nvme2n1" 00:06:42.729 }, 00:06:42.729 { 00:06:42.729 "nbd_device": "/dev/nbd11", 00:06:42.729 "bdev_name": "Nvme2n2" 00:06:42.729 }, 00:06:42.729 { 00:06:42.729 "nbd_device": "/dev/nbd12", 00:06:42.729 "bdev_name": "Nvme2n3" 00:06:42.729 }, 00:06:42.729 { 00:06:42.729 "nbd_device": "/dev/nbd13", 00:06:42.729 "bdev_name": "Nvme3n1" 00:06:42.729 } 00:06:42.729 ]' 00:06:42.729 18:57:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:42.729 { 00:06:42.729 "nbd_device": "/dev/nbd0", 00:06:42.729 "bdev_name": "Nvme0n1" 00:06:42.729 }, 00:06:42.729 { 00:06:42.729 "nbd_device": "/dev/nbd1", 00:06:42.729 "bdev_name": "Nvme1n1" 00:06:42.729 }, 00:06:42.729 { 00:06:42.729 "nbd_device": "/dev/nbd10", 00:06:42.729 "bdev_name": "Nvme2n1" 00:06:42.729 }, 00:06:42.729 
{ 00:06:42.729 "nbd_device": "/dev/nbd11", 00:06:42.729 "bdev_name": "Nvme2n2" 00:06:42.729 }, 00:06:42.729 { 00:06:42.729 "nbd_device": "/dev/nbd12", 00:06:42.729 "bdev_name": "Nvme2n3" 00:06:42.729 }, 00:06:42.729 { 00:06:42.729 "nbd_device": "/dev/nbd13", 00:06:42.729 "bdev_name": "Nvme3n1" 00:06:42.729 } 00:06:42.729 ]' 00:06:42.729 18:57:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:42.729 18:57:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:42.729 /dev/nbd1 00:06:42.729 /dev/nbd10 00:06:42.729 /dev/nbd11 00:06:42.729 /dev/nbd12 00:06:42.729 /dev/nbd13' 00:06:42.729 18:57:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:42.729 /dev/nbd1 00:06:42.729 /dev/nbd10 00:06:42.729 /dev/nbd11 00:06:42.729 /dev/nbd12 00:06:42.729 /dev/nbd13' 00:06:42.729 18:57:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:42.729 18:57:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=6 00:06:42.729 18:57:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 6 00:06:42.729 18:57:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=6 00:06:42.729 18:57:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:06:42.729 18:57:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:06:42.729 18:57:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:06:42.729 18:57:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:42.729 18:57:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:42.729 18:57:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:06:42.729 18:57:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:42.729 18:57:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:06:42.729 256+0 records in 00:06:42.729 256+0 records out 00:06:42.729 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00890481 s, 118 MB/s 00:06:42.729 18:57:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:42.729 18:57:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:42.993 256+0 records in 00:06:42.993 256+0 records out 00:06:42.993 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.145509 s, 7.2 MB/s 00:06:42.993 18:57:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:42.993 18:57:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:42.993 256+0 records in 00:06:42.993 256+0 records out 00:06:42.993 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.141994 s, 7.4 MB/s 00:06:42.994 18:57:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:42.994 18:57:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:06:43.286 256+0 records in 00:06:43.286 256+0 records out 00:06:43.286 1048576 bytes (1.0 MB, 1.0 MiB) 
copied, 0.182638 s, 5.7 MB/s 00:06:43.286 18:57:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:43.286 18:57:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:06:43.286 256+0 records in 00:06:43.286 256+0 records out 00:06:43.286 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.144599 s, 7.3 MB/s 00:06:43.286 18:57:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:43.286 18:57:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:06:43.547 256+0 records in 00:06:43.547 256+0 records out 00:06:43.547 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.164505 s, 6.4 MB/s 00:06:43.547 18:57:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:43.547 18:57:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:06:43.807 256+0 records in 00:06:43.807 256+0 records out 00:06:43.807 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.218397 s, 4.8 MB/s 00:06:43.807 18:57:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:06:43.807 18:57:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:06:43.807 18:57:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:43.807 18:57:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:43.807 18:57:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:06:43.807 18:57:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:43.807 18:57:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:43.807 18:57:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:43.807 18:57:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:06:43.807 18:57:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:43.807 18:57:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:06:43.807 18:57:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:43.807 18:57:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:06:43.807 18:57:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:43.807 18:57:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:06:43.807 18:57:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:43.807 18:57:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:06:43.807 18:57:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:43.807 18:57:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # 
cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:06:43.807 18:57:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:06:43.807 18:57:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:06:43.807 18:57:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:43.807 18:57:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:06:43.807 18:57:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:43.807 18:57:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:06:43.807 18:57:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:43.807 18:57:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:44.066 18:57:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:44.066 18:57:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:44.066 18:57:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:44.066 18:57:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:44.066 18:57:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:44.067 18:57:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:44.067 18:57:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:44.067 18:57:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:44.067 18:57:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:44.067 18:57:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:44.327 18:57:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:44.327 18:57:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:44.327 18:57:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:44.327 18:57:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:44.327 18:57:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:44.327 18:57:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:44.327 18:57:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:44.327 18:57:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:44.327 18:57:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:44.327 18:57:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:06:44.327 18:57:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:06:44.327 18:57:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:06:44.327 18:57:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:06:44.327 18:57:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:44.327 18:57:01 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:44.327 18:57:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:06:44.327 18:57:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:44.327 18:57:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:44.327 18:57:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:44.327 18:57:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:06:44.588 18:57:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:06:44.588 18:57:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:06:44.588 18:57:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:06:44.588 18:57:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:44.588 18:57:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:44.588 18:57:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:06:44.588 18:57:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:44.588 18:57:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:44.588 18:57:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:44.588 18:57:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:06:44.849 18:57:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:06:44.849 18:57:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:06:44.849 18:57:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:06:44.849 18:57:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:44.849 18:57:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:44.849 18:57:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:06:44.849 18:57:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:44.849 18:57:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:44.849 18:57:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:44.849 18:57:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:06:45.111 18:57:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:06:45.111 18:57:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:06:45.111 18:57:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:06:45.111 18:57:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:45.111 18:57:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:45.111 18:57:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:06:45.111 18:57:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:45.111 18:57:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:45.111 18:57:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:45.111 18:57:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:06:45.111 18:57:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:45.372 18:57:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:45.372 18:57:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:45.372 18:57:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:45.372 18:57:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:45.372 18:57:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:06:45.372 18:57:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:45.372 18:57:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:06:45.372 18:57:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:06:45.372 18:57:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:06:45.372 18:57:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:06:45.372 18:57:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:45.372 18:57:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:06:45.372 18:57:02 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:06:45.372 18:57:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:45.372 18:57:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:06:45.372 18:57:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:06:45.633 malloc_lvol_verify 00:06:45.633 18:57:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:06:45.895 72aedaab-7a14-4854-9d47-65c31609d46d 00:06:45.895 18:57:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:06:46.156 087687df-1d60-4674-a51a-33e5207c5f36 00:06:46.156 18:57:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:06:46.156 /dev/nbd0 00:06:46.416 18:57:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:06:46.416 18:57:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:06:46.416 18:57:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:06:46.416 18:57:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:06:46.416 18:57:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:06:46.416 mke2fs 1.47.0 (5-Feb-2023) 00:06:46.416 Discarding device blocks: 0/4096 done 00:06:46.416 Creating filesystem with 4096 1k blocks and 1024 inodes 00:06:46.416 00:06:46.416 Allocating group tables: 0/1 done 00:06:46.416 Writing inode tables: 0/1 done 00:06:46.416 Creating journal (1024 blocks): done 00:06:46.416 Writing superblocks and filesystem accounting information: 0/1 done 00:06:46.416 00:06:46.416 18:57:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:06:46.416 18:57:03 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:46.416 18:57:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:06:46.416 18:57:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:46.416 18:57:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:06:46.416 18:57:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:46.416 18:57:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:46.416 18:57:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:46.416 18:57:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:46.416 18:57:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:46.416 18:57:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:46.416 18:57:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:46.416 18:57:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:46.416 18:57:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:46.416 18:57:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:46.416 18:57:03 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 71578 00:06:46.416 18:57:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 71578 ']' 00:06:46.416 18:57:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 71578 00:06:46.416 18:57:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@959 -- # uname 00:06:46.677 18:57:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:46.677 18:57:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71578 00:06:46.677 18:57:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:46.677 18:57:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:46.677 killing process with pid 71578 00:06:46.677 18:57:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71578' 00:06:46.677 18:57:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@973 -- # kill 71578 00:06:46.677 18:57:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@978 -- # wait 71578 00:06:46.938 18:57:04 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:06:46.938 00:06:46.938 real 0m10.267s 00:06:46.938 user 0m14.339s 00:06:46.938 sys 0m3.706s 00:06:46.938 ************************************ 00:06:46.938 END TEST bdev_nbd 00:06:46.938 ************************************ 00:06:46.938 18:57:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:46.938 18:57:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:06:46.938 skipping fio tests on NVMe due to multi-ns failures. 00:06:46.938 18:57:04 blockdev_nvme -- bdev/blockdev.sh@800 -- # [[ y == y ]] 00:06:46.938 18:57:04 blockdev_nvme -- bdev/blockdev.sh@801 -- # '[' nvme = nvme ']' 00:06:46.938 18:57:04 blockdev_nvme -- bdev/blockdev.sh@803 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 
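The bdev_nbd test that ends here has exercised SPDK's NBD export path end to end: each NVMe bdev was attached to a /dev/nbdN node through the /var/tmp/spdk-nbd.sock RPC server, polled in /proc/partitions until the kernel registered it, written with a random pattern, compared back, and detached until nbd_get_disks reported an empty list. Condensed to one device, the round trip looks roughly like the sketch below; the rpc.py subcommands and dd/cmp flags are the ones traced above, while the retry loops (waitfornbd/waitfornbd_exit), error handling, and the temporary file name (/tmp/randtest standing in for the test's nbdrandtest) are simplifications:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  sock=/var/tmp/spdk-nbd.sock

  $rpc -s $sock nbd_start_disk Nvme0n1 /dev/nbd0       # export the bdev as an NBD node
  grep -q -w nbd0 /proc/partitions                     # wait for the kernel to see it
  dd if=/dev/urandom of=/tmp/randtest bs=4096 count=256
  dd if=/tmp/randtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
  cmp -b -n 1M /tmp/randtest /dev/nbd0                 # byte-compare the written pattern
  $rpc -s $sock nbd_stop_disk /dev/nbd0                # detach the NBD node
  $rpc -s $sock nbd_get_disks                          # prints [] once nothing is exported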
00:06:46.938 18:57:04 blockdev_nvme -- bdev/blockdev.sh@812 -- # trap cleanup SIGINT SIGTERM EXIT 00:06:46.938 18:57:04 blockdev_nvme -- bdev/blockdev.sh@814 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:06:46.938 18:57:04 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:06:46.938 18:57:04 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:46.938 18:57:04 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:46.938 ************************************ 00:06:46.938 START TEST bdev_verify 00:06:46.938 ************************************ 00:06:46.938 18:57:04 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:06:46.938 [2024-12-05 18:57:04.412443] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 22.11.4 initialization... 00:06:46.938 [2024-12-05 18:57:04.412616] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71963 ] 00:06:47.200 [2024-12-05 18:57:04.558670] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:47.200 [2024-12-05 18:57:04.595738] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:47.200 [2024-12-05 18:57:04.595755] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:47.770 Running I/O for 5 seconds... 00:06:50.091 17216.00 IOPS, 67.25 MiB/s [2024-12-05T18:57:08.591Z] 18880.00 IOPS, 73.75 MiB/s [2024-12-05T18:57:09.534Z] 18944.00 IOPS, 74.00 MiB/s [2024-12-05T18:57:10.475Z] 18816.00 IOPS, 73.50 MiB/s [2024-12-05T18:57:10.475Z] 18675.20 IOPS, 72.95 MiB/s 00:06:52.916 Latency(us) 00:06:52.916 [2024-12-05T18:57:10.475Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:52.916 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:52.916 Verification LBA range: start 0x0 length 0xbd0bd 00:06:52.916 Nvme0n1 : 5.07 1528.68 5.97 0.00 0.00 83306.12 9074.22 95178.44 00:06:52.916 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:52.916 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:06:52.916 Nvme0n1 : 5.06 1542.47 6.03 0.00 0.00 82611.73 18753.38 96791.63 00:06:52.916 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:52.916 Verification LBA range: start 0x0 length 0xa0000 00:06:52.916 Nvme1n1 : 5.07 1527.71 5.97 0.00 0.00 83206.64 10788.23 89935.56 00:06:52.916 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:52.916 Verification LBA range: start 0xa0000 length 0xa0000 00:06:52.916 Nvme1n1 : 5.06 1541.95 6.02 0.00 0.00 82492.25 23088.84 88322.36 00:06:52.916 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:52.916 Verification LBA range: start 0x0 length 0x80000 00:06:52.916 Nvme2n1 : 5.09 1533.91 5.99 0.00 0.00 82672.38 16736.89 72190.42 00:06:52.916 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:52.916 Verification LBA range: start 0x80000 length 0x80000 00:06:52.916 Nvme2n1 : 5.09 1546.05 6.04 0.00 0.00 81939.60 11443.59 74206.92 00:06:52.916 Job: Nvme2n2 
(Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:52.916 Verification LBA range: start 0x0 length 0x80000 00:06:52.916 Nvme2n2 : 5.09 1532.90 5.99 0.00 0.00 82506.73 17644.31 64124.46 00:06:52.916 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:52.916 Verification LBA range: start 0x80000 length 0x80000 00:06:52.916 Nvme2n2 : 5.10 1554.60 6.07 0.00 0.00 81508.34 10132.87 63721.16 00:06:52.916 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:52.916 Verification LBA range: start 0x0 length 0x80000 00:06:52.916 Nvme2n3 : 5.10 1532.38 5.99 0.00 0.00 82383.38 18148.43 66140.95 00:06:52.916 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:52.916 Verification LBA range: start 0x80000 length 0x80000 00:06:52.916 Nvme2n3 : 5.11 1553.67 6.07 0.00 0.00 81406.01 11695.66 64931.05 00:06:52.916 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:52.916 Verification LBA range: start 0x0 length 0x20000 00:06:52.916 Nvme3n1 : 5.10 1531.86 5.98 0.00 0.00 82276.93 17241.01 68157.44 00:06:52.916 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:52.916 Verification LBA range: start 0x20000 length 0x20000 00:06:52.916 Nvme3n1 : 5.11 1553.25 6.07 0.00 0.00 81281.11 11393.18 68964.04 00:06:52.916 [2024-12-05T18:57:10.475Z] =================================================================================================================== 00:06:52.916 [2024-12-05T18:57:10.475Z] Total : 18479.44 72.19 0.00 0.00 82294.04 9074.22 96791.63 00:06:53.489 00:06:53.489 real 0m6.559s 00:06:53.489 user 0m12.255s 00:06:53.489 sys 0m0.274s 00:06:53.489 ************************************ 00:06:53.489 END TEST bdev_verify 00:06:53.489 ************************************ 00:06:53.489 18:57:10 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:53.489 18:57:10 blockdev_nvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:06:53.489 18:57:10 blockdev_nvme -- bdev/blockdev.sh@815 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:06:53.489 18:57:10 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:06:53.489 18:57:10 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:53.489 18:57:10 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:53.489 ************************************ 00:06:53.489 START TEST bdev_verify_big_io 00:06:53.489 ************************************ 00:06:53.489 18:57:10 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:06:53.749 [2024-12-05 18:57:11.056650] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 22.11.4 initialization... 
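Both this 64 KiB pass and the 4 KiB bdev_verify pass above drive the bdevperf example app in its verify workload, where every completed read is checked against the pattern previously written, so the IOPS/latency tables double as data-integrity checks across all six NVMe bdevs. A sketch of the command line being launched here, with glosses inferred from the queue depth, I/O size, run time, and core counts echoed in the output (the -C flag is reproduced as invoked, without a gloss):

  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
      --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
      -q 128 \       # 128 outstanding I/Os per job ("depth: 128" in the tables)
      -o 65536 \     # 64 KiB I/Os here; the earlier pass used -o 4096
      -w verify \    # write-then-read-back pattern verification
      -t 5 \         # run for 5 seconds
      -C \
      -m 0x3         # core mask 0x3: one reactor each on cores 0 and 1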
00:06:53.749 [2024-12-05 18:57:11.056820] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72050 ] 00:06:53.749 [2024-12-05 18:57:11.203331] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:53.749 [2024-12-05 18:57:11.236482] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:53.749 [2024-12-05 18:57:11.236594] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:54.322 Running I/O for 5 seconds... 00:06:58.850 1693.00 IOPS, 105.81 MiB/s [2024-12-05T18:57:17.801Z] 2045.00 IOPS, 127.81 MiB/s [2024-12-05T18:57:18.063Z] 2302.33 IOPS, 143.90 MiB/s [2024-12-05T18:57:18.323Z] 2214.50 IOPS, 138.41 MiB/s 00:07:00.764 Latency(us) 00:07:00.764 [2024-12-05T18:57:18.323Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:00.764 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:00.764 Verification LBA range: start 0x0 length 0xbd0b 00:07:00.764 Nvme0n1 : 5.93 82.42 5.15 0.00 0.00 1487670.91 19862.45 1580929.97 00:07:00.764 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:00.764 Verification LBA range: start 0xbd0b length 0xbd0b 00:07:00.764 Nvme0n1 : 5.76 130.95 8.18 0.00 0.00 942550.89 39523.25 948557.98 00:07:00.764 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:00.764 Verification LBA range: start 0x0 length 0xa000 00:07:00.764 Nvme1n1 : 5.93 82.57 5.16 0.00 0.00 1408768.44 66140.95 1303460.63 00:07:00.764 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:00.764 Verification LBA range: start 0xa000 length 0xa000 00:07:00.764 Nvme1n1 : 5.76 133.28 8.33 0.00 0.00 907950.34 111310.38 832408.02 00:07:00.764 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:00.764 Verification LBA range: start 0x0 length 0x8000 00:07:00.764 Nvme2n1 : 5.94 86.23 5.39 0.00 0.00 1289201.43 48597.46 1484138.34 00:07:00.764 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:00.764 Verification LBA range: start 0x8000 length 0x8000 00:07:00.764 Nvme2n1 : 5.77 133.21 8.33 0.00 0.00 881660.32 120182.94 858219.13 00:07:00.764 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:00.764 Verification LBA range: start 0x0 length 0x8000 00:07:00.764 Nvme2n2 : 5.99 96.15 6.01 0.00 0.00 1111837.32 21273.99 1329271.73 00:07:00.764 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:00.764 Verification LBA range: start 0x8000 length 0x8000 00:07:00.764 Nvme2n2 : 5.82 135.68 8.48 0.00 0.00 841343.38 53638.70 884030.23 00:07:00.764 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:00.764 Verification LBA range: start 0x0 length 0x8000 00:07:00.764 Nvme2n3 : 6.12 125.57 7.85 0.00 0.00 819912.21 13208.02 1335724.50 00:07:00.764 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:00.764 Verification LBA range: start 0x8000 length 0x8000 00:07:00.764 Nvme2n3 : 5.87 141.81 8.86 0.00 0.00 787413.46 40733.14 896935.78 00:07:00.764 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:00.764 Verification LBA range: start 0x0 length 0x2000 00:07:00.764 Nvme3n1 : 6.33 222.42 13.90 0.00 0.00 443815.01 297.75 1361535.61 00:07:00.764 
Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:00.764 Verification LBA range: start 0x2000 length 0x2000 00:07:00.764 Nvme3n1 : 5.88 152.40 9.53 0.00 0.00 715553.84 2873.50 922746.88 00:07:00.764 [2024-12-05T18:57:18.323Z] =================================================================================================================== 00:07:00.764 [2024-12-05T18:57:18.323Z] Total : 1522.70 95.17 0.00 0.00 885916.35 297.75 1580929.97 00:07:01.705 ************************************ 00:07:01.705 END TEST bdev_verify_big_io 00:07:01.705 00:07:01.705 real 0m8.003s 00:07:01.705 user 0m15.098s 00:07:01.705 sys 0m0.320s 00:07:01.705 18:57:18 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:01.705 18:57:18 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:07:01.705 ************************************ 00:07:01.705 18:57:19 blockdev_nvme -- bdev/blockdev.sh@816 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:01.705 18:57:19 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:07:01.705 18:57:19 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:01.705 18:57:19 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:01.705 ************************************ 00:07:01.705 START TEST bdev_write_zeroes 00:07:01.705 ************************************ 00:07:01.705 18:57:19 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:01.705 [2024-12-05 18:57:19.132769] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 22.11.4 initialization... 00:07:01.705 [2024-12-05 18:57:19.132912] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72161 ] 00:07:01.965 [2024-12-05 18:57:19.273644] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:01.965 [2024-12-05 18:57:19.307023] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:02.224 Running I/O for 1 seconds... 
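The bdev_write_zeroes test keeps the 128-deep, 4 KiB shape but swaps the workload for write-zeroes requests and runs for a single second on one core (the EAL parameters above show -c 0x1, one reactor on core 0). The totals in the table that follows are internally consistent, which makes a quick sanity check possible on any of these bdevperf tables:

  # 50218.92 IOPS x 4096 B per I/O = 205,696,696 B/s
  # 205,696,696 B/s / 2^20        ~= 196.17 MiB/s, matching the reported throughput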
00:07:03.616 50236.00 IOPS, 196.23 MiB/s 00:07:03.616 Latency(us) 00:07:03.616 [2024-12-05T18:57:21.175Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:03.616 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:03.616 Nvme0n1 : 1.02 8401.33 32.82 0.00 0.00 15199.76 5066.44 32263.88 00:07:03.616 Job: Nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:03.616 Nvme1n1 : 1.02 8395.06 32.79 0.00 0.00 15193.43 10082.46 23492.14 00:07:03.616 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:03.616 Nvme2n1 : 1.02 8385.51 32.76 0.00 0.00 15139.88 10032.05 21778.12 00:07:03.616 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:03.616 Nvme2n2 : 1.02 8375.99 32.72 0.00 0.00 15134.13 10082.46 20971.52 00:07:03.616 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:03.616 Nvme2n3 : 1.03 8366.48 32.68 0.00 0.00 15104.23 9275.86 22383.06 00:07:03.617 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:03.617 Nvme3n1 : 1.03 8294.55 32.40 0.00 0.00 15212.62 9124.63 29037.49 00:07:03.617 [2024-12-05T18:57:21.176Z] =================================================================================================================== 00:07:03.617 [2024-12-05T18:57:21.176Z] Total : 50218.92 196.17 0.00 0.00 15163.94 5066.44 32263.88 00:07:03.617 00:07:03.617 real 0m1.940s 00:07:03.617 user 0m1.613s 00:07:03.617 sys 0m0.206s 00:07:03.617 18:57:20 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:03.617 ************************************ 00:07:03.617 END TEST bdev_write_zeroes 00:07:03.617 ************************************ 00:07:03.617 18:57:20 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:07:03.617 18:57:21 blockdev_nvme -- bdev/blockdev.sh@819 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:03.617 18:57:21 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:07:03.617 18:57:21 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:03.617 18:57:21 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:03.617 ************************************ 00:07:03.617 START TEST bdev_json_nonenclosed 00:07:03.617 ************************************ 00:07:03.617 18:57:21 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:03.617 [2024-12-05 18:57:21.132608] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 22.11.4 initialization... 
00:07:03.617 [2024-12-05 18:57:21.132750] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72203 ] 00:07:03.877 [2024-12-05 18:57:21.279245] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:03.877 [2024-12-05 18:57:21.312537] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:03.877 [2024-12-05 18:57:21.312650] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:07:03.877 [2024-12-05 18:57:21.312669] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:07:03.877 [2024-12-05 18:57:21.312682] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:03.877 00:07:03.877 real 0m0.333s 00:07:03.877 user 0m0.127s 00:07:03.877 sys 0m0.102s 00:07:03.877 18:57:21 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:03.877 ************************************ 00:07:03.877 END TEST bdev_json_nonenclosed 00:07:03.877 ************************************ 00:07:03.877 18:57:21 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:07:04.138 18:57:21 blockdev_nvme -- bdev/blockdev.sh@822 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:04.138 18:57:21 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:07:04.138 18:57:21 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:04.138 18:57:21 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:04.138 ************************************ 00:07:04.138 START TEST bdev_json_nonarray 00:07:04.138 ************************************ 00:07:04.138 18:57:21 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:04.138 [2024-12-05 18:57:21.543708] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 22.11.4 initialization... 00:07:04.138 [2024-12-05 18:57:21.543847] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72223 ] 00:07:04.138 [2024-12-05 18:57:21.682926] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:04.398 [2024-12-05 18:57:21.716534] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:04.398 [2024-12-05 18:57:21.716661] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
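(Note: the json_config *ERROR* lines above are the expected outcome of these two tests: the config must be a single JSON object whose "subsystems" key is an array. A minimal sketch of the shape that is accepted, with the method list abbreviated; the actual nonenclosed.json/nonarray.json contents are not shown in this log.)

    cat > /tmp/valid.json <<'EOF'
    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            { "method": "bdev_nvme_attach_controller",
              "params": { "trtype": "PCIe", "name": "Nvme0", "traddr": "0000:00:10.0" } }
          ]
        }
      ]
    }
    EOF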
00:07:04.398 [2024-12-05 18:57:21.716679] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:07:04.398 [2024-12-05 18:57:21.716693] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:04.398 00:07:04.398 real 0m0.326s 00:07:04.398 user 0m0.128s 00:07:04.398 sys 0m0.095s 00:07:04.398 18:57:21 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:04.398 ************************************ 00:07:04.398 END TEST bdev_json_nonarray 00:07:04.398 ************************************ 00:07:04.398 18:57:21 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:07:04.398 18:57:21 blockdev_nvme -- bdev/blockdev.sh@824 -- # [[ nvme == bdev ]] 00:07:04.398 18:57:21 blockdev_nvme -- bdev/blockdev.sh@832 -- # [[ nvme == gpt ]] 00:07:04.398 18:57:21 blockdev_nvme -- bdev/blockdev.sh@836 -- # [[ nvme == crypto_sw ]] 00:07:04.398 18:57:21 blockdev_nvme -- bdev/blockdev.sh@848 -- # trap - SIGINT SIGTERM EXIT 00:07:04.398 18:57:21 blockdev_nvme -- bdev/blockdev.sh@849 -- # cleanup 00:07:04.398 18:57:21 blockdev_nvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:07:04.398 18:57:21 blockdev_nvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:07:04.398 18:57:21 blockdev_nvme -- bdev/blockdev.sh@26 -- # [[ nvme == rbd ]] 00:07:04.398 18:57:21 blockdev_nvme -- bdev/blockdev.sh@30 -- # [[ nvme == daos ]] 00:07:04.398 18:57:21 blockdev_nvme -- bdev/blockdev.sh@34 -- # [[ nvme = \g\p\t ]] 00:07:04.398 18:57:21 blockdev_nvme -- bdev/blockdev.sh@40 -- # [[ nvme == xnvme ]] 00:07:04.398 00:07:04.398 real 0m32.504s 00:07:04.398 user 0m50.089s 00:07:04.398 sys 0m6.208s 00:07:04.398 18:57:21 blockdev_nvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:04.398 ************************************ 00:07:04.398 END TEST blockdev_nvme 00:07:04.398 ************************************ 00:07:04.398 18:57:21 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:04.398 18:57:21 -- spdk/autotest.sh@209 -- # uname -s 00:07:04.398 18:57:21 -- spdk/autotest.sh@209 -- # [[ Linux == Linux ]] 00:07:04.398 18:57:21 -- spdk/autotest.sh@210 -- # run_test blockdev_nvme_gpt /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:07:04.398 18:57:21 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:04.398 18:57:21 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:04.398 18:57:21 -- common/autotest_common.sh@10 -- # set +x 00:07:04.398 ************************************ 00:07:04.398 START TEST blockdev_nvme_gpt 00:07:04.398 ************************************ 00:07:04.398 18:57:21 blockdev_nvme_gpt -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:07:04.659 * Looking for test storage... 
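(Note: the gpt suite starting here will label the blank QEMU disk and stamp SPDK's partition GUIDs onto it; the steps are traced in full further down. A condensed sketch of just those commands, with the device name and GUIDs copied from this run; they are destructive, so only suitable against a throwaway disk like this one.)

    # Write a GPT label and carve two half-disk test partitions.
    parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST_first 0% 50% mkpart SPDK_TEST_second 50% 100%
    # Stamp SPDK's partition-type GUID (-t) and a fixed unique GUID (-u) on each partition.
    sgdisk -t 1:6527994e-2c5a-4eec-9613-8f5944074e8b -u 1:6f89f330-603b-4116-ac73-2ca8eae53030 /dev/nvme0n1
    sgdisk -t 2:7c5222bd-8f5d-4087-9c00-bf9843c7b58c -u 2:abf1734f-66e5-4c0f-aa29-4021d4d307df /dev/nvme0n1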
00:07:04.659 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:07:04.659 18:57:22 blockdev_nvme_gpt -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:04.659 18:57:22 blockdev_nvme_gpt -- common/autotest_common.sh@1711 -- # lcov --version 00:07:04.659 18:57:22 blockdev_nvme_gpt -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:04.659 18:57:22 blockdev_nvme_gpt -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:04.659 18:57:22 blockdev_nvme_gpt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:04.659 18:57:22 blockdev_nvme_gpt -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:04.659 18:57:22 blockdev_nvme_gpt -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:04.659 18:57:22 blockdev_nvme_gpt -- scripts/common.sh@336 -- # IFS=.-: 00:07:04.659 18:57:22 blockdev_nvme_gpt -- scripts/common.sh@336 -- # read -ra ver1 00:07:04.659 18:57:22 blockdev_nvme_gpt -- scripts/common.sh@337 -- # IFS=.-: 00:07:04.659 18:57:22 blockdev_nvme_gpt -- scripts/common.sh@337 -- # read -ra ver2 00:07:04.659 18:57:22 blockdev_nvme_gpt -- scripts/common.sh@338 -- # local 'op=<' 00:07:04.659 18:57:22 blockdev_nvme_gpt -- scripts/common.sh@340 -- # ver1_l=2 00:07:04.659 18:57:22 blockdev_nvme_gpt -- scripts/common.sh@341 -- # ver2_l=1 00:07:04.659 18:57:22 blockdev_nvme_gpt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:04.659 18:57:22 blockdev_nvme_gpt -- scripts/common.sh@344 -- # case "$op" in 00:07:04.659 18:57:22 blockdev_nvme_gpt -- scripts/common.sh@345 -- # : 1 00:07:04.659 18:57:22 blockdev_nvme_gpt -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:04.659 18:57:22 blockdev_nvme_gpt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:04.659 18:57:22 blockdev_nvme_gpt -- scripts/common.sh@365 -- # decimal 1 00:07:04.659 18:57:22 blockdev_nvme_gpt -- scripts/common.sh@353 -- # local d=1 00:07:04.659 18:57:22 blockdev_nvme_gpt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:04.659 18:57:22 blockdev_nvme_gpt -- scripts/common.sh@355 -- # echo 1 00:07:04.659 18:57:22 blockdev_nvme_gpt -- scripts/common.sh@365 -- # ver1[v]=1 00:07:04.659 18:57:22 blockdev_nvme_gpt -- scripts/common.sh@366 -- # decimal 2 00:07:04.659 18:57:22 blockdev_nvme_gpt -- scripts/common.sh@353 -- # local d=2 00:07:04.659 18:57:22 blockdev_nvme_gpt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:04.659 18:57:22 blockdev_nvme_gpt -- scripts/common.sh@355 -- # echo 2 00:07:04.659 18:57:22 blockdev_nvme_gpt -- scripts/common.sh@366 -- # ver2[v]=2 00:07:04.659 18:57:22 blockdev_nvme_gpt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:04.659 18:57:22 blockdev_nvme_gpt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:04.659 18:57:22 blockdev_nvme_gpt -- scripts/common.sh@368 -- # return 0 00:07:04.659 18:57:22 blockdev_nvme_gpt -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:04.659 18:57:22 blockdev_nvme_gpt -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:04.659 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:04.659 --rc genhtml_branch_coverage=1 00:07:04.659 --rc genhtml_function_coverage=1 00:07:04.659 --rc genhtml_legend=1 00:07:04.659 --rc geninfo_all_blocks=1 00:07:04.659 --rc geninfo_unexecuted_blocks=1 00:07:04.659 00:07:04.659 ' 00:07:04.659 18:57:22 blockdev_nvme_gpt -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:04.659 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:04.659 --rc 
genhtml_branch_coverage=1 00:07:04.659 --rc genhtml_function_coverage=1 00:07:04.659 --rc genhtml_legend=1 00:07:04.659 --rc geninfo_all_blocks=1 00:07:04.660 --rc geninfo_unexecuted_blocks=1 00:07:04.660 00:07:04.660 ' 00:07:04.660 18:57:22 blockdev_nvme_gpt -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:04.660 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:04.660 --rc genhtml_branch_coverage=1 00:07:04.660 --rc genhtml_function_coverage=1 00:07:04.660 --rc genhtml_legend=1 00:07:04.660 --rc geninfo_all_blocks=1 00:07:04.660 --rc geninfo_unexecuted_blocks=1 00:07:04.660 00:07:04.660 ' 00:07:04.660 18:57:22 blockdev_nvme_gpt -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:04.660 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:04.660 --rc genhtml_branch_coverage=1 00:07:04.660 --rc genhtml_function_coverage=1 00:07:04.660 --rc genhtml_legend=1 00:07:04.660 --rc geninfo_all_blocks=1 00:07:04.660 --rc geninfo_unexecuted_blocks=1 00:07:04.660 00:07:04.660 ' 00:07:04.660 18:57:22 blockdev_nvme_gpt -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:07:04.660 18:57:22 blockdev_nvme_gpt -- bdev/nbd_common.sh@6 -- # set -e 00:07:04.660 18:57:22 blockdev_nvme_gpt -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:07:04.660 18:57:22 blockdev_nvme_gpt -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:07:04.660 18:57:22 blockdev_nvme_gpt -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:07:04.660 18:57:22 blockdev_nvme_gpt -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:07:04.660 18:57:22 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:07:04.660 18:57:22 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:07:04.660 18:57:22 blockdev_nvme_gpt -- bdev/blockdev.sh@20 -- # : 00:07:04.660 18:57:22 blockdev_nvme_gpt -- bdev/blockdev.sh@707 -- # QOS_DEV_1=Malloc_0 00:07:04.660 18:57:22 blockdev_nvme_gpt -- bdev/blockdev.sh@708 -- # QOS_DEV_2=Null_1 00:07:04.660 18:57:22 blockdev_nvme_gpt -- bdev/blockdev.sh@709 -- # QOS_RUN_TIME=5 00:07:04.660 18:57:22 blockdev_nvme_gpt -- bdev/blockdev.sh@711 -- # uname -s 00:07:04.660 18:57:22 blockdev_nvme_gpt -- bdev/blockdev.sh@711 -- # '[' Linux = Linux ']' 00:07:04.660 18:57:22 blockdev_nvme_gpt -- bdev/blockdev.sh@713 -- # PRE_RESERVED_MEM=0 00:07:04.660 18:57:22 blockdev_nvme_gpt -- bdev/blockdev.sh@719 -- # test_type=gpt 00:07:04.660 18:57:22 blockdev_nvme_gpt -- bdev/blockdev.sh@720 -- # crypto_device= 00:07:04.660 18:57:22 blockdev_nvme_gpt -- bdev/blockdev.sh@721 -- # dek= 00:07:04.660 18:57:22 blockdev_nvme_gpt -- bdev/blockdev.sh@722 -- # env_ctx= 00:07:04.660 18:57:22 blockdev_nvme_gpt -- bdev/blockdev.sh@723 -- # wait_for_rpc= 00:07:04.660 18:57:22 blockdev_nvme_gpt -- bdev/blockdev.sh@724 -- # '[' -n '' ']' 00:07:04.660 18:57:22 blockdev_nvme_gpt -- bdev/blockdev.sh@727 -- # [[ gpt == bdev ]] 00:07:04.660 18:57:22 blockdev_nvme_gpt -- bdev/blockdev.sh@727 -- # [[ gpt == crypto_* ]] 00:07:04.660 18:57:22 blockdev_nvme_gpt -- bdev/blockdev.sh@730 -- # start_spdk_tgt 00:07:04.660 18:57:22 blockdev_nvme_gpt -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=72307 00:07:04.660 18:57:22 blockdev_nvme_gpt -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:07:04.660 18:57:22 blockdev_nvme_gpt -- bdev/blockdev.sh@49 -- # waitforlisten 72307 
00:07:04.660 18:57:22 blockdev_nvme_gpt -- common/autotest_common.sh@835 -- # '[' -z 72307 ']' 00:07:04.660 18:57:22 blockdev_nvme_gpt -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:04.660 18:57:22 blockdev_nvme_gpt -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:04.660 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:04.660 18:57:22 blockdev_nvme_gpt -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:04.660 18:57:22 blockdev_nvme_gpt -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:04.660 18:57:22 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:04.660 18:57:22 blockdev_nvme_gpt -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:07:04.660 [2024-12-05 18:57:22.185689] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 22.11.4 initialization... 00:07:04.660 [2024-12-05 18:57:22.186337] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72307 ] 00:07:04.920 [2024-12-05 18:57:22.335246] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:04.920 [2024-12-05 18:57:22.370063] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:05.863 18:57:23 blockdev_nvme_gpt -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:05.863 18:57:23 blockdev_nvme_gpt -- common/autotest_common.sh@868 -- # return 0 00:07:05.863 18:57:23 blockdev_nvme_gpt -- bdev/blockdev.sh@731 -- # case "$test_type" in 00:07:05.863 18:57:23 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # setup_gpt_conf 00:07:05.863 18:57:23 blockdev_nvme_gpt -- bdev/blockdev.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:07:05.863 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:06.124 Waiting for block devices as requested 00:07:06.124 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:07:06.385 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:07:06.385 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:07:06.385 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:07:11.663 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:07:11.663 18:57:28 blockdev_nvme_gpt -- bdev/blockdev.sh@105 -- # get_zoned_devs 00:07:11.663 18:57:28 blockdev_nvme_gpt -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:07:11.663 18:57:28 blockdev_nvme_gpt -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:07:11.663 18:57:28 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # zoned_ctrls=() 00:07:11.663 18:57:28 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # local -A zoned_ctrls 00:07:11.663 18:57:28 blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # local nvme bdf ns 00:07:11.663 18:57:28 blockdev_nvme_gpt -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:07:11.663 18:57:28 blockdev_nvme_gpt -- common/autotest_common.sh@1669 -- # bdf=0000:00:11.0 00:07:11.663 18:57:28 blockdev_nvme_gpt -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:07:11.663 18:57:28 blockdev_nvme_gpt -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n1 00:07:11.663 18:57:28 blockdev_nvme_gpt -- 
common/autotest_common.sh@1650 -- # local device=nvme0n1 00:07:11.663 18:57:28 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:07:11.663 18:57:28 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:07:11.663 18:57:28 blockdev_nvme_gpt -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:07:11.663 18:57:28 blockdev_nvme_gpt -- common/autotest_common.sh@1669 -- # bdf=0000:00:10.0 00:07:11.663 18:57:28 blockdev_nvme_gpt -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:07:11.663 18:57:28 blockdev_nvme_gpt -- common/autotest_common.sh@1671 -- # is_block_zoned nvme1n1 00:07:11.663 18:57:28 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:07:11.663 18:57:28 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:07:11.663 18:57:28 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:07:11.663 18:57:28 blockdev_nvme_gpt -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:07:11.663 18:57:28 blockdev_nvme_gpt -- common/autotest_common.sh@1669 -- # bdf=0000:00:12.0 00:07:11.663 18:57:28 blockdev_nvme_gpt -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:07:11.663 18:57:28 blockdev_nvme_gpt -- common/autotest_common.sh@1671 -- # is_block_zoned nvme2n1 00:07:11.663 18:57:28 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme2n1 00:07:11.663 18:57:28 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:07:11.663 18:57:28 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:07:11.663 18:57:28 blockdev_nvme_gpt -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:07:11.663 18:57:28 blockdev_nvme_gpt -- common/autotest_common.sh@1671 -- # is_block_zoned nvme2n2 00:07:11.663 18:57:28 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme2n2 00:07:11.663 18:57:28 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:07:11.663 18:57:28 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:07:11.663 18:57:28 blockdev_nvme_gpt -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:07:11.663 18:57:28 blockdev_nvme_gpt -- common/autotest_common.sh@1671 -- # is_block_zoned nvme2n3 00:07:11.663 18:57:28 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme2n3 00:07:11.663 18:57:28 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:07:11.663 18:57:28 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:07:11.663 18:57:28 blockdev_nvme_gpt -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:07:11.663 18:57:28 blockdev_nvme_gpt -- common/autotest_common.sh@1669 -- # bdf=0000:00:13.0 00:07:11.663 18:57:28 blockdev_nvme_gpt -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:07:11.663 18:57:28 blockdev_nvme_gpt -- common/autotest_common.sh@1671 -- # is_block_zoned nvme3c3n1 00:07:11.663 18:57:28 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme3c3n1 00:07:11.663 18:57:28 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:07:11.663 18:57:28 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:07:11.663 18:57:28 
blockdev_nvme_gpt -- bdev/blockdev.sh@106 -- # nvme_devs=('/sys/block/nvme0n1' '/sys/block/nvme1n1' '/sys/block/nvme2n1' '/sys/block/nvme2n2' '/sys/block/nvme2n3' '/sys/block/nvme3n1') 00:07:11.663 18:57:28 blockdev_nvme_gpt -- bdev/blockdev.sh@106 -- # local nvme_devs nvme_dev 00:07:11.663 18:57:28 blockdev_nvme_gpt -- bdev/blockdev.sh@107 -- # gpt_nvme= 00:07:11.663 18:57:28 blockdev_nvme_gpt -- bdev/blockdev.sh@109 -- # for nvme_dev in "${nvme_devs[@]}" 00:07:11.663 18:57:28 blockdev_nvme_gpt -- bdev/blockdev.sh@110 -- # [[ -z '' ]] 00:07:11.663 18:57:28 blockdev_nvme_gpt -- bdev/blockdev.sh@111 -- # dev=/dev/nvme0n1 00:07:11.663 18:57:28 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # parted /dev/nvme0n1 -ms print 00:07:11.663 18:57:28 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # pt='Error: /dev/nvme0n1: unrecognised disk label 00:07:11.663 BYT; 00:07:11.663 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:;' 00:07:11.663 18:57:28 blockdev_nvme_gpt -- bdev/blockdev.sh@113 -- # [[ Error: /dev/nvme0n1: unrecognised disk label 00:07:11.663 BYT; 00:07:11.663 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:; == *\/\d\e\v\/\n\v\m\e\0\n\1\:\ \u\n\r\e\c\o\g\n\i\s\e\d\ \d\i\s\k\ \l\a\b\e\l* ]] 00:07:11.663 18:57:28 blockdev_nvme_gpt -- bdev/blockdev.sh@114 -- # gpt_nvme=/dev/nvme0n1 00:07:11.663 18:57:28 blockdev_nvme_gpt -- bdev/blockdev.sh@115 -- # break 00:07:11.663 18:57:28 blockdev_nvme_gpt -- bdev/blockdev.sh@118 -- # [[ -n /dev/nvme0n1 ]] 00:07:11.663 18:57:28 blockdev_nvme_gpt -- bdev/blockdev.sh@123 -- # typeset -g g_unique_partguid=6f89f330-603b-4116-ac73-2ca8eae53030 00:07:11.663 18:57:28 blockdev_nvme_gpt -- bdev/blockdev.sh@124 -- # typeset -g g_unique_partguid_old=abf1734f-66e5-4c0f-aa29-4021d4d307df 00:07:11.663 18:57:28 blockdev_nvme_gpt -- bdev/blockdev.sh@127 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST_first 0% 50% mkpart SPDK_TEST_second 50% 100% 00:07:11.663 18:57:29 blockdev_nvme_gpt -- bdev/blockdev.sh@129 -- # get_spdk_gpt_old 00:07:11.663 18:57:29 blockdev_nvme_gpt -- scripts/common.sh@411 -- # local spdk_guid 00:07:11.663 18:57:29 blockdev_nvme_gpt -- scripts/common.sh@413 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:07:11.663 18:57:29 blockdev_nvme_gpt -- scripts/common.sh@415 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:07:11.663 18:57:29 blockdev_nvme_gpt -- scripts/common.sh@416 -- # IFS='()' 00:07:11.663 18:57:29 blockdev_nvme_gpt -- scripts/common.sh@416 -- # read -r _ spdk_guid _ 00:07:11.663 18:57:29 blockdev_nvme_gpt -- scripts/common.sh@416 -- # grep -w SPDK_GPT_PART_TYPE_GUID_OLD /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:07:11.663 18:57:29 blockdev_nvme_gpt -- scripts/common.sh@417 -- # spdk_guid=0x7c5222bd-0x8f5d-0x4087-0x9c00-0xbf9843c7b58c 00:07:11.663 18:57:29 blockdev_nvme_gpt -- scripts/common.sh@417 -- # spdk_guid=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:07:11.663 18:57:29 blockdev_nvme_gpt -- scripts/common.sh@419 -- # echo 7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:07:11.663 18:57:29 blockdev_nvme_gpt -- bdev/blockdev.sh@129 -- # SPDK_GPT_OLD_GUID=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:07:11.663 18:57:29 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # get_spdk_gpt 00:07:11.663 18:57:29 blockdev_nvme_gpt -- scripts/common.sh@423 -- # local spdk_guid 00:07:11.663 18:57:29 blockdev_nvme_gpt -- scripts/common.sh@425 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:07:11.663 18:57:29 blockdev_nvme_gpt -- scripts/common.sh@427 -- # 
GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:07:11.663 18:57:29 blockdev_nvme_gpt -- scripts/common.sh@428 -- # IFS='()' 00:07:11.663 18:57:29 blockdev_nvme_gpt -- scripts/common.sh@428 -- # read -r _ spdk_guid _ 00:07:11.663 18:57:29 blockdev_nvme_gpt -- scripts/common.sh@428 -- # grep -w SPDK_GPT_PART_TYPE_GUID /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:07:11.663 18:57:29 blockdev_nvme_gpt -- scripts/common.sh@429 -- # spdk_guid=0x6527994e-0x2c5a-0x4eec-0x9613-0x8f5944074e8b 00:07:11.663 18:57:29 blockdev_nvme_gpt -- scripts/common.sh@429 -- # spdk_guid=6527994e-2c5a-4eec-9613-8f5944074e8b 00:07:11.663 18:57:29 blockdev_nvme_gpt -- scripts/common.sh@431 -- # echo 6527994e-2c5a-4eec-9613-8f5944074e8b 00:07:11.663 18:57:29 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # SPDK_GPT_GUID=6527994e-2c5a-4eec-9613-8f5944074e8b 00:07:11.663 18:57:29 blockdev_nvme_gpt -- bdev/blockdev.sh@131 -- # sgdisk -t 1:6527994e-2c5a-4eec-9613-8f5944074e8b -u 1:6f89f330-603b-4116-ac73-2ca8eae53030 /dev/nvme0n1 00:07:12.598 The operation has completed successfully. 00:07:12.598 18:57:30 blockdev_nvme_gpt -- bdev/blockdev.sh@132 -- # sgdisk -t 2:7c5222bd-8f5d-4087-9c00-bf9843c7b58c -u 2:abf1734f-66e5-4c0f-aa29-4021d4d307df /dev/nvme0n1 00:07:13.533 The operation has completed successfully. 00:07:13.533 18:57:31 blockdev_nvme_gpt -- bdev/blockdev.sh@133 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:07:14.124 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:14.382 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:07:14.382 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:07:14.641 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:07:14.641 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:07:14.641 18:57:32 blockdev_nvme_gpt -- bdev/blockdev.sh@134 -- # rpc_cmd bdev_get_bdevs 00:07:14.641 18:57:32 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:14.641 18:57:32 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:14.641 [] 00:07:14.641 18:57:32 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:14.641 18:57:32 blockdev_nvme_gpt -- bdev/blockdev.sh@135 -- # setup_nvme_conf 00:07:14.641 18:57:32 blockdev_nvme_gpt -- bdev/blockdev.sh@81 -- # local json 00:07:14.641 18:57:32 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # mapfile -t json 00:07:14.641 18:57:32 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:07:14.641 18:57:32 blockdev_nvme_gpt -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\''' 00:07:14.641 18:57:32 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:14.641 18:57:32 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:14.898 18:57:32 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:14.898 18:57:32 blockdev_nvme_gpt -- bdev/blockdev.sh@774 -- # rpc_cmd bdev_wait_for_examine 00:07:14.898 18:57:32 
blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:14.898 18:57:32 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:14.898 18:57:32 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:14.899 18:57:32 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # cat 00:07:14.899 18:57:32 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n accel 00:07:14.899 18:57:32 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:14.899 18:57:32 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:14.899 18:57:32 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:14.899 18:57:32 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n bdev 00:07:14.899 18:57:32 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:14.899 18:57:32 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:14.899 18:57:32 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:14.899 18:57:32 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n iobuf 00:07:14.899 18:57:32 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:14.899 18:57:32 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:14.899 18:57:32 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:14.899 18:57:32 blockdev_nvme_gpt -- bdev/blockdev.sh@785 -- # mapfile -t bdevs 00:07:14.899 18:57:32 blockdev_nvme_gpt -- bdev/blockdev.sh@785 -- # jq -r '.[] | select(.claimed == false)' 00:07:14.899 18:57:32 blockdev_nvme_gpt -- bdev/blockdev.sh@785 -- # rpc_cmd bdev_get_bdevs 00:07:14.899 18:57:32 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:14.899 18:57:32 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:15.159 18:57:32 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:15.159 18:57:32 blockdev_nvme_gpt -- bdev/blockdev.sh@786 -- # mapfile -t bdevs_name 00:07:15.159 18:57:32 blockdev_nvme_gpt -- bdev/blockdev.sh@786 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "4d449993-80c7-4e21-975c-f58926242479"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "4d449993-80c7-4e21-975c-f58926242479",' ' "numa_id": -1,' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": true,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' 
' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme1n1p1",' ' "aliases": [' ' "6f89f330-603b-4116-ac73-2ca8eae53030"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 655104,' ' "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme1n1",' ' "offset_blocks": 256,' ' "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b",' ' "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "partition_name": "SPDK_TEST_first"' ' }' ' }' '}' '{' ' "name": "Nvme1n1p2",' ' "aliases": [' ' "abf1734f-66e5-4c0f-aa29-4021d4d307df"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 655103,' ' "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme1n1",' ' "offset_blocks": 655360,' ' "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c",' ' "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "partition_name": "SPDK_TEST_second"' ' }' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "238a8304-8374-4062-9193-1b0d3c411991"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "238a8304-8374-4062-9193-1b0d3c411991",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' 
"ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "cd821860-0a49-4b7f-93a6-2c9478dbedf9"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "cd821860-0a49-4b7f-93a6-2c9478dbedf9",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "4cba8359-c1cf-436c-9584-fd645f442a46"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "4cba8359-c1cf-436c-9584-fd645f442a46",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' 
'}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "01d49035-29fa-4a9b-91c9-06ac5b0f2b68"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "01d49035-29fa-4a9b-91c9-06ac5b0f2b68",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:13.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:13.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:07:15.159 18:57:32 blockdev_nvme_gpt -- bdev/blockdev.sh@786 -- # jq -r .name 00:07:15.159 18:57:32 blockdev_nvme_gpt -- bdev/blockdev.sh@787 -- # bdev_list=("${bdevs_name[@]}") 00:07:15.159 18:57:32 blockdev_nvme_gpt -- bdev/blockdev.sh@789 -- # hello_world_bdev=Nvme0n1 00:07:15.159 18:57:32 blockdev_nvme_gpt -- bdev/blockdev.sh@790 -- # trap - SIGINT SIGTERM EXIT 00:07:15.159 18:57:32 blockdev_nvme_gpt -- bdev/blockdev.sh@791 -- # killprocess 72307 00:07:15.159 18:57:32 blockdev_nvme_gpt -- common/autotest_common.sh@954 -- # '[' -z 72307 ']' 00:07:15.159 18:57:32 blockdev_nvme_gpt -- common/autotest_common.sh@958 -- # kill -0 72307 00:07:15.159 18:57:32 blockdev_nvme_gpt -- common/autotest_common.sh@959 -- # uname 00:07:15.159 18:57:32 blockdev_nvme_gpt -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:15.159 18:57:32 blockdev_nvme_gpt -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72307 00:07:15.159 18:57:32 blockdev_nvme_gpt -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:15.159 killing process with pid 72307 00:07:15.159 18:57:32 blockdev_nvme_gpt -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:15.159 18:57:32 blockdev_nvme_gpt -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72307' 00:07:15.159 18:57:32 blockdev_nvme_gpt -- common/autotest_common.sh@973 -- # kill 72307 00:07:15.160 18:57:32 blockdev_nvme_gpt -- common/autotest_common.sh@978 -- # wait 72307 00:07:15.419 18:57:32 blockdev_nvme_gpt -- bdev/blockdev.sh@795 -- # trap cleanup SIGINT SIGTERM EXIT 00:07:15.419 18:57:32 blockdev_nvme_gpt -- bdev/blockdev.sh@797 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:07:15.419 18:57:32 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:07:15.419 18:57:32 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:15.419 18:57:32 
blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:15.419 ************************************ 00:07:15.419 START TEST bdev_hello_world 00:07:15.419 ************************************ 00:07:15.419 18:57:32 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:07:15.419 [2024-12-05 18:57:32.857504] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 22.11.4 initialization... 00:07:15.419 [2024-12-05 18:57:32.857613] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72913 ] 00:07:15.677 [2024-12-05 18:57:33.000269] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:15.677 [2024-12-05 18:57:33.018194] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:15.935 [2024-12-05 18:57:33.379245] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:07:15.935 [2024-12-05 18:57:33.379295] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:07:15.935 [2024-12-05 18:57:33.379309] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:07:15.935 [2024-12-05 18:57:33.380914] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:07:15.935 [2024-12-05 18:57:33.381289] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:07:15.935 [2024-12-05 18:57:33.381316] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:07:15.935 [2024-12-05 18:57:33.381520] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
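(Note: hello_bdev has just written its test string through the bdev layer and read it back. The example can be pointed at any attached bdev; a minimal sketch using the same config file, where -b names the target bdev:)

    sudo ./build/examples/hello_bdev --json test/bdev/bdev.json -b Nvme0n1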
00:07:15.935 00:07:15.935 [2024-12-05 18:57:33.381544] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:07:16.194 00:07:16.194 real 0m0.707s 00:07:16.194 user 0m0.483s 00:07:16.194 sys 0m0.120s 00:07:16.194 18:57:33 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:16.194 18:57:33 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:07:16.194 ************************************ 00:07:16.194 END TEST bdev_hello_world 00:07:16.194 ************************************ 00:07:16.194 18:57:33 blockdev_nvme_gpt -- bdev/blockdev.sh@798 -- # run_test bdev_bounds bdev_bounds '' 00:07:16.194 18:57:33 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:16.194 18:57:33 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:16.194 18:57:33 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:16.194 ************************************ 00:07:16.194 START TEST bdev_bounds 00:07:16.194 ************************************ 00:07:16.194 18:57:33 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:07:16.194 18:57:33 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=72946 00:07:16.194 18:57:33 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:07:16.194 Process bdevio pid: 72946 00:07:16.194 18:57:33 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 72946' 00:07:16.194 18:57:33 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:07:16.194 18:57:33 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 72946 00:07:16.194 18:57:33 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 72946 ']' 00:07:16.194 18:57:33 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:16.194 18:57:33 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:16.194 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:16.194 18:57:33 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:16.194 18:57:33 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:16.194 18:57:33 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:07:16.194 [2024-12-05 18:57:33.609883] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 22.11.4 initialization... 
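(Note: bdevio was started above with -w, so it sits waiting for an RPC rather than running tests immediately; the harness then fires the suites with tests.py perform_tests, as seen just below. A sketch of that two-step pattern; -s 0 mirrors PRE_RESERVED_MEM=0 set earlier in this log, and reading it as "no extra reserved memory" is an assumption.)

    # Step 1: start bdevio waiting (-w) for the perform_tests RPC.
    sudo ./test/bdev/bdevio/bdevio -w -s 0 --json test/bdev/bdev.json &
    # Step 2: once it is listening, kick off the test suites.
    sudo ./test/bdev/bdevio/tests.py perform_tests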
00:07:16.194 [2024-12-05 18:57:33.609998] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72946 ] 00:07:16.194 [2024-12-05 18:57:33.749642] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:16.452 [2024-12-05 18:57:33.770179] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:16.452 [2024-12-05 18:57:33.770162] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:16.452 [2024-12-05 18:57:33.770279] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:17.017 18:57:34 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:17.017 18:57:34 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@868 -- # return 0 00:07:17.017 18:57:34 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:07:17.017 I/O targets: 00:07:17.017 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:07:17.017 Nvme1n1p1: 655104 blocks of 4096 bytes (2559 MiB) 00:07:17.017 Nvme1n1p2: 655103 blocks of 4096 bytes (2559 MiB) 00:07:17.017 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:07:17.017 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:07:17.018 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:07:17.018 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:07:17.018 00:07:17.018 00:07:17.018 CUnit - A unit testing framework for C - Version 2.1-3 00:07:17.018 http://cunit.sourceforge.net/ 00:07:17.018 00:07:17.018 00:07:17.018 Suite: bdevio tests on: Nvme3n1 00:07:17.018 Test: blockdev write read block ...passed 00:07:17.018 Test: blockdev write zeroes read block ...passed 00:07:17.018 Test: blockdev write zeroes read no split ...passed 00:07:17.018 Test: blockdev write zeroes read split ...passed 00:07:17.018 Test: blockdev write zeroes read split partial ...passed 00:07:17.018 Test: blockdev reset ...[2024-12-05 18:57:34.541203] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0, 0] resetting controller 00:07:17.018 [2024-12-05 18:57:34.542866] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:13.0, 0] Resetting controller successful. 
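(Note: each bdevio suite opens by resetting the namespace's parent controller, here Nvme3 at 0000:00:13.0. A comparable reset can be driven over RPC against a running target; a sketch, assuming the bdev_nvme_reset_controller RPC present in recent SPDK trees and the controller names from this run's config:)

    sudo ./scripts/rpc.py bdev_nvme_reset_controller Nvme3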
00:07:17.018 passed 00:07:17.018 Test: blockdev write read 8 blocks ...passed 00:07:17.018 Test: blockdev write read size > 128k ...passed 00:07:17.018 Test: blockdev write read invalid size ...passed 00:07:17.018 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:17.018 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:17.018 Test: blockdev write read max offset ...passed 00:07:17.018 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:17.018 Test: blockdev writev readv 8 blocks ...passed 00:07:17.018 Test: blockdev writev readv 30 x 1block ...passed 00:07:17.018 Test: blockdev writev readv block ...passed 00:07:17.018 Test: blockdev writev readv size > 128k ...passed 00:07:17.018 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:17.018 Test: blockdev comparev and writev ...passed 00:07:17.018 Test: blockdev nvme passthru rw ...[2024-12-05 18:57:34.548273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x29e20e000 len:0x1000 00:07:17.018 [2024-12-05 18:57:34.548320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:17.018 passed 00:07:17.018 Test: blockdev nvme passthru vendor specific ...[2024-12-05 18:57:34.548813] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:17.018 [2024-12-05 18:57:34.548836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:17.018 passed 00:07:17.018 Test: blockdev nvme admin passthru ...passed 00:07:17.018 Test: blockdev copy ...passed 00:07:17.018 Suite: bdevio tests on: Nvme2n3 00:07:17.018 Test: blockdev write read block ...passed 00:07:17.018 Test: blockdev write zeroes read block ...passed 00:07:17.018 Test: blockdev write zeroes read no split ...passed 00:07:17.018 Test: blockdev write zeroes read split ...passed 00:07:17.018 Test: blockdev write zeroes read split partial ...passed 00:07:17.018 Test: blockdev reset ...[2024-12-05 18:57:34.562485] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:07:17.018 passed 00:07:17.018 Test: blockdev write read 8 blocks ...[2024-12-05 18:57:34.564293] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:07:17.018 passed 00:07:17.018 Test: blockdev write read size > 128k ...passed 00:07:17.018 Test: blockdev write read invalid size ...passed 00:07:17.018 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:17.018 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:17.018 Test: blockdev write read max offset ...passed 00:07:17.018 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:17.018 Test: blockdev writev readv 8 blocks ...passed 00:07:17.018 Test: blockdev writev readv 30 x 1block ...passed 00:07:17.018 Test: blockdev writev readv block ...passed 00:07:17.018 Test: blockdev writev readv size > 128k ...passed 00:07:17.018 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:17.018 Test: blockdev comparev and writev ...[2024-12-05 18:57:34.568939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x29e208000 len:0x1000 00:07:17.018 [2024-12-05 18:57:34.568977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:17.018 passed 00:07:17.018 Test: blockdev nvme passthru rw ...passed 00:07:17.018 Test: blockdev nvme passthru vendor specific ...passed 00:07:17.018 Test: blockdev nvme admin passthru ...[2024-12-05 18:57:34.569569] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:17.018 [2024-12-05 18:57:34.569594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:17.018 passed 00:07:17.018 Test: blockdev copy ...passed 00:07:17.018 Suite: bdevio tests on: Nvme2n2 00:07:17.018 Test: blockdev write read block ...passed 00:07:17.018 Test: blockdev write zeroes read block ...passed 00:07:17.018 Test: blockdev write zeroes read no split ...passed 00:07:17.277 Test: blockdev write zeroes read split ...passed 00:07:17.277 Test: blockdev write zeroes read split partial ...passed 00:07:17.277 Test: blockdev reset ...[2024-12-05 18:57:34.583034] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:07:17.277 [2024-12-05 18:57:34.584701] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:07:17.277 passed 00:07:17.277 Test: blockdev write read 8 blocks ...passed 00:07:17.277 Test: blockdev write read size > 128k ...passed 00:07:17.277 Test: blockdev write read invalid size ...passed 00:07:17.277 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:17.277 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:17.277 Test: blockdev write read max offset ...passed 00:07:17.277 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:17.277 Test: blockdev writev readv 8 blocks ...passed 00:07:17.277 Test: blockdev writev readv 30 x 1block ...passed 00:07:17.277 Test: blockdev writev readv block ...passed 00:07:17.277 Test: blockdev writev readv size > 128k ...passed 00:07:17.277 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:17.277 Test: blockdev comparev and writev ...passed 00:07:17.277 Test: blockdev nvme passthru rw ...[2024-12-05 18:57:34.589244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x29e202000 len:0x1000 00:07:17.277 [2024-12-05 18:57:34.589292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:17.277 passed 00:07:17.277 Test: blockdev nvme passthru vendor specific ...[2024-12-05 18:57:34.589822] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:17.277 [2024-12-05 18:57:34.589843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:17.277 passed 00:07:17.277 Test: blockdev nvme admin passthru ...passed 00:07:17.277 Test: blockdev copy ...passed 00:07:17.277 Suite: bdevio tests on: Nvme2n1 00:07:17.277 Test: blockdev write read block ...passed 00:07:17.277 Test: blockdev write zeroes read block ...passed 00:07:17.277 Test: blockdev write zeroes read no split ...passed 00:07:17.277 Test: blockdev write zeroes read split ...passed 00:07:17.278 Test: blockdev write zeroes read split partial ...passed 00:07:17.278 Test: blockdev reset ...[2024-12-05 18:57:34.603873] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:07:17.278 passed 00:07:17.278 Test: blockdev write read 8 blocks ...[2024-12-05 18:57:34.605551] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:07:17.278 passed 00:07:17.278 Test: blockdev write read size > 128k ...passed 00:07:17.278 Test: blockdev write read invalid size ...passed 00:07:17.278 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:17.278 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:17.278 Test: blockdev write read max offset ...passed 00:07:17.278 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:17.278 Test: blockdev writev readv 8 blocks ...passed 00:07:17.278 Test: blockdev writev readv 30 x 1block ...passed 00:07:17.278 Test: blockdev writev readv block ...passed 00:07:17.278 Test: blockdev writev readv size > 128k ...passed 00:07:17.278 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:17.278 Test: blockdev comparev and writev ...[2024-12-05 18:57:34.609877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2b6a04000 len:0x1000 00:07:17.278 [2024-12-05 18:57:34.609912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:17.278 passed 00:07:17.278 Test: blockdev nvme passthru rw ...passed 00:07:17.278 Test: blockdev nvme passthru vendor specific ...passed 00:07:17.278 Test: blockdev nvme admin passthru ...[2024-12-05 18:57:34.610422] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:17.278 [2024-12-05 18:57:34.610444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:17.278 passed 00:07:17.278 Test: blockdev copy ...passed 00:07:17.278 Suite: bdevio tests on: Nvme1n1p2 00:07:17.278 Test: blockdev write read block ...passed 00:07:17.278 Test: blockdev write zeroes read block ...passed 00:07:17.278 Test: blockdev write zeroes read no split ...passed 00:07:17.278 Test: blockdev write zeroes read split ...passed 00:07:17.278 Test: blockdev write zeroes read split partial ...passed 00:07:17.278 Test: blockdev reset ...[2024-12-05 18:57:34.626587] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller 00:07:17.278 passed 00:07:17.278 Test: blockdev write read 8 blocks ...[2024-12-05 18:57:34.628084] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller successful. 
00:07:17.278 passed 00:07:17.278 Test: blockdev write read size > 128k ...passed 00:07:17.278 Test: blockdev write read invalid size ...passed 00:07:17.278 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:17.278 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:17.278 Test: blockdev write read max offset ...passed 00:07:17.278 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:17.278 Test: blockdev writev readv 8 blocks ...passed 00:07:17.278 Test: blockdev writev readv 30 x 1block ...passed 00:07:17.278 Test: blockdev writev readv block ...passed 00:07:17.278 Test: blockdev writev readv size > 128k ...passed 00:07:17.278 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:17.278 Test: blockdev comparev and writev ...[2024-12-05 18:57:34.632517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:655360 len:1 SGL DATA BLOCK ADDRESS 0x2d223d000 len:0x1000 00:07:17.278 [2024-12-05 18:57:34.632554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:17.278 passed 00:07:17.278 Test: blockdev nvme passthru rw ...passed 00:07:17.278 Test: blockdev nvme passthru vendor specific ...passed 00:07:17.278 Test: blockdev nvme admin passthru ...passed 00:07:17.278 Test: blockdev copy ...passed 00:07:17.278 Suite: bdevio tests on: Nvme1n1p1 00:07:17.278 Test: blockdev write read block ...passed 00:07:17.278 Test: blockdev write zeroes read block ...passed 00:07:17.278 Test: blockdev write zeroes read no split ...passed 00:07:17.278 Test: blockdev write zeroes read split ...passed 00:07:17.278 Test: blockdev write zeroes read split partial ...passed 00:07:17.278 Test: blockdev reset ...[2024-12-05 18:57:34.643685] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller 00:07:17.278 passed 00:07:17.278 Test: blockdev write read 8 blocks ...[2024-12-05 18:57:34.644966] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller successful. 
00:07:17.278 passed 00:07:17.278 Test: blockdev write read size > 128k ...passed 00:07:17.278 Test: blockdev write read invalid size ...passed 00:07:17.278 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:17.278 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:17.278 Test: blockdev write read max offset ...passed 00:07:17.278 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:17.278 Test: blockdev writev readv 8 blocks ...passed 00:07:17.278 Test: blockdev writev readv 30 x 1block ...passed 00:07:17.278 Test: blockdev writev readv block ...passed 00:07:17.278 Test: blockdev writev readv size > 128k ...passed 00:07:17.278 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:17.278 Test: blockdev comparev and writev ...passed 00:07:17.278 Test: blockdev nvme passthru rw ...passed 00:07:17.278 Test: blockdev nvme passthru vendor specific ...passed 00:07:17.278 Test: blockdev nvme admin passthru ...passed 00:07:17.278 Test: blockdev copy ...[2024-12-05 18:57:34.648940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:256 len:1 SGL DATA BLOCK ADDRESS 0x2d2239000 len:0x1000 00:07:17.278 [2024-12-05 18:57:34.648973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:17.278 passed 00:07:17.278 Suite: bdevio tests on: Nvme0n1 00:07:17.278 Test: blockdev write read block ...passed 00:07:17.278 Test: blockdev write zeroes read block ...passed 00:07:17.278 Test: blockdev write zeroes read no split ...passed 00:07:17.278 Test: blockdev write zeroes read split ...passed 00:07:17.278 Test: blockdev write zeroes read split partial ...passed 00:07:17.278 Test: blockdev reset ...[2024-12-05 18:57:34.659726] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller 00:07:17.278 passed 00:07:17.278 Test: blockdev write read 8 blocks ...[2024-12-05 18:57:34.661183] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful. 00:07:17.278 passed 00:07:17.278 Test: blockdev write read size > 128k ...passed 00:07:17.278 Test: blockdev write read invalid size ...passed 00:07:17.278 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:17.278 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:17.278 Test: blockdev write read max offset ...passed 00:07:17.278 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:17.278 Test: blockdev writev readv 8 blocks ...passed 00:07:17.278 Test: blockdev writev readv 30 x 1block ...passed 00:07:17.278 Test: blockdev writev readv block ...passed 00:07:17.278 Test: blockdev writev readv size > 128k ...passed 00:07:17.278 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:17.278 Test: blockdev comparev and writev ...[2024-12-05 18:57:34.665194] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has 00:07:17.278 separate metadata which is not supported yet. 
00:07:17.278 passed 00:07:17.278 Test: blockdev nvme passthru rw ...passed 00:07:17.278 Test: blockdev nvme passthru vendor specific ...passed 00:07:17.278 Test: blockdev nvme admin passthru ...[2024-12-05 18:57:34.665569] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 PRP2 0x0 00:07:17.278 [2024-12-05 18:57:34.665604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0 sqhd:0017 p:1 m:0 dnr:1 00:07:17.278 passed 00:07:17.278 Test: blockdev copy ...passed 00:07:17.278 00:07:17.278 Run Summary: Type Total Ran Passed Failed Inactive 00:07:17.278 suites 7 7 n/a 0 0 00:07:17.278 tests 161 161 161 0 0 00:07:17.278 asserts 1025 1025 1025 0 n/a 00:07:17.278 00:07:17.278 Elapsed time = 0.325 seconds 00:07:17.278 0 00:07:17.278 18:57:34 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 72946 00:07:17.278 18:57:34 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 72946 ']' 00:07:17.278 18:57:34 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 72946 00:07:17.278 18:57:34 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@959 -- # uname 00:07:17.278 18:57:34 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:17.278 18:57:34 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72946 00:07:17.278 18:57:34 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:17.278 18:57:34 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:17.278 18:57:34 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72946' 00:07:17.278 killing process with pid 72946 00:07:17.278 18:57:34 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@973 -- # kill 72946 00:07:17.278 18:57:34 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@978 -- # wait 72946 00:07:17.538 18:57:34 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:07:17.538 00:07:17.538 real 0m1.295s 00:07:17.538 user 0m3.366s 00:07:17.538 sys 0m0.236s 00:07:17.538 18:57:34 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:17.538 18:57:34 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:07:17.538 ************************************ 00:07:17.538 END TEST bdev_bounds 00:07:17.538 ************************************ 00:07:17.538 18:57:34 blockdev_nvme_gpt -- bdev/blockdev.sh@799 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:07:17.538 18:57:34 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:07:17.538 18:57:34 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:17.538 18:57:34 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:17.538 ************************************ 00:07:17.538 START TEST bdev_nbd 00:07:17.538 ************************************ 00:07:17.538 18:57:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:07:17.538 18:57:34 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:07:17.538 18:57:34 
blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:07:17.538 18:57:34 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:17.538 18:57:34 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:07:17.538 18:57:34 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:17.538 18:57:34 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:07:17.538 18:57:34 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=7 00:07:17.538 18:57:34 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:07:17.538 18:57:34 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:07:17.538 18:57:34 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:07:17.538 18:57:34 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=7 00:07:17.538 18:57:34 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:07:17.538 18:57:34 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:07:17.538 18:57:34 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:17.538 18:57:34 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:07:17.538 18:57:34 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=72989 00:07:17.538 18:57:34 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:07:17.538 18:57:34 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 72989 /var/tmp/spdk-nbd.sock 00:07:17.538 18:57:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 72989 ']' 00:07:17.538 18:57:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:17.538 18:57:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:17.538 18:57:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:07:17.538 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:07:17.538 18:57:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:17.538 18:57:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:07:17.538 18:57:34 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:07:17.538 [2024-12-05 18:57:34.949285] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 22.11.4 initialization... 
00:07:17.538 [2024-12-05 18:57:34.949391] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:17.538 [2024-12-05 18:57:35.091548] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:17.796 [2024-12-05 18:57:35.110991] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:18.360 18:57:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:18.360 18:57:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:07:18.360 18:57:35 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:07:18.360 18:57:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:18.361 18:57:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:18.361 18:57:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:07:18.361 18:57:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:07:18.361 18:57:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:18.361 18:57:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:18.361 18:57:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:07:18.361 18:57:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:07:18.361 18:57:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:07:18.361 18:57:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:07:18.361 18:57:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:18.361 18:57:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:07:18.618 18:57:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:07:18.618 18:57:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:07:18.618 18:57:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:07:18.618 18:57:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:07:18.618 18:57:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:18.618 18:57:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:18.618 18:57:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:18.618 18:57:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:07:18.618 18:57:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:18.618 18:57:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:18.618 18:57:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:18.618 18:57:36 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:18.618 1+0 records in 00:07:18.618 1+0 records out 00:07:18.618 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000447479 s, 9.2 MB/s 00:07:18.618 18:57:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:18.618 18:57:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:18.618 18:57:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:18.618 18:57:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:18.618 18:57:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:18.618 18:57:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:18.618 18:57:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:18.618 18:57:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p1 00:07:18.875 18:57:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:07:18.875 18:57:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:07:18.875 18:57:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:07:18.875 18:57:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:07:18.875 18:57:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:18.875 18:57:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:18.875 18:57:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:18.875 18:57:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:07:18.875 18:57:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:18.875 18:57:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:18.875 18:57:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:18.875 18:57:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:18.875 1+0 records in 00:07:18.875 1+0 records out 00:07:18.875 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000423435 s, 9.7 MB/s 00:07:18.875 18:57:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:18.875 18:57:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:18.875 18:57:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:18.875 18:57:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:18.875 18:57:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:18.875 18:57:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:18.875 18:57:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:18.875 18:57:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk 
Nvme1n1p2 00:07:19.132 18:57:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:07:19.132 18:57:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:07:19.132 18:57:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:07:19.132 18:57:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd2 00:07:19.132 18:57:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:19.132 18:57:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:19.132 18:57:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:19.132 18:57:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd2 /proc/partitions 00:07:19.132 18:57:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:19.132 18:57:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:19.132 18:57:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:19.132 18:57:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:19.132 1+0 records in 00:07:19.132 1+0 records out 00:07:19.132 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000405305 s, 10.1 MB/s 00:07:19.132 18:57:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:19.132 18:57:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:19.132 18:57:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:19.132 18:57:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:19.132 18:57:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:19.132 18:57:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:19.132 18:57:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:19.132 18:57:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:07:19.389 18:57:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:07:19.389 18:57:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:07:19.389 18:57:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:07:19.389 18:57:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd3 00:07:19.389 18:57:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:19.389 18:57:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:19.389 18:57:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:19.389 18:57:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd3 /proc/partitions 00:07:19.389 18:57:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:19.390 18:57:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:19.390 18:57:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:19.390 18:57:36 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@889 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:19.390 1+0 records in 00:07:19.390 1+0 records out 00:07:19.390 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000507604 s, 8.1 MB/s 00:07:19.390 18:57:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:19.390 18:57:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:19.390 18:57:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:19.390 18:57:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:19.390 18:57:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:19.390 18:57:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:19.390 18:57:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:19.390 18:57:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:07:19.390 18:57:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:07:19.390 18:57:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:07:19.646 18:57:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:07:19.646 18:57:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd4 00:07:19.646 18:57:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:19.646 18:57:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:19.646 18:57:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:19.646 18:57:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd4 /proc/partitions 00:07:19.646 18:57:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:19.646 18:57:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:19.647 18:57:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:19.647 18:57:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:19.647 1+0 records in 00:07:19.647 1+0 records out 00:07:19.647 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000365983 s, 11.2 MB/s 00:07:19.647 18:57:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:19.647 18:57:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:19.647 18:57:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:19.647 18:57:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:19.647 18:57:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:19.647 18:57:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:19.647 18:57:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:19.647 18:57:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk 
Nvme2n3 00:07:19.647 18:57:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:07:19.647 18:57:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:07:19.647 18:57:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:07:19.647 18:57:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd5 00:07:19.647 18:57:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:19.647 18:57:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:19.647 18:57:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:19.647 18:57:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd5 /proc/partitions 00:07:19.647 18:57:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:19.647 18:57:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:19.647 18:57:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:19.647 18:57:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:19.647 1+0 records in 00:07:19.647 1+0 records out 00:07:19.647 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00041913 s, 9.8 MB/s 00:07:19.647 18:57:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:19.647 18:57:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:19.647 18:57:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:19.647 18:57:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:19.647 18:57:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:19.647 18:57:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:19.647 18:57:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:19.647 18:57:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:07:19.905 18:57:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd6 00:07:19.905 18:57:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd6 00:07:19.905 18:57:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd6 00:07:19.905 18:57:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd6 00:07:19.905 18:57:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:19.905 18:57:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:19.905 18:57:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:19.905 18:57:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd6 /proc/partitions 00:07:19.905 18:57:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:19.905 18:57:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:19.905 18:57:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:19.905 18:57:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 
-- # dd if=/dev/nbd6 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:19.905 1+0 records in 00:07:19.905 1+0 records out 00:07:19.905 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000494179 s, 8.3 MB/s 00:07:19.905 18:57:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:19.905 18:57:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:19.905 18:57:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:19.905 18:57:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:19.905 18:57:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:19.905 18:57:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:19.905 18:57:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:19.905 18:57:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:20.163 18:57:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:07:20.163 { 00:07:20.163 "nbd_device": "/dev/nbd0", 00:07:20.163 "bdev_name": "Nvme0n1" 00:07:20.163 }, 00:07:20.163 { 00:07:20.163 "nbd_device": "/dev/nbd1", 00:07:20.163 "bdev_name": "Nvme1n1p1" 00:07:20.163 }, 00:07:20.163 { 00:07:20.163 "nbd_device": "/dev/nbd2", 00:07:20.163 "bdev_name": "Nvme1n1p2" 00:07:20.163 }, 00:07:20.163 { 00:07:20.163 "nbd_device": "/dev/nbd3", 00:07:20.163 "bdev_name": "Nvme2n1" 00:07:20.163 }, 00:07:20.163 { 00:07:20.163 "nbd_device": "/dev/nbd4", 00:07:20.163 "bdev_name": "Nvme2n2" 00:07:20.163 }, 00:07:20.163 { 00:07:20.163 "nbd_device": "/dev/nbd5", 00:07:20.163 "bdev_name": "Nvme2n3" 00:07:20.163 }, 00:07:20.163 { 00:07:20.163 "nbd_device": "/dev/nbd6", 00:07:20.163 "bdev_name": "Nvme3n1" 00:07:20.163 } 00:07:20.163 ]' 00:07:20.163 18:57:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:07:20.163 18:57:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:07:20.163 { 00:07:20.163 "nbd_device": "/dev/nbd0", 00:07:20.163 "bdev_name": "Nvme0n1" 00:07:20.163 }, 00:07:20.163 { 00:07:20.163 "nbd_device": "/dev/nbd1", 00:07:20.163 "bdev_name": "Nvme1n1p1" 00:07:20.163 }, 00:07:20.163 { 00:07:20.163 "nbd_device": "/dev/nbd2", 00:07:20.163 "bdev_name": "Nvme1n1p2" 00:07:20.163 }, 00:07:20.163 { 00:07:20.163 "nbd_device": "/dev/nbd3", 00:07:20.163 "bdev_name": "Nvme2n1" 00:07:20.163 }, 00:07:20.163 { 00:07:20.163 "nbd_device": "/dev/nbd4", 00:07:20.163 "bdev_name": "Nvme2n2" 00:07:20.163 }, 00:07:20.163 { 00:07:20.163 "nbd_device": "/dev/nbd5", 00:07:20.163 "bdev_name": "Nvme2n3" 00:07:20.163 }, 00:07:20.163 { 00:07:20.163 "nbd_device": "/dev/nbd6", 00:07:20.163 "bdev_name": "Nvme3n1" 00:07:20.163 } 00:07:20.163 ]' 00:07:20.163 18:57:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:07:20.163 18:57:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6' 00:07:20.163 18:57:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:20.163 18:57:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' 
'/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6') 00:07:20.163 18:57:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:20.163 18:57:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:07:20.163 18:57:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:20.163 18:57:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:20.421 18:57:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:20.422 18:57:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:20.422 18:57:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:20.422 18:57:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:20.422 18:57:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:20.422 18:57:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:20.422 18:57:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:20.422 18:57:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:20.422 18:57:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:20.422 18:57:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:20.680 18:57:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:20.680 18:57:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:20.680 18:57:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:20.680 18:57:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:20.680 18:57:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:20.680 18:57:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:20.680 18:57:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:20.680 18:57:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:20.680 18:57:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:20.680 18:57:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:07:20.939 18:57:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:07:20.939 18:57:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:07:20.939 18:57:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:07:20.939 18:57:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:20.939 18:57:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:20.939 18:57:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:07:20.939 18:57:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:20.939 18:57:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:20.939 18:57:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:20.939 18:57:38 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:07:20.939 18:57:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:07:20.939 18:57:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:07:20.939 18:57:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:07:20.939 18:57:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:20.939 18:57:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:20.939 18:57:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:07:20.939 18:57:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:20.939 18:57:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:20.939 18:57:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:20.939 18:57:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:07:21.198 18:57:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:07:21.198 18:57:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:07:21.198 18:57:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:07:21.198 18:57:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:21.198 18:57:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:21.198 18:57:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:07:21.198 18:57:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:21.198 18:57:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:21.198 18:57:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:21.198 18:57:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:07:21.456 18:57:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:07:21.456 18:57:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:07:21.456 18:57:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:07:21.456 18:57:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:21.456 18:57:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:21.456 18:57:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:07:21.456 18:57:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:21.456 18:57:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:21.456 18:57:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:21.456 18:57:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd6 00:07:21.715 18:57:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd6 00:07:21.715 18:57:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd6 00:07:21.715 18:57:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd6 
00:07:21.715 18:57:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:21.715 18:57:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:21.715 18:57:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd6 /proc/partitions 00:07:21.715 18:57:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:21.715 18:57:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:21.715 18:57:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:21.715 18:57:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:21.715 18:57:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:21.974 18:57:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:21.974 18:57:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:21.974 18:57:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:21.974 18:57:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:21.974 18:57:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:07:21.974 18:57:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:21.974 18:57:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:07:21.974 18:57:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:07:21.974 18:57:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:07:21.974 18:57:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:07:21.974 18:57:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:07:21.974 18:57:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:07:21.974 18:57:39 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:07:21.974 18:57:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:21.974 18:57:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:21.974 18:57:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:21.974 18:57:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:07:21.974 18:57:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:21.974 18:57:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:07:21.974 18:57:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:21.974 18:57:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:21.974 18:57:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:21.974 18:57:39 
blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:07:21.974 18:57:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:21.974 18:57:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:07:21.974 18:57:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:21.974 18:57:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:21.974 18:57:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:07:22.243 /dev/nbd0 00:07:22.243 18:57:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:22.243 18:57:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:22.243 18:57:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:07:22.243 18:57:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:22.243 18:57:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:22.243 18:57:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:22.243 18:57:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:07:22.243 18:57:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:22.243 18:57:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:22.243 18:57:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:22.243 18:57:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:22.243 1+0 records in 00:07:22.243 1+0 records out 00:07:22.243 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00027812 s, 14.7 MB/s 00:07:22.243 18:57:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:22.243 18:57:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:22.243 18:57:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:22.243 18:57:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:22.243 18:57:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:22.243 18:57:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:22.243 18:57:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:22.243 18:57:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p1 /dev/nbd1 00:07:22.243 /dev/nbd1 00:07:22.517 18:57:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:22.517 18:57:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:22.517 18:57:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:07:22.517 18:57:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:22.517 18:57:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:22.517 18:57:39 
blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:22.517 18:57:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:07:22.517 18:57:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:22.517 18:57:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:22.517 18:57:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:22.517 18:57:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:22.517 1+0 records in 00:07:22.517 1+0 records out 00:07:22.517 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000439686 s, 9.3 MB/s 00:07:22.517 18:57:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:22.517 18:57:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:22.517 18:57:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:22.517 18:57:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:22.517 18:57:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:22.517 18:57:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:22.517 18:57:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:22.517 18:57:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p2 /dev/nbd10 00:07:22.517 /dev/nbd10 00:07:22.517 18:57:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:07:22.517 18:57:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:07:22.517 18:57:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd10 00:07:22.517 18:57:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:22.517 18:57:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:22.517 18:57:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:22.517 18:57:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd10 /proc/partitions 00:07:22.518 18:57:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:22.518 18:57:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:22.518 18:57:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:22.518 18:57:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:22.518 1+0 records in 00:07:22.518 1+0 records out 00:07:22.518 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000429893 s, 9.5 MB/s 00:07:22.518 18:57:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:22.518 18:57:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:22.518 18:57:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:22.518 18:57:40 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:22.518 18:57:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:22.518 18:57:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:22.518 18:57:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:22.518 18:57:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd11 00:07:22.776 /dev/nbd11 00:07:22.776 18:57:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:07:22.776 18:57:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:07:22.776 18:57:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd11 00:07:22.776 18:57:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:22.776 18:57:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:22.776 18:57:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:22.776 18:57:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd11 /proc/partitions 00:07:22.776 18:57:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:22.776 18:57:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:22.776 18:57:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:22.776 18:57:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:22.776 1+0 records in 00:07:22.776 1+0 records out 00:07:22.776 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000250667 s, 16.3 MB/s 00:07:22.776 18:57:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:22.776 18:57:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:22.776 18:57:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:22.776 18:57:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:22.776 18:57:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:22.776 18:57:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:22.776 18:57:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:22.776 18:57:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd12 00:07:23.034 /dev/nbd12 00:07:23.034 18:57:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:07:23.034 18:57:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:07:23.034 18:57:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd12 00:07:23.034 18:57:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:23.034 18:57:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:23.034 18:57:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:23.034 18:57:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd12 
/proc/partitions 00:07:23.034 18:57:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:23.034 18:57:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:23.034 18:57:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:23.034 18:57:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:23.034 1+0 records in 00:07:23.034 1+0 records out 00:07:23.034 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000348103 s, 11.8 MB/s 00:07:23.034 18:57:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:23.034 18:57:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:23.034 18:57:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:23.034 18:57:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:23.034 18:57:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:23.034 18:57:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:23.034 18:57:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:23.034 18:57:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd13 00:07:23.293 /dev/nbd13 00:07:23.293 18:57:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:07:23.293 18:57:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:07:23.293 18:57:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd13 00:07:23.293 18:57:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:23.293 18:57:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:23.293 18:57:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:23.293 18:57:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd13 /proc/partitions 00:07:23.293 18:57:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:23.293 18:57:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:23.293 18:57:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:23.293 18:57:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:23.293 1+0 records in 00:07:23.293 1+0 records out 00:07:23.293 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000376137 s, 10.9 MB/s 00:07:23.293 18:57:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:23.293 18:57:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:23.293 18:57:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:23.293 18:57:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:23.293 18:57:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:23.293 18:57:40 
blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:23.293 18:57:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:23.293 18:57:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd14 00:07:23.554 /dev/nbd14 00:07:23.554 18:57:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd14 00:07:23.554 18:57:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd14 00:07:23.554 18:57:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd14 00:07:23.554 18:57:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:23.554 18:57:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:23.554 18:57:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:23.554 18:57:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd14 /proc/partitions 00:07:23.554 18:57:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:23.554 18:57:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:23.554 18:57:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:23.554 18:57:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd14 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:23.554 1+0 records in 00:07:23.554 1+0 records out 00:07:23.554 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000819623 s, 5.0 MB/s 00:07:23.554 18:57:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:23.554 18:57:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:23.554 18:57:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:23.554 18:57:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:23.554 18:57:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:23.554 18:57:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:23.554 18:57:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:23.554 18:57:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:23.554 18:57:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:23.554 18:57:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:23.814 18:57:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:23.814 { 00:07:23.814 "nbd_device": "/dev/nbd0", 00:07:23.814 "bdev_name": "Nvme0n1" 00:07:23.814 }, 00:07:23.814 { 00:07:23.814 "nbd_device": "/dev/nbd1", 00:07:23.814 "bdev_name": "Nvme1n1p1" 00:07:23.814 }, 00:07:23.814 { 00:07:23.814 "nbd_device": "/dev/nbd10", 00:07:23.814 "bdev_name": "Nvme1n1p2" 00:07:23.814 }, 00:07:23.814 { 00:07:23.814 "nbd_device": "/dev/nbd11", 00:07:23.814 "bdev_name": "Nvme2n1" 00:07:23.814 }, 00:07:23.814 { 00:07:23.814 "nbd_device": "/dev/nbd12", 00:07:23.814 "bdev_name": "Nvme2n2" 00:07:23.814 }, 00:07:23.814 { 00:07:23.814 "nbd_device": "/dev/nbd13", 
00:07:23.814 "bdev_name": "Nvme2n3" 00:07:23.814 }, 00:07:23.814 { 00:07:23.814 "nbd_device": "/dev/nbd14", 00:07:23.814 "bdev_name": "Nvme3n1" 00:07:23.814 } 00:07:23.814 ]' 00:07:23.814 18:57:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:23.814 { 00:07:23.814 "nbd_device": "/dev/nbd0", 00:07:23.814 "bdev_name": "Nvme0n1" 00:07:23.814 }, 00:07:23.814 { 00:07:23.814 "nbd_device": "/dev/nbd1", 00:07:23.814 "bdev_name": "Nvme1n1p1" 00:07:23.814 }, 00:07:23.814 { 00:07:23.814 "nbd_device": "/dev/nbd10", 00:07:23.814 "bdev_name": "Nvme1n1p2" 00:07:23.814 }, 00:07:23.814 { 00:07:23.814 "nbd_device": "/dev/nbd11", 00:07:23.814 "bdev_name": "Nvme2n1" 00:07:23.814 }, 00:07:23.814 { 00:07:23.814 "nbd_device": "/dev/nbd12", 00:07:23.814 "bdev_name": "Nvme2n2" 00:07:23.814 }, 00:07:23.814 { 00:07:23.814 "nbd_device": "/dev/nbd13", 00:07:23.814 "bdev_name": "Nvme2n3" 00:07:23.814 }, 00:07:23.814 { 00:07:23.814 "nbd_device": "/dev/nbd14", 00:07:23.814 "bdev_name": "Nvme3n1" 00:07:23.814 } 00:07:23.814 ]' 00:07:23.814 18:57:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:23.814 18:57:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:23.814 /dev/nbd1 00:07:23.814 /dev/nbd10 00:07:23.814 /dev/nbd11 00:07:23.814 /dev/nbd12 00:07:23.814 /dev/nbd13 00:07:23.814 /dev/nbd14' 00:07:23.814 18:57:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:23.814 18:57:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:23.814 /dev/nbd1 00:07:23.814 /dev/nbd10 00:07:23.814 /dev/nbd11 00:07:23.814 /dev/nbd12 00:07:23.814 /dev/nbd13 00:07:23.814 /dev/nbd14' 00:07:23.814 18:57:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=7 00:07:23.814 18:57:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 7 00:07:23.814 18:57:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=7 00:07:23.814 18:57:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 7 -ne 7 ']' 00:07:23.814 18:57:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' write 00:07:23.814 18:57:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:07:23.814 18:57:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:23.814 18:57:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:23.814 18:57:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:07:23.814 18:57:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:23.814 18:57:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:07:23.814 256+0 records in 00:07:23.814 256+0 records out 00:07:23.814 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00908492 s, 115 MB/s 00:07:23.814 18:57:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:23.814 18:57:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:23.814 256+0 records in 00:07:23.814 256+0 records out 00:07:23.814 1048576 
bytes (1.0 MB, 1.0 MiB) copied, 0.0660661 s, 15.9 MB/s 00:07:23.814 18:57:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:23.815 18:57:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:24.073 256+0 records in 00:07:24.073 256+0 records out 00:07:24.073 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0702233 s, 14.9 MB/s 00:07:24.073 18:57:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:24.073 18:57:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:07:24.073 256+0 records in 00:07:24.073 256+0 records out 00:07:24.073 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0691242 s, 15.2 MB/s 00:07:24.073 18:57:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:24.073 18:57:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:07:24.073 256+0 records in 00:07:24.073 256+0 records out 00:07:24.073 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0710111 s, 14.8 MB/s 00:07:24.073 18:57:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:24.073 18:57:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:07:24.073 256+0 records in 00:07:24.073 256+0 records out 00:07:24.073 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.07414 s, 14.1 MB/s 00:07:24.073 18:57:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:24.073 18:57:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:07:24.334 256+0 records in 00:07:24.334 256+0 records out 00:07:24.334 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0779928 s, 13.4 MB/s 00:07:24.334 18:57:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:24.334 18:57:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd14 bs=4096 count=256 oflag=direct 00:07:24.334 256+0 records in 00:07:24.334 256+0 records out 00:07:24.334 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0685839 s, 15.3 MB/s 00:07:24.334 18:57:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' verify 00:07:24.334 18:57:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:07:24.334 18:57:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:24.334 18:57:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:07:24.334 18:57:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:07:24.334 18:57:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:24.334 18:57:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:24.334 18:57:41 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:24.334 18:57:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:07:24.334 18:57:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:24.334 18:57:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:07:24.334 18:57:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:24.334 18:57:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:07:24.334 18:57:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:24.334 18:57:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:07:24.334 18:57:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:24.334 18:57:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:07:24.334 18:57:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:24.334 18:57:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:07:24.334 18:57:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:24.334 18:57:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd14 00:07:24.334 18:57:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:07:24.334 18:57:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:07:24.334 18:57:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:24.334 18:57:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:07:24.334 18:57:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:24.334 18:57:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:07:24.334 18:57:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:24.334 18:57:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:24.596 18:57:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:24.596 18:57:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:24.596 18:57:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:24.596 18:57:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:24.596 18:57:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:24.596 18:57:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:24.596 18:57:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:24.596 18:57:42 
blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:24.596 18:57:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:24.596 18:57:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:24.854 18:57:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:24.854 18:57:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:24.854 18:57:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:24.854 18:57:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:24.854 18:57:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:24.854 18:57:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:24.854 18:57:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:24.854 18:57:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:24.854 18:57:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:24.854 18:57:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:07:24.854 18:57:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:07:24.854 18:57:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:07:24.854 18:57:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:07:24.854 18:57:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:24.854 18:57:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:24.854 18:57:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:07:24.854 18:57:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:24.854 18:57:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:24.854 18:57:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:24.854 18:57:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:07:25.114 18:57:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:07:25.114 18:57:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:07:25.114 18:57:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:07:25.114 18:57:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:25.114 18:57:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:25.114 18:57:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:07:25.114 18:57:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:25.114 18:57:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:25.114 18:57:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:25.114 18:57:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:07:25.371 18:57:42 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:07:25.372 18:57:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:07:25.372 18:57:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:07:25.372 18:57:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:25.372 18:57:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:25.372 18:57:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:07:25.372 18:57:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:25.372 18:57:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:25.372 18:57:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:25.372 18:57:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:07:25.630 18:57:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:07:25.630 18:57:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:07:25.630 18:57:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:07:25.630 18:57:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:25.630 18:57:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:25.630 18:57:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:07:25.630 18:57:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:25.630 18:57:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:25.630 18:57:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:25.630 18:57:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd14 00:07:25.887 18:57:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd14 00:07:25.887 18:57:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd14 00:07:25.887 18:57:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd14 00:07:25.887 18:57:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:25.887 18:57:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:25.887 18:57:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd14 /proc/partitions 00:07:25.887 18:57:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:25.887 18:57:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:25.887 18:57:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:25.887 18:57:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:25.887 18:57:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:26.144 18:57:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:26.144 18:57:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:26.144 18:57:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:26.144 18:57:43 
blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:26.144 18:57:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:07:26.144 18:57:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:26.144 18:57:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:07:26.144 18:57:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:07:26.144 18:57:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:07:26.144 18:57:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:07:26.144 18:57:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:26.144 18:57:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:07:26.144 18:57:43 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:07:26.144 18:57:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:26.144 18:57:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:07:26.144 18:57:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:07:26.144 malloc_lvol_verify 00:07:26.144 18:57:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:07:26.400 784f18c3-22e3-4cc0-8f86-a61fd62c9b32 00:07:26.400 18:57:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:07:26.657 571af8ba-a248-4b2e-b74e-d30ebd65f434 00:07:26.657 18:57:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:07:26.914 /dev/nbd0 00:07:26.914 18:57:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:07:26.914 18:57:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:07:26.914 18:57:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:07:26.914 18:57:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:07:26.914 18:57:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:07:26.914 mke2fs 1.47.0 (5-Feb-2023) 00:07:26.914 Discarding device blocks: 0/4096 done 00:07:26.914 Creating filesystem with 4096 1k blocks and 1024 inodes 00:07:26.914 00:07:26.914 Allocating group tables: 0/1 done 00:07:26.914 Writing inode tables: 0/1 done 00:07:26.914 Creating journal (1024 blocks): done 00:07:26.914 Writing superblocks and filesystem accounting information: 0/1 done 00:07:26.914 00:07:26.914 18:57:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:07:26.914 18:57:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:26.914 18:57:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:07:26.914 18:57:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:26.914 18:57:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:07:26.914 18:57:44 
blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:26.914 18:57:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:27.172 18:57:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:27.172 18:57:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:27.172 18:57:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:27.172 18:57:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:27.172 18:57:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:27.172 18:57:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:27.172 18:57:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:27.172 18:57:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:27.172 18:57:44 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 72989 00:07:27.172 18:57:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 72989 ']' 00:07:27.172 18:57:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 72989 00:07:27.172 18:57:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@959 -- # uname 00:07:27.172 18:57:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:27.172 18:57:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72989 00:07:27.172 killing process with pid 72989 00:07:27.172 18:57:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:27.172 18:57:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:27.172 18:57:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72989' 00:07:27.172 18:57:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@973 -- # kill 72989 00:07:27.172 18:57:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@978 -- # wait 72989 00:07:27.429 18:57:44 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:07:27.429 00:07:27.429 real 0m9.844s 00:07:27.429 user 0m14.385s 00:07:27.429 sys 0m3.400s 00:07:27.429 18:57:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:27.429 18:57:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:07:27.429 ************************************ 00:07:27.429 END TEST bdev_nbd 00:07:27.429 ************************************ 00:07:27.429 18:57:44 blockdev_nvme_gpt -- bdev/blockdev.sh@800 -- # [[ y == y ]] 00:07:27.429 18:57:44 blockdev_nvme_gpt -- bdev/blockdev.sh@801 -- # '[' gpt = nvme ']' 00:07:27.429 18:57:44 blockdev_nvme_gpt -- bdev/blockdev.sh@801 -- # '[' gpt = gpt ']' 00:07:27.429 skipping fio tests on NVMe due to multi-ns failures. 00:07:27.429 18:57:44 blockdev_nvme_gpt -- bdev/blockdev.sh@803 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 
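The bdev_nbd pass above exercises one attach, write/verify, detach cycle per namespace through the SPDK NBD RPCs. A minimal bash sketch of that cycle, condensed from the trace for a single device (the RPC script, socket path, dd/cmp flags, and 20-try poll budget are the ones logged; the 0.1 s sleep inside the poll loop is an illustrative assumption, since the helper's interval is not visible here):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/spdk-nbd.sock
"$rpc" -s "$sock" nbd_start_disk Nvme2n1 /dev/nbd11        # expose the bdev as an NBD node
for ((i = 1; i <= 20; i++)); do                            # waitfornbd: poll until the kernel lists it
  grep -q -w nbd11 /proc/partitions && break
  sleep 0.1                                                # assumed interval; not shown in the trace
done
dd if=/dev/urandom of=nbdrandtest bs=4096 count=256        # 1 MiB random pattern
dd if=nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct
cmp -b -n 1M nbdrandtest /dev/nbd11                        # byte-for-byte read-back check
"$rpc" -s "$sock" nbd_stop_disk /dev/nbd11                 # detach; nbd11 then drops out of /proc/partitions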
00:07:27.429 18:57:44 blockdev_nvme_gpt -- bdev/blockdev.sh@812 -- # trap cleanup SIGINT SIGTERM EXIT 00:07:27.429 18:57:44 blockdev_nvme_gpt -- bdev/blockdev.sh@814 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:07:27.429 18:57:44 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:07:27.429 18:57:44 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:27.429 18:57:44 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:27.429 ************************************ 00:07:27.429 START TEST bdev_verify 00:07:27.429 ************************************ 00:07:27.429 18:57:44 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:07:27.429 [2024-12-05 18:57:44.833246] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 22.11.4 initialization... 00:07:27.429 [2024-12-05 18:57:44.833375] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73389 ] 00:07:27.429 [2024-12-05 18:57:44.975538] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:27.687 [2024-12-05 18:57:44.996715] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:27.687 [2024-12-05 18:57:44.996838] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:27.945 Running I/O for 5 seconds... 
00:07:30.282 19776.00 IOPS, 77.25 MiB/s [2024-12-05T18:57:48.778Z] 20224.00 IOPS, 79.00 MiB/s [2024-12-05T18:57:49.720Z] 20181.33 IOPS, 78.83 MiB/s [2024-12-05T18:57:50.666Z] 20080.00 IOPS, 78.44 MiB/s [2024-12-05T18:57:50.666Z] 19968.00 IOPS, 78.00 MiB/s 00:07:33.107 Latency(us) 00:07:33.107 [2024-12-05T18:57:50.666Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:33.107 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:33.107 Verification LBA range: start 0x0 length 0xbd0bd 00:07:33.107 Nvme0n1 : 5.08 1371.97 5.36 0.00 0.00 92705.40 16938.54 89532.26 00:07:33.107 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:33.107 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:07:33.107 Nvme0n1 : 5.09 1421.35 5.55 0.00 0.00 89501.40 14317.10 91145.45 00:07:33.107 Job: Nvme1n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:33.107 Verification LBA range: start 0x0 length 0x4ff80 00:07:33.107 Nvme1n1p1 : 5.10 1379.21 5.39 0.00 0.00 92459.75 15627.82 81062.99 00:07:33.107 Job: Nvme1n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:33.107 Verification LBA range: start 0x4ff80 length 0x4ff80 00:07:33.107 Nvme1n1p1 : 5.09 1420.81 5.55 0.00 0.00 89334.03 16434.41 82676.18 00:07:33.107 Job: Nvme1n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:33.107 Verification LBA range: start 0x0 length 0x4ff7f 00:07:33.107 Nvme1n1p2 : 5.11 1378.34 5.38 0.00 0.00 92332.33 17543.48 78239.90 00:07:33.107 Job: Nvme1n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:33.107 Verification LBA range: start 0x4ff7f length 0x4ff7f 00:07:33.107 Nvme1n1p2 : 5.11 1428.82 5.58 0.00 0.00 89041.55 11191.53 77030.01 00:07:33.107 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:33.107 Verification LBA range: start 0x0 length 0x80000 00:07:33.107 Nvme2n1 : 5.11 1377.37 5.38 0.00 0.00 92195.46 19963.27 78239.90 00:07:33.107 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:33.107 Verification LBA range: start 0x80000 length 0x80000 00:07:33.107 Nvme2n1 : 5.11 1428.41 5.58 0.00 0.00 88884.99 11342.77 71383.83 00:07:33.107 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:33.107 Verification LBA range: start 0x0 length 0x80000 00:07:33.107 Nvme2n2 : 5.11 1376.36 5.38 0.00 0.00 92049.28 21475.64 79046.50 00:07:33.107 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:33.107 Verification LBA range: start 0x80000 length 0x80000 00:07:33.107 Nvme2n2 : 5.11 1427.40 5.58 0.00 0.00 88748.47 12905.55 72593.72 00:07:33.107 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:33.107 Verification LBA range: start 0x0 length 0x80000 00:07:33.107 Nvme2n3 : 5.12 1375.40 5.37 0.00 0.00 91902.85 19862.45 78239.90 00:07:33.107 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:33.107 Verification LBA range: start 0x80000 length 0x80000 00:07:33.107 Nvme2n3 : 5.12 1426.36 5.57 0.00 0.00 88615.20 15426.17 73803.62 00:07:33.107 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:33.107 Verification LBA range: start 0x0 length 0x20000 00:07:33.107 Nvme3n1 : 5.12 1374.44 5.37 0.00 0.00 91757.63 17543.48 79449.80 00:07:33.107 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:33.107 Verification LBA range: start 0x20000 length 0x20000 00:07:33.107 
Nvme3n1 : 5.12 1425.36 5.57 0.00 0.00 88485.12 14922.04 75416.81 00:07:33.107 [2024-12-05T18:57:50.666Z] =================================================================================================================== 00:07:33.107 [2024-12-05T18:57:50.666Z] Total : 19611.60 76.61 0.00 0.00 90543.38 11191.53 91145.45 00:07:33.679 00:07:33.679 real 0m6.399s 00:07:33.679 user 0m12.077s 00:07:33.679 sys 0m0.212s 00:07:33.679 18:57:51 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:33.679 ************************************ 00:07:33.679 END TEST bdev_verify 00:07:33.679 ************************************ 00:07:33.679 18:57:51 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:07:33.679 18:57:51 blockdev_nvme_gpt -- bdev/blockdev.sh@815 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:07:33.679 18:57:51 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:07:33.679 18:57:51 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:33.679 18:57:51 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:33.940 ************************************ 00:07:33.940 START TEST bdev_verify_big_io 00:07:33.940 ************************************ 00:07:33.940 18:57:51 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:07:33.940 [2024-12-05 18:57:51.308938] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 22.11.4 initialization... 00:07:33.940 [2024-12-05 18:57:51.309075] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73481 ] 00:07:33.940 [2024-12-05 18:57:51.457424] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:33.940 [2024-12-05 18:57:51.488846] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:33.940 [2024-12-05 18:57:51.488894] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:34.512 Running I/O for 5 seconds... 
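The MiB/s column bdevperf reports is just IOPS scaled by the fixed transfer size (-o 4096 in the verify pass above); a one-line bash sanity check against the final 5-second sample:

echo $(( 19968 * 4096 / 1048576 ))   # -> 78, matching "19968.00 IOPS, 78.00 MiB/s"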
00:07:40.359 977.00 IOPS, 61.06 MiB/s [2024-12-05T18:57:58.180Z] 2567.50 IOPS, 160.47 MiB/s [2024-12-05T18:57:58.180Z] 2963.33 IOPS, 185.21 MiB/s 00:07:40.621 Latency(us) 00:07:40.621 [2024-12-05T18:57:58.180Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:40.621 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:40.621 Verification LBA range: start 0x0 length 0xbd0b 00:07:40.621 Nvme0n1 : 6.01 101.01 6.31 0.00 0.00 1226739.66 25609.45 1290555.08 00:07:40.621 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:40.621 Verification LBA range: start 0xbd0b length 0xbd0b 00:07:40.621 Nvme0n1 : 5.91 108.20 6.76 0.00 0.00 1113416.63 30045.74 1174405.12 00:07:40.621 Job: Nvme1n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:40.621 Verification LBA range: start 0x0 length 0x4ff8 00:07:40.621 Nvme1n1p1 : 6.04 100.83 6.30 0.00 0.00 1190745.66 89532.26 1193763.45 00:07:40.621 Job: Nvme1n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:40.621 Verification LBA range: start 0x4ff8 length 0x4ff8 00:07:40.621 Nvme1n1p1 : 5.77 111.01 6.94 0.00 0.00 1080044.62 104857.60 1013085.74 00:07:40.621 Job: Nvme1n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:40.621 Verification LBA range: start 0x0 length 0x4ff7 00:07:40.621 Nvme1n1p2 : 6.04 101.58 6.35 0.00 0.00 1148737.21 91548.75 1148594.02 00:07:40.621 Job: Nvme1n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:40.621 Verification LBA range: start 0x4ff7 length 0x4ff7 00:07:40.622 Nvme1n1p2 : 5.92 112.00 7.00 0.00 0.00 1032442.24 150833.62 896935.78 00:07:40.622 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:40.622 Verification LBA range: start 0x0 length 0x8000 00:07:40.622 Nvme2n1 : 6.05 105.80 6.61 0.00 0.00 1081910.04 30650.68 1116330.14 00:07:40.622 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:40.622 Verification LBA range: start 0x8000 length 0x8000 00:07:40.622 Nvme2n1 : 6.00 116.09 7.26 0.00 0.00 977002.10 82272.89 916294.10 00:07:40.622 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:40.622 Verification LBA range: start 0x0 length 0x8000 00:07:40.622 Nvme2n2 : 6.05 105.75 6.61 0.00 0.00 1049085.01 31658.93 1109877.37 00:07:40.622 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:40.622 Verification LBA range: start 0x8000 length 0x8000 00:07:40.622 Nvme2n2 : 6.04 122.36 7.65 0.00 0.00 908196.07 25609.45 942105.21 00:07:40.622 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:40.622 Verification LBA range: start 0x0 length 0x8000 00:07:40.622 Nvme2n3 : 6.06 109.24 6.83 0.00 0.00 986740.41 5797.42 1116330.14 00:07:40.622 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:40.622 Verification LBA range: start 0x8000 length 0x8000 00:07:40.622 Nvme2n3 : 6.05 127.03 7.94 0.00 0.00 851188.15 10384.94 961463.53 00:07:40.622 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:40.622 Verification LBA range: start 0x0 length 0x2000 00:07:40.622 Nvme3n1 : 6.07 108.66 6.79 0.00 0.00 961085.90 6402.36 1935832.62 00:07:40.622 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:40.622 Verification LBA range: start 0x2000 length 0x2000 00:07:40.622 Nvme3n1 : 6.06 124.01 7.75 0.00 0.00 844138.89 6604.01 1884210.41 00:07:40.622 
[2024-12-05T18:57:58.181Z] =================================================================================================================== 00:07:40.622 [2024-12-05T18:57:58.181Z] Total : 1553.57 97.10 0.00 0.00 1024073.74 5797.42 1935832.62 00:07:42.537 00:07:42.537 real 0m8.375s 00:07:42.537 user 0m15.881s 00:07:42.537 sys 0m0.312s 00:07:42.537 18:57:59 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:42.537 ************************************ 00:07:42.537 END TEST bdev_verify_big_io 00:07:42.537 ************************************ 00:07:42.537 18:57:59 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:07:42.537 18:57:59 blockdev_nvme_gpt -- bdev/blockdev.sh@816 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:42.537 18:57:59 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:07:42.537 18:57:59 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:42.537 18:57:59 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:42.537 ************************************ 00:07:42.537 START TEST bdev_write_zeroes 00:07:42.537 ************************************ 00:07:42.537 18:57:59 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:42.537 [2024-12-05 18:57:59.758342] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 22.11.4 initialization... 00:07:42.537 [2024-12-05 18:57:59.758480] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73586 ] 00:07:42.537 [2024-12-05 18:57:59.906018] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:42.537 [2024-12-05 18:57:59.935309] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:43.109 Running I/O for 1 seconds... 
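The three bdevperf passes in this run reuse one binary and one --json bdev map and differ only in workload flags; the parameterization side by side, with every path and flag copied from the invocations logged above:

bdevperf=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json
"$bdevperf" --json "$conf" -q 128 -o 4096  -w verify       -t 5 -C -m 0x3 ''   # bdev_verify
"$bdevperf" --json "$conf" -q 128 -o 65536 -w verify       -t 5 -C -m 0x3 ''   # bdev_verify_big_io
"$bdevperf" --json "$conf" -q 128 -o 4096  -w write_zeroes -t 1 ''             # bdev_write_zeroes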
00:07:44.064 48219.00 IOPS, 188.36 MiB/s 00:07:44.064 Latency(us) 00:07:44.064 [2024-12-05T18:58:01.623Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:44.064 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:44.064 Nvme0n1 : 1.02 6837.96 26.71 0.00 0.00 18671.31 6704.84 38918.30 00:07:44.064 Job: Nvme1n1p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:44.064 Nvme1n1p1 : 1.03 6927.95 27.06 0.00 0.00 18402.93 12199.78 26819.35 00:07:44.064 Job: Nvme1n1p2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:44.064 Nvme1n1p2 : 1.03 6919.44 27.03 0.00 0.00 18355.89 12048.54 26012.75 00:07:44.064 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:44.064 Nvme2n1 : 1.03 6911.63 27.00 0.00 0.00 18345.95 12351.02 25609.45 00:07:44.064 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:44.064 Nvme2n2 : 1.03 6903.75 26.97 0.00 0.00 18337.60 12804.73 25609.45 00:07:44.064 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:44.064 Nvme2n3 : 1.03 6895.98 26.94 0.00 0.00 18319.59 12199.78 25710.28 00:07:44.064 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:44.064 Nvme3n1 : 1.03 6826.17 26.66 0.00 0.00 18460.92 10939.47 29239.14 00:07:44.064 [2024-12-05T18:58:01.623Z] =================================================================================================================== 00:07:44.064 [2024-12-05T18:58:01.623Z] Total : 48222.88 188.37 0.00 0.00 18412.87 6704.84 38918.30 00:07:44.325 00:07:44.325 real 0m1.935s 00:07:44.325 user 0m1.591s 00:07:44.325 sys 0m0.227s 00:07:44.325 18:58:01 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:44.325 ************************************ 00:07:44.325 END TEST bdev_write_zeroes 00:07:44.325 ************************************ 00:07:44.325 18:58:01 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:07:44.325 18:58:01 blockdev_nvme_gpt -- bdev/blockdev.sh@819 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:44.325 18:58:01 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:07:44.325 18:58:01 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:44.325 18:58:01 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:44.325 ************************************ 00:07:44.325 START TEST bdev_json_nonenclosed 00:07:44.325 ************************************ 00:07:44.325 18:58:01 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:44.325 [2024-12-05 18:58:01.757442] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 22.11.4 initialization... 
00:07:44.325 [2024-12-05 18:58:01.757587] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73628 ] 00:07:44.586 [2024-12-05 18:58:01.905546] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:44.586 [2024-12-05 18:58:01.934167] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:44.586 [2024-12-05 18:58:01.934280] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:07:44.586 [2024-12-05 18:58:01.934299] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:07:44.586 [2024-12-05 18:58:01.934311] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:44.586 00:07:44.586 real 0m0.322s 00:07:44.586 user 0m0.129s 00:07:44.586 sys 0m0.089s 00:07:44.586 18:58:02 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:44.586 18:58:02 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:07:44.586 ************************************ 00:07:44.586 END TEST bdev_json_nonenclosed 00:07:44.586 ************************************ 00:07:44.586 18:58:02 blockdev_nvme_gpt -- bdev/blockdev.sh@822 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:44.586 18:58:02 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:07:44.586 18:58:02 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:44.586 18:58:02 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:44.586 ************************************ 00:07:44.586 START TEST bdev_json_nonarray 00:07:44.586 ************************************ 00:07:44.587 18:58:02 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:44.587 [2024-12-05 18:58:02.143834] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 22.11.4 initialization... 00:07:44.587 [2024-12-05 18:58:02.143967] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73648 ] 00:07:44.848 [2024-12-05 18:58:02.292197] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:44.848 [2024-12-05 18:58:02.321040] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:44.848 [2024-12-05 18:58:02.321147] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
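bdev_json_nonenclosed above and bdev_json_nonarray here are negative tests of the same shape: hand bdevperf a deliberately malformed --json config and require the app to stop non-zero. A sketch under stated assumptions (the two file names match the trace; the inline contents are guesses at the minimal malformed shape, not the repository's actual fixtures):

bdevperf=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
echo '[]'                 > nonenclosed.json   # assumed: valid JSON, but not enclosed in {}
echo '{"subsystems": {}}' > nonarray.json      # assumed: enclosed, but "subsystems" is not an array
for cfg in nonenclosed.json nonarray.json; do
  if "$bdevperf" --json "$cfg" -q 128 -o 4096 -w write_zeroes -t 1 ''; then
    echo "expected $cfg to be rejected" >&2; exit 1
  fi
done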
00:07:44.848 [2024-12-05 18:58:02.321171] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:07:44.848 [2024-12-05 18:58:02.321184] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:44.848 00:07:44.848 real 0m0.328s 00:07:44.848 user 0m0.134s 00:07:44.848 sys 0m0.090s 00:07:44.848 18:58:02 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:44.848 ************************************ 00:07:44.848 END TEST bdev_json_nonarray 00:07:44.848 ************************************ 00:07:44.848 18:58:02 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:07:45.110 18:58:02 blockdev_nvme_gpt -- bdev/blockdev.sh@824 -- # [[ gpt == bdev ]] 00:07:45.110 18:58:02 blockdev_nvme_gpt -- bdev/blockdev.sh@832 -- # [[ gpt == gpt ]] 00:07:45.110 18:58:02 blockdev_nvme_gpt -- bdev/blockdev.sh@833 -- # run_test bdev_gpt_uuid bdev_gpt_uuid 00:07:45.110 18:58:02 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:45.110 18:58:02 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:45.110 18:58:02 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:45.110 ************************************ 00:07:45.110 START TEST bdev_gpt_uuid 00:07:45.110 ************************************ 00:07:45.110 18:58:02 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1129 -- # bdev_gpt_uuid 00:07:45.110 18:58:02 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@651 -- # local bdev 00:07:45.110 18:58:02 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@653 -- # start_spdk_tgt 00:07:45.110 18:58:02 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=73668 00:07:45.110 18:58:02 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:07:45.110 18:58:02 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@49 -- # waitforlisten 73668 00:07:45.110 18:58:02 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@835 -- # '[' -z 73668 ']' 00:07:45.110 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:45.110 18:58:02 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:45.110 18:58:02 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:45.110 18:58:02 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:07:45.110 18:58:02 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:45.110 18:58:02 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:45.110 18:58:02 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:07:45.110 [2024-12-05 18:58:02.555491] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 22.11.4 initialization... 
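The bdev_gpt_uuid test spinning up here loads bdev.json into a bare spdk_tgt, fetches each GPT partition bdev by its well-known partition GUID, and asserts that the unique_partition_guid survives the round trip, as the rpc_cmd/jq checks below show. A condensed bash sketch using the same RPC and jq filter (GUID and paths taken from the trace):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
uuid=6f89f330-603b-4116-ac73-2ca8eae53030    # SPDK_TEST_first partition, from the trace
bdev=$("$rpc" bdev_get_bdevs -b "$uuid")     # look the partition bdev up by its alias
[[ $(jq -r '.[0].driver_specific.gpt.unique_partition_guid' <<<"$bdev") == "$uuid" ]] || exit 1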
00:07:45.110 [2024-12-05 18:58:02.555849] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73668 ] 00:07:45.373 [2024-12-05 18:58:02.704843] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:45.373 [2024-12-05 18:58:02.735385] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:45.948 18:58:03 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:45.948 18:58:03 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@868 -- # return 0 00:07:45.948 18:58:03 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@655 -- # rpc_cmd load_config -j /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:07:45.948 18:58:03 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:45.948 18:58:03 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:07:46.211 Some configs were skipped because the RPC state that can call them passed over. 00:07:46.211 18:58:03 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:46.211 18:58:03 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@656 -- # rpc_cmd bdev_wait_for_examine 00:07:46.211 18:58:03 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:46.211 18:58:03 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:07:46.211 18:58:03 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:46.211 18:58:03 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@658 -- # rpc_cmd bdev_get_bdevs -b 6f89f330-603b-4116-ac73-2ca8eae53030 00:07:46.211 18:58:03 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:46.211 18:58:03 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:07:46.211 18:58:03 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:46.211 18:58:03 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@658 -- # bdev='[ 00:07:46.211 { 00:07:46.211 "name": "Nvme1n1p1", 00:07:46.211 "aliases": [ 00:07:46.211 "6f89f330-603b-4116-ac73-2ca8eae53030" 00:07:46.211 ], 00:07:46.211 "product_name": "GPT Disk", 00:07:46.211 "block_size": 4096, 00:07:46.211 "num_blocks": 655104, 00:07:46.211 "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:07:46.211 "assigned_rate_limits": { 00:07:46.211 "rw_ios_per_sec": 0, 00:07:46.211 "rw_mbytes_per_sec": 0, 00:07:46.211 "r_mbytes_per_sec": 0, 00:07:46.211 "w_mbytes_per_sec": 0 00:07:46.211 }, 00:07:46.211 "claimed": false, 00:07:46.211 "zoned": false, 00:07:46.211 "supported_io_types": { 00:07:46.211 "read": true, 00:07:46.211 "write": true, 00:07:46.211 "unmap": true, 00:07:46.211 "flush": true, 00:07:46.211 "reset": true, 00:07:46.211 "nvme_admin": false, 00:07:46.211 "nvme_io": false, 00:07:46.211 "nvme_io_md": false, 00:07:46.211 "write_zeroes": true, 00:07:46.211 "zcopy": false, 00:07:46.211 "get_zone_info": false, 00:07:46.211 "zone_management": false, 00:07:46.211 "zone_append": false, 00:07:46.211 "compare": true, 00:07:46.211 "compare_and_write": false, 00:07:46.211 "abort": true, 00:07:46.211 "seek_hole": false, 00:07:46.211 "seek_data": false, 00:07:46.211 "copy": true, 00:07:46.211 "nvme_iov_md": false 00:07:46.211 }, 00:07:46.211 "driver_specific": { 
00:07:46.211 "gpt": { 00:07:46.211 "base_bdev": "Nvme1n1", 00:07:46.211 "offset_blocks": 256, 00:07:46.211 "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b", 00:07:46.211 "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:07:46.211 "partition_name": "SPDK_TEST_first" 00:07:46.211 } 00:07:46.211 } 00:07:46.211 } 00:07:46.211 ]' 00:07:46.211 18:58:03 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@659 -- # jq -r length 00:07:46.474 18:58:03 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@659 -- # [[ 1 == \1 ]] 00:07:46.474 18:58:03 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@660 -- # jq -r '.[0].aliases[0]' 00:07:46.474 18:58:03 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@660 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:07:46.474 18:58:03 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@661 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:07:46.474 18:58:03 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@661 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:07:46.474 18:58:03 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@663 -- # rpc_cmd bdev_get_bdevs -b abf1734f-66e5-4c0f-aa29-4021d4d307df 00:07:46.474 18:58:03 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:46.474 18:58:03 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:07:46.474 18:58:03 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:46.474 18:58:03 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@663 -- # bdev='[ 00:07:46.474 { 00:07:46.474 "name": "Nvme1n1p2", 00:07:46.474 "aliases": [ 00:07:46.474 "abf1734f-66e5-4c0f-aa29-4021d4d307df" 00:07:46.474 ], 00:07:46.474 "product_name": "GPT Disk", 00:07:46.474 "block_size": 4096, 00:07:46.474 "num_blocks": 655103, 00:07:46.474 "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:07:46.474 "assigned_rate_limits": { 00:07:46.474 "rw_ios_per_sec": 0, 00:07:46.474 "rw_mbytes_per_sec": 0, 00:07:46.474 "r_mbytes_per_sec": 0, 00:07:46.474 "w_mbytes_per_sec": 0 00:07:46.474 }, 00:07:46.474 "claimed": false, 00:07:46.474 "zoned": false, 00:07:46.474 "supported_io_types": { 00:07:46.474 "read": true, 00:07:46.474 "write": true, 00:07:46.474 "unmap": true, 00:07:46.474 "flush": true, 00:07:46.474 "reset": true, 00:07:46.474 "nvme_admin": false, 00:07:46.474 "nvme_io": false, 00:07:46.474 "nvme_io_md": false, 00:07:46.474 "write_zeroes": true, 00:07:46.474 "zcopy": false, 00:07:46.474 "get_zone_info": false, 00:07:46.474 "zone_management": false, 00:07:46.474 "zone_append": false, 00:07:46.474 "compare": true, 00:07:46.474 "compare_and_write": false, 00:07:46.474 "abort": true, 00:07:46.474 "seek_hole": false, 00:07:46.474 "seek_data": false, 00:07:46.474 "copy": true, 00:07:46.474 "nvme_iov_md": false 00:07:46.474 }, 00:07:46.474 "driver_specific": { 00:07:46.474 "gpt": { 00:07:46.474 "base_bdev": "Nvme1n1", 00:07:46.474 "offset_blocks": 655360, 00:07:46.474 "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c", 00:07:46.474 "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:07:46.474 "partition_name": "SPDK_TEST_second" 00:07:46.474 } 00:07:46.474 } 00:07:46.474 } 00:07:46.474 ]' 00:07:46.474 18:58:03 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@664 -- # jq -r length 00:07:46.474 18:58:03 blockdev_nvme_gpt.bdev_gpt_uuid 
-- bdev/blockdev.sh@664 -- # [[ 1 == \1 ]] 00:07:46.474 18:58:03 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@665 -- # jq -r '.[0].aliases[0]' 00:07:46.474 18:58:03 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@665 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:07:46.474 18:58:03 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@666 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:07:46.474 18:58:03 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@666 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:07:46.474 18:58:03 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@668 -- # killprocess 73668 00:07:46.474 18:58:03 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@954 -- # '[' -z 73668 ']' 00:07:46.474 18:58:03 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@958 -- # kill -0 73668 00:07:46.474 18:58:03 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@959 -- # uname 00:07:46.474 18:58:03 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:46.474 18:58:03 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73668 00:07:46.474 killing process with pid 73668 00:07:46.474 18:58:04 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:46.474 18:58:04 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:46.474 18:58:04 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73668' 00:07:46.474 18:58:04 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@973 -- # kill 73668 00:07:46.474 18:58:04 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@978 -- # wait 73668 00:07:47.046 ************************************ 00:07:47.046 END TEST bdev_gpt_uuid 00:07:47.046 ************************************ 00:07:47.046 00:07:47.046 real 0m1.846s 00:07:47.046 user 0m1.983s 00:07:47.046 sys 0m0.408s 00:07:47.046 18:58:04 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:47.046 18:58:04 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:07:47.046 18:58:04 blockdev_nvme_gpt -- bdev/blockdev.sh@836 -- # [[ gpt == crypto_sw ]] 00:07:47.047 18:58:04 blockdev_nvme_gpt -- bdev/blockdev.sh@848 -- # trap - SIGINT SIGTERM EXIT 00:07:47.047 18:58:04 blockdev_nvme_gpt -- bdev/blockdev.sh@849 -- # cleanup 00:07:47.047 18:58:04 blockdev_nvme_gpt -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:07:47.047 18:58:04 blockdev_nvme_gpt -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:07:47.047 18:58:04 blockdev_nvme_gpt -- bdev/blockdev.sh@26 -- # [[ gpt == rbd ]] 00:07:47.047 18:58:04 blockdev_nvme_gpt -- bdev/blockdev.sh@30 -- # [[ gpt == daos ]] 00:07:47.047 18:58:04 blockdev_nvme_gpt -- bdev/blockdev.sh@34 -- # [[ gpt = \g\p\t ]] 00:07:47.047 18:58:04 blockdev_nvme_gpt -- bdev/blockdev.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:07:47.307 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:47.307 Waiting for block devices as requested 00:07:47.568 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:07:47.568 0000:00:10.0 (1b36 0010): 
uio_pci_generic -> nvme 00:07:47.568 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:07:47.568 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:07:52.860 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:07:52.860 18:58:10 blockdev_nvme_gpt -- bdev/blockdev.sh@36 -- # [[ -b /dev/nvme0n1 ]] 00:07:52.860 18:58:10 blockdev_nvme_gpt -- bdev/blockdev.sh@37 -- # wipefs --all /dev/nvme0n1 00:07:53.120 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:07:53.120 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:07:53.120 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:07:53.120 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:07:53.120 18:58:10 blockdev_nvme_gpt -- bdev/blockdev.sh@40 -- # [[ gpt == xnvme ]] 00:07:53.120 00:07:53.120 real 0m48.543s 00:07:53.120 user 1m1.776s 00:07:53.120 sys 0m7.838s 00:07:53.120 ************************************ 00:07:53.120 END TEST blockdev_nvme_gpt 00:07:53.120 ************************************ 00:07:53.120 18:58:10 blockdev_nvme_gpt -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:53.120 18:58:10 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:53.120 18:58:10 -- spdk/autotest.sh@212 -- # run_test nvme /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:07:53.120 18:58:10 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:53.120 18:58:10 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:53.120 18:58:10 -- common/autotest_common.sh@10 -- # set +x 00:07:53.120 ************************************ 00:07:53.120 START TEST nvme 00:07:53.120 ************************************ 00:07:53.120 18:58:10 nvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:07:53.120 * Looking for test storage... 00:07:53.120 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:07:53.120 18:58:10 nvme -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:53.120 18:58:10 nvme -- common/autotest_common.sh@1711 -- # lcov --version 00:07:53.120 18:58:10 nvme -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:53.120 18:58:10 nvme -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:53.120 18:58:10 nvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:53.120 18:58:10 nvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:53.120 18:58:10 nvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:53.120 18:58:10 nvme -- scripts/common.sh@336 -- # IFS=.-: 00:07:53.120 18:58:10 nvme -- scripts/common.sh@336 -- # read -ra ver1 00:07:53.120 18:58:10 nvme -- scripts/common.sh@337 -- # IFS=.-: 00:07:53.120 18:58:10 nvme -- scripts/common.sh@337 -- # read -ra ver2 00:07:53.120 18:58:10 nvme -- scripts/common.sh@338 -- # local 'op=<' 00:07:53.120 18:58:10 nvme -- scripts/common.sh@340 -- # ver1_l=2 00:07:53.120 18:58:10 nvme -- scripts/common.sh@341 -- # ver2_l=1 00:07:53.120 18:58:10 nvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:53.120 18:58:10 nvme -- scripts/common.sh@344 -- # case "$op" in 00:07:53.120 18:58:10 nvme -- scripts/common.sh@345 -- # : 1 00:07:53.120 18:58:10 nvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:53.120 18:58:10 nvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:53.120 18:58:10 nvme -- scripts/common.sh@365 -- # decimal 1 00:07:53.120 18:58:10 nvme -- scripts/common.sh@353 -- # local d=1 00:07:53.120 18:58:10 nvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:53.120 18:58:10 nvme -- scripts/common.sh@355 -- # echo 1 00:07:53.120 18:58:10 nvme -- scripts/common.sh@365 -- # ver1[v]=1 00:07:53.120 18:58:10 nvme -- scripts/common.sh@366 -- # decimal 2 00:07:53.120 18:58:10 nvme -- scripts/common.sh@353 -- # local d=2 00:07:53.120 18:58:10 nvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:53.120 18:58:10 nvme -- scripts/common.sh@355 -- # echo 2 00:07:53.120 18:58:10 nvme -- scripts/common.sh@366 -- # ver2[v]=2 00:07:53.120 18:58:10 nvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:53.120 18:58:10 nvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:53.120 18:58:10 nvme -- scripts/common.sh@368 -- # return 0 00:07:53.120 18:58:10 nvme -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:53.120 18:58:10 nvme -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:53.120 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:53.120 --rc genhtml_branch_coverage=1 00:07:53.120 --rc genhtml_function_coverage=1 00:07:53.120 --rc genhtml_legend=1 00:07:53.120 --rc geninfo_all_blocks=1 00:07:53.120 --rc geninfo_unexecuted_blocks=1 00:07:53.120 00:07:53.120 ' 00:07:53.120 18:58:10 nvme -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:53.120 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:53.120 --rc genhtml_branch_coverage=1 00:07:53.120 --rc genhtml_function_coverage=1 00:07:53.120 --rc genhtml_legend=1 00:07:53.120 --rc geninfo_all_blocks=1 00:07:53.120 --rc geninfo_unexecuted_blocks=1 00:07:53.120 00:07:53.120 ' 00:07:53.120 18:58:10 nvme -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:53.120 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:53.120 --rc genhtml_branch_coverage=1 00:07:53.120 --rc genhtml_function_coverage=1 00:07:53.120 --rc genhtml_legend=1 00:07:53.120 --rc geninfo_all_blocks=1 00:07:53.120 --rc geninfo_unexecuted_blocks=1 00:07:53.120 00:07:53.120 ' 00:07:53.120 18:58:10 nvme -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:53.120 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:53.120 --rc genhtml_branch_coverage=1 00:07:53.120 --rc genhtml_function_coverage=1 00:07:53.120 --rc genhtml_legend=1 00:07:53.120 --rc geninfo_all_blocks=1 00:07:53.120 --rc geninfo_unexecuted_blocks=1 00:07:53.120 00:07:53.120 ' 00:07:53.120 18:58:10 nvme -- nvme/nvme.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:07:53.689 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:54.256 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:07:54.256 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:07:54.256 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:07:54.256 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:07:54.256 18:58:11 nvme -- nvme/nvme.sh@79 -- # uname 00:07:54.256 18:58:11 nvme -- nvme/nvme.sh@79 -- # '[' Linux = Linux ']' 00:07:54.256 18:58:11 nvme -- nvme/nvme.sh@80 -- # trap 'kill_stub -9; exit 1' SIGINT SIGTERM EXIT 00:07:54.256 18:58:11 nvme -- nvme/nvme.sh@81 -- # start_stub '-s 4096 -i 0 -m 0xE' 00:07:54.256 18:58:11 nvme -- common/autotest_common.sh@1086 -- # _start_stub '-s 4096 -i 0 -m 0xE' 00:07:54.256 18:58:11 nvme -- 
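common/autotest_common.sh@1072 -- # _randomize_va_space=2

start_stub above hands its arguments to test/app/stub/stub, SPDK's long-lived multi-process primary: it reserves hugepage memory (-s 4096, in MB) under shared-memory id 0 (-i 0) on cores 1-3 (-m 0xE), so the short-lived test binaries that follow can attach as DPDK secondary processes instead of paying full EAL initialization on every run. A minimal sketch of the launch-and-poll handshake the trace below performs, assuming (as the trace suggests) that the stub creates /var/run/spdk_stub0 once it is ready:

    # launch the primary, then poll for its readiness file while the process is still alive
    /home/vagrant/spdk_repo/spdk/test/app/stub/stub -s 4096 -i 0 -m 0xE &
    stubpid=$!
    while [ ! -e /var/run/spdk_stub0 ] && [ -e "/proc/$stubpid" ]; do
        sleep 1
    done

Secondary processes then pass the same shared-memory id (here 0) so DPDK maps the stub's hugepages rather than allocating its own.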
00:07:54.256 18:58:11 nvme -- common/autotest_common.sh@1073 -- # echo 0 00:07:54.256 Waiting for stub to ready for secondary processes... 18:58:11 nvme -- common/autotest_common.sh@1075 -- # stubpid=74297 00:07:54.256 18:58:11 nvme -- common/autotest_common.sh@1076 -- # echo Waiting for stub to ready for secondary processes... 00:07:54.256 18:58:11 nvme -- common/autotest_common.sh@1077 -- # '[' -e /var/run/spdk_stub0 ']' 00:07:54.256 18:58:11 nvme -- common/autotest_common.sh@1079 -- # [[ -e /proc/74297 ]] 00:07:54.256 18:58:11 nvme -- common/autotest_common.sh@1080 -- # sleep 1s 00:07:54.257 18:58:11 nvme -- common/autotest_common.sh@1074 -- # /home/vagrant/spdk_repo/spdk/test/app/stub/stub -s 4096 -i 0 -m 0xE 00:07:54.257 [2024-12-05 18:58:11.781819] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 22.11.4 initialization... [2024-12-05 18:58:11.781939] [ DPDK EAL parameters: stub -c 0xE -m 4096 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto --proc-type=primary ]
928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n1 created 00:07:55.639 [2024-12-05 18:58:12.917176] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n2 created 00:07:55.639 [2024-12-05 18:58:12.917239] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n3 created 00:07:56.212 done. 00:07:56.212 18:58:13 nvme -- common/autotest_common.sh@1077 -- # '[' -e /var/run/spdk_stub0 ']' 00:07:56.212 18:58:13 nvme -- common/autotest_common.sh@1082 -- # echo done. 00:07:56.212 18:58:13 nvme -- nvme/nvme.sh@84 -- # run_test nvme_reset /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:07:56.213 18:58:13 nvme -- common/autotest_common.sh@1105 -- # '[' 10 -le 1 ']' 00:07:56.213 18:58:13 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:56.213 18:58:13 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:56.474 ************************************ 00:07:56.474 START TEST nvme_reset 00:07:56.474 ************************************ 00:07:56.474 18:58:13 nvme.nvme_reset -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:07:56.474 Initializing NVMe Controllers 00:07:56.474 Skipping QEMU NVMe SSD at 0000:00:13.0 00:07:56.474 Skipping QEMU NVMe SSD at 0000:00:10.0 00:07:56.474 Skipping QEMU NVMe SSD at 0000:00:11.0 00:07:56.474 Skipping QEMU NVMe SSD at 0000:00:12.0 00:07:56.474 No NVMe controller found, /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset exiting 00:07:56.474 00:07:56.474 real 0m0.198s 00:07:56.474 user 0m0.065s 00:07:56.474 sys 0m0.088s 00:07:56.474 ************************************ 00:07:56.474 END TEST nvme_reset 00:07:56.474 ************************************ 00:07:56.474 18:58:13 nvme.nvme_reset -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:56.474 18:58:13 nvme.nvme_reset -- common/autotest_common.sh@10 -- # set +x 00:07:56.474 18:58:14 nvme -- nvme/nvme.sh@85 -- # run_test nvme_identify nvme_identify 00:07:56.474 18:58:14 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:56.474 18:58:14 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:56.474 18:58:14 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:56.739 ************************************ 00:07:56.739 START TEST nvme_identify 00:07:56.739 ************************************ 00:07:56.739 18:58:14 nvme.nvme_identify -- common/autotest_common.sh@1129 -- # nvme_identify 00:07:56.739 18:58:14 nvme.nvme_identify -- nvme/nvme.sh@12 -- # bdfs=() 00:07:56.739 18:58:14 nvme.nvme_identify -- nvme/nvme.sh@12 -- # local bdfs bdf 00:07:56.739 18:58:14 nvme.nvme_identify -- nvme/nvme.sh@13 -- # bdfs=($(get_nvme_bdfs)) 00:07:56.739 18:58:14 nvme.nvme_identify -- nvme/nvme.sh@13 -- # get_nvme_bdfs 00:07:56.739 18:58:14 nvme.nvme_identify -- common/autotest_common.sh@1498 -- # bdfs=() 00:07:56.739 18:58:14 nvme.nvme_identify -- common/autotest_common.sh@1498 -- # local bdfs 00:07:56.739 18:58:14 nvme.nvme_identify -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:07:56.739 18:58:14 nvme.nvme_identify -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:07:56.739 18:58:14 nvme.nvme_identify -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:07:56.739 18:58:14 nvme.nvme_identify -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:07:56.739 18:58:14 nvme.nvme_identify -- 
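common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0

get_nvme_bdfs above never reads sysfs directly: scripts/gen_nvme.sh emits an SPDK JSON config describing every local NVMe controller, and jq pulls each entry's PCI address out of .config[].params.traddr. A condensed sketch of the same pipeline, with paths as in the trace:

    # enumerate NVMe PCI addresses the way get_nvme_bdfs does (a sketch, not the verbatim helper)
    bdfs=($(/home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh | jq -r '.config[].params.traddr'))
    (( ${#bdfs[@]} > 0 )) || { echo "no NVMe controllers found" >&2; exit 1; }
    printf '%s\n' "${bdfs[@]}"

Here that yields the four QEMU controllers, 0000:00:10.0 through 0000:00:13.0, which spdk_nvme_identify then walks in turn.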
00:07:56.739 18:58:14 nvme.nvme_identify -- nvme/nvme.sh@14 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -i 0 00:07:56.740 [2024-12-05 18:58:14.251309] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:13.0, 0] process 74330 terminated unexpected 00:07:56.740 ===================================================== 00:07:56.740 NVMe Controller at 0000:00:13.0 [1b36:0010] 00:07:56.740 ===================================================== 00:07:56.740 Controller Capabilities/Features 00:07:56.740 ================================ 00:07:56.740 Vendor ID: 1b36 00:07:56.740 Subsystem Vendor ID: 1af4 00:07:56.740 Serial Number: 12343 00:07:56.740 Model Number: QEMU NVMe Ctrl 00:07:56.740 Firmware Version: 8.0.0 00:07:56.740 Recommended Arb Burst: 6 00:07:56.740 IEEE OUI Identifier: 00 54 52 00:07:56.740 Multi-path I/O 00:07:56.740 May have multiple subsystem ports: No 00:07:56.740 May have multiple controllers: Yes 00:07:56.740 Associated with SR-IOV VF: No 00:07:56.740 Max Data Transfer Size: 524288 00:07:56.740 Max Number of Namespaces: 256 00:07:56.740 Max Number of I/O Queues: 64 00:07:56.740 NVMe Specification Version (VS): 1.4 00:07:56.740 NVMe Specification Version (Identify): 1.4 00:07:56.740 Maximum Queue Entries: 2048 00:07:56.740 Contiguous Queues Required: Yes 00:07:56.740 Arbitration Mechanisms Supported 00:07:56.740 Weighted Round Robin: Not Supported 00:07:56.740 Vendor Specific: Not Supported 00:07:56.740 Reset Timeout: 7500 ms 00:07:56.740 Doorbell Stride: 4 bytes 00:07:56.740 NVM Subsystem Reset: Not Supported 00:07:56.740 Command Sets Supported 00:07:56.740 NVM Command Set: Supported 00:07:56.740 Boot Partition: Not Supported 00:07:56.740 Memory Page Size Minimum: 4096 bytes 00:07:56.740 Memory Page Size Maximum: 65536 bytes 00:07:56.740 Persistent Memory Region: Not Supported 00:07:56.740 Optional Asynchronous Events Supported 00:07:56.740 Namespace Attribute Notices: Supported 00:07:56.740 Firmware Activation Notices: Not Supported 00:07:56.740 ANA Change Notices: Not Supported 00:07:56.740 PLE Aggregate Log Change Notices: Not Supported 00:07:56.740 LBA Status Info Alert Notices: Not Supported 00:07:56.740 EGE Aggregate Log Change Notices: Not Supported 00:07:56.740 Normal NVM Subsystem Shutdown event: Not Supported 00:07:56.740 Zone Descriptor Change Notices: Not Supported 00:07:56.740 Discovery Log Change Notices: Not Supported 00:07:56.740 Controller Attributes 00:07:56.740 128-bit Host Identifier: Not Supported 00:07:56.740 Non-Operational Permissive Mode: Not Supported 00:07:56.740 NVM Sets: Not Supported 00:07:56.740 Read Recovery Levels: Not Supported 00:07:56.740 Endurance Groups: Supported 00:07:56.740 Predictable Latency Mode: Not Supported 00:07:56.740 Traffic Based Keep Alive: Not Supported 00:07:56.740 Namespace Granularity: Not Supported 00:07:56.740 SQ Associations: Not Supported 00:07:56.740 UUID List: Not Supported 00:07:56.740 Multi-Domain Subsystem: Not Supported 00:07:56.740 Fixed Capacity Management: Not Supported 00:07:56.740 Variable Capacity Management: Not Supported 00:07:56.740 Delete Endurance Group: Not Supported 00:07:56.740 Delete NVM Set: Not Supported 00:07:56.740 Extended LBA Formats Supported: Supported 00:07:56.740 Flexible Data Placement Supported: Supported 00:07:56.740 00:07:56.740 Controller Memory Buffer Support 00:07:56.740 ================================ 00:07:56.740 Supported: No 00:07:56.740
00:07:56.740 Persistent Memory Region Support 00:07:56.740 ================================ 00:07:56.740 Supported: No 00:07:56.740 00:07:56.740 Admin Command Set Attributes 00:07:56.740 ============================ 00:07:56.740 Security Send/Receive: Not Supported 00:07:56.740 Format NVM: Supported 00:07:56.740 Firmware Activate/Download: Not Supported 00:07:56.740 Namespace Management: Supported 00:07:56.740 Device Self-Test: Not Supported 00:07:56.740 Directives: Supported 00:07:56.740 NVMe-MI: Not Supported 00:07:56.740 Virtualization Management: Not Supported 00:07:56.740 Doorbell Buffer Config: Supported 00:07:56.740 Get LBA Status Capability: Not Supported 00:07:56.740 Command & Feature Lockdown Capability: Not Supported 00:07:56.740 Abort Command Limit: 4 00:07:56.740 Async Event Request Limit: 4 00:07:56.740 Number of Firmware Slots: N/A 00:07:56.740 Firmware Slot 1 Read-Only: N/A 00:07:56.740 Firmware Activation Without Reset: N/A 00:07:56.740 Multiple Update Detection Support: N/A 00:07:56.740 Firmware Update Granularity: No Information Provided 00:07:56.740 Per-Namespace SMART Log: Yes 00:07:56.740 Asymmetric Namespace Access Log Page: Not Supported 00:07:56.740 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:07:56.740 Command Effects Log Page: Supported 00:07:56.740 Get Log Page Extended Data: Supported 00:07:56.740 Telemetry Log Pages: Not Supported 00:07:56.740 Persistent Event Log Pages: Not Supported 00:07:56.740 Supported Log Pages Log Page: May Support 00:07:56.740 Commands Supported & Effects Log Page: Not Supported 00:07:56.740 Feature Identifiers & Effects Log Page: May Support 00:07:56.740 NVMe-MI Commands & Effects Log Page: May Support 00:07:56.740 Data Area 4 for Telemetry Log: Not Supported 00:07:56.740 Error Log Page Entries Supported: 1 00:07:56.740 Keep Alive: Not Supported 00:07:56.740 00:07:56.740 NVM Command Set Attributes 00:07:56.740 ========================== 00:07:56.740 Submission Queue Entry Size 00:07:56.740 Max: 64 00:07:56.740 Min: 64 00:07:56.740 Completion Queue Entry Size 00:07:56.740 Max: 16 00:07:56.740 Min: 16 00:07:56.740 Number of Namespaces: 256 00:07:56.740 Compare Command: Supported 00:07:56.740 Write Uncorrectable Command: Not Supported 00:07:56.740 Dataset Management Command: Supported 00:07:56.740 Write Zeroes Command: Supported 00:07:56.740 Set Features Save Field: Supported 00:07:56.740 Reservations: Not Supported 00:07:56.740 Timestamp: Supported 00:07:56.740 Copy: Supported 00:07:56.740 Volatile Write Cache: Present 00:07:56.740 Atomic Write Unit (Normal): 1 00:07:56.740 Atomic Write Unit (PFail): 1 00:07:56.740 Atomic Compare & Write Unit: 1 00:07:56.740 Fused Compare & Write: Not Supported 00:07:56.740 Scatter-Gather List 00:07:56.740 SGL Command Set: Supported 00:07:56.740 SGL Keyed: Not Supported 00:07:56.740 SGL Bit Bucket Descriptor: Not Supported 00:07:56.740 SGL Metadata Pointer: Not Supported 00:07:56.740 Oversized SGL: Not Supported 00:07:56.740 SGL Metadata Address: Not Supported 00:07:56.740 SGL Offset: Not Supported 00:07:56.740 Transport SGL Data Block: Not Supported 00:07:56.740 Replay Protected Memory Block: Not Supported 00:07:56.740 00:07:56.740 Firmware Slot Information 00:07:56.740 ========================= 00:07:56.740 Active slot: 1 00:07:56.740 Slot 1 Firmware Revision: 1.0 00:07:56.740 00:07:56.740 00:07:56.741 Commands Supported and Effects 00:07:56.741 ============================== 00:07:56.741 Admin Commands 00:07:56.741 -------------- 00:07:56.741 Delete I/O Submission Queue (00h): Supported
00:07:56.741 Create I/O Submission Queue (01h): Supported 00:07:56.741 Get Log Page (02h): Supported 00:07:56.741 Delete I/O Completion Queue (04h): Supported 00:07:56.741 Create I/O Completion Queue (05h): Supported 00:07:56.741 Identify (06h): Supported 00:07:56.741 Abort (08h): Supported 00:07:56.741 Set Features (09h): Supported 00:07:56.741 Get Features (0Ah): Supported 00:07:56.741 Asynchronous Event Request (0Ch): Supported 00:07:56.741 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:56.741 Directive Send (19h): Supported 00:07:56.741 Directive Receive (1Ah): Supported 00:07:56.741 Virtualization Management (1Ch): Supported 00:07:56.741 Doorbell Buffer Config (7Ch): Supported 00:07:56.741 Format NVM (80h): Supported LBA-Change 00:07:56.741 I/O Commands 00:07:56.741 ------------ 00:07:56.741 Flush (00h): Supported LBA-Change 00:07:56.741 Write (01h): Supported LBA-Change 00:07:56.741 Read (02h): Supported 00:07:56.741 Compare (05h): Supported 00:07:56.741 Write Zeroes (08h): Supported LBA-Change 00:07:56.741 Dataset Management (09h): Supported LBA-Change 00:07:56.741 Unknown (0Ch): Supported 00:07:56.741 Unknown (12h): Supported 00:07:56.741 Copy (19h): Supported LBA-Change 00:07:56.741 Unknown (1Dh): Supported LBA-Change 00:07:56.741 00:07:56.741 Error Log 00:07:56.741 ========= 00:07:56.741 00:07:56.741 Arbitration 00:07:56.741 =========== 00:07:56.741 Arbitration Burst: no limit 00:07:56.741 00:07:56.741 Power Management 00:07:56.741 ================ 00:07:56.741 Number of Power States: 1 00:07:56.741 Current Power State: Power State #0 00:07:56.741 Power State #0: 00:07:56.741 Max Power: 25.00 W 00:07:56.741 Non-Operational State: Operational 00:07:56.741 Entry Latency: 16 microseconds 00:07:56.741 Exit Latency: 4 microseconds 00:07:56.741 Relative Read Throughput: 0 00:07:56.741 Relative Read Latency: 0 00:07:56.741 Relative Write Throughput: 0 00:07:56.741 Relative Write Latency: 0 00:07:56.741 Idle Power: Not Reported 00:07:56.741 Active Power: Not Reported 00:07:56.741 Non-Operational Permissive Mode: Not Supported 00:07:56.741 00:07:56.741 Health Information 00:07:56.741 ================== 00:07:56.741 Critical Warnings: 00:07:56.741 Available Spare Space: OK 00:07:56.741 Temperature: OK 00:07:56.741 Device Reliability: OK 00:07:56.741 Read Only: No 00:07:56.741 Volatile Memory Backup: OK 00:07:56.741 Current Temperature: 323 Kelvin (50 Celsius) 00:07:56.741 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:56.741 Available Spare: 0% 00:07:56.741 Available Spare Threshold: 0% 00:07:56.741 Life Percentage Used: 0% 00:07:56.741 Data Units Read: 796 00:07:56.741 Data Units Written: 725 00:07:56.741 Host Read Commands: 34919 00:07:56.741 Host Write Commands: 34342 00:07:56.741 Controller Busy Time: 0 minutes 00:07:56.741 Power Cycles: 0 00:07:56.741 Power On Hours: 0 hours 00:07:56.741 Unsafe Shutdowns: 0 00:07:56.741 Unrecoverable Media Errors: 0 00:07:56.741 Lifetime Error Log Entries: 0 00:07:56.741 Warning Temperature Time: 0 minutes 00:07:56.741 Critical Temperature Time: 0 minutes 00:07:56.741 00:07:56.741 Number of Queues 00:07:56.741 ================ 00:07:56.741 Number of I/O Submission Queues: 64 00:07:56.741 Number of I/O Completion Queues: 64 00:07:56.741 00:07:56.741 ZNS Specific Controller Data 00:07:56.741 ============================ 00:07:56.741 Zone Append Size Limit: 0 00:07:56.741 00:07:56.741 00:07:56.741 Active Namespaces 00:07:56.741 ================= 00:07:56.741 Namespace ID:1 00:07:56.741 Error Recovery Timeout: Unlimited 00:07:56.741 
Command Set Identifier: NVM (00h) 00:07:56.741 Deallocate: Supported 00:07:56.741 Deallocated/Unwritten Error: Supported 00:07:56.741 Deallocated Read Value: All 0x00 00:07:56.741 Deallocate in Write Zeroes: Not Supported 00:07:56.741 Deallocated Guard Field: 0xFFFF 00:07:56.741 Flush: Supported 00:07:56.741 Reservation: Not Supported 00:07:56.741 Namespace Sharing Capabilities: Multiple Controllers 00:07:56.741 Size (in LBAs): 262144 (1GiB) 00:07:56.741 Capacity (in LBAs): 262144 (1GiB) 00:07:56.741 Utilization (in LBAs): 262144 (1GiB) 00:07:56.741 Thin Provisioning: Not Supported 00:07:56.741 Per-NS Atomic Units: No 00:07:56.741 Maximum Single Source Range Length: 128 00:07:56.741 Maximum Copy Length: 128 00:07:56.741 Maximum Source Range Count: 128 00:07:56.741 NGUID/EUI64 Never Reused: No 00:07:56.741 Namespace Write Protected: No 00:07:56.741 Endurance group ID: 1 00:07:56.741 Number of LBA Formats: 8 00:07:56.741 Current LBA Format: LBA Format #04 00:07:56.741 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:56.741 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:56.741 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:56.741 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:56.741 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:56.741 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:56.741 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:56.741 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:56.741 00:07:56.741 Get Feature FDP: 00:07:56.741 ================ 00:07:56.741 Enabled: Yes 00:07:56.741 FDP configuration index: 0 00:07:56.741 00:07:56.741 FDP configurations log page 00:07:56.741 =========================== 00:07:56.741 Number of FDP configurations: 1 00:07:56.741 Version: 0 00:07:56.741 Size: 112 00:07:56.741 FDP Configuration Descriptor: 0 00:07:56.741 Descriptor Size: 96 00:07:56.741 Reclaim Group Identifier format: 2 00:07:56.741 FDP Volatile Write Cache: Not Present 00:07:56.741 FDP Configuration: Valid 00:07:56.741 Vendor Specific Size: 0 00:07:56.741 Number of Reclaim Groups: 2 00:07:56.741 Number of Reclaim Unit Handles: 8 00:07:56.741 Max Placement Identifiers: 128 00:07:56.741 Number of Namespaces Supported: 256 00:07:56.741 Reclaim unit Nominal Size: 6000000 bytes 00:07:56.741 Estimated Reclaim Unit Time Limit: Not Reported 00:07:56.741 RUH Desc #000: RUH Type: Initially Isolated 00:07:56.741 RUH Desc #001: RUH Type: Initially Isolated 00:07:56.741 RUH Desc #002: RUH Type: Initially Isolated 00:07:56.741 RUH Desc #003: RUH Type: Initially Isolated 00:07:56.741 RUH Desc #004: RUH Type: Initially Isolated 00:07:56.741 RUH Desc #005: RUH Type: Initially Isolated 00:07:56.741 RUH Desc #006: RUH Type: Initially Isolated 00:07:56.741 RUH Desc #007: RUH Type: Initially Isolated 00:07:56.741 00:07:56.741 FDP reclaim unit handle usage log page 00:07:56.742 ====================================== 00:07:56.742 [2024-12-05 18:58:14.255823] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:10.0, 0] process 74330 terminated unexpected 00:07:56.742 Number of Reclaim Unit Handles: 8 00:07:56.742 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:07:56.742 RUH Usage Desc #001: RUH Attributes: Unused 00:07:56.742 RUH Usage Desc #002: RUH Attributes: Unused 00:07:56.742 RUH Usage Desc #003: RUH Attributes: Unused 00:07:56.742 RUH Usage Desc #004: RUH Attributes: Unused 00:07:56.742 RUH Usage Desc #005: RUH Attributes: Unused 00:07:56.742 RUH Usage Desc #006: RUH Attributes: Unused 00:07:56.742 RUH Usage Desc
#007: RUH Attributes: Unused 00:07:56.742 00:07:56.742 FDP statistics log page 00:07:56.742 ======================= 00:07:56.742 Host bytes with metadata written: 450600960 00:07:56.742 Media bytes with metadata written: 450654208 00:07:56.742 Media bytes erased: 0 00:07:56.742 00:07:56.742 FDP events log page 00:07:56.742 =================== 00:07:56.742 Number of FDP events: 0 00:07:56.742 00:07:56.742 NVM Specific Namespace Data 00:07:56.742 =========================== 00:07:56.742 Logical Block Storage Tag Mask: 0 00:07:56.742 Protection Information Capabilities: 00:07:56.742 16b Guard Protection Information Storage Tag Support: No 00:07:56.742 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:56.742 Storage Tag Check Read Support: No 00:07:56.742 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:56.742 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:56.742 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:56.742 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:56.742 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:56.742 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:56.742 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:56.742 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:56.742 ===================================================== 00:07:56.742 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:07:56.742 ===================================================== 00:07:56.742 Controller Capabilities/Features 00:07:56.742 ================================ 00:07:56.742 Vendor ID: 1b36 00:07:56.742 Subsystem Vendor ID: 1af4 00:07:56.742 Serial Number: 12340 00:07:56.742 Model Number: QEMU NVMe Ctrl 00:07:56.742 Firmware Version: 8.0.0 00:07:56.742 Recommended Arb Burst: 6 00:07:56.742 IEEE OUI Identifier: 00 54 52 00:07:56.742 Multi-path I/O 00:07:56.742 May have multiple subsystem ports: No 00:07:56.742 May have multiple controllers: No 00:07:56.742 Associated with SR-IOV VF: No 00:07:56.742 Max Data Transfer Size: 524288 00:07:56.742 Max Number of Namespaces: 256 00:07:56.742 Max Number of I/O Queues: 64 00:07:56.742 NVMe Specification Version (VS): 1.4 00:07:56.742 NVMe Specification Version (Identify): 1.4 00:07:56.742 Maximum Queue Entries: 2048 00:07:56.742 Contiguous Queues Required: Yes 00:07:56.742 Arbitration Mechanisms Supported 00:07:56.742 Weighted Round Robin: Not Supported 00:07:56.742 Vendor Specific: Not Supported 00:07:56.742 Reset Timeout: 7500 ms 00:07:56.742 Doorbell Stride: 4 bytes 00:07:56.742 NVM Subsystem Reset: Not Supported 00:07:56.742 Command Sets Supported 00:07:56.742 NVM Command Set: Supported 00:07:56.742 Boot Partition: Not Supported 00:07:56.742 Memory Page Size Minimum: 4096 bytes 00:07:56.742 Memory Page Size Maximum: 65536 bytes 00:07:56.742 Persistent Memory Region: Not Supported 00:07:56.742 Optional Asynchronous Events Supported 00:07:56.742 Namespace Attribute Notices: Supported 00:07:56.742 Firmware Activation Notices: Not Supported 00:07:56.742 ANA Change Notices: Not Supported 00:07:56.742 PLE Aggregate Log Change Notices: Not Supported 00:07:56.742 LBA Status Info Alert Notices: Not Supported 00:07:56.742 EGE Aggregate Log Change 
Notices: Not Supported 00:07:56.742 Normal NVM Subsystem Shutdown event: Not Supported 00:07:56.742 Zone Descriptor Change Notices: Not Supported 00:07:56.742 Discovery Log Change Notices: Not Supported 00:07:56.742 Controller Attributes 00:07:56.742 128-bit Host Identifier: Not Supported 00:07:56.742 Non-Operational Permissive Mode: Not Supported 00:07:56.742 NVM Sets: Not Supported 00:07:56.742 Read Recovery Levels: Not Supported 00:07:56.742 Endurance Groups: Not Supported 00:07:56.742 Predictable Latency Mode: Not Supported 00:07:56.742 Traffic Based Keep Alive: Not Supported 00:07:56.742 Namespace Granularity: Not Supported 00:07:56.742 SQ Associations: Not Supported 00:07:56.742 UUID List: Not Supported 00:07:56.742 Multi-Domain Subsystem: Not Supported 00:07:56.742 Fixed Capacity Management: Not Supported 00:07:56.742 Variable Capacity Management: Not Supported 00:07:56.742 Delete Endurance Group: Not Supported 00:07:56.742 Delete NVM Set: Not Supported 00:07:56.742 Extended LBA Formats Supported: Supported 00:07:56.742 Flexible Data Placement Supported: Not Supported 00:07:56.742 00:07:56.742 Controller Memory Buffer Support 00:07:56.742 ================================ 00:07:56.742 Supported: No 00:07:56.742 00:07:56.742 Persistent Memory Region Support 00:07:56.742 ================================ 00:07:56.742 Supported: No 00:07:56.742 00:07:56.742 Admin Command Set Attributes 00:07:56.742 ============================ 00:07:56.742 Security Send/Receive: Not Supported 00:07:56.742 Format NVM: Supported 00:07:56.742 Firmware Activate/Download: Not Supported 00:07:56.742 Namespace Management: Supported 00:07:56.742 Device Self-Test: Not Supported 00:07:56.742 Directives: Supported 00:07:56.742 NVMe-MI: Not Supported 00:07:56.742 Virtualization Management: Not Supported 00:07:56.742 Doorbell Buffer Config: Supported 00:07:56.742 Get LBA Status Capability: Not Supported 00:07:56.742 Command & Feature Lockdown Capability: Not Supported 00:07:56.742 Abort Command Limit: 4 00:07:56.742 Async Event Request Limit: 4 00:07:56.742 Number of Firmware Slots: N/A 00:07:56.742 Firmware Slot 1 Read-Only: N/A 00:07:56.743 Firmware Activation Without Reset: N/A 00:07:56.743 Multiple Update Detection Support: N/A 00:07:56.743 Firmware Update Granularity: No Information Provided 00:07:56.743 Per-Namespace SMART Log: Yes 00:07:56.743 Asymmetric Namespace Access Log Page: Not Supported 00:07:56.743 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:07:56.743 Command Effects Log Page: Supported 00:07:56.743 Get Log Page Extended Data: Supported 00:07:56.743 Telemetry Log Pages: Not Supported 00:07:56.743 Persistent Event Log Pages: Not Supported 00:07:56.743 Supported Log Pages Log Page: May Support 00:07:56.743 Commands Supported & Effects Log Page: Not Supported 00:07:56.743 Feature Identifiers & Effects Log Page: May Support 00:07:56.743 NVMe-MI Commands & Effects Log Page: May Support 00:07:56.743 Data Area 4 for Telemetry Log: Not Supported 00:07:56.743 Error Log Page Entries Supported: 1 00:07:56.743 Keep Alive: Not Supported 00:07:56.743 00:07:56.743 NVM Command Set Attributes 00:07:56.743 ========================== 00:07:56.743 Submission Queue Entry Size 00:07:56.743 Max: 64 00:07:56.743 Min: 64 00:07:56.743 Completion Queue Entry Size 00:07:56.743 Max: 16 00:07:56.743 Min: 16 00:07:56.743 Number of Namespaces: 256 00:07:56.743 Compare Command: Supported 00:07:56.743 Write Uncorrectable Command: Not Supported 00:07:56.743 Dataset Management Command: Supported 00:07:56.743 Write Zeroes Command:
Supported 00:07:56.743 Set Features Save Field: Supported 00:07:56.743 Reservations: Not Supported 00:07:56.743 Timestamp: Supported 00:07:56.743 Copy: Supported 00:07:56.743 Volatile Write Cache: Present 00:07:56.743 Atomic Write Unit (Normal): 1 00:07:56.743 Atomic Write Unit (PFail): 1 00:07:56.743 Atomic Compare & Write Unit: 1 00:07:56.743 Fused Compare & Write: Not Supported 00:07:56.743 Scatter-Gather List 00:07:56.743 SGL Command Set: Supported 00:07:56.743 SGL Keyed: Not Supported 00:07:56.743 SGL Bit Bucket Descriptor: Not Supported 00:07:56.743 SGL Metadata Pointer: Not Supported 00:07:56.743 Oversized SGL: Not Supported 00:07:56.743 SGL Metadata Address: Not Supported 00:07:56.743 SGL Offset: Not Supported 00:07:56.743 Transport SGL Data Block: Not Supported 00:07:56.743 Replay Protected Memory Block: Not Supported 00:07:56.743 00:07:56.743 Firmware Slot Information 00:07:56.743 ========================= 00:07:56.743 Active slot: 1 00:07:56.743 Slot 1 Firmware Revision: 1.0 00:07:56.743 00:07:56.743 00:07:56.743 Commands Supported and Effects 00:07:56.743 ============================== 00:07:56.743 Admin Commands 00:07:56.743 -------------- 00:07:56.743 Delete I/O Submission Queue (00h): Supported 00:07:56.743 Create I/O Submission Queue (01h): Supported 00:07:56.743 Get Log Page (02h): Supported 00:07:56.743 Delete I/O Completion Queue (04h): Supported 00:07:56.743 Create I/O Completion Queue (05h): Supported 00:07:56.743 Identify (06h): Supported 00:07:56.743 Abort (08h): Supported 00:07:56.743 Set Features (09h): Supported 00:07:56.743 Get Features (0Ah): Supported 00:07:56.743 Asynchronous Event Request (0Ch): Supported 00:07:56.743 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:56.743 Directive Send (19h): Supported 00:07:56.743 Directive Receive (1Ah): Supported 00:07:56.743 Virtualization Management (1Ch): Supported 00:07:56.743 Doorbell Buffer Config (7Ch): Supported 00:07:56.743 Format NVM (80h): Supported LBA-Change 00:07:56.743 I/O Commands 00:07:56.743 ------------ 00:07:56.743 Flush (00h): Supported LBA-Change 00:07:56.743 Write (01h): Supported LBA-Change 00:07:56.743 Read (02h): Supported 00:07:56.743 Compare (05h): Supported 00:07:56.743 Write Zeroes (08h): Supported LBA-Change 00:07:56.743 Dataset Management (09h): Supported LBA-Change 00:07:56.743 Unknown (0Ch): Supported 00:07:56.743 Unknown (12h): Supported 00:07:56.743 Copy (19h): Supported LBA-Change 00:07:56.743 Unknown (1Dh): Supported LBA-Change 00:07:56.743 00:07:56.743 Error Log 00:07:56.743 ========= 00:07:56.743 00:07:56.743 Arbitration 00:07:56.743 =========== 00:07:56.743 Arbitration Burst: no limit 00:07:56.743 00:07:56.743 Power Management 00:07:56.743 ================ 00:07:56.743 Number of Power States: 1 00:07:56.743 Current Power State: Power State #0 00:07:56.743 Power State #0: 00:07:56.743 Max Power: 25.00 W 00:07:56.743 Non-Operational State: Operational 00:07:56.743 Entry Latency: 16 microseconds 00:07:56.743 Exit Latency: 4 microseconds 00:07:56.743 Relative Read Throughput: 0 00:07:56.743 Relative Read Latency: 0 00:07:56.743 Relative Write Throughput: 0 00:07:56.743 Relative Write Latency: 0 00:07:56.743 Idle Power: Not Reported 00:07:56.743 Active Power: Not Reported 00:07:56.743 Non-Operational Permissive Mode: Not Supported 00:07:56.743 00:07:56.743 Health Information 00:07:56.743 ================== 00:07:56.743 Critical Warnings: 00:07:56.743 Available Spare Space: OK 00:07:56.743 Temperature: OK 00:07:56.743 Device Reliability: OK 00:07:56.743 Read Only: No 
00:07:56.743 Volatile Memory Backup: OK 00:07:56.743 Current Temperature: 323 Kelvin (50 Celsius) 00:07:56.743 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:56.743 Available Spare: 0% 00:07:56.743 Available Spare Threshold: 0% 00:07:56.743 Life Percentage Used: 0% 00:07:56.743 Data Units Read: 638 00:07:56.743 Data Units Written: 566 00:07:56.743 Host Read Commands: 33262 00:07:56.743 Host Write Commands: 33048 00:07:56.743 Controller Busy Time: 0 minutes 00:07:56.743 Power Cycles: 0 00:07:56.743 Power On Hours: 0 hours 00:07:56.743 Unsafe Shutdowns: 0 00:07:56.744 Unrecoverable Media Errors: 0 00:07:56.744 Lifetime Error Log Entries: 0 00:07:56.744 Warning Temperature Time: 0 minutes 00:07:56.744 Critical Temperature Time: 0 minutes 00:07:56.744 00:07:56.744 Number of Queues 00:07:56.744 ================ 00:07:56.744 Number of I/O Submission Queues: 64 00:07:56.744 Number of I/O Completion Queues: 64 00:07:56.744 00:07:56.744 ZNS Specific Controller Data 00:07:56.744 ============================ 00:07:56.744 Zone Append Size Limit: 0 00:07:56.744 00:07:56.744 00:07:56.744 Active Namespaces 00:07:56.744 ================= 00:07:56.744 Namespace ID:1 00:07:56.744 Error Recovery Timeout: Unlimited 00:07:56.744 Command Set Identifier: NVM (00h) 00:07:56.744 Deallocate: Supported 00:07:56.744 Deallocated/Unwritten Error: Supported 00:07:56.744 Deallocated Read Value: All 0x00 00:07:56.744 Deallocate in Write Zeroes: Not Supported 00:07:56.744 Deallocated Guard Field: 0xFFFF 00:07:56.744 Flush: Supported 00:07:56.744 Reservation: Not Supported 00:07:56.744 Metadata Transferred as: Separate Metadata Buffer 00:07:56.744 Namespace Sharing Capabilities: Private 00:07:56.744 Size (in LBAs): 1548666 (5GiB) 00:07:56.744 Capacity (in LBAs): 1548666 (5GiB) 00:07:56.744 Utilization (in LBAs): 1548666 (5GiB) 00:07:56.744 Thin Provisioning: Not Supported 00:07:56.744 Per-NS Atomic Units: No 00:07:56.744 Maximum Single Source Range Length: 128 00:07:56.744 Maximum Copy Length: 128 00:07:56.744 Maximum Source Range Count: 128 00:07:56.744 NGUID/EUI64 Never Reused: No 00:07:56.744 Namespace Write Protected: No 00:07:56.744 Number of LBA Formats: 8 00:07:56.744 [2024-12-05 18:58:14.257827] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:11.0, 0] process 74330 terminated unexpected 00:07:56.744 Current LBA Format: LBA Format #07 00:07:56.744 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:56.744 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:56.744 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:56.744 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:56.744 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:56.744 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:56.744 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:56.744 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:56.744 00:07:56.744 NVM Specific Namespace Data 00:07:56.744 =========================== 00:07:56.744 Logical Block Storage Tag Mask: 0 00:07:56.744 Protection Information Capabilities: 00:07:56.744 16b Guard Protection Information Storage Tag Support: No 00:07:56.744 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:56.744 Storage Tag Check Read Support: No 00:07:56.744 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:56.744 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:56.744 Extended LBA Format #02: Storage Tag Size: 0 , Protection
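Information Format: 16b Guard PI 00:07:56.744 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:56.744 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:56.744 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:56.744 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:56.744 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI

That closes the dump for the boot controller at 0000:00:10.0. spdk_nvme_identify -i 0 claims every controller it can reach and prints them back to back; to inspect a single device instead, the identify example can be pointed at one PCIe transport ID. The -r flag and its syntax are an assumption here, taken from SPDK's identify example rather than exercised in this run:

    # dump one controller instead of all four (flag syntax assumed, not shown in this log)
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -i 0 -r 'trtype:PCIe traddr:0000:00:11.0'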
00:07:56.744 ===================================================== 00:07:56.744 NVMe Controller at 0000:00:11.0 [1b36:0010] 00:07:56.744 ===================================================== 00:07:56.744 Controller Capabilities/Features 00:07:56.744 ================================ 00:07:56.744 Vendor ID: 1b36 00:07:56.744 Subsystem Vendor ID: 1af4 00:07:56.744 Serial Number: 12341 00:07:56.744 Model Number: QEMU NVMe Ctrl 00:07:56.744 Firmware Version: 8.0.0 00:07:56.744 Recommended Arb Burst: 6 00:07:56.744 IEEE OUI Identifier: 00 54 52 00:07:56.744 Multi-path I/O 00:07:56.744 May have multiple subsystem ports: No 00:07:56.744 May have multiple controllers: No 00:07:56.744 Associated with SR-IOV VF: No 00:07:56.744 Max Data Transfer Size: 524288 00:07:56.744 Max Number of Namespaces: 256 00:07:56.744 Max Number of I/O Queues: 64 00:07:56.744 NVMe Specification Version (VS): 1.4 00:07:56.744 NVMe Specification Version (Identify): 1.4 00:07:56.744 Maximum Queue Entries: 2048 00:07:56.744 Contiguous Queues Required: Yes 00:07:56.744 Arbitration Mechanisms Supported 00:07:56.744 Weighted Round Robin: Not Supported 00:07:56.744 Vendor Specific: Not Supported 00:07:56.744 Reset Timeout: 7500 ms 00:07:56.744 Doorbell Stride: 4 bytes 00:07:56.744 NVM Subsystem Reset: Not Supported 00:07:56.744 Command Sets Supported 00:07:56.744 NVM Command Set: Supported 00:07:56.744 Boot Partition: Not Supported 00:07:56.744 Memory Page Size Minimum: 4096 bytes 00:07:56.744 Memory Page Size Maximum: 65536 bytes 00:07:56.744 Persistent Memory Region: Not Supported 00:07:56.744 Optional Asynchronous Events Supported 00:07:56.744 Namespace Attribute Notices: Supported 00:07:56.744 Firmware Activation Notices: Not Supported 00:07:56.744 ANA Change Notices: Not Supported 00:07:56.744 PLE Aggregate Log Change Notices: Not Supported 00:07:56.744 LBA Status Info Alert Notices: Not Supported 00:07:56.744 EGE Aggregate Log Change Notices: Not Supported 00:07:56.744 Normal NVM Subsystem Shutdown event: Not Supported 00:07:56.744 Zone Descriptor Change Notices: Not Supported 00:07:56.744 Discovery Log Change Notices: Not Supported 00:07:56.744 Controller Attributes 00:07:56.744 128-bit Host Identifier: Not Supported 00:07:56.744 Non-Operational Permissive Mode: Not Supported 00:07:56.744 NVM Sets: Not Supported 00:07:56.744 Read Recovery Levels: Not Supported 00:07:56.744 Endurance Groups: Not Supported 00:07:56.744 Predictable Latency Mode: Not Supported 00:07:56.744 Traffic Based Keep Alive: Not Supported 00:07:56.744 Namespace Granularity: Not Supported 00:07:56.744 SQ Associations: Not Supported 00:07:56.744 UUID List: Not Supported 00:07:56.744 Multi-Domain Subsystem: Not Supported 00:07:56.744 Fixed Capacity Management: Not Supported 00:07:56.744 Variable Capacity Management: Not Supported 00:07:56.744 Delete Endurance Group: Not Supported 00:07:56.744 Delete NVM Set: Not Supported 00:07:56.744 Extended LBA Formats Supported: Supported 00:07:56.744 Flexible Data Placement
Supported: Not Supported 00:07:56.744 00:07:56.744 Controller Memory Buffer Support 00:07:56.744 ================================ 00:07:56.744 Supported: No 00:07:56.744 00:07:56.744 Persistent Memory Region Support 00:07:56.744 ================================ 00:07:56.744 Supported: No 00:07:56.744 00:07:56.745 Admin Command Set Attributes 00:07:56.745 ============================ 00:07:56.745 Security Send/Receive: Not Supported 00:07:56.745 Format NVM: Supported 00:07:56.745 Firmware Activate/Download: Not Supported 00:07:56.745 Namespace Management: Supported 00:07:56.745 Device Self-Test: Not Supported 00:07:56.745 Directives: Supported 00:07:56.745 NVMe-MI: Not Supported 00:07:56.745 Virtualization Management: Not Supported 00:07:56.745 Doorbell Buffer Config: Supported 00:07:56.745 Get LBA Status Capability: Not Supported 00:07:56.745 Command & Feature Lockdown Capability: Not Supported 00:07:56.745 Abort Command Limit: 4 00:07:56.745 Async Event Request Limit: 4 00:07:56.745 Number of Firmware Slots: N/A 00:07:56.745 Firmware Slot 1 Read-Only: N/A 00:07:56.745 Firmware Activation Without Reset: N/A 00:07:56.745 Multiple Update Detection Support: N/A 00:07:56.745 Firmware Update Granularity: No Information Provided 00:07:56.745 Per-Namespace SMART Log: Yes 00:07:56.745 Asymmetric Namespace Access Log Page: Not Supported 00:07:56.745 Subsystem NQN: nqn.2019-08.org.qemu:12341 00:07:56.745 Command Effects Log Page: Supported 00:07:56.745 Get Log Page Extended Data: Supported 00:07:56.745 Telemetry Log Pages: Not Supported 00:07:56.745 Persistent Event Log Pages: Not Supported 00:07:56.745 Supported Log Pages Log Page: May Support 00:07:56.745 Commands Supported & Effects Log Page: Not Supported 00:07:56.745 Feature Identifiers & Effects Log Page: May Support 00:07:56.745 NVMe-MI Commands & Effects Log Page: May Support 00:07:56.745 Data Area 4 for Telemetry Log: Not Supported 00:07:56.745 Error Log Page Entries Supported: 1 00:07:56.745 Keep Alive: Not Supported 00:07:56.745 00:07:56.745 NVM Command Set Attributes 00:07:56.745 ========================== 00:07:56.745 Submission Queue Entry Size 00:07:56.745 Max: 64 00:07:56.745 Min: 64 00:07:56.745 Completion Queue Entry Size 00:07:56.745 Max: 16 00:07:56.745 Min: 16 00:07:56.745 Number of Namespaces: 256 00:07:56.745 Compare Command: Supported 00:07:56.745 Write Uncorrectable Command: Not Supported 00:07:56.745 Dataset Management Command: Supported 00:07:56.745 Write Zeroes Command: Supported 00:07:56.745 Set Features Save Field: Supported 00:07:56.745 Reservations: Not Supported 00:07:56.745 Timestamp: Supported 00:07:56.745 Copy: Supported 00:07:56.745 Volatile Write Cache: Present 00:07:56.745 Atomic Write Unit (Normal): 1 00:07:56.745 Atomic Write Unit (PFail): 1 00:07:56.745 Atomic Compare & Write Unit: 1 00:07:56.745 Fused Compare & Write: Not Supported 00:07:56.745 Scatter-Gather List 00:07:56.745 SGL Command Set: Supported 00:07:56.745 SGL Keyed: Not Supported 00:07:56.745 SGL Bit Bucket Descriptor: Not Supported 00:07:56.745 SGL Metadata Pointer: Not Supported 00:07:56.745 Oversized SGL: Not Supported 00:07:56.745 SGL Metadata Address: Not Supported 00:07:56.745 SGL Offset: Not Supported 00:07:56.745 Transport SGL Data Block: Not Supported 00:07:56.745 Replay Protected Memory Block: Not Supported 00:07:56.745 00:07:56.745 Firmware Slot Information 00:07:56.745 ========================= 00:07:56.745 Active slot: 1 00:07:56.745 Slot 1 Firmware Revision: 1.0 00:07:56.745 00:07:56.745 00:07:56.745 Commands Supported and Effects
00:07:56.745 ============================== 00:07:56.745 Admin Commands 00:07:56.745 -------------- 00:07:56.745 Delete I/O Submission Queue (00h): Supported 00:07:56.745 Create I/O Submission Queue (01h): Supported 00:07:56.745 Get Log Page (02h): Supported 00:07:56.745 Delete I/O Completion Queue (04h): Supported 00:07:56.745 Create I/O Completion Queue (05h): Supported 00:07:56.745 Identify (06h): Supported 00:07:56.745 Abort (08h): Supported 00:07:56.745 Set Features (09h): Supported 00:07:56.745 Get Features (0Ah): Supported 00:07:56.745 Asynchronous Event Request (0Ch): Supported 00:07:56.745 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:56.745 Directive Send (19h): Supported 00:07:56.745 Directive Receive (1Ah): Supported 00:07:56.745 Virtualization Management (1Ch): Supported 00:07:56.745 Doorbell Buffer Config (7Ch): Supported 00:07:56.745 Format NVM (80h): Supported LBA-Change 00:07:56.745 I/O Commands 00:07:56.745 ------------ 00:07:56.745 Flush (00h): Supported LBA-Change 00:07:56.745 Write (01h): Supported LBA-Change 00:07:56.745 Read (02h): Supported 00:07:56.745 Compare (05h): Supported 00:07:56.745 Write Zeroes (08h): Supported LBA-Change 00:07:56.745 Dataset Management (09h): Supported LBA-Change 00:07:56.745 Unknown (0Ch): Supported 00:07:56.745 Unknown (12h): Supported 00:07:56.745 Copy (19h): Supported LBA-Change 00:07:56.745 Unknown (1Dh): Supported LBA-Change 00:07:56.745 00:07:56.745 Error Log 00:07:56.745 ========= 00:07:56.745 00:07:56.745 Arbitration 00:07:56.745 =========== 00:07:56.745 Arbitration Burst: no limit 00:07:56.745 00:07:56.745 Power Management 00:07:56.745 ================ 00:07:56.745 Number of Power States: 1 00:07:56.745 Current Power State: Power State #0 00:07:56.745 Power State #0: 00:07:56.745 Max Power: 25.00 W 00:07:56.745 Non-Operational State: Operational 00:07:56.745 Entry Latency: 16 microseconds 00:07:56.745 Exit Latency: 4 microseconds 00:07:56.745 Relative Read Throughput: 0 00:07:56.745 Relative Read Latency: 0 00:07:56.745 Relative Write Throughput: 0 00:07:56.745 Relative Write Latency: 0 00:07:56.745 Idle Power: Not Reported 00:07:56.745 Active Power: Not Reported 00:07:56.745 Non-Operational Permissive Mode: Not Supported 00:07:56.745 00:07:56.745 Health Information 00:07:56.745 ================== 00:07:56.745 Critical Warnings: 00:07:56.745 Available Spare Space: OK 00:07:56.745 Temperature: OK 00:07:56.745 Device Reliability: OK 00:07:56.745 Read Only: No 00:07:56.745 Volatile Memory Backup: OK 00:07:56.745 Current Temperature: 323 Kelvin (50 Celsius) 00:07:56.745 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:56.745 Available Spare: 0% 00:07:56.745 Available Spare Threshold: 0% 00:07:56.745 Life Percentage Used: 0% 00:07:56.745 Data Units Read: 983 00:07:56.746 Data Units Written: 851 00:07:56.746 Host Read Commands: 49997 00:07:56.746 Host Write Commands: 48775 00:07:56.746 Controller Busy Time: 0 minutes 00:07:56.746 Power Cycles: 0 00:07:56.746 Power On Hours: 0 hours 00:07:56.746 Unsafe Shutdowns: 0 00:07:56.746 Unrecoverable Media Errors: 0 00:07:56.746 Lifetime Error Log Entries: 0 00:07:56.746 Warning Temperature Time: 0 minutes 00:07:56.746 Critical Temperature Time: 0 minutes 00:07:56.746 00:07:56.746 Number of Queues 00:07:56.746 ================ 00:07:56.746 Number of I/O Submission Queues: 64 00:07:56.746 Number of I/O Completion Queues: 64 00:07:56.746 00:07:56.746 ZNS Specific Controller Data 00:07:56.746 ============================ 00:07:56.746 Zone Append Size Limit: 0 00:07:56.746 
00:07:56.746 00:07:56.746 Active Namespaces 00:07:56.746 ================= 00:07:56.746 Namespace ID:1 00:07:56.746 Error Recovery Timeout: Unlimited 00:07:56.746 Command Set Identifier: NVM (00h) 00:07:56.746 Deallocate: Supported 00:07:56.746 Deallocated/Unwritten Error: Supported 00:07:56.746 Deallocated Read Value: All 0x00 00:07:56.746 Deallocate in Write Zeroes: Not Supported 00:07:56.746 Deallocated Guard Field: 0xFFFF 00:07:56.746 Flush: Supported 00:07:56.746 Reservation: Not Supported 00:07:56.746 Namespace Sharing Capabilities: Private 00:07:56.746 Size (in LBAs): 1310720 (5GiB) 00:07:56.746 Capacity (in LBAs): 1310720 (5GiB) 00:07:56.746 Utilization (in LBAs): 1310720 (5GiB) 00:07:56.746 Thin Provisioning: Not Supported 00:07:56.746 Per-NS Atomic Units: No 00:07:56.746 Maximum Single Source Range Length: 128 00:07:56.746 Maximum Copy Length: 128 00:07:56.746 Maximum Source Range Count: 128 00:07:56.746 NGUID/EUI64 Never Reused: No 00:07:56.746 Namespace Write Protected: No 00:07:56.746 Number of LBA Formats: 8 00:07:56.746 Current LBA Format: LBA Format #04 00:07:56.746 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:56.746 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:56.746 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:56.746 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:56.746 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:56.746 [2024-12-05 18:58:14.259186] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:12.0, 0] process 74330 terminated unexpected 00:07:56.746 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:56.746 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:56.746 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:56.746 00:07:56.746 NVM Specific Namespace Data 00:07:56.746 =========================== 00:07:56.746 Logical Block Storage Tag Mask: 0 00:07:56.746 Protection Information Capabilities: 00:07:56.746 16b Guard Protection Information Storage Tag Support: No 00:07:56.746 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:56.746 Storage Tag Check Read Support: No 00:07:56.746 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:56.746 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:56.746 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:56.746 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:56.746 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:56.746 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:56.746 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:56.746 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:56.746
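The size labels are plain integer arithmetic over the LBA count and the active format's data size: the 12341 namespace above uses LBA format #04 (4096-byte data blocks, no metadata), so 1310720 * 4096 B = 5,368,709,120 B = 5 * 2^30 B, exactly the 5GiB shown; the 12343 FDP namespace's 262144 * 4096 B is exactly 1 GiB, and 12340's 1548666 * 4096 B (about 5.9 GiB) truncates to the same 5GiB label. A quick check in shell arithmetic:

    # namespace capacity in GiB = LBA count * data size / 2^30 (integer division explains the truncation)
    echo $(( 1310720 * 4096 / 1024**3 ))   # -> 5 (exact)
    echo $(( 1548666 * 4096 / 1024**3 ))   # -> 5 (truncated from ~5.9)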
May have multiple subsystem ports: No 00:07:56.746 May have multiple controllers: No 00:07:56.746 Associated with SR-IOV VF: No 00:07:56.746 Max Data Transfer Size: 524288 00:07:56.746 Max Number of Namespaces: 256 00:07:56.746 Max Number of I/O Queues: 64 00:07:56.746 NVMe Specification Version (VS): 1.4 00:07:56.746 NVMe Specification Version (Identify): 1.4 00:07:56.746 Maximum Queue Entries: 2048 00:07:56.746 Contiguous Queues Required: Yes 00:07:56.746 Arbitration Mechanisms Supported 00:07:56.746 Weighted Round Robin: Not Supported 00:07:56.746 Vendor Specific: Not Supported 00:07:56.746 Reset Timeout: 7500 ms 00:07:56.746 Doorbell Stride: 4 bytes 00:07:56.746 NVM Subsystem Reset: Not Supported 00:07:56.746 Command Sets Supported 00:07:56.746 NVM Command Set: Supported 00:07:56.747 Boot Partition: Not Supported 00:07:56.747 Memory Page Size Minimum: 4096 bytes 00:07:56.747 Memory Page Size Maximum: 65536 bytes 00:07:56.747 Persistent Memory Region: Not Supported 00:07:56.747 Optional Asynchronous Events Supported 00:07:56.747 Namespace Attribute Notices: Supported 00:07:56.747 Firmware Activation Notices: Not Supported 00:07:56.747 ANA Change Notices: Not Supported 00:07:56.747 PLE Aggregate Log Change Notices: Not Supported 00:07:56.747 LBA Status Info Alert Notices: Not Supported 00:07:56.747 EGE Aggregate Log Change Notices: Not Supported 00:07:56.747 Normal NVM Subsystem Shutdown event: Not Supported 00:07:56.747 Zone Descriptor Change Notices: Not Supported 00:07:56.747 Discovery Log Change Notices: Not Supported 00:07:56.747 Controller Attributes 00:07:56.747 128-bit Host Identifier: Not Supported 00:07:56.747 Non-Operational Permissive Mode: Not Supported 00:07:56.747 NVM Sets: Not Supported 00:07:56.747 Read Recovery Levels: Not Supported 00:07:56.747 Endurance Groups: Not Supported 00:07:56.747 Predictable Latency Mode: Not Supported 00:07:56.747 Traffic Based Keep ALive: Not Supported 00:07:56.747 Namespace Granularity: Not Supported 00:07:56.747 SQ Associations: Not Supported 00:07:56.747 UUID List: Not Supported 00:07:56.747 Multi-Domain Subsystem: Not Supported 00:07:56.747 Fixed Capacity Management: Not Supported 00:07:56.747 Variable Capacity Management: Not Supported 00:07:56.747 Delete Endurance Group: Not Supported 00:07:56.747 Delete NVM Set: Not Supported 00:07:56.747 Extended LBA Formats Supported: Supported 00:07:56.747 Flexible Data Placement Supported: Not Supported 00:07:56.747 00:07:56.747 Controller Memory Buffer Support 00:07:56.747 ================================ 00:07:56.747 Supported: No 00:07:56.747 00:07:56.747 Persistent Memory Region Support 00:07:56.747 ================================ 00:07:56.747 Supported: No 00:07:56.747 00:07:56.747 Admin Command Set Attributes 00:07:56.747 ============================ 00:07:56.747 Security Send/Receive: Not Supported 00:07:56.747 Format NVM: Supported 00:07:56.747 Firmware Activate/Download: Not Supported 00:07:56.747 Namespace Management: Supported 00:07:56.747 Device Self-Test: Not Supported 00:07:56.747 Directives: Supported 00:07:56.747 NVMe-MI: Not Supported 00:07:56.747 Virtualization Management: Not Supported 00:07:56.747 Doorbell Buffer Config: Supported 00:07:56.747 Get LBA Status Capability: Not Supported 00:07:56.747 Command & Feature Lockdown Capability: Not Supported 00:07:56.747 Abort Command Limit: 4 00:07:56.747 Async Event Request Limit: 4 00:07:56.747 Number of Firmware Slots: N/A 00:07:56.747 Firmware Slot 1 Read-Only: N/A 00:07:56.747 Firmware Activation Without Reset: N/A 00:07:56.747 
Multiple Update Detection Support: N/A 00:07:56.747 Firmware Update Granularity: No Information Provided 00:07:56.747 Per-Namespace SMART Log: Yes 00:07:56.747 Asymmetric Namespace Access Log Page: Not Supported 00:07:56.747 Subsystem NQN: nqn.2019-08.org.qemu:12342 00:07:56.747 Command Effects Log Page: Supported 00:07:56.747 Get Log Page Extended Data: Supported 00:07:56.747 Telemetry Log Pages: Not Supported 00:07:56.747 Persistent Event Log Pages: Not Supported 00:07:56.747 Supported Log Pages Log Page: May Support 00:07:56.747 Commands Supported & Effects Log Page: Not Supported 00:07:56.747 Feature Identifiers & Effects Log Page:May Support 00:07:56.747 NVMe-MI Commands & Effects Log Page: May Support 00:07:56.747 Data Area 4 for Telemetry Log: Not Supported 00:07:56.747 Error Log Page Entries Supported: 1 00:07:56.747 Keep Alive: Not Supported 00:07:56.747 00:07:56.747 NVM Command Set Attributes 00:07:56.747 ========================== 00:07:56.747 Submission Queue Entry Size 00:07:56.747 Max: 64 00:07:56.747 Min: 64 00:07:56.747 Completion Queue Entry Size 00:07:56.747 Max: 16 00:07:56.747 Min: 16 00:07:56.747 Number of Namespaces: 256 00:07:56.747 Compare Command: Supported 00:07:56.747 Write Uncorrectable Command: Not Supported 00:07:56.747 Dataset Management Command: Supported 00:07:56.747 Write Zeroes Command: Supported 00:07:56.747 Set Features Save Field: Supported 00:07:56.747 Reservations: Not Supported 00:07:56.747 Timestamp: Supported 00:07:56.747 Copy: Supported 00:07:56.747 Volatile Write Cache: Present 00:07:56.747 Atomic Write Unit (Normal): 1 00:07:56.747 Atomic Write Unit (PFail): 1 00:07:56.747 Atomic Compare & Write Unit: 1 00:07:56.747 Fused Compare & Write: Not Supported 00:07:56.747 Scatter-Gather List 00:07:56.747 SGL Command Set: Supported 00:07:56.747 SGL Keyed: Not Supported 00:07:56.747 SGL Bit Bucket Descriptor: Not Supported 00:07:56.747 SGL Metadata Pointer: Not Supported 00:07:56.747 Oversized SGL: Not Supported 00:07:56.747 SGL Metadata Address: Not Supported 00:07:56.747 SGL Offset: Not Supported 00:07:56.747 Transport SGL Data Block: Not Supported 00:07:56.747 Replay Protected Memory Block: Not Supported 00:07:56.747 00:07:56.747 Firmware Slot Information 00:07:56.747 ========================= 00:07:56.747 Active slot: 1 00:07:56.747 Slot 1 Firmware Revision: 1.0 00:07:56.747 00:07:56.747 00:07:56.747 Commands Supported and Effects 00:07:56.747 ============================== 00:07:56.747 Admin Commands 00:07:56.747 -------------- 00:07:56.747 Delete I/O Submission Queue (00h): Supported 00:07:56.747 Create I/O Submission Queue (01h): Supported 00:07:56.747 Get Log Page (02h): Supported 00:07:56.747 Delete I/O Completion Queue (04h): Supported 00:07:56.747 Create I/O Completion Queue (05h): Supported 00:07:56.747 Identify (06h): Supported 00:07:56.747 Abort (08h): Supported 00:07:56.747 Set Features (09h): Supported 00:07:56.747 Get Features (0Ah): Supported 00:07:56.747 Asynchronous Event Request (0Ch): Supported 00:07:56.747 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:56.747 Directive Send (19h): Supported 00:07:56.747 Directive Receive (1Ah): Supported 00:07:56.747 Virtualization Management (1Ch): Supported 00:07:56.747 Doorbell Buffer Config (7Ch): Supported 00:07:56.747 Format NVM (80h): Supported LBA-Change 00:07:56.747 I/O Commands 00:07:56.747 ------------ 00:07:56.747 Flush (00h): Supported LBA-Change 00:07:56.747 Write (01h): Supported LBA-Change 00:07:56.747 Read (02h): Supported 00:07:56.747 Compare (05h): Supported 
00:07:56.747 Write Zeroes (08h): Supported LBA-Change 00:07:56.747 Dataset Management (09h): Supported LBA-Change 00:07:56.747 Unknown (0Ch): Supported 00:07:56.747 Unknown (12h): Supported 00:07:56.747 Copy (19h): Supported LBA-Change 00:07:56.747 Unknown (1Dh): Supported LBA-Change 00:07:56.747 00:07:56.747 Error Log 00:07:56.747 ========= 00:07:56.747 00:07:56.747 Arbitration 00:07:56.748 =========== 00:07:56.748 Arbitration Burst: no limit 00:07:56.748 00:07:56.748 Power Management 00:07:56.748 ================ 00:07:56.748 Number of Power States: 1 00:07:56.748 Current Power State: Power State #0 00:07:56.748 Power State #0: 00:07:56.748 Max Power: 25.00 W 00:07:56.748 Non-Operational State: Operational 00:07:56.748 Entry Latency: 16 microseconds 00:07:56.748 Exit Latency: 4 microseconds 00:07:56.748 Relative Read Throughput: 0 00:07:56.748 Relative Read Latency: 0 00:07:56.748 Relative Write Throughput: 0 00:07:56.748 Relative Write Latency: 0 00:07:56.748 Idle Power: Not Reported 00:07:56.748 Active Power: Not Reported 00:07:56.748 Non-Operational Permissive Mode: Not Supported 00:07:56.748 00:07:56.748 Health Information 00:07:56.748 ================== 00:07:56.748 Critical Warnings: 00:07:56.748 Available Spare Space: OK 00:07:56.748 Temperature: OK 00:07:56.748 Device Reliability: OK 00:07:56.748 Read Only: No 00:07:56.748 Volatile Memory Backup: OK 00:07:56.748 Current Temperature: 323 Kelvin (50 Celsius) 00:07:56.748 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:56.748 Available Spare: 0% 00:07:56.748 Available Spare Threshold: 0% 00:07:56.748 Life Percentage Used: 0% 00:07:56.748 Data Units Read: 2035 00:07:56.748 Data Units Written: 1822 00:07:56.748 Host Read Commands: 101937 00:07:56.748 Host Write Commands: 100206 00:07:56.748 Controller Busy Time: 0 minutes 00:07:56.748 Power Cycles: 0 00:07:56.748 Power On Hours: 0 hours 00:07:56.748 Unsafe Shutdowns: 0 00:07:56.748 Unrecoverable Media Errors: 0 00:07:56.748 Lifetime Error Log Entries: 0 00:07:56.748 Warning Temperature Time: 0 minutes 00:07:56.748 Critical Temperature Time: 0 minutes 00:07:56.748 00:07:56.748 Number of Queues 00:07:56.748 ================ 00:07:56.748 Number of I/O Submission Queues: 64 00:07:56.748 Number of I/O Completion Queues: 64 00:07:56.748 00:07:56.748 ZNS Specific Controller Data 00:07:56.748 ============================ 00:07:56.748 Zone Append Size Limit: 0 00:07:56.748 00:07:56.748 00:07:56.748 Active Namespaces 00:07:56.748 ================= 00:07:56.748 Namespace ID:1 00:07:56.748 Error Recovery Timeout: Unlimited 00:07:56.748 Command Set Identifier: NVM (00h) 00:07:56.748 Deallocate: Supported 00:07:56.748 Deallocated/Unwritten Error: Supported 00:07:56.748 Deallocated Read Value: All 0x00 00:07:56.748 Deallocate in Write Zeroes: Not Supported 00:07:56.748 Deallocated Guard Field: 0xFFFF 00:07:56.748 Flush: Supported 00:07:56.748 Reservation: Not Supported 00:07:56.748 Namespace Sharing Capabilities: Private 00:07:56.748 Size (in LBAs): 1048576 (4GiB) 00:07:56.748 Capacity (in LBAs): 1048576 (4GiB) 00:07:56.748 Utilization (in LBAs): 1048576 (4GiB) 00:07:56.748 Thin Provisioning: Not Supported 00:07:56.748 Per-NS Atomic Units: No 00:07:56.748 Maximum Single Source Range Length: 128 00:07:56.748 Maximum Copy Length: 128 00:07:56.748 Maximum Source Range Count: 128 00:07:56.748 NGUID/EUI64 Never Reused: No 00:07:56.748 Namespace Write Protected: No 00:07:56.748 Number of LBA Formats: 8 00:07:56.748 Current LBA Format: LBA Format #04 00:07:56.748 LBA Format #00: Data Size: 512 
Metadata Size: 0 00:07:56.748 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:56.748 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:56.748 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:56.748 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:56.748 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:56.748 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:56.748 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:56.748 00:07:56.748 NVM Specific Namespace Data 00:07:56.748 =========================== 00:07:56.748 Logical Block Storage Tag Mask: 0 00:07:56.748 Protection Information Capabilities: 00:07:56.748 16b Guard Protection Information Storage Tag Support: No 00:07:56.748 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:56.748 Storage Tag Check Read Support: No 00:07:56.748 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:56.748 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:56.748 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:56.748 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:56.748 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:56.748 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:56.748 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:56.748 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:56.748 Namespace ID:2 00:07:56.748 Error Recovery Timeout: Unlimited 00:07:56.748 Command Set Identifier: NVM (00h) 00:07:56.748 Deallocate: Supported 00:07:56.748 Deallocated/Unwritten Error: Supported 00:07:56.748 Deallocated Read Value: All 0x00 00:07:56.748 Deallocate in Write Zeroes: Not Supported 00:07:56.748 Deallocated Guard Field: 0xFFFF 00:07:56.748 Flush: Supported 00:07:56.748 Reservation: Not Supported 00:07:56.748 Namespace Sharing Capabilities: Private 00:07:56.748 Size (in LBAs): 1048576 (4GiB) 00:07:56.748 Capacity (in LBAs): 1048576 (4GiB) 00:07:56.748 Utilization (in LBAs): 1048576 (4GiB) 00:07:56.748 Thin Provisioning: Not Supported 00:07:56.748 Per-NS Atomic Units: No 00:07:56.748 Maximum Single Source Range Length: 128 00:07:56.748 Maximum Copy Length: 128 00:07:56.748 Maximum Source Range Count: 128 00:07:56.748 NGUID/EUI64 Never Reused: No 00:07:56.748 Namespace Write Protected: No 00:07:56.748 Number of LBA Formats: 8 00:07:56.748 Current LBA Format: LBA Format #04 00:07:56.748 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:56.748 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:56.748 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:56.748 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:56.748 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:56.748 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:56.748 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:56.748 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:56.748 00:07:56.748 NVM Specific Namespace Data 00:07:56.748 =========================== 00:07:56.748 Logical Block Storage Tag Mask: 0 00:07:56.748 Protection Information Capabilities: 00:07:56.748 16b Guard Protection Information Storage Tag Support: No 00:07:56.748 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 
00:07:56.748 Storage Tag Check Read Support: No 00:07:56.748 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:56.748 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:56.748 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:56.748 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:56.748 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:56.749 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:56.749 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:56.749 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:56.749 Namespace ID:3 00:07:56.749 Error Recovery Timeout: Unlimited 00:07:56.749 Command Set Identifier: NVM (00h) 00:07:56.749 Deallocate: Supported 00:07:56.749 Deallocated/Unwritten Error: Supported 00:07:56.749 Deallocated Read Value: All 0x00 00:07:56.749 Deallocate in Write Zeroes: Not Supported 00:07:56.749 Deallocated Guard Field: 0xFFFF 00:07:56.749 Flush: Supported 00:07:56.749 Reservation: Not Supported 00:07:56.749 Namespace Sharing Capabilities: Private 00:07:56.749 Size (in LBAs): 1048576 (4GiB) 00:07:56.749 Capacity (in LBAs): 1048576 (4GiB) 00:07:56.749 Utilization (in LBAs): 1048576 (4GiB) 00:07:56.749 Thin Provisioning: Not Supported 00:07:56.749 Per-NS Atomic Units: No 00:07:56.749 Maximum Single Source Range Length: 128 00:07:56.749 Maximum Copy Length: 128 00:07:56.749 Maximum Source Range Count: 128 00:07:56.749 NGUID/EUI64 Never Reused: No 00:07:56.749 Namespace Write Protected: No 00:07:56.749 Number of LBA Formats: 8 00:07:56.749 Current LBA Format: LBA Format #04 00:07:56.749 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:56.749 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:56.749 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:56.749 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:56.749 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:56.749 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:56.749 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:56.749 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:56.749 00:07:56.749 NVM Specific Namespace Data 00:07:56.749 =========================== 00:07:56.749 Logical Block Storage Tag Mask: 0 00:07:56.749 Protection Information Capabilities: 00:07:56.749 16b Guard Protection Information Storage Tag Support: No 00:07:56.749 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:56.749 Storage Tag Check Read Support: No 00:07:56.749 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:56.749 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:56.749 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:56.749 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:56.749 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:56.749 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:56.749 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:56.749 Extended LBA Format #07: 
Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:57.015 18:58:14 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:07:57.015 18:58:14 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' -i 0 00:07:57.015 ===================================================== 00:07:57.015 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:07:57.015 ===================================================== 00:07:57.015 Controller Capabilities/Features 00:07:57.015 ================================ 00:07:57.015 Vendor ID: 1b36 00:07:57.015 Subsystem Vendor ID: 1af4 00:07:57.015 Serial Number: 12340 00:07:57.015 Model Number: QEMU NVMe Ctrl 00:07:57.015 Firmware Version: 8.0.0 00:07:57.015 Recommended Arb Burst: 6 00:07:57.015 IEEE OUI Identifier: 00 54 52 00:07:57.015 Multi-path I/O 00:07:57.015 May have multiple subsystem ports: No 00:07:57.015 May have multiple controllers: No 00:07:57.015 Associated with SR-IOV VF: No 00:07:57.015 Max Data Transfer Size: 524288 00:07:57.015 Max Number of Namespaces: 256 00:07:57.015 Max Number of I/O Queues: 64 00:07:57.015 NVMe Specification Version (VS): 1.4 00:07:57.015 NVMe Specification Version (Identify): 1.4 00:07:57.015 Maximum Queue Entries: 2048 00:07:57.015 Contiguous Queues Required: Yes 00:07:57.015 Arbitration Mechanisms Supported 00:07:57.015 Weighted Round Robin: Not Supported 00:07:57.015 Vendor Specific: Not Supported 00:07:57.015 Reset Timeout: 7500 ms 00:07:57.015 Doorbell Stride: 4 bytes 00:07:57.015 NVM Subsystem Reset: Not Supported 00:07:57.015 Command Sets Supported 00:07:57.015 NVM Command Set: Supported 00:07:57.015 Boot Partition: Not Supported 00:07:57.015 Memory Page Size Minimum: 4096 bytes 00:07:57.015 Memory Page Size Maximum: 65536 bytes 00:07:57.015 Persistent Memory Region: Not Supported 00:07:57.015 Optional Asynchronous Events Supported 00:07:57.015 Namespace Attribute Notices: Supported 00:07:57.015 Firmware Activation Notices: Not Supported 00:07:57.015 ANA Change Notices: Not Supported 00:07:57.015 PLE Aggregate Log Change Notices: Not Supported 00:07:57.015 LBA Status Info Alert Notices: Not Supported 00:07:57.015 EGE Aggregate Log Change Notices: Not Supported 00:07:57.015 Normal NVM Subsystem Shutdown event: Not Supported 00:07:57.015 Zone Descriptor Change Notices: Not Supported 00:07:57.015 Discovery Log Change Notices: Not Supported 00:07:57.015 Controller Attributes 00:07:57.015 128-bit Host Identifier: Not Supported 00:07:57.015 Non-Operational Permissive Mode: Not Supported 00:07:57.015 NVM Sets: Not Supported 00:07:57.015 Read Recovery Levels: Not Supported 00:07:57.015 Endurance Groups: Not Supported 00:07:57.015 Predictable Latency Mode: Not Supported 00:07:57.015 Traffic Based Keep ALive: Not Supported 00:07:57.015 Namespace Granularity: Not Supported 00:07:57.015 SQ Associations: Not Supported 00:07:57.015 UUID List: Not Supported 00:07:57.015 Multi-Domain Subsystem: Not Supported 00:07:57.015 Fixed Capacity Management: Not Supported 00:07:57.015 Variable Capacity Management: Not Supported 00:07:57.015 Delete Endurance Group: Not Supported 00:07:57.015 Delete NVM Set: Not Supported 00:07:57.015 Extended LBA Formats Supported: Supported 00:07:57.015 Flexible Data Placement Supported: Not Supported 00:07:57.015 00:07:57.015 Controller Memory Buffer Support 00:07:57.015 ================================ 00:07:57.015 Supported: No 00:07:57.015 00:07:57.015 Persistent Memory Region Support 00:07:57.015 
================================ 00:07:57.015 Supported: No 00:07:57.015 00:07:57.015 Admin Command Set Attributes 00:07:57.015 ============================ 00:07:57.015 Security Send/Receive: Not Supported 00:07:57.015 Format NVM: Supported 00:07:57.015 Firmware Activate/Download: Not Supported 00:07:57.015 Namespace Management: Supported 00:07:57.015 Device Self-Test: Not Supported 00:07:57.015 Directives: Supported 00:07:57.015 NVMe-MI: Not Supported 00:07:57.015 Virtualization Management: Not Supported 00:07:57.015 Doorbell Buffer Config: Supported 00:07:57.015 Get LBA Status Capability: Not Supported 00:07:57.015 Command & Feature Lockdown Capability: Not Supported 00:07:57.015 Abort Command Limit: 4 00:07:57.015 Async Event Request Limit: 4 00:07:57.015 Number of Firmware Slots: N/A 00:07:57.015 Firmware Slot 1 Read-Only: N/A 00:07:57.015 Firmware Activation Without Reset: N/A 00:07:57.015 Multiple Update Detection Support: N/A 00:07:57.015 Firmware Update Granularity: No Information Provided 00:07:57.015 Per-Namespace SMART Log: Yes 00:07:57.015 Asymmetric Namespace Access Log Page: Not Supported 00:07:57.015 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:07:57.015 Command Effects Log Page: Supported 00:07:57.015 Get Log Page Extended Data: Supported 00:07:57.015 Telemetry Log Pages: Not Supported 00:07:57.015 Persistent Event Log Pages: Not Supported 00:07:57.015 Supported Log Pages Log Page: May Support 00:07:57.015 Commands Supported & Effects Log Page: Not Supported 00:07:57.015 Feature Identifiers & Effects Log Page:May Support 00:07:57.015 NVMe-MI Commands & Effects Log Page: May Support 00:07:57.015 Data Area 4 for Telemetry Log: Not Supported 00:07:57.015 Error Log Page Entries Supported: 1 00:07:57.015 Keep Alive: Not Supported 00:07:57.015 00:07:57.015 NVM Command Set Attributes 00:07:57.015 ========================== 00:07:57.015 Submission Queue Entry Size 00:07:57.015 Max: 64 00:07:57.015 Min: 64 00:07:57.015 Completion Queue Entry Size 00:07:57.015 Max: 16 00:07:57.015 Min: 16 00:07:57.015 Number of Namespaces: 256 00:07:57.015 Compare Command: Supported 00:07:57.015 Write Uncorrectable Command: Not Supported 00:07:57.015 Dataset Management Command: Supported 00:07:57.015 Write Zeroes Command: Supported 00:07:57.015 Set Features Save Field: Supported 00:07:57.015 Reservations: Not Supported 00:07:57.015 Timestamp: Supported 00:07:57.015 Copy: Supported 00:07:57.015 Volatile Write Cache: Present 00:07:57.015 Atomic Write Unit (Normal): 1 00:07:57.015 Atomic Write Unit (PFail): 1 00:07:57.015 Atomic Compare & Write Unit: 1 00:07:57.015 Fused Compare & Write: Not Supported 00:07:57.015 Scatter-Gather List 00:07:57.015 SGL Command Set: Supported 00:07:57.015 SGL Keyed: Not Supported 00:07:57.015 SGL Bit Bucket Descriptor: Not Supported 00:07:57.015 SGL Metadata Pointer: Not Supported 00:07:57.015 Oversized SGL: Not Supported 00:07:57.015 SGL Metadata Address: Not Supported 00:07:57.015 SGL Offset: Not Supported 00:07:57.015 Transport SGL Data Block: Not Supported 00:07:57.015 Replay Protected Memory Block: Not Supported 00:07:57.015 00:07:57.015 Firmware Slot Information 00:07:57.015 ========================= 00:07:57.015 Active slot: 1 00:07:57.015 Slot 1 Firmware Revision: 1.0 00:07:57.015 00:07:57.016 00:07:57.016 Commands Supported and Effects 00:07:57.016 ============================== 00:07:57.016 Admin Commands 00:07:57.016 -------------- 00:07:57.016 Delete I/O Submission Queue (00h): Supported 00:07:57.016 Create I/O Submission Queue (01h): Supported 00:07:57.016 
Get Log Page (02h): Supported 00:07:57.016 Delete I/O Completion Queue (04h): Supported 00:07:57.016 Create I/O Completion Queue (05h): Supported 00:07:57.016 Identify (06h): Supported 00:07:57.016 Abort (08h): Supported 00:07:57.016 Set Features (09h): Supported 00:07:57.016 Get Features (0Ah): Supported 00:07:57.016 Asynchronous Event Request (0Ch): Supported 00:07:57.016 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:57.016 Directive Send (19h): Supported 00:07:57.016 Directive Receive (1Ah): Supported 00:07:57.016 Virtualization Management (1Ch): Supported 00:07:57.016 Doorbell Buffer Config (7Ch): Supported 00:07:57.016 Format NVM (80h): Supported LBA-Change 00:07:57.016 I/O Commands 00:07:57.016 ------------ 00:07:57.016 Flush (00h): Supported LBA-Change 00:07:57.016 Write (01h): Supported LBA-Change 00:07:57.016 Read (02h): Supported 00:07:57.016 Compare (05h): Supported 00:07:57.016 Write Zeroes (08h): Supported LBA-Change 00:07:57.016 Dataset Management (09h): Supported LBA-Change 00:07:57.016 Unknown (0Ch): Supported 00:07:57.016 Unknown (12h): Supported 00:07:57.016 Copy (19h): Supported LBA-Change 00:07:57.016 Unknown (1Dh): Supported LBA-Change 00:07:57.016 00:07:57.016 Error Log 00:07:57.016 ========= 00:07:57.016 00:07:57.016 Arbitration 00:07:57.016 =========== 00:07:57.016 Arbitration Burst: no limit 00:07:57.016 00:07:57.016 Power Management 00:07:57.016 ================ 00:07:57.016 Number of Power States: 1 00:07:57.016 Current Power State: Power State #0 00:07:57.016 Power State #0: 00:07:57.016 Max Power: 25.00 W 00:07:57.016 Non-Operational State: Operational 00:07:57.016 Entry Latency: 16 microseconds 00:07:57.016 Exit Latency: 4 microseconds 00:07:57.016 Relative Read Throughput: 0 00:07:57.016 Relative Read Latency: 0 00:07:57.016 Relative Write Throughput: 0 00:07:57.016 Relative Write Latency: 0 00:07:57.016 Idle Power: Not Reported 00:07:57.016 Active Power: Not Reported 00:07:57.016 Non-Operational Permissive Mode: Not Supported 00:07:57.016 00:07:57.016 Health Information 00:07:57.016 ================== 00:07:57.016 Critical Warnings: 00:07:57.016 Available Spare Space: OK 00:07:57.016 Temperature: OK 00:07:57.016 Device Reliability: OK 00:07:57.016 Read Only: No 00:07:57.016 Volatile Memory Backup: OK 00:07:57.016 Current Temperature: 323 Kelvin (50 Celsius) 00:07:57.016 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:57.016 Available Spare: 0% 00:07:57.016 Available Spare Threshold: 0% 00:07:57.016 Life Percentage Used: 0% 00:07:57.016 Data Units Read: 638 00:07:57.016 Data Units Written: 566 00:07:57.016 Host Read Commands: 33262 00:07:57.016 Host Write Commands: 33048 00:07:57.016 Controller Busy Time: 0 minutes 00:07:57.016 Power Cycles: 0 00:07:57.016 Power On Hours: 0 hours 00:07:57.016 Unsafe Shutdowns: 0 00:07:57.016 Unrecoverable Media Errors: 0 00:07:57.016 Lifetime Error Log Entries: 0 00:07:57.016 Warning Temperature Time: 0 minutes 00:07:57.016 Critical Temperature Time: 0 minutes 00:07:57.016 00:07:57.016 Number of Queues 00:07:57.016 ================ 00:07:57.016 Number of I/O Submission Queues: 64 00:07:57.016 Number of I/O Completion Queues: 64 00:07:57.016 00:07:57.016 ZNS Specific Controller Data 00:07:57.016 ============================ 00:07:57.016 Zone Append Size Limit: 0 00:07:57.016 00:07:57.016 00:07:57.016 Active Namespaces 00:07:57.016 ================= 00:07:57.016 Namespace ID:1 00:07:57.016 Error Recovery Timeout: Unlimited 00:07:57.016 Command Set Identifier: NVM (00h) 00:07:57.016 Deallocate: Supported 
00:07:57.016 Deallocated/Unwritten Error: Supported 00:07:57.016 Deallocated Read Value: All 0x00 00:07:57.016 Deallocate in Write Zeroes: Not Supported 00:07:57.016 Deallocated Guard Field: 0xFFFF 00:07:57.016 Flush: Supported 00:07:57.016 Reservation: Not Supported 00:07:57.016 Metadata Transferred as: Separate Metadata Buffer 00:07:57.016 Namespace Sharing Capabilities: Private 00:07:57.016 Size (in LBAs): 1548666 (5GiB) 00:07:57.016 Capacity (in LBAs): 1548666 (5GiB) 00:07:57.016 Utilization (in LBAs): 1548666 (5GiB) 00:07:57.016 Thin Provisioning: Not Supported 00:07:57.016 Per-NS Atomic Units: No 00:07:57.016 Maximum Single Source Range Length: 128 00:07:57.016 Maximum Copy Length: 128 00:07:57.016 Maximum Source Range Count: 128 00:07:57.016 NGUID/EUI64 Never Reused: No 00:07:57.016 Namespace Write Protected: No 00:07:57.016 Number of LBA Formats: 8 00:07:57.016 Current LBA Format: LBA Format #07 00:07:57.016 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:57.016 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:57.016 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:57.016 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:57.016 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:57.016 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:57.016 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:57.016 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:57.016 00:07:57.016 NVM Specific Namespace Data 00:07:57.016 =========================== 00:07:57.016 Logical Block Storage Tag Mask: 0 00:07:57.016 Protection Information Capabilities: 00:07:57.016 16b Guard Protection Information Storage Tag Support: No 00:07:57.016 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:57.016 Storage Tag Check Read Support: No 00:07:57.016 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:57.016 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:57.016 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:57.016 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:57.016 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:57.016 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:57.016 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:57.016 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:57.016 18:58:14 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:07:57.016 18:58:14 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' -i 0 00:07:57.302 ===================================================== 00:07:57.302 NVMe Controller at 0000:00:11.0 [1b36:0010] 00:07:57.302 ===================================================== 00:07:57.302 Controller Capabilities/Features 00:07:57.302 ================================ 00:07:57.302 Vendor ID: 1b36 00:07:57.302 Subsystem Vendor ID: 1af4 00:07:57.302 Serial Number: 12341 00:07:57.302 Model Number: QEMU NVMe Ctrl 00:07:57.302 Firmware Version: 8.0.0 00:07:57.302 Recommended Arb Burst: 6 00:07:57.302 IEEE OUI Identifier: 00 54 52 00:07:57.302 Multi-path I/O 00:07:57.302 May have multiple subsystem ports: No 00:07:57.302 May have multiple 
controllers: No 00:07:57.302 Associated with SR-IOV VF: No 00:07:57.302 Max Data Transfer Size: 524288 00:07:57.302 Max Number of Namespaces: 256 00:07:57.302 Max Number of I/O Queues: 64 00:07:57.302 NVMe Specification Version (VS): 1.4 00:07:57.302 NVMe Specification Version (Identify): 1.4 00:07:57.302 Maximum Queue Entries: 2048 00:07:57.302 Contiguous Queues Required: Yes 00:07:57.302 Arbitration Mechanisms Supported 00:07:57.302 Weighted Round Robin: Not Supported 00:07:57.302 Vendor Specific: Not Supported 00:07:57.302 Reset Timeout: 7500 ms 00:07:57.302 Doorbell Stride: 4 bytes 00:07:57.302 NVM Subsystem Reset: Not Supported 00:07:57.302 Command Sets Supported 00:07:57.302 NVM Command Set: Supported 00:07:57.302 Boot Partition: Not Supported 00:07:57.302 Memory Page Size Minimum: 4096 bytes 00:07:57.302 Memory Page Size Maximum: 65536 bytes 00:07:57.302 Persistent Memory Region: Not Supported 00:07:57.302 Optional Asynchronous Events Supported 00:07:57.302 Namespace Attribute Notices: Supported 00:07:57.302 Firmware Activation Notices: Not Supported 00:07:57.302 ANA Change Notices: Not Supported 00:07:57.302 PLE Aggregate Log Change Notices: Not Supported 00:07:57.302 LBA Status Info Alert Notices: Not Supported 00:07:57.302 EGE Aggregate Log Change Notices: Not Supported 00:07:57.302 Normal NVM Subsystem Shutdown event: Not Supported 00:07:57.302 Zone Descriptor Change Notices: Not Supported 00:07:57.302 Discovery Log Change Notices: Not Supported 00:07:57.303 Controller Attributes 00:07:57.303 128-bit Host Identifier: Not Supported 00:07:57.303 Non-Operational Permissive Mode: Not Supported 00:07:57.303 NVM Sets: Not Supported 00:07:57.303 Read Recovery Levels: Not Supported 00:07:57.303 Endurance Groups: Not Supported 00:07:57.303 Predictable Latency Mode: Not Supported 00:07:57.303 Traffic Based Keep ALive: Not Supported 00:07:57.303 Namespace Granularity: Not Supported 00:07:57.303 SQ Associations: Not Supported 00:07:57.303 UUID List: Not Supported 00:07:57.303 Multi-Domain Subsystem: Not Supported 00:07:57.303 Fixed Capacity Management: Not Supported 00:07:57.303 Variable Capacity Management: Not Supported 00:07:57.303 Delete Endurance Group: Not Supported 00:07:57.303 Delete NVM Set: Not Supported 00:07:57.303 Extended LBA Formats Supported: Supported 00:07:57.303 Flexible Data Placement Supported: Not Supported 00:07:57.303 00:07:57.303 Controller Memory Buffer Support 00:07:57.303 ================================ 00:07:57.303 Supported: No 00:07:57.303 00:07:57.303 Persistent Memory Region Support 00:07:57.303 ================================ 00:07:57.303 Supported: No 00:07:57.303 00:07:57.303 Admin Command Set Attributes 00:07:57.303 ============================ 00:07:57.303 Security Send/Receive: Not Supported 00:07:57.303 Format NVM: Supported 00:07:57.303 Firmware Activate/Download: Not Supported 00:07:57.303 Namespace Management: Supported 00:07:57.303 Device Self-Test: Not Supported 00:07:57.303 Directives: Supported 00:07:57.303 NVMe-MI: Not Supported 00:07:57.303 Virtualization Management: Not Supported 00:07:57.303 Doorbell Buffer Config: Supported 00:07:57.303 Get LBA Status Capability: Not Supported 00:07:57.303 Command & Feature Lockdown Capability: Not Supported 00:07:57.303 Abort Command Limit: 4 00:07:57.303 Async Event Request Limit: 4 00:07:57.303 Number of Firmware Slots: N/A 00:07:57.303 Firmware Slot 1 Read-Only: N/A 00:07:57.303 Firmware Activation Without Reset: N/A 00:07:57.303 Multiple Update Detection Support: N/A 00:07:57.303 Firmware Update 
Granularity: No Information Provided 00:07:57.303 Per-Namespace SMART Log: Yes 00:07:57.303 Asymmetric Namespace Access Log Page: Not Supported 00:07:57.303 Subsystem NQN: nqn.2019-08.org.qemu:12341 00:07:57.303 Command Effects Log Page: Supported 00:07:57.303 Get Log Page Extended Data: Supported 00:07:57.303 Telemetry Log Pages: Not Supported 00:07:57.303 Persistent Event Log Pages: Not Supported 00:07:57.303 Supported Log Pages Log Page: May Support 00:07:57.303 Commands Supported & Effects Log Page: Not Supported 00:07:57.303 Feature Identifiers & Effects Log Page:May Support 00:07:57.303 NVMe-MI Commands & Effects Log Page: May Support 00:07:57.303 Data Area 4 for Telemetry Log: Not Supported 00:07:57.303 Error Log Page Entries Supported: 1 00:07:57.303 Keep Alive: Not Supported 00:07:57.303 00:07:57.303 NVM Command Set Attributes 00:07:57.303 ========================== 00:07:57.303 Submission Queue Entry Size 00:07:57.303 Max: 64 00:07:57.303 Min: 64 00:07:57.303 Completion Queue Entry Size 00:07:57.303 Max: 16 00:07:57.303 Min: 16 00:07:57.303 Number of Namespaces: 256 00:07:57.303 Compare Command: Supported 00:07:57.303 Write Uncorrectable Command: Not Supported 00:07:57.303 Dataset Management Command: Supported 00:07:57.303 Write Zeroes Command: Supported 00:07:57.303 Set Features Save Field: Supported 00:07:57.303 Reservations: Not Supported 00:07:57.303 Timestamp: Supported 00:07:57.303 Copy: Supported 00:07:57.303 Volatile Write Cache: Present 00:07:57.303 Atomic Write Unit (Normal): 1 00:07:57.303 Atomic Write Unit (PFail): 1 00:07:57.303 Atomic Compare & Write Unit: 1 00:07:57.303 Fused Compare & Write: Not Supported 00:07:57.303 Scatter-Gather List 00:07:57.303 SGL Command Set: Supported 00:07:57.303 SGL Keyed: Not Supported 00:07:57.303 SGL Bit Bucket Descriptor: Not Supported 00:07:57.303 SGL Metadata Pointer: Not Supported 00:07:57.303 Oversized SGL: Not Supported 00:07:57.303 SGL Metadata Address: Not Supported 00:07:57.303 SGL Offset: Not Supported 00:07:57.303 Transport SGL Data Block: Not Supported 00:07:57.303 Replay Protected Memory Block: Not Supported 00:07:57.303 00:07:57.303 Firmware Slot Information 00:07:57.303 ========================= 00:07:57.303 Active slot: 1 00:07:57.303 Slot 1 Firmware Revision: 1.0 00:07:57.303 00:07:57.303 00:07:57.303 Commands Supported and Effects 00:07:57.303 ============================== 00:07:57.303 Admin Commands 00:07:57.303 -------------- 00:07:57.303 Delete I/O Submission Queue (00h): Supported 00:07:57.303 Create I/O Submission Queue (01h): Supported 00:07:57.303 Get Log Page (02h): Supported 00:07:57.303 Delete I/O Completion Queue (04h): Supported 00:07:57.303 Create I/O Completion Queue (05h): Supported 00:07:57.303 Identify (06h): Supported 00:07:57.303 Abort (08h): Supported 00:07:57.303 Set Features (09h): Supported 00:07:57.303 Get Features (0Ah): Supported 00:07:57.303 Asynchronous Event Request (0Ch): Supported 00:07:57.303 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:57.303 Directive Send (19h): Supported 00:07:57.303 Directive Receive (1Ah): Supported 00:07:57.303 Virtualization Management (1Ch): Supported 00:07:57.303 Doorbell Buffer Config (7Ch): Supported 00:07:57.303 Format NVM (80h): Supported LBA-Change 00:07:57.303 I/O Commands 00:07:57.303 ------------ 00:07:57.303 Flush (00h): Supported LBA-Change 00:07:57.303 Write (01h): Supported LBA-Change 00:07:57.303 Read (02h): Supported 00:07:57.303 Compare (05h): Supported 00:07:57.303 Write Zeroes (08h): Supported LBA-Change 00:07:57.303 
Dataset Management (09h): Supported LBA-Change 00:07:57.303 Unknown (0Ch): Supported 00:07:57.303 Unknown (12h): Supported 00:07:57.303 Copy (19h): Supported LBA-Change 00:07:57.303 Unknown (1Dh): Supported LBA-Change 00:07:57.303 00:07:57.303 Error Log 00:07:57.303 ========= 00:07:57.303 00:07:57.303 Arbitration 00:07:57.303 =========== 00:07:57.303 Arbitration Burst: no limit 00:07:57.303 00:07:57.303 Power Management 00:07:57.303 ================ 00:07:57.303 Number of Power States: 1 00:07:57.303 Current Power State: Power State #0 00:07:57.303 Power State #0: 00:07:57.303 Max Power: 25.00 W 00:07:57.303 Non-Operational State: Operational 00:07:57.303 Entry Latency: 16 microseconds 00:07:57.303 Exit Latency: 4 microseconds 00:07:57.303 Relative Read Throughput: 0 00:07:57.303 Relative Read Latency: 0 00:07:57.303 Relative Write Throughput: 0 00:07:57.303 Relative Write Latency: 0 00:07:57.303 Idle Power: Not Reported 00:07:57.303 Active Power: Not Reported 00:07:57.303 Non-Operational Permissive Mode: Not Supported 00:07:57.303 00:07:57.303 Health Information 00:07:57.303 ================== 00:07:57.303 Critical Warnings: 00:07:57.303 Available Spare Space: OK 00:07:57.303 Temperature: OK 00:07:57.303 Device Reliability: OK 00:07:57.303 Read Only: No 00:07:57.303 Volatile Memory Backup: OK 00:07:57.303 Current Temperature: 323 Kelvin (50 Celsius) 00:07:57.303 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:57.303 Available Spare: 0% 00:07:57.303 Available Spare Threshold: 0% 00:07:57.303 Life Percentage Used: 0% 00:07:57.303 Data Units Read: 983 00:07:57.303 Data Units Written: 851 00:07:57.303 Host Read Commands: 49997 00:07:57.303 Host Write Commands: 48775 00:07:57.303 Controller Busy Time: 0 minutes 00:07:57.303 Power Cycles: 0 00:07:57.303 Power On Hours: 0 hours 00:07:57.303 Unsafe Shutdowns: 0 00:07:57.303 Unrecoverable Media Errors: 0 00:07:57.303 Lifetime Error Log Entries: 0 00:07:57.303 Warning Temperature Time: 0 minutes 00:07:57.303 Critical Temperature Time: 0 minutes 00:07:57.303 00:07:57.303 Number of Queues 00:07:57.303 ================ 00:07:57.303 Number of I/O Submission Queues: 64 00:07:57.303 Number of I/O Completion Queues: 64 00:07:57.303 00:07:57.303 ZNS Specific Controller Data 00:07:57.303 ============================ 00:07:57.303 Zone Append Size Limit: 0 00:07:57.303 00:07:57.303 00:07:57.303 Active Namespaces 00:07:57.303 ================= 00:07:57.303 Namespace ID:1 00:07:57.303 Error Recovery Timeout: Unlimited 00:07:57.303 Command Set Identifier: NVM (00h) 00:07:57.303 Deallocate: Supported 00:07:57.303 Deallocated/Unwritten Error: Supported 00:07:57.303 Deallocated Read Value: All 0x00 00:07:57.303 Deallocate in Write Zeroes: Not Supported 00:07:57.303 Deallocated Guard Field: 0xFFFF 00:07:57.303 Flush: Supported 00:07:57.303 Reservation: Not Supported 00:07:57.303 Namespace Sharing Capabilities: Private 00:07:57.303 Size (in LBAs): 1310720 (5GiB) 00:07:57.304 Capacity (in LBAs): 1310720 (5GiB) 00:07:57.304 Utilization (in LBAs): 1310720 (5GiB) 00:07:57.304 Thin Provisioning: Not Supported 00:07:57.304 Per-NS Atomic Units: No 00:07:57.304 Maximum Single Source Range Length: 128 00:07:57.304 Maximum Copy Length: 128 00:07:57.304 Maximum Source Range Count: 128 00:07:57.304 NGUID/EUI64 Never Reused: No 00:07:57.304 Namespace Write Protected: No 00:07:57.304 Number of LBA Formats: 8 00:07:57.304 Current LBA Format: LBA Format #04 00:07:57.304 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:57.304 LBA Format #01: Data Size: 512 Metadata Size: 8 
00:07:57.304 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:57.304 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:57.304 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:57.304 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:57.304 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:57.304 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:57.304 00:07:57.304 NVM Specific Namespace Data 00:07:57.304 =========================== 00:07:57.304 Logical Block Storage Tag Mask: 0 00:07:57.304 Protection Information Capabilities: 00:07:57.304 16b Guard Protection Information Storage Tag Support: No 00:07:57.304 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:57.304 Storage Tag Check Read Support: No 00:07:57.304 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:57.304 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:57.304 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:57.304 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:57.304 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:57.304 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:57.304 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:57.304 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:57.304 18:58:14 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:07:57.304 18:58:14 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' -i 0 00:07:57.568 ===================================================== 00:07:57.568 NVMe Controller at 0000:00:12.0 [1b36:0010] 00:07:57.568 ===================================================== 00:07:57.568 Controller Capabilities/Features 00:07:57.568 ================================ 00:07:57.568 Vendor ID: 1b36 00:07:57.568 Subsystem Vendor ID: 1af4 00:07:57.568 Serial Number: 12342 00:07:57.568 Model Number: QEMU NVMe Ctrl 00:07:57.568 Firmware Version: 8.0.0 00:07:57.568 Recommended Arb Burst: 6 00:07:57.568 IEEE OUI Identifier: 00 54 52 00:07:57.568 Multi-path I/O 00:07:57.568 May have multiple subsystem ports: No 00:07:57.568 May have multiple controllers: No 00:07:57.568 Associated with SR-IOV VF: No 00:07:57.568 Max Data Transfer Size: 524288 00:07:57.568 Max Number of Namespaces: 256 00:07:57.568 Max Number of I/O Queues: 64 00:07:57.568 NVMe Specification Version (VS): 1.4 00:07:57.568 NVMe Specification Version (Identify): 1.4 00:07:57.568 Maximum Queue Entries: 2048 00:07:57.568 Contiguous Queues Required: Yes 00:07:57.568 Arbitration Mechanisms Supported 00:07:57.568 Weighted Round Robin: Not Supported 00:07:57.568 Vendor Specific: Not Supported 00:07:57.568 Reset Timeout: 7500 ms 00:07:57.568 Doorbell Stride: 4 bytes 00:07:57.568 NVM Subsystem Reset: Not Supported 00:07:57.568 Command Sets Supported 00:07:57.568 NVM Command Set: Supported 00:07:57.568 Boot Partition: Not Supported 00:07:57.568 Memory Page Size Minimum: 4096 bytes 00:07:57.568 Memory Page Size Maximum: 65536 bytes 00:07:57.568 Persistent Memory Region: Not Supported 00:07:57.568 Optional Asynchronous Events Supported 00:07:57.568 Namespace Attribute Notices: Supported 00:07:57.568 Firmware 
Activation Notices: Not Supported 00:07:57.568 ANA Change Notices: Not Supported 00:07:57.568 PLE Aggregate Log Change Notices: Not Supported 00:07:57.568 LBA Status Info Alert Notices: Not Supported 00:07:57.568 EGE Aggregate Log Change Notices: Not Supported 00:07:57.568 Normal NVM Subsystem Shutdown event: Not Supported 00:07:57.568 Zone Descriptor Change Notices: Not Supported 00:07:57.568 Discovery Log Change Notices: Not Supported 00:07:57.568 Controller Attributes 00:07:57.568 128-bit Host Identifier: Not Supported 00:07:57.568 Non-Operational Permissive Mode: Not Supported 00:07:57.568 NVM Sets: Not Supported 00:07:57.568 Read Recovery Levels: Not Supported 00:07:57.568 Endurance Groups: Not Supported 00:07:57.568 Predictable Latency Mode: Not Supported 00:07:57.568 Traffic Based Keep ALive: Not Supported 00:07:57.568 Namespace Granularity: Not Supported 00:07:57.568 SQ Associations: Not Supported 00:07:57.568 UUID List: Not Supported 00:07:57.568 Multi-Domain Subsystem: Not Supported 00:07:57.568 Fixed Capacity Management: Not Supported 00:07:57.568 Variable Capacity Management: Not Supported 00:07:57.568 Delete Endurance Group: Not Supported 00:07:57.568 Delete NVM Set: Not Supported 00:07:57.568 Extended LBA Formats Supported: Supported 00:07:57.568 Flexible Data Placement Supported: Not Supported 00:07:57.568 00:07:57.568 Controller Memory Buffer Support 00:07:57.568 ================================ 00:07:57.568 Supported: No 00:07:57.568 00:07:57.568 Persistent Memory Region Support 00:07:57.568 ================================ 00:07:57.568 Supported: No 00:07:57.568 00:07:57.568 Admin Command Set Attributes 00:07:57.568 ============================ 00:07:57.568 Security Send/Receive: Not Supported 00:07:57.568 Format NVM: Supported 00:07:57.568 Firmware Activate/Download: Not Supported 00:07:57.568 Namespace Management: Supported 00:07:57.568 Device Self-Test: Not Supported 00:07:57.568 Directives: Supported 00:07:57.568 NVMe-MI: Not Supported 00:07:57.568 Virtualization Management: Not Supported 00:07:57.568 Doorbell Buffer Config: Supported 00:07:57.568 Get LBA Status Capability: Not Supported 00:07:57.568 Command & Feature Lockdown Capability: Not Supported 00:07:57.568 Abort Command Limit: 4 00:07:57.568 Async Event Request Limit: 4 00:07:57.568 Number of Firmware Slots: N/A 00:07:57.568 Firmware Slot 1 Read-Only: N/A 00:07:57.568 Firmware Activation Without Reset: N/A 00:07:57.568 Multiple Update Detection Support: N/A 00:07:57.568 Firmware Update Granularity: No Information Provided 00:07:57.568 Per-Namespace SMART Log: Yes 00:07:57.568 Asymmetric Namespace Access Log Page: Not Supported 00:07:57.568 Subsystem NQN: nqn.2019-08.org.qemu:12342 00:07:57.568 Command Effects Log Page: Supported 00:07:57.568 Get Log Page Extended Data: Supported 00:07:57.568 Telemetry Log Pages: Not Supported 00:07:57.568 Persistent Event Log Pages: Not Supported 00:07:57.568 Supported Log Pages Log Page: May Support 00:07:57.568 Commands Supported & Effects Log Page: Not Supported 00:07:57.568 Feature Identifiers & Effects Log Page:May Support 00:07:57.568 NVMe-MI Commands & Effects Log Page: May Support 00:07:57.568 Data Area 4 for Telemetry Log: Not Supported 00:07:57.568 Error Log Page Entries Supported: 1 00:07:57.568 Keep Alive: Not Supported 00:07:57.568 00:07:57.568 NVM Command Set Attributes 00:07:57.568 ========================== 00:07:57.568 Submission Queue Entry Size 00:07:57.568 Max: 64 00:07:57.568 Min: 64 00:07:57.568 Completion Queue Entry Size 00:07:57.568 Max: 16 
00:07:57.568 Min: 16 00:07:57.568 Number of Namespaces: 256 00:07:57.568 Compare Command: Supported 00:07:57.568 Write Uncorrectable Command: Not Supported 00:07:57.568 Dataset Management Command: Supported 00:07:57.568 Write Zeroes Command: Supported 00:07:57.568 Set Features Save Field: Supported 00:07:57.568 Reservations: Not Supported 00:07:57.568 Timestamp: Supported 00:07:57.568 Copy: Supported 00:07:57.568 Volatile Write Cache: Present 00:07:57.568 Atomic Write Unit (Normal): 1 00:07:57.568 Atomic Write Unit (PFail): 1 00:07:57.568 Atomic Compare & Write Unit: 1 00:07:57.568 Fused Compare & Write: Not Supported 00:07:57.568 Scatter-Gather List 00:07:57.568 SGL Command Set: Supported 00:07:57.568 SGL Keyed: Not Supported 00:07:57.568 SGL Bit Bucket Descriptor: Not Supported 00:07:57.568 SGL Metadata Pointer: Not Supported 00:07:57.568 Oversized SGL: Not Supported 00:07:57.568 SGL Metadata Address: Not Supported 00:07:57.568 SGL Offset: Not Supported 00:07:57.568 Transport SGL Data Block: Not Supported 00:07:57.568 Replay Protected Memory Block: Not Supported 00:07:57.568 00:07:57.568 Firmware Slot Information 00:07:57.568 ========================= 00:07:57.568 Active slot: 1 00:07:57.568 Slot 1 Firmware Revision: 1.0 00:07:57.568 00:07:57.568 00:07:57.568 Commands Supported and Effects 00:07:57.568 ============================== 00:07:57.568 Admin Commands 00:07:57.568 -------------- 00:07:57.568 Delete I/O Submission Queue (00h): Supported 00:07:57.568 Create I/O Submission Queue (01h): Supported 00:07:57.568 Get Log Page (02h): Supported 00:07:57.568 Delete I/O Completion Queue (04h): Supported 00:07:57.568 Create I/O Completion Queue (05h): Supported 00:07:57.568 Identify (06h): Supported 00:07:57.568 Abort (08h): Supported 00:07:57.568 Set Features (09h): Supported 00:07:57.568 Get Features (0Ah): Supported 00:07:57.568 Asynchronous Event Request (0Ch): Supported 00:07:57.568 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:57.568 Directive Send (19h): Supported 00:07:57.568 Directive Receive (1Ah): Supported 00:07:57.568 Virtualization Management (1Ch): Supported 00:07:57.568 Doorbell Buffer Config (7Ch): Supported 00:07:57.568 Format NVM (80h): Supported LBA-Change 00:07:57.568 I/O Commands 00:07:57.568 ------------ 00:07:57.568 Flush (00h): Supported LBA-Change 00:07:57.568 Write (01h): Supported LBA-Change 00:07:57.568 Read (02h): Supported 00:07:57.568 Compare (05h): Supported 00:07:57.568 Write Zeroes (08h): Supported LBA-Change 00:07:57.568 Dataset Management (09h): Supported LBA-Change 00:07:57.568 Unknown (0Ch): Supported 00:07:57.568 Unknown (12h): Supported 00:07:57.569 Copy (19h): Supported LBA-Change 00:07:57.569 Unknown (1Dh): Supported LBA-Change 00:07:57.569 00:07:57.569 Error Log 00:07:57.569 ========= 00:07:57.569 00:07:57.569 Arbitration 00:07:57.569 =========== 00:07:57.569 Arbitration Burst: no limit 00:07:57.569 00:07:57.569 Power Management 00:07:57.569 ================ 00:07:57.569 Number of Power States: 1 00:07:57.569 Current Power State: Power State #0 00:07:57.569 Power State #0: 00:07:57.569 Max Power: 25.00 W 00:07:57.569 Non-Operational State: Operational 00:07:57.569 Entry Latency: 16 microseconds 00:07:57.569 Exit Latency: 4 microseconds 00:07:57.569 Relative Read Throughput: 0 00:07:57.569 Relative Read Latency: 0 00:07:57.569 Relative Write Throughput: 0 00:07:57.569 Relative Write Latency: 0 00:07:57.569 Idle Power: Not Reported 00:07:57.569 Active Power: Not Reported 00:07:57.569 Non-Operational Permissive Mode: Not Supported 
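The Health Information and Active Namespaces blocks in these dumps report temperature in Kelvin with Celsius in parentheses, and namespace sizes as raw LBA counts with a rounded GiB figure. A minimal bash sketch of both conversions, using values hardcoded from the 0000:00:12.0 output in this log (nothing here queries a live device):

  #!/usr/bin/env bash
  # Sanity-check figures from the identify dumps above; values are copied
  # from this log, not read from hardware.

  # Temperatures are printed in Kelvin; Celsius = Kelvin - 273.
  cur_k=323 thresh_k=343
  echo "temperature: $((cur_k - 273)) C, threshold: $((thresh_k - 273)) C"  # 50 C, 70 C

  # Namespace bytes = LBA count * data size of the current LBA format
  # (format #04 here: 4096-byte data blocks; metadata is not counted).
  lbas=1310720 block=4096
  bytes=$((lbas * block))
  echo "namespace: $bytes bytes ($((bytes >> 30)) GiB)"  # 5368709120 bytes (5 GiB)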
00:07:57.569 00:07:57.569 Health Information 00:07:57.569 ================== 00:07:57.569 Critical Warnings: 00:07:57.569 Available Spare Space: OK 00:07:57.569 Temperature: OK 00:07:57.569 Device Reliability: OK 00:07:57.569 Read Only: No 00:07:57.569 Volatile Memory Backup: OK 00:07:57.569 Current Temperature: 323 Kelvin (50 Celsius) 00:07:57.569 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:57.569 Available Spare: 0% 00:07:57.569 Available Spare Threshold: 0% 00:07:57.569 Life Percentage Used: 0% 00:07:57.569 Data Units Read: 2035 00:07:57.569 Data Units Written: 1822 00:07:57.569 Host Read Commands: 101937 00:07:57.569 Host Write Commands: 100206 00:07:57.569 Controller Busy Time: 0 minutes 00:07:57.569 Power Cycles: 0 00:07:57.569 Power On Hours: 0 hours 00:07:57.569 Unsafe Shutdowns: 0 00:07:57.569 Unrecoverable Media Errors: 0 00:07:57.569 Lifetime Error Log Entries: 0 00:07:57.569 Warning Temperature Time: 0 minutes 00:07:57.569 Critical Temperature Time: 0 minutes 00:07:57.569 00:07:57.569 Number of Queues 00:07:57.569 ================ 00:07:57.569 Number of I/O Submission Queues: 64 00:07:57.569 Number of I/O Completion Queues: 64 00:07:57.569 00:07:57.569 ZNS Specific Controller Data 00:07:57.569 ============================ 00:07:57.569 Zone Append Size Limit: 0 00:07:57.569 00:07:57.569 00:07:57.569 Active Namespaces 00:07:57.569 ================= 00:07:57.569 Namespace ID:1 00:07:57.569 Error Recovery Timeout: Unlimited 00:07:57.569 Command Set Identifier: NVM (00h) 00:07:57.569 Deallocate: Supported 00:07:57.569 Deallocated/Unwritten Error: Supported 00:07:57.569 Deallocated Read Value: All 0x00 00:07:57.569 Deallocate in Write Zeroes: Not Supported 00:07:57.569 Deallocated Guard Field: 0xFFFF 00:07:57.569 Flush: Supported 00:07:57.569 Reservation: Not Supported 00:07:57.569 Namespace Sharing Capabilities: Private 00:07:57.569 Size (in LBAs): 1048576 (4GiB) 00:07:57.569 Capacity (in LBAs): 1048576 (4GiB) 00:07:57.569 Utilization (in LBAs): 1048576 (4GiB) 00:07:57.569 Thin Provisioning: Not Supported 00:07:57.569 Per-NS Atomic Units: No 00:07:57.569 Maximum Single Source Range Length: 128 00:07:57.569 Maximum Copy Length: 128 00:07:57.569 Maximum Source Range Count: 128 00:07:57.569 NGUID/EUI64 Never Reused: No 00:07:57.569 Namespace Write Protected: No 00:07:57.569 Number of LBA Formats: 8 00:07:57.569 Current LBA Format: LBA Format #04 00:07:57.569 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:57.569 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:57.569 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:57.569 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:57.569 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:57.569 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:57.569 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:57.569 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:57.569 00:07:57.569 NVM Specific Namespace Data 00:07:57.569 =========================== 00:07:57.569 Logical Block Storage Tag Mask: 0 00:07:57.569 Protection Information Capabilities: 00:07:57.569 16b Guard Protection Information Storage Tag Support: No 00:07:57.569 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:57.569 Storage Tag Check Read Support: No 00:07:57.569 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:57.569 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:57.569 Extended LBA Format #02: Storage Tag 
Size: 0 , Protection Information Format: 16b Guard PI 00:07:57.569 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:57.569 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:57.569 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:57.569 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:57.569 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:57.569 Namespace ID:2 00:07:57.569 Error Recovery Timeout: Unlimited 00:07:57.569 Command Set Identifier: NVM (00h) 00:07:57.569 Deallocate: Supported 00:07:57.569 Deallocated/Unwritten Error: Supported 00:07:57.569 Deallocated Read Value: All 0x00 00:07:57.569 Deallocate in Write Zeroes: Not Supported 00:07:57.569 Deallocated Guard Field: 0xFFFF 00:07:57.569 Flush: Supported 00:07:57.569 Reservation: Not Supported 00:07:57.569 Namespace Sharing Capabilities: Private 00:07:57.569 Size (in LBAs): 1048576 (4GiB) 00:07:57.569 Capacity (in LBAs): 1048576 (4GiB) 00:07:57.569 Utilization (in LBAs): 1048576 (4GiB) 00:07:57.569 Thin Provisioning: Not Supported 00:07:57.569 Per-NS Atomic Units: No 00:07:57.569 Maximum Single Source Range Length: 128 00:07:57.569 Maximum Copy Length: 128 00:07:57.569 Maximum Source Range Count: 128 00:07:57.569 NGUID/EUI64 Never Reused: No 00:07:57.569 Namespace Write Protected: No 00:07:57.569 Number of LBA Formats: 8 00:07:57.569 Current LBA Format: LBA Format #04 00:07:57.569 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:57.569 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:57.569 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:57.569 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:57.569 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:57.569 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:57.569 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:57.569 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:57.569 00:07:57.569 NVM Specific Namespace Data 00:07:57.569 =========================== 00:07:57.569 Logical Block Storage Tag Mask: 0 00:07:57.569 Protection Information Capabilities: 00:07:57.569 16b Guard Protection Information Storage Tag Support: No 00:07:57.569 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:57.569 Storage Tag Check Read Support: No 00:07:57.569 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:57.569 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:57.569 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:57.569 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:57.569 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:57.569 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:57.569 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:57.569 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:57.569 Namespace ID:3 00:07:57.569 Error Recovery Timeout: Unlimited 00:07:57.569 Command Set Identifier: NVM (00h) 00:07:57.569 Deallocate: Supported 00:07:57.569 Deallocated/Unwritten Error: Supported 00:07:57.569 Deallocated Read 
Value: All 0x00 00:07:57.569 Deallocate in Write Zeroes: Not Supported 00:07:57.569 Deallocated Guard Field: 0xFFFF 00:07:57.569 Flush: Supported 00:07:57.569 Reservation: Not Supported 00:07:57.569 Namespace Sharing Capabilities: Private 00:07:57.569 Size (in LBAs): 1048576 (4GiB) 00:07:57.569 Capacity (in LBAs): 1048576 (4GiB) 00:07:57.569 Utilization (in LBAs): 1048576 (4GiB) 00:07:57.569 Thin Provisioning: Not Supported 00:07:57.569 Per-NS Atomic Units: No 00:07:57.569 Maximum Single Source Range Length: 128 00:07:57.569 Maximum Copy Length: 128 00:07:57.569 Maximum Source Range Count: 128 00:07:57.569 NGUID/EUI64 Never Reused: No 00:07:57.569 Namespace Write Protected: No 00:07:57.569 Number of LBA Formats: 8 00:07:57.569 Current LBA Format: LBA Format #04 00:07:57.569 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:57.569 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:57.569 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:57.569 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:57.569 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:57.569 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:57.569 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:57.569 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:57.569 00:07:57.569 NVM Specific Namespace Data 00:07:57.569 =========================== 00:07:57.569 Logical Block Storage Tag Mask: 0 00:07:57.569 Protection Information Capabilities: 00:07:57.570 16b Guard Protection Information Storage Tag Support: No 00:07:57.570 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:57.570 Storage Tag Check Read Support: No 00:07:57.570 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:57.570 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:57.570 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:57.570 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:57.570 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:57.570 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:57.570 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:57.570 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:57.570 18:58:14 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:07:57.570 18:58:14 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' -i 0 00:07:57.570 ===================================================== 00:07:57.570 NVMe Controller at 0000:00:13.0 [1b36:0010] 00:07:57.570 ===================================================== 00:07:57.570 Controller Capabilities/Features 00:07:57.570 ================================ 00:07:57.570 Vendor ID: 1b36 00:07:57.570 Subsystem Vendor ID: 1af4 00:07:57.570 Serial Number: 12343 00:07:57.570 Model Number: QEMU NVMe Ctrl 00:07:57.570 Firmware Version: 8.0.0 00:07:57.570 Recommended Arb Burst: 6 00:07:57.570 IEEE OUI Identifier: 00 54 52 00:07:57.570 Multi-path I/O 00:07:57.570 May have multiple subsystem ports: No 00:07:57.570 May have multiple controllers: Yes 00:07:57.570 Associated with SR-IOV VF: No 00:07:57.570 Max Data Transfer Size: 524288 00:07:57.570 Max Number of Namespaces: 
256 00:07:57.570 Max Number of I/O Queues: 64 00:07:57.570 NVMe Specification Version (VS): 1.4 00:07:57.570 NVMe Specification Version (Identify): 1.4 00:07:57.570 Maximum Queue Entries: 2048 00:07:57.570 Contiguous Queues Required: Yes 00:07:57.570 Arbitration Mechanisms Supported 00:07:57.570 Weighted Round Robin: Not Supported 00:07:57.570 Vendor Specific: Not Supported 00:07:57.570 Reset Timeout: 7500 ms 00:07:57.570 Doorbell Stride: 4 bytes 00:07:57.570 NVM Subsystem Reset: Not Supported 00:07:57.570 Command Sets Supported 00:07:57.570 NVM Command Set: Supported 00:07:57.570 Boot Partition: Not Supported 00:07:57.570 Memory Page Size Minimum: 4096 bytes 00:07:57.570 Memory Page Size Maximum: 65536 bytes 00:07:57.570 Persistent Memory Region: Not Supported 00:07:57.570 Optional Asynchronous Events Supported 00:07:57.570 Namespace Attribute Notices: Supported 00:07:57.570 Firmware Activation Notices: Not Supported 00:07:57.570 ANA Change Notices: Not Supported 00:07:57.570 PLE Aggregate Log Change Notices: Not Supported 00:07:57.570 LBA Status Info Alert Notices: Not Supported 00:07:57.570 EGE Aggregate Log Change Notices: Not Supported 00:07:57.570 Normal NVM Subsystem Shutdown event: Not Supported 00:07:57.570 Zone Descriptor Change Notices: Not Supported 00:07:57.570 Discovery Log Change Notices: Not Supported 00:07:57.570 Controller Attributes 00:07:57.570 128-bit Host Identifier: Not Supported 00:07:57.570 Non-Operational Permissive Mode: Not Supported 00:07:57.570 NVM Sets: Not Supported 00:07:57.570 Read Recovery Levels: Not Supported 00:07:57.570 Endurance Groups: Supported 00:07:57.570 Predictable Latency Mode: Not Supported 00:07:57.570 Traffic Based Keep Alive: Not Supported 00:07:57.570 Namespace Granularity: Not Supported 00:07:57.570 SQ Associations: Not Supported 00:07:57.570 UUID List: Not Supported 00:07:57.570 Multi-Domain Subsystem: Not Supported 00:07:57.570 Fixed Capacity Management: Not Supported 00:07:57.570 Variable Capacity Management: Not Supported 00:07:57.570 Delete Endurance Group: Not Supported 00:07:57.570 Delete NVM Set: Not Supported 00:07:57.570 Extended LBA Formats Supported: Supported 00:07:57.570 Flexible Data Placement Supported: Supported 00:07:57.570 00:07:57.570 Controller Memory Buffer Support 00:07:57.570 ================================ 00:07:57.570 Supported: No 00:07:57.570 00:07:57.570 Persistent Memory Region Support 00:07:57.570 ================================ 00:07:57.570 Supported: No 00:07:57.570 00:07:57.570 Admin Command Set Attributes 00:07:57.570 ============================ 00:07:57.570 Security Send/Receive: Not Supported 00:07:57.570 Format NVM: Supported 00:07:57.570 Firmware Activate/Download: Not Supported 00:07:57.570 Namespace Management: Supported 00:07:57.570 Device Self-Test: Not Supported 00:07:57.570 Directives: Supported 00:07:57.570 NVMe-MI: Not Supported 00:07:57.570 Virtualization Management: Not Supported 00:07:57.570 Doorbell Buffer Config: Supported 00:07:57.570 Get LBA Status Capability: Not Supported 00:07:57.570 Command & Feature Lockdown Capability: Not Supported 00:07:57.570 Abort Command Limit: 4 00:07:57.570 Async Event Request Limit: 4 00:07:57.570 Number of Firmware Slots: N/A 00:07:57.570 Firmware Slot 1 Read-Only: N/A 00:07:57.570 Firmware Activation Without Reset: N/A 00:07:57.570 Multiple Update Detection Support: N/A 00:07:57.570 Firmware Update Granularity: No Information Provided 00:07:57.570 Per-Namespace SMART Log: Yes 00:07:57.570 Asymmetric Namespace Access Log Page: Not Supported
00:07:57.570 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:07:57.570 Command Effects Log Page: Supported 00:07:57.570 Get Log Page Extended Data: Supported 00:07:57.570 Telemetry Log Pages: Not Supported 00:07:57.570 Persistent Event Log Pages: Not Supported 00:07:57.570 Supported Log Pages Log Page: May Support 00:07:57.570 Commands Supported & Effects Log Page: Not Supported 00:07:57.570 Feature Identifiers & Effects Log Page:May Support 00:07:57.570 NVMe-MI Commands & Effects Log Page: May Support 00:07:57.570 Data Area 4 for Telemetry Log: Not Supported 00:07:57.570 Error Log Page Entries Supported: 1 00:07:57.570 Keep Alive: Not Supported 00:07:57.570 00:07:57.570 NVM Command Set Attributes 00:07:57.570 ========================== 00:07:57.570 Submission Queue Entry Size 00:07:57.570 Max: 64 00:07:57.570 Min: 64 00:07:57.570 Completion Queue Entry Size 00:07:57.570 Max: 16 00:07:57.570 Min: 16 00:07:57.570 Number of Namespaces: 256 00:07:57.570 Compare Command: Supported 00:07:57.570 Write Uncorrectable Command: Not Supported 00:07:57.570 Dataset Management Command: Supported 00:07:57.570 Write Zeroes Command: Supported 00:07:57.570 Set Features Save Field: Supported 00:07:57.570 Reservations: Not Supported 00:07:57.570 Timestamp: Supported 00:07:57.570 Copy: Supported 00:07:57.570 Volatile Write Cache: Present 00:07:57.570 Atomic Write Unit (Normal): 1 00:07:57.570 Atomic Write Unit (PFail): 1 00:07:57.570 Atomic Compare & Write Unit: 1 00:07:57.570 Fused Compare & Write: Not Supported 00:07:57.570 Scatter-Gather List 00:07:57.570 SGL Command Set: Supported 00:07:57.570 SGL Keyed: Not Supported 00:07:57.570 SGL Bit Bucket Descriptor: Not Supported 00:07:57.570 SGL Metadata Pointer: Not Supported 00:07:57.570 Oversized SGL: Not Supported 00:07:57.570 SGL Metadata Address: Not Supported 00:07:57.570 SGL Offset: Not Supported 00:07:57.570 Transport SGL Data Block: Not Supported 00:07:57.570 Replay Protected Memory Block: Not Supported 00:07:57.570 00:07:57.570 Firmware Slot Information 00:07:57.570 ========================= 00:07:57.570 Active slot: 1 00:07:57.570 Slot 1 Firmware Revision: 1.0 00:07:57.570 00:07:57.570 00:07:57.570 Commands Supported and Effects 00:07:57.570 ============================== 00:07:57.570 Admin Commands 00:07:57.570 -------------- 00:07:57.570 Delete I/O Submission Queue (00h): Supported 00:07:57.570 Create I/O Submission Queue (01h): Supported 00:07:57.570 Get Log Page (02h): Supported 00:07:57.570 Delete I/O Completion Queue (04h): Supported 00:07:57.570 Create I/O Completion Queue (05h): Supported 00:07:57.570 Identify (06h): Supported 00:07:57.570 Abort (08h): Supported 00:07:57.570 Set Features (09h): Supported 00:07:57.570 Get Features (0Ah): Supported 00:07:57.570 Asynchronous Event Request (0Ch): Supported 00:07:57.570 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:57.570 Directive Send (19h): Supported 00:07:57.570 Directive Receive (1Ah): Supported 00:07:57.570 Virtualization Management (1Ch): Supported 00:07:57.570 Doorbell Buffer Config (7Ch): Supported 00:07:57.570 Format NVM (80h): Supported LBA-Change 00:07:57.570 I/O Commands 00:07:57.570 ------------ 00:07:57.570 Flush (00h): Supported LBA-Change 00:07:57.570 Write (01h): Supported LBA-Change 00:07:57.570 Read (02h): Supported 00:07:57.570 Compare (05h): Supported 00:07:57.570 Write Zeroes (08h): Supported LBA-Change 00:07:57.570 Dataset Management (09h): Supported LBA-Change 00:07:57.570 Unknown (0Ch): Supported 00:07:57.570 Unknown (12h): Supported 00:07:57.570 Copy 
(19h): Supported LBA-Change 00:07:57.570 Unknown (1Dh): Supported LBA-Change 00:07:57.570 00:07:57.570 Error Log 00:07:57.570 ========= 00:07:57.571 00:07:57.571 Arbitration 00:07:57.571 =========== 00:07:57.571 Arbitration Burst: no limit 00:07:57.571 00:07:57.571 Power Management 00:07:57.571 ================ 00:07:57.571 Number of Power States: 1 00:07:57.571 Current Power State: Power State #0 00:07:57.571 Power State #0: 00:07:57.571 Max Power: 25.00 W 00:07:57.571 Non-Operational State: Operational 00:07:57.571 Entry Latency: 16 microseconds 00:07:57.571 Exit Latency: 4 microseconds 00:07:57.571 Relative Read Throughput: 0 00:07:57.571 Relative Read Latency: 0 00:07:57.571 Relative Write Throughput: 0 00:07:57.571 Relative Write Latency: 0 00:07:57.571 Idle Power: Not Reported 00:07:57.571 Active Power: Not Reported 00:07:57.571 Non-Operational Permissive Mode: Not Supported 00:07:57.571 00:07:57.571 Health Information 00:07:57.571 ================== 00:07:57.571 Critical Warnings: 00:07:57.571 Available Spare Space: OK 00:07:57.571 Temperature: OK 00:07:57.571 Device Reliability: OK 00:07:57.571 Read Only: No 00:07:57.571 Volatile Memory Backup: OK 00:07:57.571 Current Temperature: 323 Kelvin (50 Celsius) 00:07:57.571 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:57.571 Available Spare: 0% 00:07:57.571 Available Spare Threshold: 0% 00:07:57.571 Life Percentage Used: 0% 00:07:57.571 Data Units Read: 796 00:07:57.571 Data Units Written: 725 00:07:57.571 Host Read Commands: 34919 00:07:57.571 Host Write Commands: 34342 00:07:57.571 Controller Busy Time: 0 minutes 00:07:57.571 Power Cycles: 0 00:07:57.571 Power On Hours: 0 hours 00:07:57.571 Unsafe Shutdowns: 0 00:07:57.571 Unrecoverable Media Errors: 0 00:07:57.571 Lifetime Error Log Entries: 0 00:07:57.571 Warning Temperature Time: 0 minutes 00:07:57.571 Critical Temperature Time: 0 minutes 00:07:57.571 00:07:57.571 Number of Queues 00:07:57.571 ================ 00:07:57.571 Number of I/O Submission Queues: 64 00:07:57.571 Number of I/O Completion Queues: 64 00:07:57.571 00:07:57.571 ZNS Specific Controller Data 00:07:57.571 ============================ 00:07:57.571 Zone Append Size Limit: 0 00:07:57.571 00:07:57.571 00:07:57.571 Active Namespaces 00:07:57.571 ================= 00:07:57.571 Namespace ID:1 00:07:57.571 Error Recovery Timeout: Unlimited 00:07:57.571 Command Set Identifier: NVM (00h) 00:07:57.571 Deallocate: Supported 00:07:57.571 Deallocated/Unwritten Error: Supported 00:07:57.571 Deallocated Read Value: All 0x00 00:07:57.571 Deallocate in Write Zeroes: Not Supported 00:07:57.571 Deallocated Guard Field: 0xFFFF 00:07:57.571 Flush: Supported 00:07:57.571 Reservation: Not Supported 00:07:57.571 Namespace Sharing Capabilities: Multiple Controllers 00:07:57.571 Size (in LBAs): 262144 (1GiB) 00:07:57.571 Capacity (in LBAs): 262144 (1GiB) 00:07:57.571 Utilization (in LBAs): 262144 (1GiB) 00:07:57.571 Thin Provisioning: Not Supported 00:07:57.571 Per-NS Atomic Units: No 00:07:57.571 Maximum Single Source Range Length: 128 00:07:57.571 Maximum Copy Length: 128 00:07:57.571 Maximum Source Range Count: 128 00:07:57.571 NGUID/EUI64 Never Reused: No 00:07:57.571 Namespace Write Protected: No 00:07:57.571 Endurance group ID: 1 00:07:57.571 Number of LBA Formats: 8 00:07:57.571 Current LBA Format: LBA Format #04 00:07:57.571 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:57.571 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:57.571 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:57.571 LBA Format #03: Data 
Size: 512 Metadata Size: 64 00:07:57.571 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:57.571 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:57.571 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:57.571 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:57.571 00:07:57.571 Get Feature FDP: 00:07:57.571 ================ 00:07:57.571 Enabled: Yes 00:07:57.571 FDP configuration index: 0 00:07:57.571 00:07:57.571 FDP configurations log page 00:07:57.571 =========================== 00:07:57.571 Number of FDP configurations: 1 00:07:57.571 Version: 0 00:07:57.571 Size: 112 00:07:57.571 FDP Configuration Descriptor: 0 00:07:57.571 Descriptor Size: 96 00:07:57.571 Reclaim Group Identifier format: 2 00:07:57.571 FDP Volatile Write Cache: Not Present 00:07:57.571 FDP Configuration: Valid 00:07:57.571 Vendor Specific Size: 0 00:07:57.571 Number of Reclaim Groups: 2 00:07:57.571 Number of Reclaim Unit Handles: 8 00:07:57.571 Max Placement Identifiers: 128 00:07:57.571 Number of Namespaces Supported: 256 00:07:57.571 Reclaim unit Nominal Size: 6000000 bytes 00:07:57.571 Estimated Reclaim Unit Time Limit: Not Reported 00:07:57.571 RUH Desc #000: RUH Type: Initially Isolated 00:07:57.571 RUH Desc #001: RUH Type: Initially Isolated 00:07:57.571 RUH Desc #002: RUH Type: Initially Isolated 00:07:57.571 RUH Desc #003: RUH Type: Initially Isolated 00:07:57.571 RUH Desc #004: RUH Type: Initially Isolated 00:07:57.571 RUH Desc #005: RUH Type: Initially Isolated 00:07:57.571 RUH Desc #006: RUH Type: Initially Isolated 00:07:57.571 RUH Desc #007: RUH Type: Initially Isolated 00:07:57.571 00:07:57.571 FDP reclaim unit handle usage log page 00:07:57.831 ====================================== 00:07:57.831 Number of Reclaim Unit Handles: 8 00:07:57.831 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:07:57.831 RUH Usage Desc #001: RUH Attributes: Unused 00:07:57.831 RUH Usage Desc #002: RUH Attributes: Unused 00:07:57.831 RUH Usage Desc #003: RUH Attributes: Unused 00:07:57.831 RUH Usage Desc #004: RUH Attributes: Unused 00:07:57.831 RUH Usage Desc #005: RUH Attributes: Unused 00:07:57.831 RUH Usage Desc #006: RUH Attributes: Unused 00:07:57.831 RUH Usage Desc #007: RUH Attributes: Unused 00:07:57.831 00:07:57.831 FDP statistics log page 00:07:57.831 ======================= 00:07:57.831 Host bytes with metadata written: 450600960 00:07:57.831 Media bytes with metadata written: 450654208 00:07:57.831 Media bytes erased: 0 00:07:57.831 00:07:57.831 FDP events log page 00:07:57.831 =================== 00:07:57.831 Number of FDP events: 0 00:07:57.831 00:07:57.831 NVM Specific Namespace Data 00:07:57.832 =========================== 00:07:57.832 Logical Block Storage Tag Mask: 0 00:07:57.832 Protection Information Capabilities: 00:07:57.832 16b Guard Protection Information Storage Tag Support: No 00:07:57.832 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:57.832 Storage Tag Check Read Support: No 00:07:57.832 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:57.832 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:57.832 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:57.832 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:57.832 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:57.832 Extended LBA Format #05:
Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:57.832 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:57.832 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:57.832 00:07:57.832 real 0m1.118s 00:07:57.832 user 0m0.393s 00:07:57.832 sys 0m0.517s 00:07:57.832 18:58:15 nvme.nvme_identify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:57.832 18:58:15 nvme.nvme_identify -- common/autotest_common.sh@10 -- # set +x 00:07:57.832 ************************************ 00:07:57.832 END TEST nvme_identify 00:07:57.832 ************************************ 00:07:57.832 18:58:15 nvme -- nvme/nvme.sh@86 -- # run_test nvme_perf nvme_perf 00:07:57.832 18:58:15 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:57.832 18:58:15 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:57.832 18:58:15 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:57.832 ************************************ 00:07:57.832 START TEST nvme_perf 00:07:57.832 ************************************ 00:07:57.832 18:58:15 nvme.nvme_perf -- common/autotest_common.sh@1129 -- # nvme_perf 00:07:57.832 18:58:15 nvme.nvme_perf -- nvme/nvme.sh@22 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w read -o 12288 -t 1 -LL -i 0 -N 00:07:59.218 Initializing NVMe Controllers 00:07:59.218 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:07:59.218 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:07:59.218 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:07:59.218 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:07:59.218 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:07:59.218 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:07:59.218 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:07:59.218 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:07:59.218 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:07:59.218 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:07:59.218 Initialization complete. Launching workers. 
00:07:59.218 ======================================================== 00:07:59.218 Latency(us) 00:07:59.218 Device Information : IOPS MiB/s Average min max 00:07:59.218 PCIE (0000:00:13.0) NSID 1 from core 0: 7784.25 91.22 16455.85 9753.71 38463.69 00:07:59.218 PCIE (0000:00:10.0) NSID 1 from core 0: 7784.25 91.22 16437.39 8957.06 37726.37 00:07:59.218 PCIE (0000:00:11.0) NSID 1 from core 0: 7784.25 91.22 16414.92 8637.50 36790.56 00:07:59.218 PCIE (0000:00:12.0) NSID 1 from core 0: 7784.25 91.22 16391.47 6964.68 36732.46 00:07:59.218 PCIE (0000:00:12.0) NSID 2 from core 0: 7784.25 91.22 16367.26 6312.67 36034.39 00:07:59.218 PCIE (0000:00:12.0) NSID 3 from core 0: 7848.06 91.97 16208.70 5764.48 28479.60 00:07:59.218 ======================================================== 00:07:59.218 Total : 46769.31 548.08 16379.03 5764.48 38463.69 00:07:59.218 00:07:59.218 Summary latency data for PCIE (0000:00:13.0) NSID 1 from core 0: 00:07:59.218 ================================================================================= 00:07:59.218 1.00000% : 13107.200us 00:07:59.218 10.00000% : 14317.095us 00:07:59.218 25.00000% : 15123.692us 00:07:59.218 50.00000% : 15930.289us 00:07:59.218 75.00000% : 17341.834us 00:07:59.218 90.00000% : 18955.028us 00:07:59.218 95.00000% : 19761.625us 00:07:59.218 98.00000% : 22685.538us 00:07:59.218 99.00000% : 30045.735us 00:07:59.218 99.50000% : 37708.406us 00:07:59.218 99.90000% : 38313.354us 00:07:59.218 99.99000% : 38515.003us 00:07:59.218 99.99900% : 38515.003us 00:07:59.218 99.99990% : 38515.003us 00:07:59.218 99.99999% : 38515.003us 00:07:59.218 00:07:59.218 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0: 00:07:59.218 ================================================================================= 00:07:59.218 1.00000% : 13006.375us 00:07:59.218 10.00000% : 14317.095us 00:07:59.218 25.00000% : 15022.868us 00:07:59.218 50.00000% : 15930.289us 00:07:59.218 75.00000% : 17341.834us 00:07:59.218 90.00000% : 18854.203us 00:07:59.218 95.00000% : 19862.449us 00:07:59.218 98.00000% : 22786.363us 00:07:59.218 99.00000% : 29440.788us 00:07:59.218 99.50000% : 36700.160us 00:07:59.218 99.90000% : 37708.406us 00:07:59.218 99.99000% : 37910.055us 00:07:59.218 99.99900% : 37910.055us 00:07:59.218 99.99990% : 37910.055us 00:07:59.218 99.99999% : 37910.055us 00:07:59.218 00:07:59.218 Summary latency data for PCIE (0000:00:11.0) NSID 1 from core 0: 00:07:59.218 ================================================================================= 00:07:59.218 1.00000% : 13107.200us 00:07:59.218 10.00000% : 14317.095us 00:07:59.218 25.00000% : 15022.868us 00:07:59.218 50.00000% : 16031.114us 00:07:59.218 75.00000% : 17341.834us 00:07:59.218 90.00000% : 18955.028us 00:07:59.218 95.00000% : 19963.274us 00:07:59.218 98.00000% : 22181.415us 00:07:59.218 99.00000% : 28432.542us 00:07:59.218 99.50000% : 35893.563us 00:07:59.218 99.90000% : 36700.160us 00:07:59.218 99.99000% : 36901.809us 00:07:59.218 99.99900% : 36901.809us 00:07:59.218 99.99990% : 36901.809us 00:07:59.218 99.99999% : 36901.809us 00:07:59.218 00:07:59.218 Summary latency data for PCIE (0000:00:12.0) NSID 1 from core 0: 00:07:59.218 ================================================================================= 00:07:59.218 1.00000% : 12300.603us 00:07:59.218 10.00000% : 14317.095us 00:07:59.218 25.00000% : 14922.043us 00:07:59.218 50.00000% : 15829.465us 00:07:59.218 75.00000% : 17543.483us 00:07:59.218 90.00000% : 18955.028us 00:07:59.218 95.00000% : 19963.274us 00:07:59.218 98.00000% : 21374.818us 
00:07:59.218 99.00000% : 28634.191us 00:07:59.219 99.50000% : 35893.563us 00:07:59.219 99.90000% : 36700.160us 00:07:59.219 99.99000% : 36901.809us 00:07:59.219 99.99900% : 36901.809us 00:07:59.219 99.99990% : 36901.809us 00:07:59.219 99.99999% : 36901.809us 00:07:59.219 00:07:59.219 Summary latency data for PCIE (0000:00:12.0) NSID 2 from core 0: 00:07:59.219 ================================================================================= 00:07:59.219 1.00000% : 11746.068us 00:07:59.219 10.00000% : 14216.271us 00:07:59.219 25.00000% : 15022.868us 00:07:59.219 50.00000% : 15930.289us 00:07:59.219 75.00000% : 17644.308us 00:07:59.219 90.00000% : 18854.203us 00:07:59.219 95.00000% : 19559.975us 00:07:59.219 98.00000% : 21273.994us 00:07:59.219 99.00000% : 28029.243us 00:07:59.219 99.50000% : 35086.966us 00:07:59.219 99.90000% : 35893.563us 00:07:59.219 99.99000% : 36095.212us 00:07:59.219 99.99900% : 36095.212us 00:07:59.219 99.99990% : 36095.212us 00:07:59.219 99.99999% : 36095.212us 00:07:59.219 00:07:59.219 Summary latency data for PCIE (0000:00:12.0) NSID 3 from core 0: 00:07:59.219 ================================================================================= 00:07:59.219 1.00000% : 10687.409us 00:07:59.219 10.00000% : 14216.271us 00:07:59.219 25.00000% : 15022.868us 00:07:59.219 50.00000% : 15930.289us 00:07:59.219 75.00000% : 17442.658us 00:07:59.219 90.00000% : 18854.203us 00:07:59.219 95.00000% : 19358.326us 00:07:59.219 98.00000% : 21173.169us 00:07:59.219 99.00000% : 21979.766us 00:07:59.219 99.50000% : 27625.945us 00:07:59.219 99.90000% : 28230.892us 00:07:59.219 99.99000% : 28634.191us 00:07:59.219 99.99900% : 28634.191us 00:07:59.219 99.99990% : 28634.191us 00:07:59.219 99.99999% : 28634.191us 00:07:59.219 00:07:59.219 Latency histogram for PCIE (0000:00:13.0) NSID 1 from core 0: 00:07:59.219 ============================================================================== 00:07:59.219 Range in us Cumulative IO count 00:07:59.219 9729.575 - 9779.988: 0.0256% ( 2) 00:07:59.219 9779.988 - 9830.400: 0.0768% ( 4) 00:07:59.219 9830.400 - 9880.812: 0.1153% ( 3) 00:07:59.219 9880.812 - 9931.225: 0.1537% ( 3) 00:07:59.219 9931.225 - 9981.637: 0.1921% ( 3) 00:07:59.219 9981.637 - 10032.049: 0.2305% ( 3) 00:07:59.219 10032.049 - 10082.462: 0.2818% ( 4) 00:07:59.219 10082.462 - 10132.874: 0.3202% ( 3) 00:07:59.219 10132.874 - 10183.286: 0.3458% ( 2) 00:07:59.219 10183.286 - 10233.698: 0.3842% ( 3) 00:07:59.219 10233.698 - 10284.111: 0.4355% ( 4) 00:07:59.219 10284.111 - 10334.523: 0.4739% ( 3) 00:07:59.219 10334.523 - 10384.935: 0.5123% ( 3) 00:07:59.219 10384.935 - 10435.348: 0.5635% ( 4) 00:07:59.219 10435.348 - 10485.760: 0.6019% ( 3) 00:07:59.219 10485.760 - 10536.172: 0.6532% ( 4) 00:07:59.219 10536.172 - 10586.585: 0.6916% ( 3) 00:07:59.219 10586.585 - 10636.997: 0.7300% ( 3) 00:07:59.219 10636.997 - 10687.409: 0.7684% ( 3) 00:07:59.219 10687.409 - 10737.822: 0.8069% ( 3) 00:07:59.219 10737.822 - 10788.234: 0.8197% ( 1) 00:07:59.219 12855.138 - 12905.551: 0.8453% ( 2) 00:07:59.219 12905.551 - 13006.375: 0.9093% ( 5) 00:07:59.219 13006.375 - 13107.200: 1.0886% ( 14) 00:07:59.219 13107.200 - 13208.025: 1.3704% ( 22) 00:07:59.219 13208.025 - 13308.849: 1.8443% ( 37) 00:07:59.219 13308.849 - 13409.674: 2.3694% ( 41) 00:07:59.219 13409.674 - 13510.498: 2.9201% ( 43) 00:07:59.219 13510.498 - 13611.323: 3.5605% ( 50) 00:07:59.219 13611.323 - 13712.148: 4.1368% ( 45) 00:07:59.219 13712.148 - 13812.972: 4.9693% ( 65) 00:07:59.219 13812.972 - 13913.797: 5.8017% ( 65) 00:07:59.219 13913.797 - 
14014.622: 6.7751% ( 76) 00:07:59.219 14014.622 - 14115.446: 7.7997% ( 80) 00:07:59.219 14115.446 - 14216.271: 8.8627% ( 83) 00:07:59.219 14216.271 - 14317.095: 10.1947% ( 104) 00:07:59.219 14317.095 - 14417.920: 11.8340% ( 128) 00:07:59.219 14417.920 - 14518.745: 13.5118% ( 131) 00:07:59.219 14518.745 - 14619.569: 15.3304% ( 142) 00:07:59.219 14619.569 - 14720.394: 17.1491% ( 142) 00:07:59.219 14720.394 - 14821.218: 19.3263% ( 170) 00:07:59.219 14821.218 - 14922.043: 21.5548% ( 174) 00:07:59.219 14922.043 - 15022.868: 24.2572% ( 211) 00:07:59.219 15022.868 - 15123.692: 27.4078% ( 246) 00:07:59.219 15123.692 - 15224.517: 30.5840% ( 248) 00:07:59.219 15224.517 - 15325.342: 33.6962% ( 243) 00:07:59.219 15325.342 - 15426.166: 37.1798% ( 272) 00:07:59.219 15426.166 - 15526.991: 40.3817% ( 250) 00:07:59.219 15526.991 - 15627.815: 43.4810% ( 242) 00:07:59.219 15627.815 - 15728.640: 46.3755% ( 226) 00:07:59.219 15728.640 - 15829.465: 49.2956% ( 228) 00:07:59.219 15829.465 - 15930.289: 52.1004% ( 219) 00:07:59.219 15930.289 - 16031.114: 54.9693% ( 224) 00:07:59.219 16031.114 - 16131.938: 57.7100% ( 214) 00:07:59.219 16131.938 - 16232.763: 60.1819% ( 193) 00:07:59.219 16232.763 - 16333.588: 62.2951% ( 165) 00:07:59.219 16333.588 - 16434.412: 64.0753% ( 139) 00:07:59.219 16434.412 - 16535.237: 65.7275% ( 129) 00:07:59.219 16535.237 - 16636.062: 67.2772% ( 121) 00:07:59.219 16636.062 - 16736.886: 68.7244% ( 113) 00:07:59.219 16736.886 - 16837.711: 70.0564% ( 104) 00:07:59.219 16837.711 - 16938.535: 71.2602% ( 94) 00:07:59.219 16938.535 - 17039.360: 72.4001% ( 89) 00:07:59.219 17039.360 - 17140.185: 73.5912% ( 93) 00:07:59.219 17140.185 - 17241.009: 74.5774% ( 77) 00:07:59.219 17241.009 - 17341.834: 75.4611% ( 69) 00:07:59.219 17341.834 - 17442.658: 76.4728% ( 79) 00:07:59.219 17442.658 - 17543.483: 77.4846% ( 79) 00:07:59.219 17543.483 - 17644.308: 78.4196% ( 73) 00:07:59.219 17644.308 - 17745.132: 79.6107% ( 93) 00:07:59.219 17745.132 - 17845.957: 80.6993% ( 85) 00:07:59.219 17845.957 - 17946.782: 81.6086% ( 71) 00:07:59.219 17946.782 - 18047.606: 82.6076% ( 78) 00:07:59.219 18047.606 - 18148.431: 83.6194% ( 79) 00:07:59.219 18148.431 - 18249.255: 84.5927% ( 76) 00:07:59.219 18249.255 - 18350.080: 85.4636% ( 68) 00:07:59.219 18350.080 - 18450.905: 86.5138% ( 82) 00:07:59.219 18450.905 - 18551.729: 87.5640% ( 82) 00:07:59.219 18551.729 - 18652.554: 88.4477% ( 69) 00:07:59.219 18652.554 - 18753.378: 89.1906% ( 58) 00:07:59.219 18753.378 - 18854.203: 89.8950% ( 55) 00:07:59.219 18854.203 - 18955.028: 90.5353% ( 50) 00:07:59.219 18955.028 - 19055.852: 91.2141% ( 53) 00:07:59.219 19055.852 - 19156.677: 91.9570% ( 58) 00:07:59.219 19156.677 - 19257.502: 92.6101% ( 51) 00:07:59.219 19257.502 - 19358.326: 93.1993% ( 46) 00:07:59.219 19358.326 - 19459.151: 93.7884% ( 46) 00:07:59.219 19459.151 - 19559.975: 94.3519% ( 44) 00:07:59.219 19559.975 - 19660.800: 94.8002% ( 35) 00:07:59.219 19660.800 - 19761.625: 95.0051% ( 16) 00:07:59.219 19761.625 - 19862.449: 95.2228% ( 17) 00:07:59.219 19862.449 - 19963.274: 95.4022% ( 14) 00:07:59.219 19963.274 - 20064.098: 95.6071% ( 16) 00:07:59.219 20064.098 - 20164.923: 95.8120% ( 16) 00:07:59.219 20164.923 - 20265.748: 95.9657% ( 12) 00:07:59.219 20265.748 - 20366.572: 96.0938% ( 10) 00:07:59.219 20366.572 - 20467.397: 96.2474% ( 12) 00:07:59.219 20467.397 - 20568.222: 96.3755% ( 10) 00:07:59.219 20568.222 - 20669.046: 96.4908% ( 9) 00:07:59.219 20669.046 - 20769.871: 96.5676% ( 6) 00:07:59.219 20769.871 - 20870.695: 96.6317% ( 5) 00:07:59.219 20870.695 - 20971.520: 
96.7085% ( 6) 00:07:59.219 20971.520 - 21072.345: 96.8494% ( 11) 00:07:59.219 21072.345 - 21173.169: 96.8878% ( 3) 00:07:59.219 21173.169 - 21273.994: 96.9903% ( 8) 00:07:59.219 21273.994 - 21374.818: 97.1183% ( 10) 00:07:59.219 21374.818 - 21475.643: 97.2464% ( 10) 00:07:59.219 21475.643 - 21576.468: 97.2976% ( 4) 00:07:59.219 21576.468 - 21677.292: 97.3873% ( 7) 00:07:59.219 21677.292 - 21778.117: 97.4385% ( 4) 00:07:59.219 21778.117 - 21878.942: 97.4769% ( 3) 00:07:59.219 21878.942 - 21979.766: 97.5410% ( 5) 00:07:59.219 21979.766 - 22080.591: 97.6306% ( 7) 00:07:59.219 22080.591 - 22181.415: 97.7075% ( 6) 00:07:59.219 22181.415 - 22282.240: 97.7587% ( 4) 00:07:59.219 22282.240 - 22383.065: 97.8356% ( 6) 00:07:59.219 22383.065 - 22483.889: 97.8868% ( 4) 00:07:59.219 22483.889 - 22584.714: 97.9508% ( 5) 00:07:59.219 22584.714 - 22685.538: 98.0020% ( 4) 00:07:59.219 22685.538 - 22786.363: 98.0661% ( 5) 00:07:59.219 22786.363 - 22887.188: 98.1173% ( 4) 00:07:59.219 22887.188 - 22988.012: 98.1814% ( 5) 00:07:59.219 22988.012 - 23088.837: 98.2454% ( 5) 00:07:59.219 23088.837 - 23189.662: 98.2966% ( 4) 00:07:59.219 23189.662 - 23290.486: 98.3607% ( 5) 00:07:59.219 28634.191 - 28835.840: 98.3991% ( 3) 00:07:59.219 28835.840 - 29037.489: 98.5143% ( 9) 00:07:59.219 29037.489 - 29239.138: 98.6296% ( 9) 00:07:59.219 29239.138 - 29440.788: 98.7577% ( 10) 00:07:59.219 29440.788 - 29642.437: 98.8730% ( 9) 00:07:59.219 29642.437 - 29844.086: 98.9882% ( 9) 00:07:59.219 29844.086 - 30045.735: 99.0779% ( 7) 00:07:59.219 30045.735 - 30247.385: 99.1803% ( 8) 00:07:59.219 36095.212 - 36296.862: 99.1931% ( 1) 00:07:59.219 36901.809 - 37103.458: 99.2572% ( 5) 00:07:59.219 37103.458 - 37305.108: 99.3596% ( 8) 00:07:59.219 37305.108 - 37506.757: 99.4621% ( 8) 00:07:59.219 37506.757 - 37708.406: 99.5774% ( 9) 00:07:59.219 37708.406 - 37910.055: 99.6926% ( 9) 00:07:59.219 37910.055 - 38111.705: 99.8207% ( 10) 00:07:59.219 38111.705 - 38313.354: 99.9232% ( 8) 00:07:59.219 38313.354 - 38515.003: 100.0000% ( 6) 00:07:59.219 00:07:59.219 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0: 00:07:59.219 ============================================================================== 00:07:59.219 Range in us Cumulative IO count 00:07:59.219 8922.978 - 8973.391: 0.0256% ( 2) 00:07:59.219 8973.391 - 9023.803: 0.0512% ( 2) 00:07:59.219 9023.803 - 9074.215: 0.1025% ( 4) 00:07:59.219 9074.215 - 9124.628: 0.1281% ( 2) 00:07:59.220 9124.628 - 9175.040: 0.1793% ( 4) 00:07:59.220 9175.040 - 9225.452: 0.1921% ( 1) 00:07:59.220 9225.452 - 9275.865: 0.2433% ( 4) 00:07:59.220 9275.865 - 9326.277: 0.2818% ( 3) 00:07:59.220 9326.277 - 9376.689: 0.3202% ( 3) 00:07:59.220 9376.689 - 9427.102: 0.3586% ( 3) 00:07:59.220 9427.102 - 9477.514: 0.3842% ( 2) 00:07:59.220 9477.514 - 9527.926: 0.4226% ( 3) 00:07:59.220 9527.926 - 9578.338: 0.4483% ( 2) 00:07:59.220 9578.338 - 9628.751: 0.4739% ( 2) 00:07:59.220 9628.751 - 9679.163: 0.5379% ( 5) 00:07:59.220 9679.163 - 9729.575: 0.5635% ( 2) 00:07:59.220 9729.575 - 9779.988: 0.5891% ( 2) 00:07:59.220 9779.988 - 9830.400: 0.6404% ( 4) 00:07:59.220 9830.400 - 9880.812: 0.6788% ( 3) 00:07:59.220 9880.812 - 9931.225: 0.6916% ( 1) 00:07:59.220 9931.225 - 9981.637: 0.7428% ( 4) 00:07:59.220 9981.637 - 10032.049: 0.7941% ( 4) 00:07:59.220 10032.049 - 10082.462: 0.8069% ( 1) 00:07:59.220 10082.462 - 10132.874: 0.8197% ( 1) 00:07:59.220 12855.138 - 12905.551: 0.8965% ( 6) 00:07:59.220 12905.551 - 13006.375: 1.0758% ( 14) 00:07:59.220 13006.375 - 13107.200: 1.3448% ( 21) 00:07:59.220 13107.200 - 
13208.025: 1.5753% ( 18) 00:07:59.220 13208.025 - 13308.849: 1.8827% ( 24) 00:07:59.220 13308.849 - 13409.674: 2.4078% ( 41) 00:07:59.220 13409.674 - 13510.498: 3.0225% ( 48) 00:07:59.220 13510.498 - 13611.323: 3.5476% ( 41) 00:07:59.220 13611.323 - 13712.148: 4.2520% ( 55) 00:07:59.220 13712.148 - 13812.972: 4.8412% ( 46) 00:07:59.220 13812.972 - 13913.797: 5.6609% ( 64) 00:07:59.220 13913.797 - 14014.622: 6.7623% ( 86) 00:07:59.220 14014.622 - 14115.446: 8.0302% ( 99) 00:07:59.220 14115.446 - 14216.271: 9.4775% ( 113) 00:07:59.220 14216.271 - 14317.095: 10.8735% ( 109) 00:07:59.220 14317.095 - 14417.920: 12.6537% ( 139) 00:07:59.220 14417.920 - 14518.745: 14.1137% ( 114) 00:07:59.220 14518.745 - 14619.569: 16.2269% ( 165) 00:07:59.220 14619.569 - 14720.394: 18.1609% ( 151) 00:07:59.220 14720.394 - 14821.218: 20.5686% ( 188) 00:07:59.220 14821.218 - 14922.043: 23.0277% ( 192) 00:07:59.220 14922.043 - 15022.868: 25.5507% ( 197) 00:07:59.220 15022.868 - 15123.692: 28.2787% ( 213) 00:07:59.220 15123.692 - 15224.517: 30.9810% ( 211) 00:07:59.220 15224.517 - 15325.342: 33.8115% ( 221) 00:07:59.220 15325.342 - 15426.166: 36.5138% ( 211) 00:07:59.220 15426.166 - 15526.991: 39.3955% ( 225) 00:07:59.220 15526.991 - 15627.815: 42.1363% ( 214) 00:07:59.220 15627.815 - 15728.640: 45.2228% ( 241) 00:07:59.220 15728.640 - 15829.465: 47.9124% ( 210) 00:07:59.220 15829.465 - 15930.289: 50.7941% ( 225) 00:07:59.220 15930.289 - 16031.114: 53.5605% ( 216) 00:07:59.220 16031.114 - 16131.938: 55.9426% ( 186) 00:07:59.220 16131.938 - 16232.763: 58.0943% ( 168) 00:07:59.220 16232.763 - 16333.588: 60.2715% ( 170) 00:07:59.220 16333.588 - 16434.412: 62.3079% ( 159) 00:07:59.220 16434.412 - 16535.237: 64.4083% ( 164) 00:07:59.220 16535.237 - 16636.062: 65.9324% ( 119) 00:07:59.220 16636.062 - 16736.886: 67.8407% ( 149) 00:07:59.220 16736.886 - 16837.711: 69.8002% ( 153) 00:07:59.220 16837.711 - 16938.535: 71.1194% ( 103) 00:07:59.220 16938.535 - 17039.360: 72.5538% ( 112) 00:07:59.220 17039.360 - 17140.185: 73.7577% ( 94) 00:07:59.220 17140.185 - 17241.009: 74.8847% ( 88) 00:07:59.220 17241.009 - 17341.834: 75.7941% ( 71) 00:07:59.220 17341.834 - 17442.658: 77.1388% ( 105) 00:07:59.220 17442.658 - 17543.483: 78.3043% ( 91) 00:07:59.220 17543.483 - 17644.308: 79.5338% ( 96) 00:07:59.220 17644.308 - 17745.132: 80.6481% ( 87) 00:07:59.220 17745.132 - 17845.957: 81.6855% ( 81) 00:07:59.220 17845.957 - 17946.782: 82.6076% ( 72) 00:07:59.220 17946.782 - 18047.606: 83.5681% ( 75) 00:07:59.220 18047.606 - 18148.431: 84.6183% ( 82) 00:07:59.220 18148.431 - 18249.255: 85.3356% ( 56) 00:07:59.220 18249.255 - 18350.080: 85.9887% ( 51) 00:07:59.220 18350.080 - 18450.905: 86.8724% ( 69) 00:07:59.220 18450.905 - 18551.729: 87.8586% ( 77) 00:07:59.220 18551.729 - 18652.554: 88.5502% ( 54) 00:07:59.220 18652.554 - 18753.378: 89.2930% ( 58) 00:07:59.220 18753.378 - 18854.203: 90.0487% ( 59) 00:07:59.220 18854.203 - 18955.028: 90.7275% ( 53) 00:07:59.220 18955.028 - 19055.852: 91.3038% ( 45) 00:07:59.220 19055.852 - 19156.677: 91.8929% ( 46) 00:07:59.220 19156.677 - 19257.502: 92.4052% ( 40) 00:07:59.220 19257.502 - 19358.326: 92.9303% ( 41) 00:07:59.220 19358.326 - 19459.151: 93.4426% ( 40) 00:07:59.220 19459.151 - 19559.975: 93.9805% ( 42) 00:07:59.220 19559.975 - 19660.800: 94.3007% ( 25) 00:07:59.220 19660.800 - 19761.625: 94.6465% ( 27) 00:07:59.220 19761.625 - 19862.449: 95.0179% ( 29) 00:07:59.220 19862.449 - 19963.274: 95.2613% ( 19) 00:07:59.220 19963.274 - 20064.098: 95.5046% ( 19) 00:07:59.220 20064.098 - 20164.923: 
95.6583% ( 12) 00:07:59.220 20164.923 - 20265.748: 95.8760% ( 17) 00:07:59.220 20265.748 - 20366.572: 96.0553% ( 14) 00:07:59.220 20366.572 - 20467.397: 96.0938% ( 3) 00:07:59.220 20467.397 - 20568.222: 96.2859% ( 15) 00:07:59.220 20568.222 - 20669.046: 96.4524% ( 13) 00:07:59.220 20669.046 - 20769.871: 96.5932% ( 11) 00:07:59.220 20769.871 - 20870.695: 96.7085% ( 9) 00:07:59.220 20870.695 - 20971.520: 96.8494% ( 11) 00:07:59.220 20971.520 - 21072.345: 97.0031% ( 12) 00:07:59.220 21072.345 - 21173.169: 97.0287% ( 2) 00:07:59.220 21173.169 - 21273.994: 97.0799% ( 4) 00:07:59.220 21273.994 - 21374.818: 97.1568% ( 6) 00:07:59.220 21374.818 - 21475.643: 97.1696% ( 1) 00:07:59.220 21475.643 - 21576.468: 97.2208% ( 4) 00:07:59.220 21576.468 - 21677.292: 97.2976% ( 6) 00:07:59.220 21677.292 - 21778.117: 97.3489% ( 4) 00:07:59.220 21778.117 - 21878.942: 97.4257% ( 6) 00:07:59.220 21878.942 - 21979.766: 97.5410% ( 9) 00:07:59.220 21979.766 - 22080.591: 97.6434% ( 8) 00:07:59.220 22080.591 - 22181.415: 97.7203% ( 6) 00:07:59.220 22181.415 - 22282.240: 97.7843% ( 5) 00:07:59.220 22282.240 - 22383.065: 97.8484% ( 5) 00:07:59.220 22383.065 - 22483.889: 97.8868% ( 3) 00:07:59.220 22483.889 - 22584.714: 97.9252% ( 3) 00:07:59.220 22584.714 - 22685.538: 97.9764% ( 4) 00:07:59.220 22685.538 - 22786.363: 98.0533% ( 6) 00:07:59.220 22786.363 - 22887.188: 98.0917% ( 3) 00:07:59.220 22887.188 - 22988.012: 98.1301% ( 3) 00:07:59.220 22988.012 - 23088.837: 98.1942% ( 5) 00:07:59.220 23088.837 - 23189.662: 98.2710% ( 6) 00:07:59.220 23189.662 - 23290.486: 98.2966% ( 2) 00:07:59.220 23290.486 - 23391.311: 98.3350% ( 3) 00:07:59.220 23391.311 - 23492.135: 98.3607% ( 2) 00:07:59.220 27827.594 - 28029.243: 98.4247% ( 5) 00:07:59.220 28029.243 - 28230.892: 98.5272% ( 8) 00:07:59.220 28230.892 - 28432.542: 98.6168% ( 7) 00:07:59.220 28432.542 - 28634.191: 98.7321% ( 9) 00:07:59.220 28634.191 - 28835.840: 98.8345% ( 8) 00:07:59.220 28835.840 - 29037.489: 98.9370% ( 8) 00:07:59.220 29037.489 - 29239.138: 98.9882% ( 4) 00:07:59.220 29239.138 - 29440.788: 99.1419% ( 12) 00:07:59.220 29440.788 - 29642.437: 99.1803% ( 3) 00:07:59.220 35893.563 - 36095.212: 99.1931% ( 1) 00:07:59.220 36095.212 - 36296.862: 99.2956% ( 8) 00:07:59.220 36296.862 - 36498.511: 99.4109% ( 9) 00:07:59.220 36498.511 - 36700.160: 99.5005% ( 7) 00:07:59.220 36700.160 - 36901.809: 99.5774% ( 6) 00:07:59.220 36901.809 - 37103.458: 99.6926% ( 9) 00:07:59.220 37103.458 - 37305.108: 99.7951% ( 8) 00:07:59.220 37305.108 - 37506.757: 99.8591% ( 5) 00:07:59.220 37506.757 - 37708.406: 99.9872% ( 10) 00:07:59.220 37708.406 - 37910.055: 100.0000% ( 1) 00:07:59.220 00:07:59.220 Latency histogram for PCIE (0000:00:11.0) NSID 1 from core 0: 00:07:59.220 ============================================================================== 00:07:59.220 Range in us Cumulative IO count 00:07:59.220 8620.505 - 8670.917: 0.0512% ( 4) 00:07:59.220 8670.917 - 8721.329: 0.1153% ( 5) 00:07:59.220 8721.329 - 8771.742: 0.1921% ( 6) 00:07:59.220 8771.742 - 8822.154: 0.2177% ( 2) 00:07:59.220 8822.154 - 8872.566: 0.2561% ( 3) 00:07:59.220 8872.566 - 8922.978: 0.3202% ( 5) 00:07:59.220 8922.978 - 8973.391: 0.3842% ( 5) 00:07:59.220 8973.391 - 9023.803: 0.4226% ( 3) 00:07:59.220 9023.803 - 9074.215: 0.4867% ( 5) 00:07:59.220 9074.215 - 9124.628: 0.5123% ( 2) 00:07:59.220 9124.628 - 9175.040: 0.5379% ( 2) 00:07:59.220 9175.040 - 9225.452: 0.5763% ( 3) 00:07:59.220 9225.452 - 9275.865: 0.6148% ( 3) 00:07:59.220 9275.865 - 9326.277: 0.6532% ( 3) 00:07:59.220 9326.277 - 9376.689: 0.6916% ( 3) 
00:07:59.220 9376.689 - 9427.102: 0.7300% ( 3) 00:07:59.220 9427.102 - 9477.514: 0.7684% ( 3) 00:07:59.220 9477.514 - 9527.926: 0.8197% ( 4) 00:07:59.220 12905.551 - 13006.375: 0.8837% ( 5) 00:07:59.220 13006.375 - 13107.200: 1.1783% ( 23) 00:07:59.220 13107.200 - 13208.025: 1.4600% ( 22) 00:07:59.220 13208.025 - 13308.849: 1.7290% ( 21) 00:07:59.220 13308.849 - 13409.674: 2.1132% ( 30) 00:07:59.220 13409.674 - 13510.498: 2.5359% ( 33) 00:07:59.220 13510.498 - 13611.323: 3.0097% ( 37) 00:07:59.220 13611.323 - 13712.148: 3.7269% ( 56) 00:07:59.220 13712.148 - 13812.972: 4.5850% ( 67) 00:07:59.220 13812.972 - 13913.797: 5.6865% ( 86) 00:07:59.220 13913.797 - 14014.622: 6.7239% ( 81) 00:07:59.220 14014.622 - 14115.446: 8.1839% ( 114) 00:07:59.220 14115.446 - 14216.271: 9.8489% ( 130) 00:07:59.220 14216.271 - 14317.095: 11.3730% ( 119) 00:07:59.220 14317.095 - 14417.920: 13.2044% ( 143) 00:07:59.220 14417.920 - 14518.745: 15.6634% ( 192) 00:07:59.220 14518.745 - 14619.569: 17.7766% ( 165) 00:07:59.220 14619.569 - 14720.394: 20.2613% ( 194) 00:07:59.220 14720.394 - 14821.218: 22.6434% ( 186) 00:07:59.220 14821.218 - 14922.043: 24.9488% ( 180) 00:07:59.220 14922.043 - 15022.868: 27.4846% ( 198) 00:07:59.220 15022.868 - 15123.692: 29.9693% ( 194) 00:07:59.220 15123.692 - 15224.517: 32.6204% ( 207) 00:07:59.221 15224.517 - 15325.342: 35.0026% ( 186) 00:07:59.221 15325.342 - 15426.166: 37.6921% ( 210) 00:07:59.221 15426.166 - 15526.991: 40.1383% ( 191) 00:07:59.221 15526.991 - 15627.815: 42.2772% ( 167) 00:07:59.221 15627.815 - 15728.640: 44.5441% ( 177) 00:07:59.221 15728.640 - 15829.465: 47.0671% ( 197) 00:07:59.221 15829.465 - 15930.289: 49.7439% ( 209) 00:07:59.221 15930.289 - 16031.114: 52.3181% ( 201) 00:07:59.221 16031.114 - 16131.938: 54.8028% ( 194) 00:07:59.221 16131.938 - 16232.763: 57.3514% ( 199) 00:07:59.221 16232.763 - 16333.588: 59.6055% ( 176) 00:07:59.221 16333.588 - 16434.412: 61.9109% ( 180) 00:07:59.221 16434.412 - 16535.237: 63.9344% ( 158) 00:07:59.221 16535.237 - 16636.062: 65.7531% ( 142) 00:07:59.221 16636.062 - 16736.886: 67.2643% ( 118) 00:07:59.221 16736.886 - 16837.711: 69.1086% ( 144) 00:07:59.221 16837.711 - 16938.535: 70.7992% ( 132) 00:07:59.221 16938.535 - 17039.360: 72.2464% ( 113) 00:07:59.221 17039.360 - 17140.185: 73.5912% ( 105) 00:07:59.221 17140.185 - 17241.009: 74.8591% ( 99) 00:07:59.221 17241.009 - 17341.834: 75.9093% ( 82) 00:07:59.221 17341.834 - 17442.658: 76.9723% ( 83) 00:07:59.221 17442.658 - 17543.483: 78.0738% ( 86) 00:07:59.221 17543.483 - 17644.308: 79.2136% ( 89) 00:07:59.221 17644.308 - 17745.132: 80.1998% ( 77) 00:07:59.221 17745.132 - 17845.957: 81.3397% ( 89) 00:07:59.221 17845.957 - 17946.782: 82.4667% ( 88) 00:07:59.221 17946.782 - 18047.606: 83.5297% ( 83) 00:07:59.221 18047.606 - 18148.431: 84.4390% ( 71) 00:07:59.221 18148.431 - 18249.255: 85.3484% ( 71) 00:07:59.221 18249.255 - 18350.080: 86.2193% ( 68) 00:07:59.221 18350.080 - 18450.905: 86.9493% ( 57) 00:07:59.221 18450.905 - 18551.729: 87.7177% ( 60) 00:07:59.221 18551.729 - 18652.554: 88.4990% ( 61) 00:07:59.221 18652.554 - 18753.378: 89.2162% ( 56) 00:07:59.221 18753.378 - 18854.203: 89.8181% ( 47) 00:07:59.221 18854.203 - 18955.028: 90.3048% ( 38) 00:07:59.221 18955.028 - 19055.852: 90.8299% ( 41) 00:07:59.221 19055.852 - 19156.677: 91.3294% ( 39) 00:07:59.221 19156.677 - 19257.502: 91.8289% ( 39) 00:07:59.221 19257.502 - 19358.326: 92.3540% ( 41) 00:07:59.221 19358.326 - 19459.151: 92.9431% ( 46) 00:07:59.221 19459.151 - 19559.975: 93.4554% ( 40) 00:07:59.221 19559.975 - 
19660.800: 93.8909% ( 34) 00:07:59.221 19660.800 - 19761.625: 94.3263% ( 34) 00:07:59.221 19761.625 - 19862.449: 94.7618% ( 34) 00:07:59.221 19862.449 - 19963.274: 95.0948% ( 26) 00:07:59.221 19963.274 - 20064.098: 95.3509% ( 20) 00:07:59.221 20064.098 - 20164.923: 95.5558% ( 16) 00:07:59.221 20164.923 - 20265.748: 95.7736% ( 17) 00:07:59.221 20265.748 - 20366.572: 95.9657% ( 15) 00:07:59.221 20366.572 - 20467.397: 96.0809% ( 9) 00:07:59.221 20467.397 - 20568.222: 96.2218% ( 11) 00:07:59.221 20568.222 - 20669.046: 96.3499% ( 10) 00:07:59.221 20669.046 - 20769.871: 96.4652% ( 9) 00:07:59.221 20769.871 - 20870.695: 96.5804% ( 9) 00:07:59.221 20870.695 - 20971.520: 96.7085% ( 10) 00:07:59.221 20971.520 - 21072.345: 96.8494% ( 11) 00:07:59.221 21072.345 - 21173.169: 97.0159% ( 13) 00:07:59.221 21173.169 - 21273.994: 97.1952% ( 14) 00:07:59.221 21273.994 - 21374.818: 97.3617% ( 13) 00:07:59.221 21374.818 - 21475.643: 97.5154% ( 12) 00:07:59.221 21475.643 - 21576.468: 97.6434% ( 10) 00:07:59.221 21576.468 - 21677.292: 97.7587% ( 9) 00:07:59.221 21677.292 - 21778.117: 97.8227% ( 5) 00:07:59.221 21778.117 - 21878.942: 97.8740% ( 4) 00:07:59.221 21878.942 - 21979.766: 97.9124% ( 3) 00:07:59.221 21979.766 - 22080.591: 97.9764% ( 5) 00:07:59.221 22080.591 - 22181.415: 98.0405% ( 5) 00:07:59.221 22181.415 - 22282.240: 98.1045% ( 5) 00:07:59.221 22282.240 - 22383.065: 98.1557% ( 4) 00:07:59.221 22383.065 - 22483.889: 98.2198% ( 5) 00:07:59.221 22483.889 - 22584.714: 98.2838% ( 5) 00:07:59.221 22584.714 - 22685.538: 98.3350% ( 4) 00:07:59.221 22685.538 - 22786.363: 98.3607% ( 2) 00:07:59.221 27020.997 - 27222.646: 98.3863% ( 2) 00:07:59.221 27222.646 - 27424.295: 98.5015% ( 9) 00:07:59.221 27424.295 - 27625.945: 98.6040% ( 8) 00:07:59.221 27625.945 - 27827.594: 98.7193% ( 9) 00:07:59.221 27827.594 - 28029.243: 98.8217% ( 8) 00:07:59.221 28029.243 - 28230.892: 98.9370% ( 9) 00:07:59.221 28230.892 - 28432.542: 99.0394% ( 8) 00:07:59.221 28432.542 - 28634.191: 99.1547% ( 9) 00:07:59.221 28634.191 - 28835.840: 99.1803% ( 2) 00:07:59.221 35288.615 - 35490.265: 99.2828% ( 8) 00:07:59.221 35490.265 - 35691.914: 99.3981% ( 9) 00:07:59.221 35691.914 - 35893.563: 99.5005% ( 8) 00:07:59.221 35893.563 - 36095.212: 99.6030% ( 8) 00:07:59.221 36095.212 - 36296.862: 99.7182% ( 9) 00:07:59.221 36296.862 - 36498.511: 99.8207% ( 8) 00:07:59.221 36498.511 - 36700.160: 99.9488% ( 10) 00:07:59.221 36700.160 - 36901.809: 100.0000% ( 4) 00:07:59.221 00:07:59.221 Latency histogram for PCIE (0000:00:12.0) NSID 1 from core 0: 00:07:59.221 ============================================================================== 00:07:59.221 Range in us Cumulative IO count 00:07:59.221 6956.898 - 7007.311: 0.0256% ( 2) 00:07:59.221 7007.311 - 7057.723: 0.0768% ( 4) 00:07:59.221 7057.723 - 7108.135: 0.1281% ( 4) 00:07:59.221 7108.135 - 7158.548: 0.1665% ( 3) 00:07:59.221 7158.548 - 7208.960: 0.2177% ( 4) 00:07:59.221 7208.960 - 7259.372: 0.2561% ( 3) 00:07:59.221 7259.372 - 7309.785: 0.3074% ( 4) 00:07:59.221 7309.785 - 7360.197: 0.3458% ( 3) 00:07:59.221 7360.197 - 7410.609: 0.3970% ( 4) 00:07:59.221 7410.609 - 7461.022: 0.4483% ( 4) 00:07:59.221 7461.022 - 7511.434: 0.4867% ( 3) 00:07:59.221 7511.434 - 7561.846: 0.5379% ( 4) 00:07:59.221 7561.846 - 7612.258: 0.5763% ( 3) 00:07:59.221 7612.258 - 7662.671: 0.6276% ( 4) 00:07:59.221 7662.671 - 7713.083: 0.6660% ( 3) 00:07:59.221 7713.083 - 7763.495: 0.7172% ( 4) 00:07:59.221 7763.495 - 7813.908: 0.7556% ( 3) 00:07:59.221 7813.908 - 7864.320: 0.8069% ( 4) 00:07:59.221 7864.320 - 7914.732: 
0.8197% ( 1) 00:07:59.221 12048.542 - 12098.954: 0.8581% ( 3) 00:07:59.221 12098.954 - 12149.366: 0.8965% ( 3) 00:07:59.221 12149.366 - 12199.778: 0.9221% ( 2) 00:07:59.221 12199.778 - 12250.191: 0.9606% ( 3) 00:07:59.221 12250.191 - 12300.603: 1.0118% ( 4) 00:07:59.221 12300.603 - 12351.015: 1.0502% ( 3) 00:07:59.221 12351.015 - 12401.428: 1.1014% ( 4) 00:07:59.221 12401.428 - 12451.840: 1.1399% ( 3) 00:07:59.221 12451.840 - 12502.252: 1.1783% ( 3) 00:07:59.221 12502.252 - 12552.665: 1.2167% ( 3) 00:07:59.221 12552.665 - 12603.077: 1.2679% ( 4) 00:07:59.221 12603.077 - 12653.489: 1.3448% ( 6) 00:07:59.221 12653.489 - 12703.902: 1.4216% ( 6) 00:07:59.221 12703.902 - 12754.314: 1.4985% ( 6) 00:07:59.221 12754.314 - 12804.726: 1.5881% ( 7) 00:07:59.221 12804.726 - 12855.138: 1.6522% ( 5) 00:07:59.221 12855.138 - 12905.551: 1.7418% ( 7) 00:07:59.221 12905.551 - 13006.375: 1.9595% ( 17) 00:07:59.221 13006.375 - 13107.200: 2.1388% ( 14) 00:07:59.221 13107.200 - 13208.025: 2.3309% ( 15) 00:07:59.221 13208.025 - 13308.849: 2.6127% ( 22) 00:07:59.221 13308.849 - 13409.674: 2.9329% ( 25) 00:07:59.221 13409.674 - 13510.498: 3.3683% ( 34) 00:07:59.221 13510.498 - 13611.323: 3.8806% ( 40) 00:07:59.221 13611.323 - 13712.148: 4.4057% ( 41) 00:07:59.221 13712.148 - 13812.972: 5.1742% ( 60) 00:07:59.221 13812.972 - 13913.797: 5.9682% ( 62) 00:07:59.221 13913.797 - 14014.622: 6.9544% ( 77) 00:07:59.221 14014.622 - 14115.446: 8.2608% ( 102) 00:07:59.221 14115.446 - 14216.271: 9.6824% ( 111) 00:07:59.221 14216.271 - 14317.095: 11.1040% ( 111) 00:07:59.221 14317.095 - 14417.920: 12.8714% ( 138) 00:07:59.221 14417.920 - 14518.745: 14.9462% ( 162) 00:07:59.221 14518.745 - 14619.569: 17.5205% ( 201) 00:07:59.221 14619.569 - 14720.394: 19.8258% ( 180) 00:07:59.221 14720.394 - 14821.218: 22.4898% ( 208) 00:07:59.221 14821.218 - 14922.043: 25.1153% ( 205) 00:07:59.221 14922.043 - 15022.868: 27.9201% ( 219) 00:07:59.221 15022.868 - 15123.692: 30.7505% ( 221) 00:07:59.221 15123.692 - 15224.517: 34.0804% ( 260) 00:07:59.221 15224.517 - 15325.342: 36.8981% ( 220) 00:07:59.221 15325.342 - 15426.166: 40.0231% ( 244) 00:07:59.221 15426.166 - 15526.991: 42.9303% ( 227) 00:07:59.221 15526.991 - 15627.815: 45.5815% ( 207) 00:07:59.221 15627.815 - 15728.640: 48.0661% ( 194) 00:07:59.221 15728.640 - 15829.465: 50.1665% ( 164) 00:07:59.221 15829.465 - 15930.289: 52.5615% ( 187) 00:07:59.221 15930.289 - 16031.114: 54.8668% ( 180) 00:07:59.221 16031.114 - 16131.938: 56.8519% ( 155) 00:07:59.221 16131.938 - 16232.763: 58.6834% ( 143) 00:07:59.221 16232.763 - 16333.588: 60.6557% ( 154) 00:07:59.221 16333.588 - 16434.412: 62.3591% ( 133) 00:07:59.221 16434.412 - 16535.237: 63.8704% ( 118) 00:07:59.221 16535.237 - 16636.062: 65.1127% ( 97) 00:07:59.221 16636.062 - 16736.886: 66.3294% ( 95) 00:07:59.221 16736.886 - 16837.711: 67.4821% ( 90) 00:07:59.221 16837.711 - 16938.535: 68.6091% ( 88) 00:07:59.221 16938.535 - 17039.360: 69.7746% ( 91) 00:07:59.221 17039.360 - 17140.185: 70.8888% ( 87) 00:07:59.221 17140.185 - 17241.009: 72.1952% ( 102) 00:07:59.221 17241.009 - 17341.834: 73.3350% ( 89) 00:07:59.221 17341.834 - 17442.658: 74.4493% ( 87) 00:07:59.221 17442.658 - 17543.483: 75.6404% ( 93) 00:07:59.221 17543.483 - 17644.308: 76.8443% ( 94) 00:07:59.221 17644.308 - 17745.132: 78.0353% ( 93) 00:07:59.221 17745.132 - 17845.957: 79.2777% ( 97) 00:07:59.221 17845.957 - 17946.782: 80.3791% ( 86) 00:07:59.221 17946.782 - 18047.606: 81.5702% ( 93) 00:07:59.221 18047.606 - 18148.431: 82.6204% ( 82) 00:07:59.221 18148.431 - 18249.255: 
[ tail of preceding latency histogram: buckets from 18249.255 us to 36901.809 us, rising from 83.5681% to 100.0000% cumulative; per-bucket IO counts elided ]
00:07:59.222
00:07:59.222 Latency histogram for PCIE (0000:00:12.0) NSID 2 from core 0:
00:07:59.222 ==============================================================================
00:07:59.222        Range in us     Cumulative    IO count
[ buckets from 6301.538 us to 36095.212 us; per-bucket cumulative IO counts elided ]
00:07:59.223
00:07:59.223 Latency histogram for PCIE (0000:00:12.0) NSID 3 from core 0:
00:07:59.223 ==============================================================================
00:07:59.223        Range in us     Cumulative    IO count
[ buckets from 5747.003 us to 28634.191 us; per-bucket cumulative IO counts elided ]
00:07:59.224
00:07:59.224 18:58:16 nvme.nvme_perf -- nvme/nvme.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w write -o 12288 -t 1 -LL -i 0
00:08:00.169 Initializing NVMe Controllers
00:08:00.169 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010]
00:08:00.169 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010]
00:08:00.169 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010]
00:08:00.169 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010]
00:08:00.169 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0
00:08:00.169 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0
00:08:00.169 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0
00:08:00.169 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0
00:08:00.169 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0
00:08:00.169 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0
00:08:00.169 Initialization complete. Launching workers.
00:08:00.169 ========================================================
00:08:00.169                                                                            Latency(us)
00:08:00.169 Device Information                     :       IOPS      MiB/s    Average        min        max
00:08:00.169 PCIE (0000:00:13.0) NSID 1 from core 0:    8330.28      97.62   15378.58   10384.46   36527.91
00:08:00.170 PCIE (0000:00:10.0) NSID 1 from core 0:    8330.28      97.62   15359.75    9903.35   36290.86
00:08:00.170 PCIE (0000:00:11.0) NSID 1 from core 0:    8330.28      97.62   15338.06    9204.32   35472.16
00:08:00.170 PCIE (0000:00:12.0) NSID 1 from core 0:    8330.28      97.62   15316.81    7491.68   35667.25
00:08:00.170 PCIE (0000:00:12.0) NSID 2 from core 0:    8330.28      97.62   15296.13    6532.13   34891.46
00:08:00.170 PCIE (0000:00:12.0) NSID 3 from core 0:    8393.87      98.37   15160.18    6105.17   26812.59
00:08:00.170 ========================================================
00:08:00.170 Total                                  :   50045.26     586.47   15308.07    6105.17   36527.91
00:08:00.170
Summary latency data in us, consolidated from the six per-device blocks (all devices from core 0):
=================================================================================
Device                        p1         p10        p25        p50        p75        p90        p95        p98        p99        p99.5      p99.9      p99.99+
PCIE (0000:00:13.0) NSID 1    12804.726  13712.148  14317.095  15022.868  15829.465  17140.185  18148.431  19559.975  29037.489  35288.615  36498.511  36700.160
PCIE (0000:00:10.0) NSID 1    12653.489  13712.148  14317.095  15022.868  15829.465  17140.185  18148.431  19862.449  27827.594  35288.615  36095.212  36296.862
PCIE (0000:00:11.0) NSID 1    12754.314  13712.148  14317.095  14922.043  15829.465  17140.185  17946.782  20265.748  27020.997  34683.668  35288.615  35490.265
PCIE (0000:00:12.0) NSID 1    12502.252  13712.148  14317.095  15022.868  15930.289  17140.185  17845.957  19862.449  27020.997  34885.317  35490.265  35691.914
PCIE (0000:00:12.0) NSID 2    11998.129  13712.148  14317.095  15022.868  15829.465  17140.185  18249.255  19559.975  26416.049  34078.720  34885.317  35086.966
PCIE (0000:00:12.0) NSID 3    11494.006  13611.323  14317.095  15022.868  15829.465  17140.185  18148.431  18753.378  19660.800  26012.751  26819.348  26819.348
(p99.99 through p99.99999 coincide for every device, so they are shown as one column.)
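Both the percentile summary above and the per-bucket histograms below come from the doubled -L in the spdk_nvme_perf invocation recorded just above: a single -L enables software latency tracking (the summary data), and giving it twice appears to be what adds the full histograms. A minimal reproduction sketch, assuming the same SPDK build tree this job uses:

  # Same workload as the logged run: queue depth 128, sequential writes,
  # 12 KiB (12288-byte) I/Os, 1 second, shared-memory group id 0.
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf \
      -q 128 -w write -o 12288 -t 1 -LL -i 0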
00:08:00.170 Latency histogram for PCIE (0000:00:13.0) NSID 1 from core 0:
00:08:00.170 ==============================================================================
00:08:00.170        Range in us     Cumulative    IO count
[ buckets from 10334.523 us to 36700.160 us; per-bucket cumulative IO counts elided ]
00:08:00.171
00:08:00.171 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0:
00:08:00.171 ==============================================================================
00:08:00.171        Range in us     Cumulative    IO count
[ buckets from 9880.812 us to 36296.862 us; per-bucket cumulative IO counts elided ]
00:08:00.172
00:08:00.172 Latency histogram for PCIE (0000:00:11.0) NSID 1 from core 0:
00:08:00.172 ==============================================================================
00:08:00.172        Range in us     Cumulative    IO count
[ buckets from 9175.040 us to 35490.265 us; per-bucket cumulative IO counts elided ]
00:08:00.173
00:08:00.173 Latency histogram for PCIE (0000:00:12.0) NSID 1 from core 0:
00:08:00.173 ==============================================================================
00:08:00.173        Range in us     Cumulative    IO count
[ buckets from 7461.022 us to 35691.914 us; per-bucket cumulative IO counts elided ]
00:08:00.174
00:08:00.174 Latency histogram for PCIE (0000:00:12.0) NSID 2 from core 0:
00:08:00.174 ==============================================================================
00:08:00.174        Range in us     Cumulative    IO count
[ buckets from 6503.188 us to 35086.966 us; per-bucket cumulative IO counts elided ]
00:08:00.175
00:08:00.175 Latency histogram for PCIE (0000:00:12.0) NSID 3 from core 0:
00:08:00.175 ==============================================================================
00:08:00.175        Range in us     Cumulative    IO count
[ buckets from 6099.889 us to 26819.348 us; per-bucket cumulative IO counts elided ]
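Each histogram row pairs a latency range with the cumulative percentage of I/Os completed at or below it, so any percentile can be read off as the first bucket whose cumulative value crosses the target. A sketch of pulling p99 out of a saved copy of this console output with awk; the perf.log filename is illustrative, and the assumed field layout is the timestamp-prefixed "<lo> - <hi>: <pct>% ( <count>)" shape the bucket lines take in this log:

  # Print the upper bound (in us) of the first bucket at or above 99%
  # cumulative, i.e. p99 of the first histogram in the file.
  awk '$3 == "-" && $5 + 0 >= 99 { sub(/:$/, "", $4); print $4 " us"; exit }' perf.log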
00:08:00.176 18:58:17 nvme.nvme_perf -- nvme/nvme.sh@24 -- # '[' -b /dev/ram0 ']'
00:08:00.176
00:08:00.176 real    0m2.495s
00:08:00.176 user    0m2.163s
00:08:00.176 sys     0m0.208s
00:08:00.176 18:58:17 nvme.nvme_perf -- common/autotest_common.sh@1130 -- # xtrace_disable
00:08:00.176 ************************************
00:08:00.176 END TEST nvme_perf
00:08:00.176 ************************************
00:08:00.176 18:58:17 nvme.nvme_perf -- common/autotest_common.sh@10 -- # set +x
00:08:00.437 18:58:17 nvme -- nvme/nvme.sh@87 -- # run_test nvme_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0
00:08:00.437 18:58:17 nvme -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:08:00.437 18:58:17 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:08:00.437 18:58:17 nvme -- common/autotest_common.sh@10 -- # set +x
00:08:00.437 ************************************
00:08:00.437 START TEST nvme_hello_world
00:08:00.437 ************************************
00:08:00.437 18:58:17 nvme.nvme_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0
00:08:00.437 Initializing NVMe Controllers
00:08:00.437 Attached to 0000:00:13.0
00:08:00.437   Namespace ID: 1 size: 1GB
00:08:00.437 Attached to 0000:00:10.0
00:08:00.437   Namespace ID: 1 size: 6GB
00:08:00.437 Attached to 0000:00:11.0
00:08:00.437   Namespace ID: 1 size: 5GB
00:08:00.437 Attached to 0000:00:12.0
00:08:00.437   Namespace ID: 1 size: 4GB
00:08:00.437   Namespace ID: 2 size: 4GB
00:08:00.437   Namespace ID: 3 size: 4GB
00:08:00.437 Initialization complete.
00:08:00.437 INFO: using host memory buffer for IO
00:08:00.437 Hello world!
00:08:00.437 INFO: using host memory buffer for IO
00:08:00.437 Hello world!
00:08:00.437 INFO: using host memory buffer for IO
00:08:00.437 Hello world!
00:08:00.437 INFO: using host memory buffer for IO
00:08:00.437 Hello world!
00:08:00.437 INFO: using host memory buffer for IO
00:08:00.437 Hello world!
00:08:00.437 INFO: using host memory buffer for IO
00:08:00.437 Hello world!
00:08:00.698 ************************************
00:08:00.698 END TEST nvme_hello_world
00:08:00.698 ************************************
00:08:00.698
00:08:00.698 real    0m0.210s
00:08:00.698 user    0m0.075s
00:08:00.698 sys     0m0.087s
00:08:00.698 18:58:18 nvme.nvme_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable
00:08:00.698 18:58:18 nvme.nvme_hello_world -- common/autotest_common.sh@10 -- # set +x
00:08:00.698 18:58:18 nvme -- nvme/nvme.sh@88 -- # run_test nvme_sgl /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl
00:08:00.698 18:58:18 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:08:00.698 18:58:18 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:08:00.698 18:58:18 nvme -- common/autotest_common.sh@10 -- # set +x
00:08:00.698 ************************************
00:08:00.698 START TEST nvme_sgl
00:08:00.698 ************************************
00:08:00.698 18:58:18 nvme.nvme_sgl -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl
00:08:00.698 0000:00:13.0: build_io_request_0 Invalid IO length parameter
00:08:00.698 0000:00:13.0: build_io_request_1 Invalid IO length parameter
00:08:00.698 0000:00:13.0: build_io_request_2 Invalid IO length parameter
00:08:00.698 0000:00:13.0: build_io_request_3 Invalid IO length parameter
00:08:00.698 0000:00:13.0: build_io_request_4 Invalid IO length parameter
00:08:00.698 0000:00:13.0: build_io_request_5 Invalid IO length parameter
00:08:00.698 0000:00:13.0: build_io_request_6 Invalid IO length parameter
00:08:00.698 0000:00:13.0: build_io_request_7 Invalid IO length parameter
00:08:00.698 0000:00:13.0: build_io_request_8 Invalid IO length parameter
00:08:00.698 0000:00:13.0: build_io_request_9 Invalid IO length parameter
00:08:00.698 0000:00:13.0: build_io_request_10 Invalid IO length parameter
00:08:00.698 0000:00:13.0: build_io_request_11 Invalid IO length parameter
00:08:00.698 0000:00:10.0: build_io_request_0 Invalid IO length parameter
00:08:00.698 0000:00:10.0: build_io_request_1 Invalid IO length parameter
00:08:00.698 0000:00:10.0: build_io_request_3 Invalid IO length parameter
00:08:00.698 0000:00:10.0: build_io_request_8 Invalid IO length parameter
00:08:00.698 0000:00:10.0: build_io_request_9 Invalid IO length parameter
00:08:00.698 0000:00:10.0: build_io_request_11 Invalid IO length parameter
00:08:00.698 0000:00:11.0: build_io_request_0 Invalid IO length parameter
00:08:00.698 0000:00:11.0: build_io_request_1 Invalid IO length parameter
00:08:00.960 0000:00:11.0: build_io_request_3 Invalid IO length parameter
00:08:00.960 0000:00:11.0: build_io_request_8 Invalid IO length parameter
00:08:00.960 0000:00:11.0: build_io_request_9 Invalid IO length parameter
00:08:00.960 0000:00:11.0: build_io_request_11 Invalid IO length parameter
00:08:00.960 0000:00:12.0: build_io_request_0 Invalid IO length parameter
00:08:00.960 0000:00:12.0: build_io_request_1 Invalid IO length parameter
00:08:00.960 0000:00:12.0: build_io_request_2 Invalid IO length parameter
00:08:00.960 0000:00:12.0: build_io_request_3 Invalid IO length parameter
00:08:00.960 0000:00:12.0: build_io_request_4 Invalid IO length parameter
00:08:00.960 0000:00:12.0: build_io_request_5 Invalid IO length parameter
00:08:00.960 0000:00:12.0: build_io_request_6 Invalid IO length parameter
00:08:00.960 0000:00:12.0: build_io_request_7 Invalid IO length parameter
00:08:00.960 0000:00:12.0: build_io_request_8 Invalid IO length parameter
00:08:00.960 0000:00:12.0: build_io_request_9 Invalid IO length parameter
00:08:00.960 0000:00:12.0: build_io_request_10 Invalid IO length parameter 00:08:00.960 0000:00:12.0: build_io_request_11 Invalid IO length parameter 00:08:00.960 NVMe Readv/Writev Request test 00:08:00.960 Attached to 0000:00:13.0 00:08:00.960 Attached to 0000:00:10.0 00:08:00.960 Attached to 0000:00:11.0 00:08:00.960 Attached to 0000:00:12.0 00:08:00.960 0000:00:10.0: build_io_request_2 test passed 00:08:00.960 0000:00:10.0: build_io_request_4 test passed 00:08:00.960 0000:00:10.0: build_io_request_5 test passed 00:08:00.960 0000:00:10.0: build_io_request_6 test passed 00:08:00.960 0000:00:10.0: build_io_request_7 test passed 00:08:00.960 0000:00:10.0: build_io_request_10 test passed 00:08:00.960 0000:00:11.0: build_io_request_2 test passed 00:08:00.960 0000:00:11.0: build_io_request_4 test passed 00:08:00.960 0000:00:11.0: build_io_request_5 test passed 00:08:00.960 0000:00:11.0: build_io_request_6 test passed 00:08:00.960 0000:00:11.0: build_io_request_7 test passed 00:08:00.960 0000:00:11.0: build_io_request_10 test passed 00:08:00.960 Cleaning up... 00:08:00.960 00:08:00.960 real 0m0.250s 00:08:00.960 user 0m0.130s 00:08:00.960 sys 0m0.073s 00:08:00.960 18:58:18 nvme.nvme_sgl -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:00.960 18:58:18 nvme.nvme_sgl -- common/autotest_common.sh@10 -- # set +x 00:08:00.960 ************************************ 00:08:00.960 END TEST nvme_sgl 00:08:00.960 ************************************ 00:08:00.960 18:58:18 nvme -- nvme/nvme.sh@89 -- # run_test nvme_e2edp /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp 00:08:00.960 18:58:18 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:00.960 18:58:18 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:00.960 18:58:18 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:00.960 ************************************ 00:08:00.960 START TEST nvme_e2edp 00:08:00.960 ************************************ 00:08:00.960 18:58:18 nvme.nvme_e2edp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp 00:08:01.220 NVMe Write/Read with End-to-End data protection test 00:08:01.220 Attached to 0000:00:13.0 00:08:01.220 Attached to 0000:00:10.0 00:08:01.220 Attached to 0000:00:11.0 00:08:01.220 Attached to 0000:00:12.0 00:08:01.220 Cleaning up... 
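The sgl test above exercises spdk_nvme_ns_cmd_readv()/writev(), which describe the data buffer through two callbacks rather than a flat pointer; the "Invalid IO length parameter" lines are requests deliberately built so the summed SGE lengths cannot match the I/O size, and the library is expected to refuse them. A sketch of the callback shape, assuming the public vectored-I/O API; the iovec bookkeeping here is illustrative, not the test's own.

    #include <sys/uio.h>
    #include "spdk/nvme.h"

    struct sgl_ctx {
        struct iovec *iov;   /* caller-provided SGE list */
        int iovcnt;
        int idx;
        size_t offset;
    };

    /* called when the driver (re)starts walking the SGL at a byte offset */
    static void
    reset_sgl(void *arg, uint32_t offset)
    {
        struct sgl_ctx *c = arg;

        c->idx = 0;
        while (offset >= c->iov[c->idx].iov_len) {
            offset -= c->iov[c->idx++].iov_len;
        }
        c->offset = offset;
    }

    /* called to fetch the next SGE; if the total length cannot equal the
     * I/O size, submission fails with "Invalid IO length parameter" */
    static int
    next_sge(void *arg, void **address, uint32_t *length)
    {
        struct sgl_ctx *c = arg;

        *address = (char *)c->iov[c->idx].iov_base + c->offset;
        *length = c->iov[c->idx].iov_len - c->offset;
        c->offset = 0;
        c->idx++;
        return 0;
    }

    static void
    io_done(void *arg, const struct spdk_nvme_cpl *cpl)
    {
        /* completion status check elided */
    }

    /* submit a vectored read of lba_count blocks starting at lba */
    static int
    submit_readv(struct spdk_nvme_ns *ns, struct spdk_nvme_qpair *qp,
                 struct sgl_ctx *c, uint64_t lba, uint32_t lba_count)
    {
        return spdk_nvme_ns_cmd_readv(ns, qp, lba, lba_count,
                                      io_done, c, 0, reset_sgl, next_sge);
    }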
00:08:01.220 00:08:01.220 real 0m0.199s 00:08:01.220 user 0m0.071s 00:08:01.220 sys 0m0.080s 00:08:01.220 18:58:18 nvme.nvme_e2edp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:01.220 18:58:18 nvme.nvme_e2edp -- common/autotest_common.sh@10 -- # set +x 00:08:01.220 ************************************ 00:08:01.220 END TEST nvme_e2edp 00:08:01.220 ************************************ 00:08:01.220 18:58:18 nvme -- nvme/nvme.sh@90 -- # run_test nvme_reserve /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve 00:08:01.220 18:58:18 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:01.220 18:58:18 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:01.220 18:58:18 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:01.220 ************************************ 00:08:01.220 START TEST nvme_reserve 00:08:01.220 ************************************ 00:08:01.220 18:58:18 nvme.nvme_reserve -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve 00:08:01.482 ===================================================== 00:08:01.482 NVMe Controller at PCI bus 0, device 19, function 0 00:08:01.482 ===================================================== 00:08:01.482 Reservations: Not Supported 00:08:01.482 ===================================================== 00:08:01.482 NVMe Controller at PCI bus 0, device 16, function 0 00:08:01.482 ===================================================== 00:08:01.482 Reservations: Not Supported 00:08:01.482 ===================================================== 00:08:01.482 NVMe Controller at PCI bus 0, device 17, function 0 00:08:01.482 ===================================================== 00:08:01.482 Reservations: Not Supported 00:08:01.482 ===================================================== 00:08:01.482 NVMe Controller at PCI bus 0, device 18, function 0 00:08:01.482 ===================================================== 00:08:01.482 Reservations: Not Supported 00:08:01.482 Reservation test passed 00:08:01.482 ************************************ 00:08:01.482 END TEST nvme_reserve 00:08:01.482 ************************************ 00:08:01.482 00:08:01.482 real 0m0.181s 00:08:01.482 user 0m0.066s 00:08:01.482 sys 0m0.075s 00:08:01.482 18:58:18 nvme.nvme_reserve -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:01.482 18:58:18 nvme.nvme_reserve -- common/autotest_common.sh@10 -- # set +x 00:08:01.482 18:58:18 nvme -- nvme/nvme.sh@91 -- # run_test nvme_err_injection /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection 00:08:01.482 18:58:18 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:01.482 18:58:18 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:01.482 18:58:18 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:01.482 ************************************ 00:08:01.482 START TEST nvme_err_injection 00:08:01.482 ************************************ 00:08:01.482 18:58:18 nvme.nvme_err_injection -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection 00:08:01.744 NVMe Error Injection test 00:08:01.744 Attached to 0000:00:13.0 00:08:01.744 Attached to 0000:00:10.0 00:08:01.744 Attached to 0000:00:11.0 00:08:01.744 Attached to 0000:00:12.0 00:08:01.744 0000:00:12.0: get features failed as expected 00:08:01.744 0000:00:13.0: get features failed as expected 00:08:01.744 0000:00:10.0: get features failed as expected 00:08:01.744 0000:00:11.0: get features failed as expected 00:08:01.744 
0000:00:11.0: get features successfully as expected 00:08:01.744 0000:00:12.0: get features successfully as expected 00:08:01.744 0000:00:13.0: get features successfully as expected 00:08:01.744 0000:00:10.0: get features successfully as expected 00:08:01.744 0000:00:12.0: read failed as expected 00:08:01.744 0000:00:13.0: read failed as expected 00:08:01.744 0000:00:10.0: read failed as expected 00:08:01.744 0000:00:11.0: read failed as expected 00:08:01.744 0000:00:12.0: read successfully as expected 00:08:01.744 0000:00:13.0: read successfully as expected 00:08:01.744 0000:00:10.0: read successfully as expected 00:08:01.744 0000:00:11.0: read successfully as expected 00:08:01.744 Cleaning up... 00:08:01.744 ************************************ 00:08:01.744 END TEST nvme_err_injection 00:08:01.744 ************************************ 00:08:01.744 00:08:01.744 real 0m0.210s 00:08:01.744 user 0m0.064s 00:08:01.744 sys 0m0.098s 00:08:01.744 18:58:19 nvme.nvme_err_injection -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:01.744 18:58:19 nvme.nvme_err_injection -- common/autotest_common.sh@10 -- # set +x 00:08:01.744 18:58:19 nvme -- nvme/nvme.sh@92 -- # run_test nvme_overhead /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:08:01.744 18:58:19 nvme -- common/autotest_common.sh@1105 -- # '[' 9 -le 1 ']' 00:08:01.744 18:58:19 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:01.744 18:58:19 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:01.744 ************************************ 00:08:01.744 START TEST nvme_overhead 00:08:01.744 ************************************ 00:08:01.744 18:58:19 nvme.nvme_overhead -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:08:03.127 Initializing NVMe Controllers 00:08:03.127 Attached to 0000:00:13.0 00:08:03.127 Attached to 0000:00:10.0 00:08:03.127 Attached to 0000:00:11.0 00:08:03.127 Attached to 0000:00:12.0 00:08:03.127 Initialization complete. Launching workers. 
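The "failed as expected" lines in the error-injection pass above come from SPDK's software error injection, which arms a chosen opcode to complete with a chosen status for a fixed number of commands, then behaves normally again. A sketch, assuming the spdk_nvme_qpair_add_cmd_error_injection() interface and its companion remove call; the opcode and status mirror the GET FEATURES case in the log, and the do_not_submit semantics are my reading of the API, not taken from the test source.

    #include "spdk/nvme.h"

    /* make the next 'count' GET FEATURES admin commands fail with
     * Invalid Field in Command, mirroring "get features failed as expected" */
    static int
    inject_get_features_failure(struct spdk_nvme_ctrlr *ctrlr, uint32_t count)
    {
        /* a NULL qpair selects the admin queue for admin opcodes */
        return spdk_nvme_qpair_add_cmd_error_injection(ctrlr, NULL,
                SPDK_NVME_OPC_GET_FEATURES,
                true /* complete with error without reaching the device */,
                0 /* no injected timeout */, count,
                SPDK_NVME_SCT_GENERIC, SPDK_NVME_SC_INVALID_FIELD);
    }

    /* restore normal behavior, after which "get features successfully
     * as expected" style commands go through again */
    static void
    clear_injection(struct spdk_nvme_ctrlr *ctrlr)
    {
        spdk_nvme_qpair_remove_cmd_error_injection(ctrlr, NULL,
                SPDK_NVME_OPC_GET_FEATURES);
    }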
00:08:03.127 submit (in ns) avg, min, max = 13125.3, 10550.0, 77507.7 00:08:03.127 complete (in ns) avg, min, max = 8130.1, 7233.1, 103247.7 00:08:03.127 00:08:03.127 Submit histogram 00:08:03.127 ================ 00:08:03.127 Range in us Cumulative Count 00:08:03.127 10.535 - 10.585: 0.0298% ( 1) 00:08:03.127 10.683 - 10.732: 0.0596% ( 1) 00:08:03.127 10.782 - 10.831: 0.1193% ( 2) 00:08:03.127 10.831 - 10.880: 0.1491% ( 1) 00:08:03.127 10.880 - 10.929: 0.1789% ( 1) 00:08:03.127 10.929 - 10.978: 0.3877% ( 7) 00:08:03.127 10.978 - 11.028: 1.1333% ( 25) 00:08:03.127 11.028 - 11.077: 2.6245% ( 50) 00:08:03.127 11.077 - 11.126: 5.6069% ( 100) 00:08:03.127 11.126 - 11.175: 9.5735% ( 133) 00:08:03.127 11.175 - 11.225: 14.4050% ( 162) 00:08:03.127 11.225 - 11.274: 18.6400% ( 142) 00:08:03.127 11.274 - 11.323: 21.9803% ( 112) 00:08:03.127 11.323 - 11.372: 24.5750% ( 87) 00:08:03.127 11.372 - 11.422: 27.2890% ( 91) 00:08:03.127 11.422 - 11.471: 29.7644% ( 83) 00:08:03.127 11.471 - 11.520: 31.7328% ( 66) 00:08:03.127 11.520 - 11.569: 33.7310% ( 67) 00:08:03.127 11.569 - 11.618: 35.1625% ( 48) 00:08:03.127 11.618 - 11.668: 37.1906% ( 68) 00:08:03.127 11.668 - 11.717: 38.6520% ( 49) 00:08:03.127 11.717 - 11.766: 39.9940% ( 45) 00:08:03.127 11.766 - 11.815: 40.9186% ( 31) 00:08:03.127 11.815 - 11.865: 41.9326% ( 34) 00:08:03.127 11.865 - 11.914: 43.2150% ( 43) 00:08:03.127 11.914 - 11.963: 44.4080% ( 40) 00:08:03.127 11.963 - 12.012: 45.2431% ( 28) 00:08:03.127 12.012 - 12.062: 46.1080% ( 29) 00:08:03.127 12.062 - 12.111: 46.9729% ( 29) 00:08:03.127 12.111 - 12.160: 47.8378% ( 29) 00:08:03.127 12.160 - 12.209: 48.3746% ( 18) 00:08:03.127 12.209 - 12.258: 48.8518% ( 16) 00:08:03.127 12.258 - 12.308: 49.3886% ( 18) 00:08:03.127 12.308 - 12.357: 49.9254% ( 18) 00:08:03.127 12.357 - 12.406: 50.5816% ( 22) 00:08:03.127 12.406 - 12.455: 50.8798% ( 10) 00:08:03.127 12.455 - 12.505: 51.2079% ( 11) 00:08:03.127 12.505 - 12.554: 51.6254% ( 14) 00:08:03.127 12.554 - 12.603: 51.9535% ( 11) 00:08:03.127 12.603 - 12.702: 52.5201% ( 19) 00:08:03.127 12.702 - 12.800: 53.5043% ( 33) 00:08:03.127 12.800 - 12.898: 54.2499% ( 25) 00:08:03.127 12.898 - 12.997: 54.9657% ( 24) 00:08:03.127 12.997 - 13.095: 55.4727% ( 17) 00:08:03.127 13.095 - 13.194: 56.0394% ( 19) 00:08:03.127 13.194 - 13.292: 56.5464% ( 17) 00:08:03.127 13.292 - 13.391: 57.4709% ( 31) 00:08:03.127 13.391 - 13.489: 59.8867% ( 81) 00:08:03.127 13.489 - 13.588: 63.3463% ( 116) 00:08:03.127 13.588 - 13.686: 66.8655% ( 118) 00:08:03.127 13.686 - 13.785: 72.0251% ( 173) 00:08:03.127 13.785 - 13.883: 76.3794% ( 146) 00:08:03.127 13.883 - 13.982: 80.1074% ( 125) 00:08:03.127 13.982 - 14.080: 83.3284% ( 108) 00:08:03.127 14.080 - 14.178: 85.3564% ( 68) 00:08:03.127 14.178 - 14.277: 87.0862% ( 58) 00:08:03.127 14.277 - 14.375: 87.8914% ( 27) 00:08:03.127 14.375 - 14.474: 88.2792% ( 13) 00:08:03.127 14.474 - 14.572: 88.7265% ( 15) 00:08:03.127 14.572 - 14.671: 88.9055% ( 6) 00:08:03.127 14.671 - 14.769: 89.1441% ( 8) 00:08:03.127 14.769 - 14.868: 89.3826% ( 8) 00:08:03.127 14.868 - 14.966: 89.7405% ( 12) 00:08:03.127 14.966 - 15.065: 89.9493% ( 7) 00:08:03.127 15.065 - 15.163: 90.2177% ( 9) 00:08:03.127 15.163 - 15.262: 90.4265% ( 7) 00:08:03.127 15.262 - 15.360: 90.6353% ( 7) 00:08:03.127 15.360 - 15.458: 90.7844% ( 5) 00:08:03.127 15.458 - 15.557: 90.9931% ( 7) 00:08:03.127 15.557 - 15.655: 91.2019% ( 7) 00:08:03.127 15.655 - 15.754: 91.5001% ( 10) 00:08:03.127 15.754 - 15.852: 91.6194% ( 4) 00:08:03.127 15.852 - 15.951: 91.7686% ( 5) 00:08:03.127 15.951 - 16.049: 
91.9475% ( 6) 00:08:03.127 16.049 - 16.148: 92.0668% ( 4) 00:08:03.127 16.148 - 16.246: 92.3054% ( 8) 00:08:03.127 16.246 - 16.345: 92.4843% ( 6) 00:08:03.127 16.345 - 16.443: 92.7528% ( 9) 00:08:03.127 16.443 - 16.542: 92.9317% ( 6) 00:08:03.127 16.542 - 16.640: 93.2299% ( 10) 00:08:03.127 16.640 - 16.738: 93.4387% ( 7) 00:08:03.127 16.738 - 16.837: 93.5580% ( 4) 00:08:03.127 16.837 - 16.935: 93.6475% ( 3) 00:08:03.127 16.935 - 17.034: 93.7370% ( 3) 00:08:03.127 17.034 - 17.132: 93.8562% ( 4) 00:08:03.127 17.132 - 17.231: 93.9159% ( 2) 00:08:03.127 17.231 - 17.329: 94.1545% ( 8) 00:08:03.127 17.329 - 17.428: 94.3633% ( 7) 00:08:03.127 17.428 - 17.526: 94.5422% ( 6) 00:08:03.127 17.526 - 17.625: 94.7211% ( 6) 00:08:03.127 17.625 - 17.723: 95.0194% ( 10) 00:08:03.127 17.723 - 17.822: 95.2580% ( 8) 00:08:03.127 17.822 - 17.920: 95.4071% ( 5) 00:08:03.127 17.920 - 18.018: 95.5264% ( 4) 00:08:03.127 18.018 - 18.117: 95.7053% ( 6) 00:08:03.127 18.117 - 18.215: 95.8843% ( 6) 00:08:03.127 18.215 - 18.314: 96.1229% ( 8) 00:08:03.127 18.314 - 18.412: 96.2123% ( 3) 00:08:03.127 18.412 - 18.511: 96.3316% ( 4) 00:08:03.127 18.511 - 18.609: 96.4211% ( 3) 00:08:03.127 18.609 - 18.708: 96.5404% ( 4) 00:08:03.127 18.708 - 18.806: 96.5702% ( 1) 00:08:03.127 18.806 - 18.905: 96.6597% ( 3) 00:08:03.127 18.905 - 19.003: 96.7492% ( 3) 00:08:03.127 19.003 - 19.102: 96.7790% ( 1) 00:08:03.127 19.102 - 19.200: 96.8088% ( 1) 00:08:03.127 19.200 - 19.298: 96.8983% ( 3) 00:08:03.127 19.298 - 19.397: 96.9579% ( 2) 00:08:03.127 19.397 - 19.495: 97.0772% ( 4) 00:08:03.127 19.495 - 19.594: 97.1667% ( 3) 00:08:03.127 19.594 - 19.692: 97.2860% ( 4) 00:08:03.127 19.692 - 19.791: 97.4053% ( 4) 00:08:03.127 19.791 - 19.889: 97.4351% ( 1) 00:08:03.127 19.889 - 19.988: 97.5843% ( 5) 00:08:03.127 19.988 - 20.086: 97.7930% ( 7) 00:08:03.127 20.086 - 20.185: 97.8527% ( 2) 00:08:03.127 20.185 - 20.283: 97.9421% ( 3) 00:08:03.127 20.283 - 20.382: 97.9720% ( 1) 00:08:03.127 20.382 - 20.480: 98.0316% ( 2) 00:08:03.127 20.480 - 20.578: 98.0614% ( 1) 00:08:03.127 20.578 - 20.677: 98.2106% ( 5) 00:08:03.127 20.677 - 20.775: 98.2702% ( 2) 00:08:03.127 20.775 - 20.874: 98.3895% ( 4) 00:08:03.127 20.874 - 20.972: 98.5088% ( 4) 00:08:03.127 20.972 - 21.071: 98.5983% ( 3) 00:08:03.127 21.071 - 21.169: 98.6877% ( 3) 00:08:03.127 21.169 - 21.268: 98.7474% ( 2) 00:08:03.127 21.268 - 21.366: 98.7772% ( 1) 00:08:03.127 21.366 - 21.465: 98.8667% ( 3) 00:08:03.127 21.465 - 21.563: 98.8965% ( 1) 00:08:03.127 21.563 - 21.662: 98.9263% ( 1) 00:08:03.127 21.760 - 21.858: 98.9562% ( 1) 00:08:03.127 21.858 - 21.957: 98.9860% ( 1) 00:08:03.127 21.957 - 22.055: 99.0158% ( 1) 00:08:03.127 22.154 - 22.252: 99.0755% ( 2) 00:08:03.127 22.449 - 22.548: 99.1351% ( 2) 00:08:03.127 22.843 - 22.942: 99.1948% ( 2) 00:08:03.127 23.532 - 23.631: 99.2544% ( 2) 00:08:03.127 24.418 - 24.517: 99.2842% ( 1) 00:08:03.127 24.517 - 24.615: 99.3439% ( 2) 00:08:03.127 24.615 - 24.714: 99.3737% ( 1) 00:08:03.127 24.812 - 24.911: 99.4035% ( 1) 00:08:03.127 26.388 - 26.585: 99.4333% ( 1) 00:08:03.127 26.585 - 26.782: 99.4632% ( 1) 00:08:03.127 26.782 - 26.978: 99.4930% ( 1) 00:08:03.127 27.766 - 27.963: 99.5228% ( 1) 00:08:03.127 29.538 - 29.735: 99.5526% ( 1) 00:08:03.127 30.129 - 30.326: 99.5825% ( 1) 00:08:03.127 30.326 - 30.523: 99.6421% ( 2) 00:08:03.128 30.523 - 30.720: 99.6719% ( 1) 00:08:03.128 30.720 - 30.917: 99.7018% ( 1) 00:08:03.128 32.098 - 32.295: 99.7316% ( 1) 00:08:03.128 32.295 - 32.492: 99.7614% ( 1) 00:08:03.128 33.477 - 33.674: 99.7912% ( 1) 00:08:03.128 
37.022 - 37.218: 99.8211% ( 1) 00:08:03.128 38.400 - 38.597: 99.8509% ( 1) 00:08:03.128 39.188 - 39.385: 99.8807% ( 1) 00:08:03.128 43.717 - 43.914: 99.9105% ( 1) 00:08:03.128 66.560 - 66.954: 99.9404% ( 1) 00:08:03.128 71.286 - 71.680: 99.9702% ( 1) 00:08:03.128 77.194 - 77.588: 100.0000% ( 1) 00:08:03.128 00:08:03.128 Complete histogram 00:08:03.128 ================== 00:08:03.128 Range in us Cumulative Count 00:08:03.128 7.188 - 7.237: 0.0596% ( 2) 00:08:03.128 7.237 - 7.286: 0.5965% ( 18) 00:08:03.128 7.286 - 7.335: 2.4456% ( 62) 00:08:03.128 7.335 - 7.385: 6.2929% ( 129) 00:08:03.128 7.385 - 7.434: 11.6612% ( 180) 00:08:03.128 7.434 - 7.483: 18.7295% ( 237) 00:08:03.128 7.483 - 7.532: 26.2452% ( 252) 00:08:03.128 7.532 - 7.582: 31.4942% ( 176) 00:08:03.128 7.582 - 7.631: 36.7730% ( 177) 00:08:03.128 7.631 - 7.680: 41.2466% ( 150) 00:08:03.128 7.680 - 7.729: 45.3922% ( 139) 00:08:03.128 7.729 - 7.778: 48.7623% ( 113) 00:08:03.128 7.778 - 7.828: 51.2377% ( 83) 00:08:03.128 7.828 - 7.877: 53.5341% ( 77) 00:08:03.128 7.877 - 7.926: 55.2341% ( 57) 00:08:03.128 7.926 - 7.975: 56.5762% ( 45) 00:08:03.128 7.975 - 8.025: 57.5604% ( 33) 00:08:03.128 8.025 - 8.074: 58.7832% ( 41) 00:08:03.128 8.074 - 8.123: 60.2446% ( 49) 00:08:03.128 8.123 - 8.172: 62.4814% ( 75) 00:08:03.128 8.172 - 8.222: 65.7620% ( 110) 00:08:03.128 8.222 - 8.271: 69.6391% ( 130) 00:08:03.128 8.271 - 8.320: 74.5004% ( 163) 00:08:03.128 8.320 - 8.369: 79.0039% ( 151) 00:08:03.128 8.369 - 8.418: 82.9108% ( 131) 00:08:03.128 8.418 - 8.468: 86.3406% ( 115) 00:08:03.128 8.468 - 8.517: 88.9949% ( 89) 00:08:03.128 8.517 - 8.566: 90.6054% ( 54) 00:08:03.128 8.566 - 8.615: 91.7686% ( 39) 00:08:03.128 8.615 - 8.665: 93.0808% ( 44) 00:08:03.128 8.665 - 8.714: 93.8562% ( 26) 00:08:03.128 8.714 - 8.763: 94.6018% ( 25) 00:08:03.128 8.763 - 8.812: 95.1685% ( 19) 00:08:03.128 8.812 - 8.862: 95.6457% ( 16) 00:08:03.128 8.862 - 8.911: 96.0632% ( 14) 00:08:03.128 8.911 - 8.960: 96.3615% ( 10) 00:08:03.128 8.960 - 9.009: 96.4808% ( 4) 00:08:03.128 9.009 - 9.058: 96.6597% ( 6) 00:08:03.128 9.058 - 9.108: 96.7194% ( 2) 00:08:03.128 9.108 - 9.157: 96.8685% ( 5) 00:08:03.128 9.157 - 9.206: 96.9878% ( 4) 00:08:03.128 9.206 - 9.255: 97.1071% ( 4) 00:08:03.128 9.255 - 9.305: 97.1369% ( 1) 00:08:03.128 9.305 - 9.354: 97.2264% ( 3) 00:08:03.128 9.354 - 9.403: 97.2562% ( 1) 00:08:03.128 9.403 - 9.452: 97.3158% ( 2) 00:08:03.128 9.452 - 9.502: 97.4053% ( 3) 00:08:03.128 9.502 - 9.551: 97.4650% ( 2) 00:08:03.128 9.551 - 9.600: 97.4948% ( 1) 00:08:03.128 9.600 - 9.649: 97.5246% ( 1) 00:08:03.128 9.649 - 9.698: 97.5843% ( 2) 00:08:03.128 9.698 - 9.748: 97.6737% ( 3) 00:08:03.128 9.748 - 9.797: 97.7334% ( 2) 00:08:03.128 9.846 - 9.895: 97.7632% ( 1) 00:08:03.128 9.945 - 9.994: 97.8228% ( 2) 00:08:03.128 10.043 - 10.092: 97.8527% ( 1) 00:08:03.128 10.092 - 10.142: 97.8825% ( 1) 00:08:03.128 10.142 - 10.191: 97.9421% ( 2) 00:08:03.128 10.191 - 10.240: 98.0018% ( 2) 00:08:03.128 10.486 - 10.535: 98.0316% ( 1) 00:08:03.128 10.683 - 10.732: 98.0614% ( 1) 00:08:03.128 10.929 - 10.978: 98.0913% ( 1) 00:08:03.128 11.077 - 11.126: 98.1211% ( 1) 00:08:03.128 11.175 - 11.225: 98.1509% ( 1) 00:08:03.128 11.717 - 11.766: 98.1807% ( 1) 00:08:03.128 12.554 - 12.603: 98.2106% ( 1) 00:08:03.128 12.800 - 12.898: 98.2404% ( 1) 00:08:03.128 12.997 - 13.095: 98.2702% ( 1) 00:08:03.128 13.194 - 13.292: 98.3895% ( 4) 00:08:03.128 13.391 - 13.489: 98.4492% ( 2) 00:08:03.128 13.588 - 13.686: 98.5088% ( 2) 00:08:03.128 13.686 - 13.785: 98.5983% ( 3) 00:08:03.128 13.785 - 13.883: 
98.6579% ( 2) 00:08:03.128 13.883 - 13.982: 98.6877% ( 1) 00:08:03.128 13.982 - 14.080: 98.7176% ( 1) 00:08:03.128 14.080 - 14.178: 98.7772% ( 2) 00:08:03.128 14.178 - 14.277: 98.9263% ( 5) 00:08:03.128 14.277 - 14.375: 98.9860% ( 2) 00:08:03.128 14.474 - 14.572: 99.0456% ( 2) 00:08:03.128 14.671 - 14.769: 99.0755% ( 1) 00:08:03.128 14.769 - 14.868: 99.1053% ( 1) 00:08:03.128 14.966 - 15.065: 99.1351% ( 1) 00:08:03.128 15.360 - 15.458: 99.1649% ( 1) 00:08:03.128 15.458 - 15.557: 99.2246% ( 2) 00:08:03.128 15.557 - 15.655: 99.2842% ( 2) 00:08:03.128 15.655 - 15.754: 99.3439% ( 2) 00:08:03.128 15.754 - 15.852: 99.4035% ( 2) 00:08:03.128 15.852 - 15.951: 99.4333% ( 1) 00:08:03.128 15.951 - 16.049: 99.4632% ( 1) 00:08:03.128 16.049 - 16.148: 99.5228% ( 2) 00:08:03.128 16.443 - 16.542: 99.5526% ( 1) 00:08:03.128 17.034 - 17.132: 99.5825% ( 1) 00:08:03.128 17.132 - 17.231: 99.6123% ( 1) 00:08:03.128 17.822 - 17.920: 99.6421% ( 1) 00:08:03.128 18.314 - 18.412: 99.6719% ( 1) 00:08:03.128 21.268 - 21.366: 99.7018% ( 1) 00:08:03.128 22.055 - 22.154: 99.7614% ( 2) 00:08:03.128 23.237 - 23.335: 99.7912% ( 1) 00:08:03.128 23.434 - 23.532: 99.8211% ( 1) 00:08:03.128 24.911 - 25.009: 99.8509% ( 1) 00:08:03.128 25.600 - 25.797: 99.8807% ( 1) 00:08:03.128 32.295 - 32.492: 99.9105% ( 1) 00:08:03.128 35.446 - 35.643: 99.9404% ( 1) 00:08:03.128 76.012 - 76.406: 99.9702% ( 1) 00:08:03.128 103.188 - 103.975: 100.0000% ( 1) 00:08:03.128 00:08:03.128 00:08:03.128 real 0m1.205s 00:08:03.128 user 0m1.068s 00:08:03.128 sys 0m0.087s 00:08:03.128 18:58:20 nvme.nvme_overhead -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:03.128 18:58:20 nvme.nvme_overhead -- common/autotest_common.sh@10 -- # set +x 00:08:03.128 ************************************ 00:08:03.128 END TEST nvme_overhead 00:08:03.128 ************************************ 00:08:03.128 18:58:20 nvme -- nvme/nvme.sh@93 -- # run_test nvme_arbitration /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:08:03.128 18:58:20 nvme -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:08:03.128 18:58:20 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:03.128 18:58:20 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:03.128 ************************************ 00:08:03.128 START TEST nvme_arbitration 00:08:03.128 ************************************ 00:08:03.128 18:58:20 nvme.nvme_arbitration -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:08:06.425 Initializing NVMe Controllers 00:08:06.425 Attached to 0000:00:13.0 00:08:06.425 Attached to 0000:00:10.0 00:08:06.425 Attached to 0000:00:11.0 00:08:06.425 Attached to 0000:00:12.0 00:08:06.425 Associating QEMU NVMe Ctrl (12343 ) with lcore 0 00:08:06.426 Associating QEMU NVMe Ctrl (12340 ) with lcore 1 00:08:06.426 Associating QEMU NVMe Ctrl (12341 ) with lcore 2 00:08:06.426 Associating QEMU NVMe Ctrl (12342 ) with lcore 3 00:08:06.426 Associating QEMU NVMe Ctrl (12342 ) with lcore 0 00:08:06.426 Associating QEMU NVMe Ctrl (12342 ) with lcore 1 00:08:06.426 /home/vagrant/spdk_repo/spdk/build/examples/arbitration run with configuration: 00:08:06.426 /home/vagrant/spdk_repo/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i 0 00:08:06.426 Initialization complete. Launching workers. 
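The per-core "urgent priority queue" threads reported below rely on weighted-round-robin arbitration: the controller is attached with WRR requested, and each I/O queue pair is then allocated with an explicit priority class. A sketch of the qpair side, assuming the public io_qpair_opts interface; whether WRR is actually honored depends on the controller.

    #include "spdk/nvme.h"

    /* allocate an I/O queue pair with urgent priority, as in the
     * arbitration run below; requires the controller to have been
     * attached with weighted round robin requested, i.e.
     * opts->arb_mechanism = SPDK_NVME_CC_AMS_WRR in the probe callback */
    static struct spdk_nvme_qpair *
    alloc_urgent_qpair(struct spdk_nvme_ctrlr *ctrlr)
    {
        struct spdk_nvme_io_qpair_opts opts;

        spdk_nvme_ctrlr_get_default_io_qpair_opts(ctrlr, &opts, sizeof(opts));
        opts.qprio = SPDK_NVME_QPRIO_URGENT; /* also HIGH, MEDIUM, LOW */
        return spdk_nvme_ctrlr_alloc_io_qpair(ctrlr, &opts, sizeof(opts));
    }

With WRR active, the relative IO/s of the six queues in the table below tracks the priority class each worker requested rather than a fair round-robin share.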
00:08:06.426 Starting thread on core 1 with urgent priority queue 00:08:06.426 Starting thread on core 2 with urgent priority queue 00:08:06.426 Starting thread on core 3 with urgent priority queue 00:08:06.426 Starting thread on core 0 with urgent priority queue 00:08:06.426 QEMU NVMe Ctrl (12343 ) core 0: 6272.00 IO/s 15.94 secs/100000 ios 00:08:06.426 QEMU NVMe Ctrl (12342 ) core 0: 6280.67 IO/s 15.92 secs/100000 ios 00:08:06.426 QEMU NVMe Ctrl (12340 ) core 1: 6165.33 IO/s 16.22 secs/100000 ios 00:08:06.426 QEMU NVMe Ctrl (12342 ) core 1: 6173.00 IO/s 16.20 secs/100000 ios 00:08:06.426 QEMU NVMe Ctrl (12341 ) core 2: 5732.67 IO/s 17.44 secs/100000 ios 00:08:06.426 QEMU NVMe Ctrl (12342 ) core 3: 5576.67 IO/s 17.93 secs/100000 ios 00:08:06.426 ======================================================== 00:08:06.426 00:08:06.426 00:08:06.426 real 0m3.215s 00:08:06.426 user 0m9.042s 00:08:06.426 sys 0m0.099s 00:08:06.426 18:58:23 nvme.nvme_arbitration -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:06.426 ************************************ 00:08:06.426 END TEST nvme_arbitration 00:08:06.426 ************************************ 00:08:06.426 18:58:23 nvme.nvme_arbitration -- common/autotest_common.sh@10 -- # set +x 00:08:06.426 18:58:23 nvme -- nvme/nvme.sh@94 -- # run_test nvme_single_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 00:08:06.426 18:58:23 nvme -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:06.426 18:58:23 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:06.426 18:58:23 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:06.426 ************************************ 00:08:06.426 START TEST nvme_single_aen 00:08:06.426 ************************************ 00:08:06.426 18:58:23 nvme.nvme_single_aen -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 00:08:06.426 Asynchronous Event Request test 00:08:06.426 Attached to 0000:00:13.0 00:08:06.426 Attached to 0000:00:10.0 00:08:06.426 Attached to 0000:00:11.0 00:08:06.426 Attached to 0000:00:12.0 00:08:06.426 Reset controller to setup AER completions for this process 00:08:06.426 Registering asynchronous event callbacks... 
00:08:06.426 Getting orig temperature thresholds of all controllers 00:08:06.426 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:06.426 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:06.426 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:06.426 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:06.426 Setting all controllers temperature threshold low to trigger AER 00:08:06.426 Waiting for all controllers temperature threshold to be set lower 00:08:06.426 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:06.426 aer_cb - Resetting Temp Threshold for device: 0000:00:13.0 00:08:06.426 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:06.426 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:08:06.426 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:06.426 aer_cb - Resetting Temp Threshold for device: 0000:00:11.0 00:08:06.426 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:06.426 aer_cb - Resetting Temp Threshold for device: 0000:00:12.0 00:08:06.426 Waiting for all controllers to trigger AER and reset threshold 00:08:06.426 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:06.426 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:06.426 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:06.426 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:06.426 Cleaning up... 00:08:06.426 ************************************ 00:08:06.426 END TEST nvme_single_aen 00:08:06.426 ************************************ 00:08:06.426 00:08:06.426 real 0m0.179s 00:08:06.426 user 0m0.066s 00:08:06.426 sys 0m0.066s 00:08:06.426 18:58:23 nvme.nvme_single_aen -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:06.426 18:58:23 nvme.nvme_single_aen -- common/autotest_common.sh@10 -- # set +x 00:08:06.757 18:58:23 nvme -- nvme/nvme.sh@95 -- # run_test nvme_doorbell_aers nvme_doorbell_aers 00:08:06.757 18:58:23 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:06.757 18:58:23 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:06.757 18:58:23 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:06.757 ************************************ 00:08:06.757 START TEST nvme_doorbell_aers 00:08:06.757 ************************************ 00:08:06.757 18:58:24 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1129 -- # nvme_doorbell_aers 00:08:06.757 18:58:24 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # bdfs=() 00:08:06.757 18:58:24 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # local bdfs bdf 00:08:06.757 18:58:24 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # bdfs=($(get_nvme_bdfs)) 00:08:06.757 18:58:24 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # get_nvme_bdfs 00:08:06.757 18:58:24 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1498 -- # bdfs=() 00:08:06.757 18:58:24 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1498 -- # local bdfs 00:08:06.757 18:58:24 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:08:06.757 18:58:24 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:08:06.757 18:58:24 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 
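The single-AEN exercise above registers an asynchronous-event callback, then lowers every controller's temperature-threshold feature so the current reading (323 Kelvin) exceeds it and the controller fires the event. A sketch of the two calls involved, assuming the public AER and feature-command APIs; the callback body and the 200 Kelvin threshold are illustrative.

    #include <stdio.h>
    #include "spdk/nvme.h"

    static void
    aer_cb(void *arg, const struct spdk_nvme_cpl *cpl)
    {
        /* cdw0 carries the aen_event_type / aen_event_info decoded in
         * the "aer_cb for log page 2" lines above */
        printf("aer_cb: cdw0=0x%x\n", cpl->cdw0);
    }

    static void
    set_done(void *arg, const struct spdk_nvme_cpl *cpl)
    {
    }

    /* arm an AER by dropping the temperature threshold below the
     * current temperature; the test later restores the original value */
    static int
    trigger_temp_aer(struct spdk_nvme_ctrlr *ctrlr)
    {
        spdk_nvme_ctrlr_register_aer_callback(ctrlr, aer_cb, NULL);
        /* cdw11 holds the threshold in Kelvin; 200 is far below 323 */
        return spdk_nvme_ctrlr_cmd_set_feature(ctrlr,
                SPDK_NVME_FEAT_TEMPERATURE_THRESHOLD,
                200 /* cdw11 */, 0 /* cdw12 */, NULL, 0, set_done, NULL);
    }

The set-feature completion and the AER itself only surface while the admin queue is being polled with spdk_nvme_ctrlr_process_admin_completions(), which is what the test's wait loop does.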
00:08:06.757 18:58:24 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:08:06.757 18:58:24 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:08:06.757 18:58:24 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:08:06.757 18:58:24 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:10.0' 00:08:06.757 [2024-12-05 18:58:24.254203] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 74699) is not found. Dropping the request. 00:08:16.791 Executing: test_write_invalid_db 00:08:16.791 Waiting for AER completion... 00:08:16.791 Failure: test_write_invalid_db 00:08:16.791 00:08:16.791 Executing: test_invalid_db_write_overflow_sq 00:08:16.791 Waiting for AER completion... 00:08:16.791 Failure: test_invalid_db_write_overflow_sq 00:08:16.791 00:08:16.791 Executing: test_invalid_db_write_overflow_cq 00:08:16.791 Waiting for AER completion... 00:08:16.791 Failure: test_invalid_db_write_overflow_cq 00:08:16.791 00:08:16.791 18:58:34 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:08:16.791 18:58:34 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:11.0' 00:08:16.791 [2024-12-05 18:58:34.277589] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 74699) is not found. Dropping the request. 00:08:26.751 Executing: test_write_invalid_db 00:08:26.751 Waiting for AER completion... 00:08:26.751 Failure: test_write_invalid_db 00:08:26.751 00:08:26.751 Executing: test_invalid_db_write_overflow_sq 00:08:26.751 Waiting for AER completion... 00:08:26.751 Failure: test_invalid_db_write_overflow_sq 00:08:26.751 00:08:26.751 Executing: test_invalid_db_write_overflow_cq 00:08:26.751 Waiting for AER completion... 00:08:26.751 Failure: test_invalid_db_write_overflow_cq 00:08:26.751 00:08:26.751 18:58:44 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:08:26.751 18:58:44 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:12.0' 00:08:26.751 [2024-12-05 18:58:44.296846] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 74699) is not found. Dropping the request. 00:08:36.714 Executing: test_write_invalid_db 00:08:36.714 Waiting for AER completion... 00:08:36.714 Failure: test_write_invalid_db 00:08:36.714 00:08:36.714 Executing: test_invalid_db_write_overflow_sq 00:08:36.714 Waiting for AER completion... 00:08:36.714 Failure: test_invalid_db_write_overflow_sq 00:08:36.714 00:08:36.714 Executing: test_invalid_db_write_overflow_cq 00:08:36.714 Waiting for AER completion... 
00:08:36.714 Failure: test_invalid_db_write_overflow_cq 00:08:36.714 00:08:36.714 18:58:54 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:08:36.714 18:58:54 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:13.0' 00:08:36.971 [2024-12-05 18:58:54.331791] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 74699) is not found. Dropping the request. 00:08:46.954 Executing: test_write_invalid_db 00:08:46.954 Waiting for AER completion... 00:08:46.954 Failure: test_write_invalid_db 00:08:46.954 00:08:46.954 Executing: test_invalid_db_write_overflow_sq 00:08:46.954 Waiting for AER completion... 00:08:46.954 Failure: test_invalid_db_write_overflow_sq 00:08:46.954 00:08:46.954 Executing: test_invalid_db_write_overflow_cq 00:08:46.954 Waiting for AER completion... 00:08:46.954 Failure: test_invalid_db_write_overflow_cq 00:08:46.954 00:08:46.954 00:08:46.954 real 0m40.173s 00:08:46.954 user 0m34.180s 00:08:46.954 sys 0m5.612s 00:08:46.954 18:59:04 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:46.954 18:59:04 nvme.nvme_doorbell_aers -- common/autotest_common.sh@10 -- # set +x 00:08:46.954 ************************************ 00:08:46.954 END TEST nvme_doorbell_aers 00:08:46.954 ************************************ 00:08:46.954 18:59:04 nvme -- nvme/nvme.sh@97 -- # uname 00:08:46.954 18:59:04 nvme -- nvme/nvme.sh@97 -- # '[' Linux '!=' FreeBSD ']' 00:08:46.954 18:59:04 nvme -- nvme/nvme.sh@98 -- # run_test nvme_multi_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 00:08:46.954 18:59:04 nvme -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:08:46.954 18:59:04 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:46.954 18:59:04 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:46.954 ************************************ 00:08:46.954 START TEST nvme_multi_aen 00:08:46.954 ************************************ 00:08:46.954 18:59:04 nvme.nvme_multi_aen -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 00:08:46.954 [2024-12-05 18:59:04.382580] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 74699) is not found. Dropping the request. 00:08:46.954 [2024-12-05 18:59:04.382633] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 74699) is not found. Dropping the request. 00:08:46.954 [2024-12-05 18:59:04.382644] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 74699) is not found. Dropping the request. 00:08:46.954 [2024-12-05 18:59:04.383696] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 74699) is not found. Dropping the request. 00:08:46.954 [2024-12-05 18:59:04.383720] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 74699) is not found. Dropping the request. 00:08:46.954 [2024-12-05 18:59:04.383728] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 74699) is not found. Dropping the request. 00:08:46.954 [2024-12-05 18:59:04.384647] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 74699) is not found. 
Dropping the request. 00:08:46.954 [2024-12-05 18:59:04.384671] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 74699) is not found. Dropping the request. 00:08:46.954 [2024-12-05 18:59:04.384679] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 74699) is not found. Dropping the request. 00:08:46.954 [2024-12-05 18:59:04.385723] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 74699) is not found. Dropping the request. 00:08:46.954 [2024-12-05 18:59:04.385819] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 74699) is not found. Dropping the request. 00:08:46.954 [2024-12-05 18:59:04.385886] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 74699) is not found. Dropping the request. 00:08:46.954 Child process pid: 75220 00:08:47.212 [Child] Asynchronous Event Request test 00:08:47.212 [Child] Attached to 0000:00:13.0 00:08:47.212 [Child] Attached to 0000:00:10.0 00:08:47.212 [Child] Attached to 0000:00:11.0 00:08:47.212 [Child] Attached to 0000:00:12.0 00:08:47.212 [Child] Registering asynchronous event callbacks... 00:08:47.212 [Child] Getting orig temperature thresholds of all controllers 00:08:47.212 [Child] 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:47.212 [Child] 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:47.212 [Child] 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:47.212 [Child] 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:47.212 [Child] Waiting for all controllers to trigger AER and reset threshold 00:08:47.212 [Child] 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:47.212 [Child] 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:47.212 [Child] 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:47.212 [Child] 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:47.212 [Child] 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:47.212 [Child] 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:47.212 [Child] 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:47.212 [Child] 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:47.212 [Child] Cleaning up... 00:08:47.212 Asynchronous Event Request test 00:08:47.212 Attached to 0000:00:13.0 00:08:47.212 Attached to 0000:00:10.0 00:08:47.212 Attached to 0000:00:11.0 00:08:47.212 Attached to 0000:00:12.0 00:08:47.212 Reset controller to setup AER completions for this process 00:08:47.212 Registering asynchronous event callbacks... 
00:08:47.212 Getting orig temperature thresholds of all controllers 00:08:47.212 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:47.212 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:47.212 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:47.212 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:47.212 Setting all controllers temperature threshold low to trigger AER 00:08:47.212 Waiting for all controllers temperature threshold to be set lower 00:08:47.212 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:47.212 aer_cb - Resetting Temp Threshold for device: 0000:00:13.0 00:08:47.212 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:47.212 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:08:47.212 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:47.212 aer_cb - Resetting Temp Threshold for device: 0000:00:11.0 00:08:47.212 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:47.212 aer_cb - Resetting Temp Threshold for device: 0000:00:12.0 00:08:47.212 Waiting for all controllers to trigger AER and reset threshold 00:08:47.212 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:47.212 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:47.212 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:47.212 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:47.212 Cleaning up... 00:08:47.212 00:08:47.212 real 0m0.370s 00:08:47.212 user 0m0.128s 00:08:47.212 sys 0m0.140s 00:08:47.212 18:59:04 nvme.nvme_multi_aen -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:47.212 18:59:04 nvme.nvme_multi_aen -- common/autotest_common.sh@10 -- # set +x 00:08:47.212 ************************************ 00:08:47.212 END TEST nvme_multi_aen 00:08:47.212 ************************************ 00:08:47.212 18:59:04 nvme -- nvme/nvme.sh@99 -- # run_test nvme_startup /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:08:47.212 18:59:04 nvme -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:08:47.212 18:59:04 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:47.212 18:59:04 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:47.212 ************************************ 00:08:47.212 START TEST nvme_startup 00:08:47.212 ************************************ 00:08:47.212 18:59:04 nvme.nvme_startup -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:08:47.471 Initializing NVMe Controllers 00:08:47.471 Attached to 0000:00:13.0 00:08:47.471 Attached to 0000:00:10.0 00:08:47.471 Attached to 0000:00:11.0 00:08:47.471 Attached to 0000:00:12.0 00:08:47.471 Initialization complete. 00:08:47.471 Time used:122190.180 (us). 
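Both AEN tests above read each controller's current temperature back; that value comes from the SMART / health information log page. A sketch of fetching it, assuming the public get-log-page API; for PCIe transports the payload should live in DMA-safe memory (e.g. spdk_zmalloc), which is elided here.

    #include <stdio.h>
    #include "spdk/nvme.h"

    static void
    log_page_done(void *arg, const struct spdk_nvme_cpl *cpl)
    {
        struct spdk_nvme_health_information_page *hp = arg;

        /* the spec reports temperature in Kelvin, hence the pairs of
         * "323 Kelvin (50 Celsius)" lines above */
        printf("Current Temperature: %u Kelvin (%d Celsius)\n",
               hp->temperature, hp->temperature - 273);
    }

    static int
    read_temperature(struct spdk_nvme_ctrlr *ctrlr,
                     struct spdk_nvme_health_information_page *hp)
    {
        return spdk_nvme_ctrlr_cmd_get_log_page(ctrlr,
                SPDK_NVME_LOG_HEALTH_INFORMATION,
                SPDK_NVME_GLOBAL_NS_TAG, /* controller-wide, not per-NS */
                hp, sizeof(*hp), 0 /* offset */,
                log_page_done, hp);
    }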
00:08:47.471 00:08:47.471 real 0m0.172s 00:08:47.471 user 0m0.047s 00:08:47.471 sys 0m0.079s 00:08:47.471 18:59:04 nvme.nvme_startup -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:47.471 18:59:04 nvme.nvme_startup -- common/autotest_common.sh@10 -- # set +x 00:08:47.471 ************************************ 00:08:47.471 END TEST nvme_startup 00:08:47.471 ************************************ 00:08:47.471 18:59:04 nvme -- nvme/nvme.sh@100 -- # run_test nvme_multi_secondary nvme_multi_secondary 00:08:47.471 18:59:04 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:47.471 18:59:04 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:47.471 18:59:04 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:47.471 ************************************ 00:08:47.471 START TEST nvme_multi_secondary 00:08:47.471 ************************************ 00:08:47.471 18:59:04 nvme.nvme_multi_secondary -- common/autotest_common.sh@1129 -- # nvme_multi_secondary 00:08:47.471 18:59:04 nvme.nvme_multi_secondary -- nvme/nvme.sh@52 -- # pid0=75276 00:08:47.471 18:59:04 nvme.nvme_multi_secondary -- nvme/nvme.sh@54 -- # pid1=75277 00:08:47.471 18:59:04 nvme.nvme_multi_secondary -- nvme/nvme.sh@55 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4 00:08:47.471 18:59:04 nvme.nvme_multi_secondary -- nvme/nvme.sh@53 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:08:47.471 18:59:04 nvme.nvme_multi_secondary -- nvme/nvme.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1 00:08:50.803 Initializing NVMe Controllers 00:08:50.803 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:50.803 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:50.803 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:50.803 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:50.803 Associating PCIE (0000:00:13.0) NSID 1 with lcore 1 00:08:50.803 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1 00:08:50.803 Associating PCIE (0000:00:11.0) NSID 1 with lcore 1 00:08:50.803 Associating PCIE (0000:00:12.0) NSID 1 with lcore 1 00:08:50.803 Associating PCIE (0000:00:12.0) NSID 2 with lcore 1 00:08:50.803 Associating PCIE (0000:00:12.0) NSID 3 with lcore 1 00:08:50.803 Initialization complete. Launching workers. 
00:08:50.803 ======================================================== 00:08:50.803 Latency(us) 00:08:50.803 Device Information : IOPS MiB/s Average min max 00:08:50.803 PCIE (0000:00:13.0) NSID 1 from core 1: 7901.08 30.86 2024.62 1011.57 6523.62 00:08:50.803 PCIE (0000:00:10.0) NSID 1 from core 1: 7901.08 30.86 2023.72 979.18 6176.16 00:08:50.803 PCIE (0000:00:11.0) NSID 1 from core 1: 7901.08 30.86 2024.83 989.60 6441.44 00:08:50.803 PCIE (0000:00:12.0) NSID 1 from core 1: 7901.08 30.86 2024.91 1017.80 5985.42 00:08:50.803 PCIE (0000:00:12.0) NSID 2 from core 1: 7901.08 30.86 2024.91 996.73 5759.66 00:08:50.803 PCIE (0000:00:12.0) NSID 3 from core 1: 7901.08 30.86 2024.94 1024.73 6074.59 00:08:50.803 ======================================================== 00:08:50.803 Total : 47406.51 185.18 2024.65 979.18 6523.62 00:08:50.803 00:08:50.803 Initializing NVMe Controllers 00:08:50.803 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:50.803 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:50.803 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:50.803 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:50.803 Associating PCIE (0000:00:13.0) NSID 1 with lcore 2 00:08:50.803 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 00:08:50.803 Associating PCIE (0000:00:11.0) NSID 1 with lcore 2 00:08:50.803 Associating PCIE (0000:00:12.0) NSID 1 with lcore 2 00:08:50.803 Associating PCIE (0000:00:12.0) NSID 2 with lcore 2 00:08:50.803 Associating PCIE (0000:00:12.0) NSID 3 with lcore 2 00:08:50.803 Initialization complete. Launching workers. 00:08:50.803 ======================================================== 00:08:50.803 Latency(us) 00:08:50.803 Device Information : IOPS MiB/s Average min max 00:08:50.803 PCIE (0000:00:13.0) NSID 1 from core 2: 2903.34 11.34 5510.37 1365.89 14040.17 00:08:50.803 PCIE (0000:00:10.0) NSID 1 from core 2: 2903.34 11.34 5508.44 1317.64 14262.13 00:08:50.803 PCIE (0000:00:11.0) NSID 1 from core 2: 2903.34 11.34 5509.93 1225.86 19297.70 00:08:50.803 PCIE (0000:00:12.0) NSID 1 from core 2: 2903.34 11.34 5509.75 848.50 16719.92 00:08:50.803 PCIE (0000:00:12.0) NSID 2 from core 2: 2903.34 11.34 5509.81 774.47 16817.14 00:08:50.803 PCIE (0000:00:12.0) NSID 3 from core 2: 2903.34 11.34 5510.10 717.15 16533.36 00:08:50.803 ======================================================== 00:08:50.803 Total : 17420.02 68.05 5509.73 717.15 19297.70 00:08:50.803 00:08:50.803 18:59:08 nvme.nvme_multi_secondary -- nvme/nvme.sh@56 -- # wait 75276 00:08:52.701 Initializing NVMe Controllers 00:08:52.701 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:52.701 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:52.701 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:52.701 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:52.701 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:08:52.701 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:08:52.701 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:08:52.701 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:08:52.701 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:08:52.701 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:08:52.701 Initialization complete. Launching workers. 
00:08:52.701 ======================================================== 00:08:52.701 Latency(us) 00:08:52.701 Device Information : IOPS MiB/s Average min max 00:08:52.701 PCIE (0000:00:13.0) NSID 1 from core 0: 11221.61 43.83 1425.43 690.31 6017.18 00:08:52.701 PCIE (0000:00:10.0) NSID 1 from core 0: 11221.61 43.83 1424.59 673.86 6225.08 00:08:52.701 PCIE (0000:00:11.0) NSID 1 from core 0: 11221.61 43.83 1425.39 645.77 5525.89 00:08:52.701 PCIE (0000:00:12.0) NSID 1 from core 0: 11221.61 43.83 1425.38 504.68 6360.37 00:08:52.701 PCIE (0000:00:12.0) NSID 2 from core 0: 11221.61 43.83 1425.35 426.80 6382.84 00:08:52.701 PCIE (0000:00:12.0) NSID 3 from core 0: 11221.61 43.83 1425.29 375.79 6129.43 00:08:52.701 ======================================================== 00:08:52.701 Total : 67329.66 263.01 1425.24 375.79 6382.84 00:08:52.701 00:08:52.701 18:59:10 nvme.nvme_multi_secondary -- nvme/nvme.sh@57 -- # wait 75277 00:08:52.701 18:59:10 nvme.nvme_multi_secondary -- nvme/nvme.sh@61 -- # pid0=75346 00:08:52.701 18:59:10 nvme.nvme_multi_secondary -- nvme/nvme.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x1 00:08:52.701 18:59:10 nvme.nvme_multi_secondary -- nvme/nvme.sh@63 -- # pid1=75347 00:08:52.701 18:59:10 nvme.nvme_multi_secondary -- nvme/nvme.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x4 00:08:52.701 18:59:10 nvme.nvme_multi_secondary -- nvme/nvme.sh@62 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:08:55.983 Initializing NVMe Controllers 00:08:55.983 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:55.983 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:55.983 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:55.983 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:55.983 Associating PCIE (0000:00:13.0) NSID 1 with lcore 1 00:08:55.983 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1 00:08:55.983 Associating PCIE (0000:00:11.0) NSID 1 with lcore 1 00:08:55.983 Associating PCIE (0000:00:12.0) NSID 1 with lcore 1 00:08:55.983 Associating PCIE (0000:00:12.0) NSID 2 with lcore 1 00:08:55.983 Associating PCIE (0000:00:12.0) NSID 3 with lcore 1 00:08:55.983 Initialization complete. Launching workers. 
00:08:55.984 ======================================================== 00:08:55.984 Latency(us) 00:08:55.984 Device Information : IOPS MiB/s Average min max 00:08:55.984 PCIE (0000:00:13.0) NSID 1 from core 1: 7903.36 30.87 2024.04 757.60 6458.04 00:08:55.984 PCIE (0000:00:10.0) NSID 1 from core 1: 7903.36 30.87 2023.07 729.53 6837.29 00:08:55.984 PCIE (0000:00:11.0) NSID 1 from core 1: 7903.36 30.87 2023.97 749.30 6382.33 00:08:55.984 PCIE (0000:00:12.0) NSID 1 from core 1: 7903.36 30.87 2023.97 749.86 7126.58 00:08:55.984 PCIE (0000:00:12.0) NSID 2 from core 1: 7903.36 30.87 2023.95 749.09 6750.07 00:08:55.984 PCIE (0000:00:12.0) NSID 3 from core 1: 7903.36 30.87 2023.99 758.35 6445.00 00:08:55.984 ======================================================== 00:08:55.984 Total : 47420.14 185.23 2023.83 729.53 7126.58 00:08:55.984 00:08:55.984 Initializing NVMe Controllers 00:08:55.984 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:55.984 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:55.984 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:55.984 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:55.984 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:08:55.984 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:08:55.984 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:08:55.984 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:08:55.984 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:08:55.984 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:08:55.984 Initialization complete. Launching workers. 00:08:55.984 ======================================================== 00:08:55.984 Latency(us) 00:08:55.984 Device Information : IOPS MiB/s Average min max 00:08:55.984 PCIE (0000:00:13.0) NSID 1 from core 0: 7761.41 30.32 2060.99 814.09 5655.90 00:08:55.984 PCIE (0000:00:10.0) NSID 1 from core 0: 7761.41 30.32 2060.11 793.14 5416.50 00:08:55.984 PCIE (0000:00:11.0) NSID 1 from core 0: 7761.41 30.32 2061.09 812.08 5825.05 00:08:55.984 PCIE (0000:00:12.0) NSID 1 from core 0: 7761.41 30.32 2061.08 826.12 5352.80 00:08:55.984 PCIE (0000:00:12.0) NSID 2 from core 0: 7761.41 30.32 2061.04 816.30 5166.24 00:08:55.984 PCIE (0000:00:12.0) NSID 3 from core 0: 7761.41 30.32 2061.02 813.05 5256.59 00:08:55.984 ======================================================== 00:08:55.984 Total : 46568.45 181.91 2060.89 793.14 5825.05 00:08:55.984 00:08:57.881 Initializing NVMe Controllers 00:08:57.881 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:57.881 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:57.881 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:57.881 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:57.881 Associating PCIE (0000:00:13.0) NSID 1 with lcore 2 00:08:57.881 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 00:08:57.881 Associating PCIE (0000:00:11.0) NSID 1 with lcore 2 00:08:57.881 Associating PCIE (0000:00:12.0) NSID 1 with lcore 2 00:08:57.881 Associating PCIE (0000:00:12.0) NSID 2 with lcore 2 00:08:57.881 Associating PCIE (0000:00:12.0) NSID 3 with lcore 2 00:08:57.881 Initialization complete. Launching workers. 
00:08:57.881 ======================================================== 00:08:57.881 Latency(us) 00:08:57.881 Device Information : IOPS MiB/s Average min max 00:08:57.881 PCIE (0000:00:13.0) NSID 1 from core 2: 4742.92 18.53 3372.56 770.01 13085.31 00:08:57.881 PCIE (0000:00:10.0) NSID 1 from core 2: 4742.92 18.53 3370.89 749.33 13285.06 00:08:57.881 PCIE (0000:00:11.0) NSID 1 from core 2: 4742.92 18.53 3372.63 706.27 13681.48 00:08:57.881 PCIE (0000:00:12.0) NSID 1 from core 2: 4742.92 18.53 3372.39 764.74 13469.08 00:08:57.881 PCIE (0000:00:12.0) NSID 2 from core 2: 4742.92 18.53 3372.47 756.15 13730.52 00:08:57.881 PCIE (0000:00:12.0) NSID 3 from core 2: 4742.92 18.53 3372.73 453.98 12714.72 00:08:57.881 ======================================================== 00:08:57.881 Total : 28457.53 111.16 3372.28 453.98 13730.52 00:08:57.881 00:08:57.881 ************************************ 00:08:57.881 END TEST nvme_multi_secondary 00:08:57.881 ************************************ 00:08:57.881 18:59:15 nvme.nvme_multi_secondary -- nvme/nvme.sh@65 -- # wait 75346 00:08:57.881 18:59:15 nvme.nvme_multi_secondary -- nvme/nvme.sh@66 -- # wait 75347 00:08:57.881 00:08:57.881 real 0m10.527s 00:08:57.881 user 0m18.289s 00:08:57.881 sys 0m0.516s 00:08:57.881 18:59:15 nvme.nvme_multi_secondary -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:57.881 18:59:15 nvme.nvme_multi_secondary -- common/autotest_common.sh@10 -- # set +x 00:08:57.881 18:59:15 nvme -- nvme/nvme.sh@101 -- # trap - SIGINT SIGTERM EXIT 00:08:57.881 18:59:15 nvme -- nvme/nvme.sh@102 -- # kill_stub 00:08:57.881 18:59:15 nvme -- common/autotest_common.sh@1093 -- # [[ -e /proc/74297 ]] 00:08:57.881 18:59:15 nvme -- common/autotest_common.sh@1094 -- # kill 74297 00:08:57.882 18:59:15 nvme -- common/autotest_common.sh@1095 -- # wait 74297 00:08:57.882 [2024-12-05 18:59:15.400386] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75219) is not found. Dropping the request. 00:08:57.882 [2024-12-05 18:59:15.400459] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75219) is not found. Dropping the request. 00:08:57.882 [2024-12-05 18:59:15.400481] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75219) is not found. Dropping the request. 00:08:57.882 [2024-12-05 18:59:15.400500] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75219) is not found. Dropping the request. 00:08:57.882 [2024-12-05 18:59:15.401076] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75219) is not found. Dropping the request. 00:08:57.882 [2024-12-05 18:59:15.401120] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75219) is not found. Dropping the request. 00:08:57.882 [2024-12-05 18:59:15.401136] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75219) is not found. Dropping the request. 00:08:57.882 [2024-12-05 18:59:15.401154] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75219) is not found. Dropping the request. 00:08:57.882 [2024-12-05 18:59:15.401697] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75219) is not found. Dropping the request. 
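Every tool in this run is launched with -i 0, and the storm of "owning process ... is not found" errors during cleanup is the flip side of that: the perf instances in nvme_multi_secondary run as DPDK secondary processes sharing controller state through hugepage shared memory keyed by that ID, and admin requests left behind by an exited process get dropped. A sketch of the env setup on each side, assuming the shm_id field of spdk_env_opts; the name and core mask values are examples matching the command lines above.

    #include "spdk/env.h"

    /* primary and secondaries pass the same shared-memory ID (the -i 0
     * on every command line above); the first process to start owns the
     * hugepage region, later ones attach to it as secondaries */
    static int
    init_shared_env(const char *name, int shm_id, const char *core_mask)
    {
        struct spdk_env_opts opts;

        spdk_env_opts_init(&opts);
        opts.name = name;           /* e.g. "perf_core1" (hypothetical) */
        opts.shm_id = shm_id;       /* 0, matching -i 0 */
        opts.core_mask = core_mask; /* e.g. "0x2", matching -c 0x2 */
        return spdk_env_init(&opts);
    }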
00:08:57.882 [2024-12-05 18:59:15.401749] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75219) is not found. Dropping the request. 00:08:57.882 [2024-12-05 18:59:15.401766] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75219) is not found. Dropping the request. 00:08:57.882 [2024-12-05 18:59:15.401786] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75219) is not found. Dropping the request. 00:08:57.882 [2024-12-05 18:59:15.402396] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75219) is not found. Dropping the request. 00:08:57.882 [2024-12-05 18:59:15.402445] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75219) is not found. Dropping the request. 00:08:57.882 [2024-12-05 18:59:15.402462] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75219) is not found. Dropping the request. 00:08:57.882 [2024-12-05 18:59:15.402479] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75219) is not found. Dropping the request. 00:08:58.139 18:59:15 nvme -- common/autotest_common.sh@1097 -- # rm -f /var/run/spdk_stub0 00:08:58.139 18:59:15 nvme -- common/autotest_common.sh@1101 -- # echo 2 00:08:58.139 18:59:15 nvme -- nvme/nvme.sh@105 -- # run_test bdev_nvme_reset_stuck_adm_cmd /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:08:58.139 18:59:15 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:58.139 18:59:15 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:58.139 18:59:15 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:58.139 ************************************ 00:08:58.139 START TEST bdev_nvme_reset_stuck_adm_cmd 00:08:58.139 ************************************ 00:08:58.139 18:59:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:08:58.139 * Looking for test storage... 
00:08:58.139 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:08:58.139 18:59:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:58.139 18:59:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1711 -- # lcov --version 00:08:58.139 18:59:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:58.139 18:59:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:58.139 18:59:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:58.139 18:59:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:58.139 18:59:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:58.139 18:59:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@336 -- # IFS=.-: 00:08:58.139 18:59:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@336 -- # read -ra ver1 00:08:58.139 18:59:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@337 -- # IFS=.-: 00:08:58.139 18:59:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@337 -- # read -ra ver2 00:08:58.139 18:59:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@338 -- # local 'op=<' 00:08:58.139 18:59:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@340 -- # ver1_l=2 00:08:58.139 18:59:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@341 -- # ver2_l=1 00:08:58.139 18:59:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:58.139 18:59:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@344 -- # case "$op" in 00:08:58.139 18:59:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@345 -- # : 1 00:08:58.139 18:59:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:58.139 18:59:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:58.139 18:59:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@365 -- # decimal 1 00:08:58.139 18:59:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@353 -- # local d=1 00:08:58.139 18:59:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:58.139 18:59:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@355 -- # echo 1 00:08:58.139 18:59:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@365 -- # ver1[v]=1 00:08:58.139 18:59:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@366 -- # decimal 2 00:08:58.139 18:59:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@353 -- # local d=2 00:08:58.139 18:59:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:58.139 18:59:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@355 -- # echo 2 00:08:58.139 18:59:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@366 -- # ver2[v]=2 00:08:58.139 18:59:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:58.139 18:59:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:58.139 18:59:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@368 -- # return 0 00:08:58.139 18:59:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:58.139 18:59:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:58.139 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:58.139 --rc genhtml_branch_coverage=1 00:08:58.139 --rc genhtml_function_coverage=1 00:08:58.139 --rc genhtml_legend=1 00:08:58.139 --rc geninfo_all_blocks=1 00:08:58.139 --rc geninfo_unexecuted_blocks=1 00:08:58.139 00:08:58.139 ' 00:08:58.139 18:59:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:58.139 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:58.139 --rc genhtml_branch_coverage=1 00:08:58.139 --rc genhtml_function_coverage=1 00:08:58.139 --rc genhtml_legend=1 00:08:58.139 --rc geninfo_all_blocks=1 00:08:58.139 --rc geninfo_unexecuted_blocks=1 00:08:58.139 00:08:58.139 ' 00:08:58.139 18:59:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:58.139 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:58.140 --rc genhtml_branch_coverage=1 00:08:58.140 --rc genhtml_function_coverage=1 00:08:58.140 --rc genhtml_legend=1 00:08:58.140 --rc geninfo_all_blocks=1 00:08:58.140 --rc geninfo_unexecuted_blocks=1 00:08:58.140 00:08:58.140 ' 00:08:58.140 18:59:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:58.140 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:58.140 --rc genhtml_branch_coverage=1 00:08:58.140 --rc genhtml_function_coverage=1 00:08:58.140 --rc genhtml_legend=1 00:08:58.140 --rc geninfo_all_blocks=1 00:08:58.140 --rc geninfo_unexecuted_blocks=1 00:08:58.140 00:08:58.140 ' 00:08:58.140 18:59:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@18 -- # ctrlr_name=nvme0 00:08:58.140 18:59:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@20 -- # err_injection_timeout=15000000 00:08:58.140 18:59:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@22 -- # test_timeout=5 00:08:58.140 
18:59:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@25 -- # err_injection_sct=0 00:08:58.140 18:59:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@27 -- # err_injection_sc=1 00:08:58.140 18:59:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # get_first_nvme_bdf 00:08:58.140 18:59:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1509 -- # bdfs=() 00:08:58.140 18:59:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1509 -- # local bdfs 00:08:58.140 18:59:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:08:58.140 18:59:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:08:58.140 18:59:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1498 -- # bdfs=() 00:08:58.140 18:59:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1498 -- # local bdfs 00:08:58.140 18:59:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:08:58.140 18:59:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:08:58.140 18:59:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:08:58.140 18:59:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:08:58.140 18:59:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:08:58.140 18:59:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1512 -- # echo 0000:00:10.0 00:08:58.140 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:58.140 18:59:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # bdf=0000:00:10.0 00:08:58.140 18:59:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@30 -- # '[' -z 0000:00:10.0 ']' 00:08:58.140 18:59:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@36 -- # spdk_target_pid=75504 00:08:58.140 18:59:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0xF 00:08:58.140 18:59:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@37 -- # trap 'killprocess "$spdk_target_pid"; exit 1' SIGINT SIGTERM EXIT 00:08:58.140 18:59:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@38 -- # waitforlisten 75504 00:08:58.140 18:59:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@835 -- # '[' -z 75504 ']' 00:08:58.140 18:59:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:58.140 18:59:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:58.140 18:59:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:08:58.140 18:59:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:58.140 18:59:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:08:58.397 [2024-12-05 18:59:15.755823] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 22.11.4 initialization... 00:08:58.397 [2024-12-05 18:59:15.755934] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75504 ] 00:08:58.397 [2024-12-05 18:59:15.909737] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:58.397 [2024-12-05 18:59:15.931210] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:58.397 [2024-12-05 18:59:15.931769] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:58.397 [2024-12-05 18:59:15.931900] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:58.397 [2024-12-05 18:59:15.931998] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:08:59.336 18:59:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:59.336 18:59:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@868 -- # return 0 00:08:59.336 18:59:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@40 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:10.0 00:08:59.336 18:59:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:59.336 18:59:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:08:59.336 nvme0n1 00:08:59.336 18:59:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:59.336 18:59:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # mktemp /tmp/err_inj_XXXXX.txt 00:08:59.336 18:59:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # tmp_file=/tmp/err_inj_qHx8r.txt 00:08:59.336 18:59:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@44 -- # rpc_cmd bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit 00:08:59.336 18:59:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:59.336 18:59:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:08:59.336 true 00:08:59.336 18:59:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:59.336 18:59:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # date +%s 00:08:59.336 18:59:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # start_time=1733425156 00:08:59.336 18:59:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@51 -- # get_feat_pid=75527 00:08:59.336 18:59:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@52 -- # trap 'killprocess "$get_feat_pid"; exit 1' SIGINT SIGTERM EXIT 00:08:59.336 18:59:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@55 -- # sleep 2 00:08:59.336 18:59:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_send_cmd -n nvme0 -t admin -r c2h -c 
CgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA== 00:09:01.232 18:59:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@57 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:09:01.232 18:59:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:01.232 18:59:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:09:01.232 [2024-12-05 18:59:18.690948] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller 00:09:01.232 [2024-12-05 18:59:18.691266] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:09:01.232 [2024-12-05 18:59:18.691343] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:0 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:09:01.232 [2024-12-05 18:59:18.691405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:01.232 [2024-12-05 18:59:18.693076] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful. 00:09:01.232 18:59:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:01.232 18:59:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@59 -- # echo 'Waiting for RPC error injection (bdev_nvme_send_cmd) process PID:' 75527 00:09:01.232 Waiting for RPC error injection (bdev_nvme_send_cmd) process PID: 75527 00:09:01.232 18:59:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@60 -- # wait 75527 00:09:01.232 18:59:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # date +%s 00:09:01.232 18:59:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # diff_time=2 00:09:01.232 18:59:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@62 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:09:01.232 18:59:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:01.232 18:59:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:09:01.232 18:59:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:01.232 18:59:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@64 -- # trap - SIGINT SIGTERM EXIT 00:09:01.232 18:59:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # jq -r .cpl /tmp/err_inj_qHx8r.txt 00:09:01.232 18:59:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # spdk_nvme_status=AAAAAAAAAAAAAAAAAAACAA== 00:09:01.232 18:59:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 1 255 00:09:01.232 18:59:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:09:01.232 18:59:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:09:01.232 18:59:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:09:01.232 18:59:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:09:01.232 18:59:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- 
nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:09:01.232 18:59:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:09:01.232 18:59:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 1 00:09:01.232 18:59:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # nvme_status_sc=0x1 00:09:01.232 18:59:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 9 3 00:09:01.233 18:59:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:09:01.233 18:59:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:09:01.233 18:59:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:09:01.233 18:59:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:09:01.233 18:59:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:09:01.233 18:59:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:09:01.233 18:59:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 0 00:09:01.233 18:59:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # nvme_status_sct=0x0 00:09:01.233 18:59:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@71 -- # rm -f /tmp/err_inj_qHx8r.txt 00:09:01.233 18:59:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@73 -- # killprocess 75504 00:09:01.233 18:59:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@954 -- # '[' -z 75504 ']' 00:09:01.233 18:59:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@958 -- # kill -0 75504 00:09:01.233 18:59:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@959 -- # uname 00:09:01.233 18:59:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:01.490 18:59:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75504 00:09:01.490 killing process with pid 75504 00:09:01.490 18:59:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:01.490 18:59:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:01.490 18:59:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75504' 00:09:01.490 18:59:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@973 -- # kill 75504 00:09:01.490 18:59:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@978 -- # wait 75504 00:09:01.747 18:59:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@75 -- # (( err_injection_sc != nvme_status_sc || err_injection_sct != nvme_status_sct )) 00:09:01.747 18:59:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@79 -- # (( diff_time > test_timeout )) 00:09:01.747 ************************************ 00:09:01.747 END TEST bdev_nvme_reset_stuck_adm_cmd 00:09:01.747 ************************************ 00:09:01.747 00:09:01.747 real 0m3.577s 
00:09:01.747 user 0m12.807s 00:09:01.747 sys 0m0.483s 00:09:01.747 18:59:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:01.747 18:59:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:09:01.747 18:59:19 nvme -- nvme/nvme.sh@107 -- # [[ y == y ]] 00:09:01.747 18:59:19 nvme -- nvme/nvme.sh@108 -- # run_test nvme_fio nvme_fio_test 00:09:01.747 18:59:19 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:01.747 18:59:19 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:01.747 18:59:19 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:01.747 ************************************ 00:09:01.747 START TEST nvme_fio 00:09:01.747 ************************************ 00:09:01.747 18:59:19 nvme.nvme_fio -- common/autotest_common.sh@1129 -- # nvme_fio_test 00:09:01.747 18:59:19 nvme.nvme_fio -- nvme/nvme.sh@31 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:09:01.747 18:59:19 nvme.nvme_fio -- nvme/nvme.sh@32 -- # ran_fio=false 00:09:01.747 18:59:19 nvme.nvme_fio -- nvme/nvme.sh@33 -- # get_nvme_bdfs 00:09:01.747 18:59:19 nvme.nvme_fio -- common/autotest_common.sh@1498 -- # bdfs=() 00:09:01.747 18:59:19 nvme.nvme_fio -- common/autotest_common.sh@1498 -- # local bdfs 00:09:01.747 18:59:19 nvme.nvme_fio -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:09:01.747 18:59:19 nvme.nvme_fio -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:09:01.747 18:59:19 nvme.nvme_fio -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:09:01.747 18:59:19 nvme.nvme_fio -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:09:01.747 18:59:19 nvme.nvme_fio -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:09:01.747 18:59:19 nvme.nvme_fio -- nvme/nvme.sh@33 -- # bdfs=('0000:00:10.0' '0000:00:11.0' '0000:00:12.0' '0000:00:13.0') 00:09:01.747 18:59:19 nvme.nvme_fio -- nvme/nvme.sh@33 -- # local bdfs bdf 00:09:01.747 18:59:19 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:09:01.747 18:59:19 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:09:01.747 18:59:19 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:09:02.098 18:59:19 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:09:02.098 18:59:19 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:09:02.098 18:59:19 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:09:02.098 18:59:19 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:09:02.098 18:59:19 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:09:02.098 18:59:19 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:09:02.098 18:59:19 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:09:02.098 18:59:19 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:09:02.098 18:59:19 nvme.nvme_fio -- 
common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:02.099 18:59:19 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:09:02.099 18:59:19 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:09:02.099 18:59:19 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:09:02.099 18:59:19 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:02.099 18:59:19 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:09:02.099 18:59:19 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:09:02.099 18:59:19 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:09:02.099 18:59:19 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:09:02.099 18:59:19 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:09:02.099 18:59:19 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:09:02.099 18:59:19 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:09:02.366 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:09:02.366 fio-3.35 00:09:02.366 Starting 1 thread 00:09:08.922 00:09:08.922 test: (groupid=0, jobs=1): err= 0: pid=75650: Thu Dec 5 18:59:25 2024 00:09:08.922 read: IOPS=24.4k, BW=95.2MiB/s (99.9MB/s)(191MiB/2001msec) 00:09:08.922 slat (nsec): min=3376, max=75000, avg=4954.21, stdev=2070.01 00:09:08.922 clat (usec): min=212, max=10349, avg=2624.54, stdev=755.46 00:09:08.923 lat (usec): min=216, max=10398, avg=2629.50, stdev=756.79 00:09:08.923 clat percentiles (usec): 00:09:08.923 | 1.00th=[ 1418], 5.00th=[ 2057], 10.00th=[ 2278], 20.00th=[ 2343], 00:09:08.923 | 30.00th=[ 2376], 40.00th=[ 2409], 50.00th=[ 2442], 60.00th=[ 2474], 00:09:08.923 | 70.00th=[ 2507], 80.00th=[ 2606], 90.00th=[ 3163], 95.00th=[ 4424], 00:09:08.923 | 99.00th=[ 5866], 99.50th=[ 6259], 99.90th=[ 7635], 99.95th=[ 8586], 00:09:08.923 | 99.99th=[10159] 00:09:08.923 bw ( KiB/s): min=92056, max=100256, per=98.79%, avg=96341.33, stdev=4112.55, samples=3 00:09:08.923 iops : min=23014, max=25064, avg=24085.33, stdev=1028.14, samples=3 00:09:08.923 write: IOPS=24.2k, BW=94.6MiB/s (99.2MB/s)(189MiB/2001msec); 0 zone resets 00:09:08.923 slat (usec): min=3, max=188, avg= 5.19, stdev= 2.16 00:09:08.923 clat (usec): min=309, max=10220, avg=2624.40, stdev=756.41 00:09:08.923 lat (usec): min=314, max=10234, avg=2629.59, stdev=757.70 00:09:08.923 clat percentiles (usec): 00:09:08.923 | 1.00th=[ 1418], 5.00th=[ 2040], 10.00th=[ 2278], 20.00th=[ 2343], 00:09:08.923 | 30.00th=[ 2376], 40.00th=[ 2409], 50.00th=[ 2442], 60.00th=[ 2474], 00:09:08.923 | 70.00th=[ 2507], 80.00th=[ 2606], 90.00th=[ 3130], 95.00th=[ 4359], 00:09:08.923 | 99.00th=[ 5866], 99.50th=[ 6259], 99.90th=[ 7832], 99.95th=[ 8848], 00:09:08.923 | 99.99th=[10028] 00:09:08.923 bw ( KiB/s): min=93032, max=99912, per=99.56%, avg=96474.67, stdev=3440.00, samples=3 00:09:08.923 iops : min=23258, max=24978, avg=24118.67, stdev=860.00, samples=3 00:09:08.923 lat (usec) : 250=0.01%, 500=0.01%, 750=0.03%, 1000=0.17% 00:09:08.923 lat (msec) : 2=4.24%, 4=89.77%, 10=5.77%, 20=0.01% 00:09:08.923 cpu : usr=99.10%, sys=0.20%, ctx=7, 
majf=0, minf=624 00:09:08.923 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:09:08.923 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:08.923 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:08.923 issued rwts: total=48783,48472,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:08.923 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:08.923 00:09:08.923 Run status group 0 (all jobs): 00:09:08.923 READ: bw=95.2MiB/s (99.9MB/s), 95.2MiB/s-95.2MiB/s (99.9MB/s-99.9MB/s), io=191MiB (200MB), run=2001-2001msec 00:09:08.923 WRITE: bw=94.6MiB/s (99.2MB/s), 94.6MiB/s-94.6MiB/s (99.2MB/s-99.2MB/s), io=189MiB (199MB), run=2001-2001msec 00:09:08.923 ----------------------------------------------------- 00:09:08.923 Suppressions used: 00:09:08.923 count bytes template 00:09:08.923 1 32 /usr/src/fio/parse.c 00:09:08.923 1 8 libtcmalloc_minimal.so 00:09:08.923 ----------------------------------------------------- 00:09:08.923 00:09:08.923 18:59:25 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:09:08.923 18:59:25 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:09:08.923 18:59:25 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:09:08.923 18:59:25 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' 00:09:08.923 18:59:26 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:09:08.923 18:59:26 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' 00:09:08.923 18:59:26 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:09:08.923 18:59:26 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:09:08.923 18:59:26 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:09:08.923 18:59:26 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:09:08.923 18:59:26 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:09:08.923 18:59:26 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:09:08.923 18:59:26 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:08.923 18:59:26 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:09:08.923 18:59:26 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:09:08.923 18:59:26 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:09:08.923 18:59:26 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:08.923 18:59:26 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:09:08.923 18:59:26 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:09:08.923 18:59:26 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:09:08.923 18:59:26 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:09:08.923 18:59:26 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:09:08.923 18:59:26 nvme.nvme_fio -- 
common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:09:08.923 18:59:26 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:09:08.923 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:09:08.923 fio-3.35 00:09:08.923 Starting 1 thread 00:09:17.034 00:09:17.034 test: (groupid=0, jobs=1): err= 0: pid=75705: Thu Dec 5 18:59:33 2024 00:09:17.034 read: IOPS=24.0k, BW=93.7MiB/s (98.2MB/s)(187MiB/2001msec) 00:09:17.034 slat (nsec): min=4207, max=81431, avg=5038.70, stdev=2164.31 00:09:17.034 clat (usec): min=210, max=8439, avg=2663.37, stdev=788.42 00:09:17.034 lat (usec): min=215, max=8478, avg=2668.41, stdev=789.84 00:09:17.034 clat percentiles (usec): 00:09:17.034 | 1.00th=[ 1762], 5.00th=[ 2212], 10.00th=[ 2311], 20.00th=[ 2343], 00:09:17.034 | 30.00th=[ 2376], 40.00th=[ 2409], 50.00th=[ 2409], 60.00th=[ 2442], 00:09:17.034 | 70.00th=[ 2474], 80.00th=[ 2573], 90.00th=[ 3359], 95.00th=[ 4752], 00:09:17.034 | 99.00th=[ 6194], 99.50th=[ 6390], 99.90th=[ 6718], 99.95th=[ 6783], 00:09:17.034 | 99.99th=[ 8225] 00:09:17.034 bw ( KiB/s): min=91568, max=97392, per=99.51%, avg=95437.33, stdev=3351.00, samples=3 00:09:17.034 iops : min=22892, max=24348, avg=23859.33, stdev=837.75, samples=3 00:09:17.034 write: IOPS=23.8k, BW=93.1MiB/s (97.6MB/s)(186MiB/2001msec); 0 zone resets 00:09:17.034 slat (nsec): min=4285, max=60775, avg=5340.44, stdev=2248.08 00:09:17.034 clat (usec): min=225, max=8294, avg=2672.51, stdev=803.85 00:09:17.034 lat (usec): min=230, max=8307, avg=2677.85, stdev=805.33 00:09:17.034 clat percentiles (usec): 00:09:17.034 | 1.00th=[ 1762], 5.00th=[ 2212], 10.00th=[ 2311], 20.00th=[ 2343], 00:09:17.034 | 30.00th=[ 2376], 40.00th=[ 2409], 50.00th=[ 2409], 60.00th=[ 2442], 00:09:17.034 | 70.00th=[ 2474], 80.00th=[ 2606], 90.00th=[ 3425], 95.00th=[ 4817], 00:09:17.034 | 99.00th=[ 6194], 99.50th=[ 6456], 99.90th=[ 6718], 99.95th=[ 6783], 00:09:17.034 | 99.99th=[ 8094] 00:09:17.034 bw ( KiB/s): min=92272, max=97008, per=100.00%, avg=95426.67, stdev=2732.02, samples=3 00:09:17.034 iops : min=23068, max=24252, avg=23856.67, stdev=683.01, samples=3 00:09:17.034 lat (usec) : 250=0.01%, 500=0.01%, 750=0.02%, 1000=0.04% 00:09:17.034 lat (msec) : 2=2.46%, 4=90.03%, 10=7.44% 00:09:17.034 cpu : usr=99.15%, sys=0.15%, ctx=3, majf=0, minf=625 00:09:17.034 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:09:17.034 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:17.034 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:17.034 issued rwts: total=47978,47691,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:17.034 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:17.034 00:09:17.034 Run status group 0 (all jobs): 00:09:17.034 READ: bw=93.7MiB/s (98.2MB/s), 93.7MiB/s-93.7MiB/s (98.2MB/s-98.2MB/s), io=187MiB (197MB), run=2001-2001msec 00:09:17.034 WRITE: bw=93.1MiB/s (97.6MB/s), 93.1MiB/s-93.1MiB/s (97.6MB/s-97.6MB/s), io=186MiB (195MB), run=2001-2001msec 00:09:17.034 ----------------------------------------------------- 00:09:17.034 Suppressions used: 00:09:17.034 count bytes template 00:09:17.034 1 32 /usr/src/fio/parse.c 00:09:17.034 1 8 libtcmalloc_minimal.so 00:09:17.034 ----------------------------------------------------- 00:09:17.034 00:09:17.034 
18:59:33 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:09:17.034 18:59:33 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:09:17.034 18:59:33 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:09:17.034 18:59:33 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' 00:09:17.034 18:59:33 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' 00:09:17.034 18:59:33 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:09:17.034 18:59:33 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:09:17.034 18:59:33 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:09:17.034 18:59:33 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:09:17.034 18:59:33 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:09:17.034 18:59:33 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:09:17.034 18:59:33 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:09:17.034 18:59:33 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:17.034 18:59:33 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:09:17.034 18:59:33 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:09:17.034 18:59:33 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:09:17.034 18:59:33 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:09:17.034 18:59:33 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:17.034 18:59:33 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:09:17.034 18:59:33 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:09:17.034 18:59:33 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:09:17.034 18:59:33 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:09:17.034 18:59:33 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:09:17.034 18:59:33 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:09:17.034 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:09:17.034 fio-3.35 00:09:17.034 Starting 1 thread 00:09:23.591 00:09:23.591 test: (groupid=0, jobs=1): err= 0: pid=75760: Thu Dec 5 18:59:40 2024 00:09:23.591 read: IOPS=23.8k, BW=92.8MiB/s (97.3MB/s)(186MiB/2001msec) 00:09:23.591 slat (usec): min=4, max=374, avg= 5.08, stdev= 3.01 00:09:23.591 clat (usec): min=280, max=8688, avg=2693.44, stdev=906.51 00:09:23.591 lat (usec): min=284, max=8742, avg=2698.52, stdev=908.20 00:09:23.591 clat percentiles (usec): 00:09:23.591 | 1.00th=[ 1729], 5.00th=[ 2245], 10.00th=[ 2343], 20.00th=[ 2376], 00:09:23.591 | 30.00th=[ 2409], 40.00th=[ 
2409], 50.00th=[ 2442], 60.00th=[ 2474], 00:09:23.591 | 70.00th=[ 2507], 80.00th=[ 2606], 90.00th=[ 3032], 95.00th=[ 5145], 00:09:23.591 | 99.00th=[ 6587], 99.50th=[ 6980], 99.90th=[ 7832], 99.95th=[ 7963], 00:09:23.591 | 99.99th=[ 8455] 00:09:23.591 bw ( KiB/s): min=93472, max=97408, per=100.00%, avg=95546.67, stdev=1976.65, samples=3 00:09:23.591 iops : min=23368, max=24352, avg=23886.67, stdev=494.16, samples=3 00:09:23.591 write: IOPS=23.6k, BW=92.2MiB/s (96.7MB/s)(185MiB/2001msec); 0 zone resets 00:09:23.591 slat (nsec): min=4343, max=71077, avg=5325.08, stdev=2411.76 00:09:23.591 clat (usec): min=255, max=8532, avg=2692.38, stdev=906.43 00:09:23.591 lat (usec): min=260, max=8569, avg=2697.70, stdev=908.10 00:09:23.591 clat percentiles (usec): 00:09:23.591 | 1.00th=[ 1729], 5.00th=[ 2245], 10.00th=[ 2343], 20.00th=[ 2376], 00:09:23.591 | 30.00th=[ 2409], 40.00th=[ 2409], 50.00th=[ 2442], 60.00th=[ 2474], 00:09:23.591 | 70.00th=[ 2507], 80.00th=[ 2606], 90.00th=[ 2999], 95.00th=[ 5145], 00:09:23.591 | 99.00th=[ 6587], 99.50th=[ 6980], 99.90th=[ 7898], 99.95th=[ 8029], 00:09:23.591 | 99.99th=[ 8356] 00:09:23.591 bw ( KiB/s): min=92672, max=98472, per=100.00%, avg=95573.33, stdev=2900.00, samples=3 00:09:23.591 iops : min=23168, max=24618, avg=23893.33, stdev=725.00, samples=3 00:09:23.591 lat (usec) : 500=0.02%, 750=0.02%, 1000=0.02% 00:09:23.591 lat (msec) : 2=2.64%, 4=90.55%, 10=6.76% 00:09:23.591 cpu : usr=99.10%, sys=0.20%, ctx=8, majf=0, minf=625 00:09:23.591 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:09:23.591 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:23.591 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:23.591 issued rwts: total=47532,47250,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:23.591 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:23.591 00:09:23.591 Run status group 0 (all jobs): 00:09:23.591 READ: bw=92.8MiB/s (97.3MB/s), 92.8MiB/s-92.8MiB/s (97.3MB/s-97.3MB/s), io=186MiB (195MB), run=2001-2001msec 00:09:23.591 WRITE: bw=92.2MiB/s (96.7MB/s), 92.2MiB/s-92.2MiB/s (96.7MB/s-96.7MB/s), io=185MiB (194MB), run=2001-2001msec 00:09:23.591 ----------------------------------------------------- 00:09:23.591 Suppressions used: 00:09:23.591 count bytes template 00:09:23.591 1 32 /usr/src/fio/parse.c 00:09:23.591 1 8 libtcmalloc_minimal.so 00:09:23.591 ----------------------------------------------------- 00:09:23.591 00:09:23.591 18:59:41 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:09:23.591 18:59:41 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:09:23.591 18:59:41 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' 00:09:23.591 18:59:41 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:09:23.849 18:59:41 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:09:23.849 18:59:41 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' 00:09:24.107 18:59:41 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:09:24.107 18:59:41 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:09:24.107 18:59:41 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 
/home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:09:24.107 18:59:41 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:09:24.107 18:59:41 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:09:24.107 18:59:41 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:09:24.107 18:59:41 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:24.107 18:59:41 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:09:24.107 18:59:41 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:09:24.107 18:59:41 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:09:24.107 18:59:41 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:09:24.107 18:59:41 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:24.107 18:59:41 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:09:24.107 18:59:41 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:09:24.107 18:59:41 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:09:24.107 18:59:41 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:09:24.107 18:59:41 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:09:24.107 18:59:41 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:09:24.107 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:09:24.107 fio-3.35 00:09:24.107 Starting 1 thread 00:09:30.682 00:09:30.682 test: (groupid=0, jobs=1): err= 0: pid=75817: Thu Dec 5 18:59:47 2024 00:09:30.682 read: IOPS=23.5k, BW=91.9MiB/s (96.3MB/s)(184MiB/2001msec) 00:09:30.682 slat (usec): min=3, max=183, avg= 4.92, stdev= 2.22 00:09:30.682 clat (usec): min=701, max=10689, avg=2716.79, stdev=945.76 00:09:30.682 lat (usec): min=706, max=10733, avg=2721.71, stdev=946.79 00:09:30.682 clat percentiles (usec): 00:09:30.682 | 1.00th=[ 1582], 5.00th=[ 2073], 10.00th=[ 2245], 20.00th=[ 2311], 00:09:30.682 | 30.00th=[ 2343], 40.00th=[ 2376], 50.00th=[ 2376], 60.00th=[ 2442], 00:09:30.682 | 70.00th=[ 2507], 80.00th=[ 2802], 90.00th=[ 3851], 95.00th=[ 5014], 00:09:30.682 | 99.00th=[ 6652], 99.50th=[ 6980], 99.90th=[ 7701], 99.95th=[ 8586], 00:09:30.682 | 99.99th=[10552] 00:09:30.682 bw ( KiB/s): min=91024, max=104488, per=100.00%, avg=99725.33, stdev=7546.83, samples=3 00:09:30.682 iops : min=22756, max=26122, avg=24931.33, stdev=1886.71, samples=3 00:09:30.682 write: IOPS=23.4k, BW=91.2MiB/s (95.7MB/s)(183MiB/2001msec); 0 zone resets 00:09:30.682 slat (usec): min=3, max=103, avg= 5.18, stdev= 2.22 00:09:30.682 clat (usec): min=756, max=10620, avg=2728.26, stdev=960.80 00:09:30.682 lat (usec): min=761, max=10633, avg=2733.43, stdev=961.87 00:09:30.682 clat percentiles (usec): 00:09:30.682 | 1.00th=[ 1582], 5.00th=[ 2073], 10.00th=[ 2245], 20.00th=[ 2311], 00:09:30.682 | 30.00th=[ 2343], 40.00th=[ 2376], 50.00th=[ 2409], 60.00th=[ 2442], 00:09:30.682 | 70.00th=[ 2507], 80.00th=[ 2802], 90.00th=[ 3884], 95.00th=[ 5080], 00:09:30.682 | 99.00th=[ 
6718], 99.50th=[ 6980], 99.90th=[ 7832], 99.95th=[ 8586], 00:09:30.682 | 99.99th=[10421] 00:09:30.682 bw ( KiB/s): min=91208, max=104136, per=100.00%, avg=99730.67, stdev=7382.25, samples=3 00:09:30.682 iops : min=22802, max=26034, avg=24932.67, stdev=1845.56, samples=3 00:09:30.682 lat (usec) : 750=0.01%, 1000=0.07% 00:09:30.682 lat (msec) : 2=3.51%, 4=86.88%, 10=9.52%, 20=0.02% 00:09:30.682 cpu : usr=99.20%, sys=0.00%, ctx=5, majf=0, minf=623 00:09:30.683 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:09:30.683 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:30.683 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:30.683 issued rwts: total=47052,46735,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:30.683 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:30.683 00:09:30.683 Run status group 0 (all jobs): 00:09:30.683 READ: bw=91.9MiB/s (96.3MB/s), 91.9MiB/s-91.9MiB/s (96.3MB/s-96.3MB/s), io=184MiB (193MB), run=2001-2001msec 00:09:30.683 WRITE: bw=91.2MiB/s (95.7MB/s), 91.2MiB/s-91.2MiB/s (95.7MB/s-95.7MB/s), io=183MiB (191MB), run=2001-2001msec 00:09:30.683 ----------------------------------------------------- 00:09:30.683 Suppressions used: 00:09:30.683 count bytes template 00:09:30.683 1 32 /usr/src/fio/parse.c 00:09:30.683 1 8 libtcmalloc_minimal.so 00:09:30.683 ----------------------------------------------------- 00:09:30.683 00:09:30.683 18:59:47 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:09:30.683 18:59:47 nvme.nvme_fio -- nvme/nvme.sh@46 -- # true 00:09:30.683 00:09:30.683 real 0m28.473s 00:09:30.683 user 0m17.066s 00:09:30.683 sys 0m20.511s 00:09:30.683 18:59:47 nvme.nvme_fio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:30.683 18:59:47 nvme.nvme_fio -- common/autotest_common.sh@10 -- # set +x 00:09:30.683 ************************************ 00:09:30.683 END TEST nvme_fio 00:09:30.683 ************************************ 00:09:30.683 ************************************ 00:09:30.683 END TEST nvme 00:09:30.683 ************************************ 00:09:30.683 00:09:30.683 real 1m37.090s 00:09:30.683 user 3m33.934s 00:09:30.683 sys 0m30.881s 00:09:30.683 18:59:47 nvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:30.683 18:59:47 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:30.683 18:59:47 -- spdk/autotest.sh@213 -- # [[ 0 -eq 1 ]] 00:09:30.683 18:59:47 -- spdk/autotest.sh@217 -- # run_test nvme_scc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:09:30.683 18:59:47 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:30.683 18:59:47 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:30.683 18:59:47 -- common/autotest_common.sh@10 -- # set +x 00:09:30.683 ************************************ 00:09:30.683 START TEST nvme_scc 00:09:30.683 ************************************ 00:09:30.683 18:59:47 nvme_scc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:09:30.683 * Looking for test storage... 
00:09:30.683 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:09:30.683 18:59:47 nvme_scc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:30.683 18:59:47 nvme_scc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:30.683 18:59:47 nvme_scc -- common/autotest_common.sh@1711 -- # lcov --version 00:09:30.683 18:59:47 nvme_scc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:30.683 18:59:47 nvme_scc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:30.683 18:59:47 nvme_scc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:30.683 18:59:47 nvme_scc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:30.683 18:59:47 nvme_scc -- scripts/common.sh@336 -- # IFS=.-: 00:09:30.683 18:59:47 nvme_scc -- scripts/common.sh@336 -- # read -ra ver1 00:09:30.683 18:59:47 nvme_scc -- scripts/common.sh@337 -- # IFS=.-: 00:09:30.683 18:59:47 nvme_scc -- scripts/common.sh@337 -- # read -ra ver2 00:09:30.683 18:59:47 nvme_scc -- scripts/common.sh@338 -- # local 'op=<' 00:09:30.683 18:59:47 nvme_scc -- scripts/common.sh@340 -- # ver1_l=2 00:09:30.683 18:59:47 nvme_scc -- scripts/common.sh@341 -- # ver2_l=1 00:09:30.683 18:59:47 nvme_scc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:30.683 18:59:47 nvme_scc -- scripts/common.sh@344 -- # case "$op" in 00:09:30.683 18:59:47 nvme_scc -- scripts/common.sh@345 -- # : 1 00:09:30.683 18:59:47 nvme_scc -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:30.683 18:59:47 nvme_scc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:30.683 18:59:47 nvme_scc -- scripts/common.sh@365 -- # decimal 1 00:09:30.683 18:59:47 nvme_scc -- scripts/common.sh@353 -- # local d=1 00:09:30.683 18:59:47 nvme_scc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:30.683 18:59:47 nvme_scc -- scripts/common.sh@355 -- # echo 1 00:09:30.683 18:59:47 nvme_scc -- scripts/common.sh@365 -- # ver1[v]=1 00:09:30.683 18:59:47 nvme_scc -- scripts/common.sh@366 -- # decimal 2 00:09:30.683 18:59:47 nvme_scc -- scripts/common.sh@353 -- # local d=2 00:09:30.683 18:59:47 nvme_scc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:30.683 18:59:47 nvme_scc -- scripts/common.sh@355 -- # echo 2 00:09:30.683 18:59:47 nvme_scc -- scripts/common.sh@366 -- # ver2[v]=2 00:09:30.683 18:59:47 nvme_scc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:30.683 18:59:47 nvme_scc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:30.683 18:59:47 nvme_scc -- scripts/common.sh@368 -- # return 0 00:09:30.683 18:59:47 nvme_scc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:30.683 18:59:47 nvme_scc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:30.683 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:30.683 --rc genhtml_branch_coverage=1 00:09:30.683 --rc genhtml_function_coverage=1 00:09:30.683 --rc genhtml_legend=1 00:09:30.683 --rc geninfo_all_blocks=1 00:09:30.683 --rc geninfo_unexecuted_blocks=1 00:09:30.683 00:09:30.683 ' 00:09:30.683 18:59:47 nvme_scc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:30.683 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:30.683 --rc genhtml_branch_coverage=1 00:09:30.683 --rc genhtml_function_coverage=1 00:09:30.683 --rc genhtml_legend=1 00:09:30.683 --rc geninfo_all_blocks=1 00:09:30.683 --rc geninfo_unexecuted_blocks=1 00:09:30.683 00:09:30.683 ' 00:09:30.683 18:59:47 nvme_scc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:09:30.683 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:30.683 --rc genhtml_branch_coverage=1 00:09:30.683 --rc genhtml_function_coverage=1 00:09:30.683 --rc genhtml_legend=1 00:09:30.683 --rc geninfo_all_blocks=1 00:09:30.683 --rc geninfo_unexecuted_blocks=1 00:09:30.683 00:09:30.683 ' 00:09:30.683 18:59:47 nvme_scc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:30.683 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:30.683 --rc genhtml_branch_coverage=1 00:09:30.683 --rc genhtml_function_coverage=1 00:09:30.683 --rc genhtml_legend=1 00:09:30.683 --rc geninfo_all_blocks=1 00:09:30.683 --rc geninfo_unexecuted_blocks=1 00:09:30.683 00:09:30.683 ' 00:09:30.683 18:59:47 nvme_scc -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:09:30.683 18:59:47 nvme_scc -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:09:30.683 18:59:47 nvme_scc -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:09:30.683 18:59:47 nvme_scc -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:09:30.683 18:59:47 nvme_scc -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:30.683 18:59:47 nvme_scc -- scripts/common.sh@15 -- # shopt -s extglob 00:09:30.683 18:59:47 nvme_scc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:30.683 18:59:47 nvme_scc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:30.683 18:59:47 nvme_scc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:30.683 18:59:47 nvme_scc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:30.683 18:59:47 nvme_scc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:30.683 18:59:47 nvme_scc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:30.683 18:59:47 nvme_scc -- paths/export.sh@5 -- # export PATH 00:09:30.683 18:59:47 nvme_scc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:09:30.683 18:59:47 nvme_scc -- nvme/functions.sh@10 -- # ctrls=() 00:09:30.683 18:59:47 nvme_scc -- nvme/functions.sh@10 -- # declare -A ctrls 00:09:30.683 18:59:47 nvme_scc -- nvme/functions.sh@11 -- # nvmes=() 00:09:30.683 18:59:47 nvme_scc -- nvme/functions.sh@11 -- # declare -A nvmes 00:09:30.683 18:59:47 nvme_scc -- nvme/functions.sh@12 -- # bdfs=() 00:09:30.683 18:59:47 nvme_scc -- nvme/functions.sh@12 -- # declare -A bdfs 00:09:30.683 18:59:47 nvme_scc -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:09:30.683 18:59:47 nvme_scc -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:09:30.683 18:59:47 nvme_scc -- nvme/functions.sh@14 -- # nvme_name= 00:09:30.683 18:59:47 nvme_scc -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:30.683 18:59:47 nvme_scc -- nvme/nvme_scc.sh@12 -- # uname 00:09:30.683 18:59:47 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ Linux == Linux ]] 00:09:30.683 18:59:47 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ QEMU == QEMU ]] 00:09:30.683 18:59:47 nvme_scc -- nvme/nvme_scc.sh@14 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:09:30.683 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:30.946 Waiting for block devices as requested 00:09:30.946 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:09:30.946 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:09:31.206 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:09:31.206 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:09:36.597 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:09:36.597 18:59:53 nvme_scc -- nvme/nvme_scc.sh@16 -- # scan_nvme_ctrls 00:09:36.597 18:59:53 nvme_scc -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:09:36.597 18:59:53 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:36.597 18:59:53 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:09:36.597 18:59:53 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:11.0 00:09:36.597 18:59:53 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:11.0 00:09:36.597 18:59:53 nvme_scc -- scripts/common.sh@18 -- # local i 00:09:36.597 18:59:53 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:09:36.597 18:59:53 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:36.597 18:59:53 nvme_scc -- scripts/common.sh@27 -- # return 0 00:09:36.597 18:59:53 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:09:36.597 18:59:53 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:09:36.597 18:59:53 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:09:36.597 18:59:53 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:36.597 18:59:53 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:09:36.597 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.597 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.597 18:59:53 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:09:36.597 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:36.597 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.597 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.597 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:36.597 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:09:36.597 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 
00:09:36.597 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.597 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.597 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:36.597 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:09:36.597 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:09:36.597 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.597 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.597 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 00:09:36.597 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12341 "' 00:09:36.597 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sn]='12341 ' 00:09:36.597 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.597 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.597 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:36.597 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:09:36.597 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:09:36.597 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.597 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.597 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:36.597 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:09:36.597 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:09:36.597 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.597 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.597 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:36.597 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:09:36.597 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:09:36.597 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.597 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.597 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:36.597 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:09:36.597 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:09:36.597 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.597 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.597 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.597 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 00:09:36.597 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cmic]=0 00:09:36.597 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.597 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.597 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:36.597 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:09:36.597 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:09:36.597 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.597 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.597 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.597 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:09:36.597 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:09:36.597 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.597 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # 
read -r reg val 00:09:36.597 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:36.597 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:09:36.597 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:09:36.597 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.597 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.597 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.597 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:09:36.597 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:09:36.597 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.597 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.597 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.597 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:09:36.597 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:09:36.597 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.597 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.597 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:36.597 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:09:36.597 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:09:36.597 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.597 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.597 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:09:36.597 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"' 00:09:36.597 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000 00:09:36.598 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.598 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.598 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.598 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:09:36.598 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:09:36.598 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.598 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.598 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:36.598 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:09:36.598 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:09:36.598 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.598 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.598 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:36.598 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:36.598 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:09:36.598 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.598 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.598 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.598 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:09:36.598 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:09:36.598 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.598 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.598 18:59:53 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.598 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:09:36.598 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:09:36.598 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.598 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.598 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.598 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:09:36.598 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:09:36.598 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.598 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.598 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.598 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:09:36.598 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:09:36.598 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.598 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.598 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.598 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:09:36.598 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:09:36.598 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.598 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.598 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.598 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:09:36.598 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:09:36.598 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.598 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.598 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:36.598 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:09:36.598 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:09:36.598 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.598 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.598 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:36.598 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:09:36.598 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:09:36.598 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.598 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.598 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:36.598 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:09:36.598 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:09:36.598 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.598 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.598 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:36.598 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:09:36.598 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:09:36.598 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.598 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.598 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:36.598 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:09:36.598 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 
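A couple of the id-ctrl fields captured above decode directly: ver=0x10400 carries the NVMe version (major in bits 31:16, minor in bits 15:8, so 1.4 here), and mdts=7 caps a single transfer at 2^MDTS minimum-sized pages. The sketch below just redoes that arithmetic; the 4 KiB page size is an assumption (the usual CAP.MPSMIN), not something shown in this log.

```bash
# Decode two fields recorded in the trace above (values taken from this run).
ver=0x10400   # NVMe version: major = bits 31:16, minor = bits 15:8
mdts=7        # max data transfer size, as a power of two of CAP.MPSMIN pages

printf 'NVMe version: %d.%d\n' $(( ver >> 16 )) $(( (ver >> 8) & 0xff ))

# Assuming the common 4 KiB minimum page size, MDTS=7 means 2^7 * 4 KiB = 512 KiB
# per command.
page_size=$(( 4 * 1024 ))
printf 'Max transfer: %d KiB\n' $(( (1 << mdts) * page_size / 1024 ))
```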
00:09:36.598 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.598 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.598 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.598 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:09:36.598 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:09:36.598 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.598 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.598 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.598 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:09:36.598 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:09:36.598 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.598 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.598 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.598 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:09:36.598 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:09:36.598 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.598 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.598 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.598 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:09:36.598 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:09:36.598 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.598 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.598 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:36.598 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:09:36.598 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:09:36.598 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.598 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.598 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:36.598 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:09:36.598 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:09:36.598 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.598 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.598 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.598 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:09:36.598 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:09:36.598 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.598 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.598 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.598 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:09:36.598 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:09:36.598 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.598 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.598 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.598 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:09:36.598 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:09:36.598 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.598 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.598 18:59:53 nvme_scc -- nvme/functions.sh@22 
-- # [[ -n 0 ]] 00:09:36.598 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:09:36.598 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:09:36.598 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.598 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.598 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.598 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:09:36.598 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:09:36.598 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.598 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.598 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.598 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:09:36.598 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:09:36.598 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.598 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.598 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.598 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:09:36.598 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:09:36.598 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.598 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.598 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.598 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:09:36.598 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:09:36.598 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.598 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.598 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.598 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:09:36.598 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:09:36.598 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.598 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.598 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.598 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:09:36.598 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:09:36.598 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.598 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.598 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.598 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:09:36.598 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:09:36.598 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.598 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.599 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.599 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:09:36.599 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:09:36.599 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.599 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.599 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.599 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:09:36.599 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:09:36.599 18:59:53 nvme_scc 
-- nvme/functions.sh@21 -- # IFS=: 00:09:36.599 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.599 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.599 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:09:36.599 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:09:36.599 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.599 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.599 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.599 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:09:36.599 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:09:36.599 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.599 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.599 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.599 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:09:36.599 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:09:36.599 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.599 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.599 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.599 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:09:36.599 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:09:36.599 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.599 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.599 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.599 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"' 00:09:36.599 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0 00:09:36.599 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.599 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.599 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.599 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:09:36.599 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:09:36.599 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.599 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.599 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.599 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:09:36.599 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:09:36.599 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.599 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.599 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.599 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:09:36.599 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:09:36.599 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.599 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.599 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.599 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:09:36.599 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:09:36.599 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.599 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.599 18:59:53 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.599 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:09:36.599 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:09:36.599 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.599 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.599 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.599 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:09:36.599 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:09:36.599 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.599 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.599 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.599 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:09:36.599 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:09:36.599 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.599 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.599 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:36.599 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:09:36.599 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:09:36.599 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.599 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.599 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:36.599 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:09:36.599 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:09:36.599 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.599 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.599 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.599 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:09:36.599 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:09:36.599 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.599 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.599 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:36.599 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:09:36.599 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:09:36.599 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.599 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.599 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:36.599 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:09:36.599 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:09:36.599 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.599 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.599 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.599 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:09:36.599 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:09:36.599 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.599 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.599 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.599 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:09:36.599 18:59:53 nvme_scc -- nvme/functions.sh@23 
-- # nvme0[fna]=0 00:09:36.599 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.599 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.599 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:36.599 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:09:36.599 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:09:36.599 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.599 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.599 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.599 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 00:09:36.599 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:09:36.599 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.599 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.599 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.599 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:09:36.599 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:09:36.599 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.599 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.599 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.599 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:09:36.599 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:09:36.599 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.599 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.599 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.599 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:09:36.599 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:09:36.599 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.599 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.599 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.599 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:09:36.599 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:09:36.599 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.599 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.599 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:36.599 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:09:36.599 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:09:36.599 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.599 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.599 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:36.599 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:09:36.599 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:09:36.599 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.599 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.599 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.599 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:09:36.599 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:09:36.599 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.599 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.599 18:59:53 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.599 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:09:36.599 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:09:36.599 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.599 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.599 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.599 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:09:36.599 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:09:36.600 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.600 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.600 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12341 ]] 00:09:36.600 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:12341"' 00:09:36.600 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12341 00:09:36.600 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.600 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.600 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.600 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:09:36.600 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:09:36.600 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.600 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.600 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.600 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:09:36.600 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:09:36.600 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.600 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.600 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.600 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:09:36.600 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:09:36.600 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.600 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.600 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.600 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:09:36.600 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:09:36.600 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.600 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.600 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.600 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:09:36.600 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:09:36.600 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.600 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.600 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.600 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:09:36.600 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:09:36.600 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.600 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.600 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:36.600 18:59:53 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:36.600 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:36.600 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.600 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.600 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:36.600 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:36.600 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:36.600 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.600 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.600 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:36.600 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:09:36.600 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:09:36.600 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.600 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.600 18:59:53 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:09:36.600 18:59:53 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:36.600 18:59:53 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/ng0n1 ]] 00:09:36.600 18:59:53 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng0n1 00:09:36.600 18:59:53 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng0n1 id-ns /dev/ng0n1 00:09:36.600 18:59:53 nvme_scc -- nvme/functions.sh@17 -- # local ref=ng0n1 reg val 00:09:36.600 18:59:53 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:36.600 18:59:53 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng0n1=()' 00:09:36.600 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.600 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.600 18:59:53 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng0n1 00:09:36.600 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:36.600 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.600 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.600 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:36.600 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nsze]="0x140000"' 00:09:36.600 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nsze]=0x140000 00:09:36.600 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.600 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.600 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:36.600 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[ncap]="0x140000"' 00:09:36.600 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[ncap]=0x140000 00:09:36.600 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.600 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.600 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:36.600 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nuse]="0x140000"' 00:09:36.600 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nuse]=0x140000 00:09:36.600 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.600 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # 
read -r reg val 00:09:36.600 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:36.600 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nsfeat]="0x14"' 00:09:36.600 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nsfeat]=0x14 00:09:36.600 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.600 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.600 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:36.600 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nlbaf]="7"' 00:09:36.600 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nlbaf]=7 00:09:36.600 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.600 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.600 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:36.600 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[flbas]="0x4"' 00:09:36.600 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[flbas]=0x4 00:09:36.600 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.600 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.600 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:36.600 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[mc]="0x3"' 00:09:36.600 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[mc]=0x3 00:09:36.600 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.600 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.600 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:36.600 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[dpc]="0x1f"' 00:09:36.600 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[dpc]=0x1f 00:09:36.600 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.600 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.600 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.600 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[dps]="0"' 00:09:36.600 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[dps]=0 00:09:36.600 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.600 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.600 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.600 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nmic]="0"' 00:09:36.600 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nmic]=0 00:09:36.600 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.600 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.600 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.600 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[rescap]="0"' 00:09:36.600 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[rescap]=0 00:09:36.600 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.600 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.600 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.600 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[fpi]="0"' 00:09:36.600 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[fpi]=0 00:09:36.600 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.600 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.600 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:36.600 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[dlfeat]="1"' 00:09:36.600 
18:59:53 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[dlfeat]=1 00:09:36.600 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.600 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.600 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.600 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nawun]="0"' 00:09:36.600 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nawun]=0 00:09:36.600 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.600 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.600 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.600 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nawupf]="0"' 00:09:36.600 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nawupf]=0 00:09:36.600 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.600 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.600 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.600 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nacwu]="0"' 00:09:36.600 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nacwu]=0 00:09:36.600 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.600 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.600 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.600 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nabsn]="0"' 00:09:36.600 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nabsn]=0 00:09:36.600 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.600 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.600 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.600 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nabo]="0"' 00:09:36.600 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nabo]=0 00:09:36.600 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.601 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.601 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.601 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nabspf]="0"' 00:09:36.601 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nabspf]=0 00:09:36.601 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.601 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.601 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.601 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[noiob]="0"' 00:09:36.601 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[noiob]=0 00:09:36.601 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.601 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.601 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.601 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nvmcap]="0"' 00:09:36.601 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nvmcap]=0 00:09:36.601 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.601 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.601 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.601 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[npwg]="0"' 00:09:36.601 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[npwg]=0 00:09:36.601 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.601 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 
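The namespace loop above matches both node types under the controller, the generic character node (ng0n1) and the block node (nvme0n1), via an extglob pattern, and indexes _ctrl_ns by the namespace number. The fragment below only illustrates that glob and the parameter expansions seen in the trace; the variable names are examples, not the real helper.

```bash
# Illustration of the enumeration idiom from the trace: @(...|...) matches
# ng0n1 and nvme0n1 under a controller, and ${ns##*n} peels off the namespace
# id used as the array index.
shopt -s extglob nullglob
declare -A ctrl_ns

ctrl=/sys/class/nvme/nvme0
for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do
    nsid=${ns##*n}                 # ".../ng0n1" or ".../nvme0n1" -> "1"
    ctrl_ns[$nsid]=${ns##*/}
    echo "namespace $nsid -> ${ctrl_ns[$nsid]}"
done
```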
00:09:36.601 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.601 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[npwa]="0"' 00:09:36.601 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[npwa]=0 00:09:36.601 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.601 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.601 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.601 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[npdg]="0"' 00:09:36.601 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[npdg]=0 00:09:36.601 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.601 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.601 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.601 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[npda]="0"' 00:09:36.601 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[npda]=0 00:09:36.601 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.601 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.601 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.601 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nows]="0"' 00:09:36.601 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nows]=0 00:09:36.601 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.601 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.601 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:36.601 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[mssrl]="128"' 00:09:36.601 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[mssrl]=128 00:09:36.601 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.601 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.601 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:36.601 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[mcl]="128"' 00:09:36.601 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[mcl]=128 00:09:36.601 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.601 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.601 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:36.601 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[msrc]="127"' 00:09:36.601 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[msrc]=127 00:09:36.601 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.601 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.601 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.601 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nulbaf]="0"' 00:09:36.601 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nulbaf]=0 00:09:36.601 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.601 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.601 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.601 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[anagrpid]="0"' 00:09:36.601 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[anagrpid]=0 00:09:36.601 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.601 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.601 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.601 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nsattr]="0"' 00:09:36.601 18:59:53 nvme_scc -- 
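For scale, the id-ns data for ng0n1 reports nsze/ncap/nuse of 0x140000 blocks with flbas=0x4, and the in-use LBA format is listed a few entries further down as lbads:12 (4096-byte blocks). A quick back-of-the-envelope check of the namespace size under that assumption:

```bash
# Namespace size from the id-ns fields recorded in this trace:
# nsze = 0x140000 blocks, in-use LBA format lbads:12 -> 2^12 = 4096-byte blocks.
nsze=0x140000
lbads=12
bytes=$(( nsze * (1 << lbads) ))
printf 'namespace size: %d bytes (%d GiB)\n' "$bytes" $(( bytes / 1024**3 ))
# -> 5368709120 bytes, i.e. a 5 GiB QEMU-backed namespace.
```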
nvme/functions.sh@23 -- # ng0n1[nsattr]=0 00:09:36.601 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.601 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.601 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.601 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nvmsetid]="0"' 00:09:36.601 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nvmsetid]=0 00:09:36.601 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.601 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.601 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.601 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[endgid]="0"' 00:09:36.601 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[endgid]=0 00:09:36.601 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.601 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.601 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:36.601 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nguid]="00000000000000000000000000000000"' 00:09:36.601 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nguid]=00000000000000000000000000000000 00:09:36.601 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.601 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.601 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:36.601 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[eui64]="0000000000000000"' 00:09:36.601 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[eui64]=0000000000000000 00:09:36.601 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.601 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.601 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:36.601 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:36.601 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:36.601 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.601 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.601 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:36.601 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:36.601 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:36.601 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.601 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.601 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:36.601 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:36.601 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:36.601 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.601 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.601 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:36.601 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:36.601 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:36.601 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.601 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.601 18:59:53 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:36.601 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:36.601 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:36.601 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.601 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.601 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:36.601 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:36.601 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:36.601 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.601 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.601 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:36.601 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:36.601 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:36.601 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.601 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.601 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:36.601 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:36.601 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:36.601 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.601 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.601 18:59:53 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng0n1 00:09:36.601 18:59:53 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:36.601 18:59:53 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]] 00:09:36.601 18:59:53 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme0n1 00:09:36.601 18:59:53 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1 00:09:36.601 18:59:53 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val 00:09:36.601 18:59:53 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:36.601 18:59:53 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()' 00:09:36.601 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.601 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.601 18:59:53 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1 00:09:36.601 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:36.601 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.602 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.602 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:36.602 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsze]="0x140000"' 00:09:36.602 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x140000 00:09:36.602 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.602 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.602 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:36.602 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[ncap]="0x140000"' 00:09:36.602 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x140000 00:09:36.602 18:59:53 nvme_scc 
-- nvme/functions.sh@21 -- # IFS=: 00:09:36.602 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.602 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:36.602 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nuse]="0x140000"' 00:09:36.602 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x140000 00:09:36.602 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.602 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.602 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:36.602 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsfeat]="0x14"' 00:09:36.602 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0x14 00:09:36.602 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.602 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.602 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:36.602 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nlbaf]="7"' 00:09:36.602 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nlbaf]=7 00:09:36.602 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.602 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.602 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:36.602 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[flbas]="0x4"' 00:09:36.602 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[flbas]=0x4 00:09:36.602 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.602 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.602 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:36.602 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mc]="0x3"' 00:09:36.602 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mc]=0x3 00:09:36.602 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.602 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.602 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:36.602 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dpc]="0x1f"' 00:09:36.602 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dpc]=0x1f 00:09:36.602 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.602 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.602 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.602 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dps]="0"' 00:09:36.602 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dps]=0 00:09:36.602 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.602 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.602 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.602 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nmic]="0"' 00:09:36.602 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nmic]=0 00:09:36.602 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.602 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.602 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.602 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[rescap]="0"' 00:09:36.602 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[rescap]=0 00:09:36.602 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.602 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.602 18:59:53 
nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.602 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[fpi]="0"' 00:09:36.602 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[fpi]=0 00:09:36.602 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.602 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.602 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:36.602 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dlfeat]="1"' 00:09:36.602 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dlfeat]=1 00:09:36.602 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.602 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.602 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.602 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawun]="0"' 00:09:36.602 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nawun]=0 00:09:36.602 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.602 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.602 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.602 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawupf]="0"' 00:09:36.602 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nawupf]=0 00:09:36.602 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.602 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.602 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.602 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nacwu]="0"' 00:09:36.602 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nacwu]=0 00:09:36.602 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.602 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.602 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.602 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabsn]="0"' 00:09:36.602 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabsn]=0 00:09:36.602 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.602 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.602 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.602 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabo]="0"' 00:09:36.602 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabo]=0 00:09:36.602 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.602 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.602 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.602 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabspf]="0"' 00:09:36.602 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabspf]=0 00:09:36.602 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.602 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.602 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.602 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[noiob]="0"' 00:09:36.602 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[noiob]=0 00:09:36.602 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.602 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.602 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.602 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmcap]="0"' 00:09:36.602 18:59:53 nvme_scc -- 
nvme/functions.sh@23 -- # nvme0n1[nvmcap]=0 00:09:36.602 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.602 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.602 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.602 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwg]="0"' 00:09:36.602 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npwg]=0 00:09:36.602 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.602 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.602 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.602 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwa]="0"' 00:09:36.602 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npwa]=0 00:09:36.602 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.602 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.602 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.602 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npdg]="0"' 00:09:36.602 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npdg]=0 00:09:36.602 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.602 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.602 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.602 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npda]="0"' 00:09:36.602 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npda]=0 00:09:36.602 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.602 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.602 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.602 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nows]="0"' 00:09:36.602 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nows]=0 00:09:36.602 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.603 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.603 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:36.603 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mssrl]="128"' 00:09:36.603 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mssrl]=128 00:09:36.603 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.603 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.603 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:36.603 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mcl]="128"' 00:09:36.603 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mcl]=128 00:09:36.603 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.603 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.603 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:36.603 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[msrc]="127"' 00:09:36.603 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[msrc]=127 00:09:36.603 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.603 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.603 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.603 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nulbaf]="0"' 00:09:36.603 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0 00:09:36.603 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.603 18:59:53 nvme_scc -- nvme/functions.sh@21 
-- # read -r reg val 00:09:36.603 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.603 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[anagrpid]="0"' 00:09:36.603 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0 00:09:36.603 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.603 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.603 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.603 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsattr]="0"' 00:09:36.603 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0 00:09:36.603 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.603 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.603 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.603 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmsetid]="0"' 00:09:36.603 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0 00:09:36.603 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.603 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.603 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.603 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[endgid]="0"' 00:09:36.603 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[endgid]=0 00:09:36.603 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.603 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.603 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:36.603 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nguid]="00000000000000000000000000000000"' 00:09:36.603 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nguid]=00000000000000000000000000000000 00:09:36.603 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.603 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.603 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:36.603 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[eui64]="0000000000000000"' 00:09:36.603 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[eui64]=0000000000000000 00:09:36.603 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.603 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.603 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:36.603 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:36.603 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:36.603 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.603 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.603 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:36.603 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:36.603 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:36.603 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.603 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.603 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:36.603 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:36.603 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # 
nvme0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:36.603 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.603 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.603 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:36.603 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:36.603 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:36.603 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.603 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.603 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:36.603 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:36.603 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:36.603 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.603 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.603 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:36.603 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:36.603 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:36.603 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.603 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.603 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:36.603 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:36.603 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:36.603 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.603 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.603 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:36.603 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:36.603 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:36.603 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.603 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.603 18:59:53 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1 00:09:36.603 18:59:53 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 00:09:36.603 18:59:53 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:09:36.603 18:59:53 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:11.0 00:09:36.603 18:59:53 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:09:36.603 18:59:53 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:36.603 18:59:53 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]] 00:09:36.603 18:59:53 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:10.0 00:09:36.603 18:59:53 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:10.0 00:09:36.603 18:59:53 nvme_scc -- scripts/common.sh@18 -- # local i 00:09:36.603 18:59:53 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:09:36.603 18:59:53 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:36.603 18:59:53 nvme_scc -- scripts/common.sh@27 -- # return 0 00:09:36.603 18:59:53 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme1 00:09:36.603 18:59:53 
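The stretch of trace above finishes the first device: nvme0's id-ctrl and nvme0n1's id-ns output have been folded into the nvme0 and nvme0n1 associative arrays, and the script records them in ctrls, nvmes, bdfs and ordered_ctrls before moving on to /sys/class/nvme/nvme1 (PCI 0000:00:10.0). The trace that follows repeats the same nvme_get walk for nvme1. As a reading aid, here is a minimal sketch of the parsing loop this xtrace records; it approximates nvme/functions.sh rather than reproducing it, and the helper name nvme_get_sketch is ours:

    # Sketch only: approximates the traced nvme_get, not the actual functions.sh source.
    # "nvme id-ctrl" / "nvme id-ns" print one "reg : val" pair per line; each pair is
    # eval'd into a global associative array named after the device (nvme1, ng1n1, ...).
    nvme_get_sketch() {
        local ref=$1 subcmd=$2 dev=$3 reg val
        declare -gA "$ref"                        # global assoc array, e.g. nvme1
        while IFS=: read -r reg val; do
            reg=${reg//[[:space:]]/}              # "vid       " -> "vid"
            val=${val# }                          # drop the space after ':'
            [[ -n $reg && -n $val ]] || continue  # skip the header / blank lines
            # same shape as the traced assignments, e.g. eval 'nvme1[vid]="0x1b36"'
            eval "${ref}[${reg}]=\"${val}\""
        done < <(nvme "$subcmd" "$dev")           # the run above invokes /usr/local/src/nvme-cli/nvme
    }

    # Usage mirroring the call traced below: nvme_get_sketch nvme1 id-ctrl /dev/nvme1

The lbaf lines keep their embedded colons because read assigns everything after the first ':' to val, which is why values such as 'ms:0 lbads:9 rp:0 ' land in the array unchanged.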
nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1 00:09:36.603 18:59:53 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme1 reg val 00:09:36.603 18:59:53 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:36.603 18:59:53 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme1=()' 00:09:36.603 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.603 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.603 18:59:53 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1 00:09:36.603 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:36.603 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.603 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.603 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:36.603 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vid]="0x1b36"' 00:09:36.603 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36 00:09:36.603 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.603 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.603 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:36.603 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ssvid]="0x1af4"' 00:09:36.603 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4 00:09:36.603 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.603 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.603 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:09:36.603 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sn]="12340 "' 00:09:36.603 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sn]='12340 ' 00:09:36.603 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.603 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.603 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:36.603 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl "' 00:09:36.603 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl ' 00:09:36.603 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.603 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.603 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:36.603 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fr]="8.0.0 "' 00:09:36.603 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 ' 00:09:36.603 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.603 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.603 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:36.603 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rab]="6"' 00:09:36.603 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rab]=6 00:09:36.603 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.603 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.603 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:36.603 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ieee]="525400"' 00:09:36.603 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ieee]=525400 00:09:36.603 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.604 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.604 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.604 
18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cmic]="0"' 00:09:36.604 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cmic]=0 00:09:36.604 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.604 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.604 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:36.604 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mdts]="7"' 00:09:36.604 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mdts]=7 00:09:36.604 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.604 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.604 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.604 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cntlid]="0"' 00:09:36.604 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cntlid]=0 00:09:36.604 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.604 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.604 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:36.604 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ver]="0x10400"' 00:09:36.604 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400 00:09:36.604 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.604 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.604 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.604 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3r]="0"' 00:09:36.604 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0 00:09:36.604 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.604 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.604 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.604 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3e]="0"' 00:09:36.604 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0 00:09:36.604 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.604 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.604 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:36.604 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oaes]="0x100"' 00:09:36.604 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100 00:09:36.604 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.604 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.604 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:09:36.604 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ctratt]="0x8000"' 00:09:36.604 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000 00:09:36.604 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.604 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.604 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.604 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rrls]="0"' 00:09:36.604 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rrls]=0 00:09:36.604 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.604 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.604 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:36.604 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cntrltype]="1"' 00:09:36.604 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1 00:09:36.604 
18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.604 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.604 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:36.604 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:36.604 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000 00:09:36.604 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.604 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.604 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.604 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt1]="0"' 00:09:36.604 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt1]=0 00:09:36.604 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.604 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.604 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.604 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt2]="0"' 00:09:36.604 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt2]=0 00:09:36.604 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.604 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.604 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.604 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt3]="0"' 00:09:36.604 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt3]=0 00:09:36.604 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.604 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.604 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.604 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nvmsr]="0"' 00:09:36.604 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0 00:09:36.604 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.604 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.604 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.604 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vwci]="0"' 00:09:36.604 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vwci]=0 00:09:36.604 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.604 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.604 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.604 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mec]="0"' 00:09:36.604 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mec]=0 00:09:36.604 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.604 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.604 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:36.604 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oacs]="0x12a"' 00:09:36.604 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a 00:09:36.604 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.604 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.604 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:36.604 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[acl]="3"' 00:09:36.604 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1[acl]=3 00:09:36.604 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.604 18:59:53 nvme_scc -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:36.604 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:36.604 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[aerl]="3"' 00:09:36.604 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1[aerl]=3 00:09:36.604 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.604 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.604 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:36.604 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[frmw]="0x3"' 00:09:36.604 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3 00:09:36.604 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.604 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.604 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:36.604 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[lpa]="0x7"' 00:09:36.604 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7 00:09:36.604 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.604 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.604 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.604 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[elpe]="0"' 00:09:36.604 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1[elpe]=0 00:09:36.604 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.604 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.604 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.604 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[npss]="0"' 00:09:36.604 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1[npss]=0 00:09:36.604 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.604 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.604 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.604 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[avscc]="0"' 00:09:36.604 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1[avscc]=0 00:09:36.604 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.604 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.604 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.604 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[apsta]="0"' 00:09:36.604 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1[apsta]=0 00:09:36.604 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.604 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.604 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:36.604 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[wctemp]="343"' 00:09:36.604 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1[wctemp]=343 00:09:36.604 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.604 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.604 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:36.604 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cctemp]="373"' 00:09:36.604 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cctemp]=373 00:09:36.604 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.604 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.604 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.604 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme1[mtfa]="0"' 00:09:36.604 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mtfa]=0 00:09:36.604 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.604 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.604 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.604 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmpre]="0"' 00:09:36.604 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmpre]=0 00:09:36.604 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.604 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.604 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.604 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmmin]="0"' 00:09:36.604 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmmin]=0 00:09:36.604 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.604 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.604 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.604 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[tnvmcap]="0"' 00:09:36.604 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1[tnvmcap]=0 00:09:36.605 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.605 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.605 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.605 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[unvmcap]="0"' 00:09:36.605 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0 00:09:36.605 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.605 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.605 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.605 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rpmbs]="0"' 00:09:36.605 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0 00:09:36.605 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.605 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.605 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.605 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[edstt]="0"' 00:09:36.605 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1[edstt]=0 00:09:36.605 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.605 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.605 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.605 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[dsto]="0"' 00:09:36.605 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1[dsto]=0 00:09:36.605 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.605 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.605 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.605 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fwug]="0"' 00:09:36.605 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fwug]=0 00:09:36.605 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.605 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.605 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.605 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[kas]="0"' 00:09:36.605 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1[kas]=0 00:09:36.605 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.605 18:59:53 nvme_scc -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:36.605 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.605 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hctma]="0"' 00:09:36.605 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hctma]=0 00:09:36.605 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.605 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.605 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.605 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mntmt]="0"' 00:09:36.605 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mntmt]=0 00:09:36.605 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.605 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.605 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.605 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mxtmt]="0"' 00:09:36.605 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0 00:09:36.605 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.605 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.605 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.605 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sanicap]="0"' 00:09:36.605 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sanicap]=0 00:09:36.605 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.605 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.605 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.605 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmminds]="0"' 00:09:36.605 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmminds]=0 00:09:36.605 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.605 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.605 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.605 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmmaxd]="0"' 00:09:36.605 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0 00:09:36.605 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.605 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.605 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.605 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nsetidmax]="0"' 00:09:36.605 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0 00:09:36.605 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.605 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.605 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.605 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[endgidmax]="0"' 00:09:36.605 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0 00:09:36.605 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.605 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.605 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.605 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anatt]="0"' 00:09:36.605 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anatt]=0 00:09:36.605 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.605 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.605 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.605 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme1[anacap]="0"' 00:09:36.605 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anacap]=0 00:09:36.605 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.605 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.605 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.605 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anagrpmax]="0"' 00:09:36.605 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0 00:09:36.605 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.605 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.605 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.605 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nanagrpid]="0"' 00:09:36.605 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0 00:09:36.605 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.605 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.605 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.605 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[pels]="0"' 00:09:36.605 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1[pels]=0 00:09:36.605 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.605 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.605 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.605 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[domainid]="0"' 00:09:36.605 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1[domainid]=0 00:09:36.605 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.605 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.605 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.605 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[megcap]="0"' 00:09:36.605 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1[megcap]=0 00:09:36.605 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.605 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.605 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:36.605 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sqes]="0x66"' 00:09:36.605 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66 00:09:36.605 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.605 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.605 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:36.605 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cqes]="0x44"' 00:09:36.605 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44 00:09:36.605 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.605 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.605 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.605 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxcmd]="0"' 00:09:36.605 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0 00:09:36.605 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.605 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.605 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:36.605 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nn]="256"' 00:09:36.605 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nn]=256 00:09:36.605 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
00:09:36.605 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.605 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:36.605 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oncs]="0x15d"' 00:09:36.605 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d 00:09:36.605 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.605 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.605 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.605 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fuses]="0"' 00:09:36.605 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fuses]=0 00:09:36.605 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.605 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.605 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.605 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fna]="0"' 00:09:36.605 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fna]=0 00:09:36.605 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.605 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.605 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:36.605 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vwc]="0x7"' 00:09:36.605 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7 00:09:36.605 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.605 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.605 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.605 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[awun]="0"' 00:09:36.605 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1[awun]=0 00:09:36.605 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.605 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.605 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.605 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[awupf]="0"' 00:09:36.605 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1[awupf]=0 00:09:36.605 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.605 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.605 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.606 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[icsvscc]="0"' 00:09:36.606 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0 00:09:36.606 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.606 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.606 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.606 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nwpc]="0"' 00:09:36.606 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nwpc]=0 00:09:36.606 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.606 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.606 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.606 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[acwu]="0"' 00:09:36.606 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1[acwu]=0 00:09:36.606 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.606 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.606 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:36.606 18:59:53 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme1[ocfs]="0x3"' 00:09:36.606 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3 00:09:36.606 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.606 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.606 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:36.606 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sgls]="0x1"' 00:09:36.606 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1 00:09:36.606 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.606 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.606 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.606 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mnan]="0"' 00:09:36.606 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mnan]=0 00:09:36.606 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.606 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.606 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.606 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxdna]="0"' 00:09:36.606 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxdna]=0 00:09:36.606 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.606 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.606 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.606 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxcna]="0"' 00:09:36.606 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxcna]=0 00:09:36.606 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.606 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.606 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:09:36.606 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[subnqn]="nqn.2019-08.org.qemu:12340"' 00:09:36.606 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12340 00:09:36.606 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.606 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.606 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.606 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ioccsz]="0"' 00:09:36.606 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0 00:09:36.606 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.606 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.606 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.606 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[iorcsz]="0"' 00:09:36.606 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0 00:09:36.606 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.606 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.606 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.606 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[icdoff]="0"' 00:09:36.606 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1[icdoff]=0 00:09:36.606 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.606 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.606 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.606 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fcatt]="0"' 00:09:36.606 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # 
nvme1[fcatt]=0 00:09:36.606 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.606 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.606 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.606 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[msdbd]="0"' 00:09:36.606 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1[msdbd]=0 00:09:36.606 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.606 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.606 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.606 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ofcs]="0"' 00:09:36.606 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ofcs]=0 00:09:36.606 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.606 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.606 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:36.606 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:36.606 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:36.606 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.606 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.606 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:36.606 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:36.606 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:36.606 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.606 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.606 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:36.606 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[active_power_workload]="-"' 00:09:36.606 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=- 00:09:36.606 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.606 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.606 18:59:53 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns 00:09:36.606 18:59:53 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:36.606 18:59:53 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/ng1n1 ]] 00:09:36.606 18:59:53 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng1n1 00:09:36.606 18:59:53 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng1n1 id-ns /dev/ng1n1 00:09:36.606 18:59:53 nvme_scc -- nvme/functions.sh@17 -- # local ref=ng1n1 reg val 00:09:36.606 18:59:53 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:36.606 18:59:53 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng1n1=()' 00:09:36.606 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.606 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.606 18:59:53 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng1n1 00:09:36.606 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:36.606 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.606 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.606 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:36.606 18:59:53 
nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nsze]="0x17a17a"' 00:09:36.606 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nsze]=0x17a17a 00:09:36.606 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.606 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.606 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:36.606 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[ncap]="0x17a17a"' 00:09:36.606 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[ncap]=0x17a17a 00:09:36.606 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.606 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.606 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:36.606 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nuse]="0x17a17a"' 00:09:36.606 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nuse]=0x17a17a 00:09:36.606 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.606 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.606 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:36.606 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nsfeat]="0x14"' 00:09:36.606 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nsfeat]=0x14 00:09:36.606 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.606 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.606 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:36.606 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nlbaf]="7"' 00:09:36.606 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nlbaf]=7 00:09:36.606 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.606 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.606 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:36.606 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[flbas]="0x7"' 00:09:36.606 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[flbas]=0x7 00:09:36.606 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.606 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.606 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:36.606 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[mc]="0x3"' 00:09:36.606 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[mc]=0x3 00:09:36.606 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.607 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.607 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:36.607 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[dpc]="0x1f"' 00:09:36.607 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[dpc]=0x1f 00:09:36.607 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.607 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.607 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.607 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[dps]="0"' 00:09:36.607 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[dps]=0 00:09:36.607 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.607 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.607 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.607 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nmic]="0"' 00:09:36.607 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nmic]=0 
00:09:36.607 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.607 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.607 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.607 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[rescap]="0"' 00:09:36.607 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[rescap]=0 00:09:36.607 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.607 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.607 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.607 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[fpi]="0"' 00:09:36.607 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[fpi]=0 00:09:36.607 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.607 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.607 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:36.607 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[dlfeat]="1"' 00:09:36.607 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[dlfeat]=1 00:09:36.607 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.607 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.607 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.607 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nawun]="0"' 00:09:36.607 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nawun]=0 00:09:36.607 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.607 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.607 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.607 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nawupf]="0"' 00:09:36.607 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nawupf]=0 00:09:36.607 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.607 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.607 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.607 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nacwu]="0"' 00:09:36.607 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nacwu]=0 00:09:36.607 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.607 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.607 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.607 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nabsn]="0"' 00:09:36.607 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nabsn]=0 00:09:36.607 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.607 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.607 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.607 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nabo]="0"' 00:09:36.607 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nabo]=0 00:09:36.607 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.607 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.607 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.607 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nabspf]="0"' 00:09:36.607 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nabspf]=0 00:09:36.607 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.607 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.607 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ 
-n 0 ]] 00:09:36.607 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[noiob]="0"' 00:09:36.607 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[noiob]=0 00:09:36.607 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.607 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.607 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.607 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nvmcap]="0"' 00:09:36.607 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nvmcap]=0 00:09:36.607 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.607 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.607 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.607 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[npwg]="0"' 00:09:36.607 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[npwg]=0 00:09:36.607 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.607 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.607 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.607 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[npwa]="0"' 00:09:36.607 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[npwa]=0 00:09:36.607 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.607 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.607 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.607 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[npdg]="0"' 00:09:36.607 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[npdg]=0 00:09:36.607 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.607 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.607 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.607 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[npda]="0"' 00:09:36.607 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[npda]=0 00:09:36.607 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.607 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.607 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.607 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nows]="0"' 00:09:36.607 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nows]=0 00:09:36.607 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.607 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.607 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:36.607 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[mssrl]="128"' 00:09:36.607 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[mssrl]=128 00:09:36.607 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.607 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.607 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:36.607 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[mcl]="128"' 00:09:36.607 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[mcl]=128 00:09:36.607 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.607 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.607 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:36.607 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[msrc]="127"' 00:09:36.607 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[msrc]=127 00:09:36.607 18:59:53 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:09:36.607 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.607 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.607 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nulbaf]="0"' 00:09:36.607 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nulbaf]=0 00:09:36.607 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.607 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.607 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.607 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[anagrpid]="0"' 00:09:36.607 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[anagrpid]=0 00:09:36.607 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.607 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.607 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.607 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nsattr]="0"' 00:09:36.607 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nsattr]=0 00:09:36.607 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.607 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.607 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.607 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nvmsetid]="0"' 00:09:36.607 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nvmsetid]=0 00:09:36.607 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.607 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.607 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.607 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[endgid]="0"' 00:09:36.607 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[endgid]=0 00:09:36.607 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.607 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.607 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:36.607 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nguid]="00000000000000000000000000000000"' 00:09:36.607 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nguid]=00000000000000000000000000000000 00:09:36.607 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.607 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.607 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:36.607 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[eui64]="0000000000000000"' 00:09:36.607 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[eui64]=0000000000000000 00:09:36.607 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.607 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.607 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:36.607 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:36.607 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:36.607 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.607 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.607 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:36.607 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:36.607 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # 
ng1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:36.607 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.607 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.607 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:36.608 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:36.608 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:36.608 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.608 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.608 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:36.608 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:36.608 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:36.608 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.608 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.608 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:09:36.608 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:09:36.608 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:09:36.608 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.608 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.608 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:36.608 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:36.608 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:36.608 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.608 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.608 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:36.608 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:36.608 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:36.608 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.608 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.608 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:09:36.608 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:09:36.608 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:09:36.608 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.608 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.608 18:59:53 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng1n1 00:09:36.608 18:59:53 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:36.608 18:59:53 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]] 00:09:36.608 18:59:53 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme1n1 00:09:36.608 18:59:53 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1 00:09:36.608 18:59:53 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme1n1 reg val 00:09:36.608 18:59:53 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:36.608 18:59:53 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()' 00:09:36.608 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.608 
18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.608 18:59:53 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1 00:09:36.608 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:36.608 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.608 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.608 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:36.608 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsze]="0x17a17a"' 00:09:36.608 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsze]=0x17a17a 00:09:36.608 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.608 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.608 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:36.608 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[ncap]="0x17a17a"' 00:09:36.608 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[ncap]=0x17a17a 00:09:36.608 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.608 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.608 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:36.608 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nuse]="0x17a17a"' 00:09:36.608 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nuse]=0x17a17a 00:09:36.608 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.608 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.608 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:36.608 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsfeat]="0x14"' 00:09:36.608 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsfeat]=0x14 00:09:36.608 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.608 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.608 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:36.608 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nlbaf]="7"' 00:09:36.608 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nlbaf]=7 00:09:36.608 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.608 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.608 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:36.608 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[flbas]="0x7"' 00:09:36.608 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[flbas]=0x7 00:09:36.608 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.608 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.608 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:36.608 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mc]="0x3"' 00:09:36.608 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mc]=0x3 00:09:36.608 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.608 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.608 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:36.608 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dpc]="0x1f"' 00:09:36.608 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dpc]=0x1f 00:09:36.608 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.608 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.608 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
00:09:36.608 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dps]="0"' 00:09:36.608 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dps]=0 00:09:36.608 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.608 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.608 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.608 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nmic]="0"' 00:09:36.608 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nmic]=0 00:09:36.608 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.608 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.608 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.608 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[rescap]="0"' 00:09:36.608 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[rescap]=0 00:09:36.608 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.608 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.608 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.608 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[fpi]="0"' 00:09:36.608 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[fpi]=0 00:09:36.608 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.608 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.608 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:36.608 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dlfeat]="1"' 00:09:36.608 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dlfeat]=1 00:09:36.608 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.608 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.608 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.608 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawun]="0"' 00:09:36.608 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nawun]=0 00:09:36.608 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.608 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.608 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.608 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawupf]="0"' 00:09:36.608 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nawupf]=0 00:09:36.608 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.608 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.608 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.608 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nacwu]="0"' 00:09:36.608 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nacwu]=0 00:09:36.608 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.608 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.608 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.608 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabsn]="0"' 00:09:36.608 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabsn]=0 00:09:36.608 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.608 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.608 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.608 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabo]="0"' 00:09:36.608 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabo]=0 00:09:36.608 
18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.608 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.608 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.608 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabspf]="0"' 00:09:36.608 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabspf]=0 00:09:36.608 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.608 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.608 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.608 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[noiob]="0"' 00:09:36.608 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[noiob]=0 00:09:36.608 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.608 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.608 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.608 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmcap]="0"' 00:09:36.608 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nvmcap]=0 00:09:36.608 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.608 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.608 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.608 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwg]="0"' 00:09:36.608 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npwg]=0 00:09:36.608 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.608 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.609 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.609 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwa]="0"' 00:09:36.609 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npwa]=0 00:09:36.609 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.609 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.609 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.609 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npdg]="0"' 00:09:36.609 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npdg]=0 00:09:36.609 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.609 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.609 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.609 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npda]="0"' 00:09:36.609 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npda]=0 00:09:36.609 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.609 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.609 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.609 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nows]="0"' 00:09:36.609 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nows]=0 00:09:36.609 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.609 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.609 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:36.609 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mssrl]="128"' 00:09:36.609 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mssrl]=128 00:09:36.609 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.609 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.609 18:59:53 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:36.609 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mcl]="128"' 00:09:36.609 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mcl]=128 00:09:36.609 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.609 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.609 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:36.609 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[msrc]="127"' 00:09:36.609 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[msrc]=127 00:09:36.609 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.609 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.609 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.609 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nulbaf]="0"' 00:09:36.609 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nulbaf]=0 00:09:36.609 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.609 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.609 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.609 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[anagrpid]="0"' 00:09:36.609 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[anagrpid]=0 00:09:36.609 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.609 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.609 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.609 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsattr]="0"' 00:09:36.609 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsattr]=0 00:09:36.609 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.609 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.609 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.609 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmsetid]="0"' 00:09:36.609 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nvmsetid]=0 00:09:36.609 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.609 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.609 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.609 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[endgid]="0"' 00:09:36.609 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[endgid]=0 00:09:36.609 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.609 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.609 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:36.609 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nguid]="00000000000000000000000000000000"' 00:09:36.609 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nguid]=00000000000000000000000000000000 00:09:36.609 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.609 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.609 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:36.609 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[eui64]="0000000000000000"' 00:09:36.609 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[eui64]=0000000000000000 00:09:36.609 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.609 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.609 18:59:53 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:36.609 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:36.609 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:36.609 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.609 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.609 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:36.609 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:36.609 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:36.609 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.609 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.609 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:36.609 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:36.609 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:36.609 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.609 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.609 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:36.609 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:36.609 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:36.609 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.609 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.609 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:09:36.609 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:09:36.609 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:09:36.609 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.609 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.609 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:36.609 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:36.609 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:36.609 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.609 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.609 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:36.609 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:36.609 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:36.609 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.609 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.609 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:09:36.609 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:09:36.609 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:09:36.609 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.609 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.609 18:59:53 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1 00:09:36.609 18:59:53 
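The trace above is one three-step pattern repeated once per register: split each line of nvme id-ns output on the first colon (IFS=:, read -r reg val), skip it when the value side is empty ([[ -n ... ]]), and eval the pair into a global associative array named after the device node. A minimal sketch of that loop, reconstructed from the functions.sh@16-23 call sites visible in the trace, is below; the argument layout, the local -gA declaration, and the eval shape all appear verbatim above, while the process substitution and the exact whitespace normalization are assumptions.

    # Sketch of nvme/functions.sh:nvme_get as exercised in this trace.
    # Seen verbatim above: "nvme_get nvme1n1 id-ns /dev/nvme1n1",
    # "local ref=nvme1n1 reg val", "shift", "local -gA 'nvme1n1=()'",
    # the IFS=: / read -r loop, the [[ -n ]] guard, and the eval.
    # Assumed: the process substitution and the whitespace trimming.
    nvme_get() {
        local ref=$1 reg val
        shift
        local -gA "$ref=()"
        while IFS=: read -r reg val; do
            reg=${reg//[[:space:]]/}              # "lbaf  7 " -> "lbaf7" (assumed)
            val=${val#"${val%%[![:space:]]*}"}    # strip leading blanks (assumed)
            [[ -n $val ]] || continue             # skip header/blank lines
            eval "${ref}[$reg]=\"$val\""          # e.g. nvme1n1[nsze]="0x17a17a"
        done < <(/usr/local/src/nvme-cli/nvme "$@")   # e.g. id-ns /dev/nvme1n1
    }

After the call returns, every register is addressable as, e.g., ${nvme1n1[nsze]} or ${ng1n1[lbaf7]} — exactly the state the assignments in the trace are building up one key at a time.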
nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1 00:09:36.609 18:59:53 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns 00:09:36.609 18:59:53 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:10.0 00:09:36.609 18:59:53 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1 00:09:36.609 18:59:53 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:36.609 18:59:53 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]] 00:09:36.609 18:59:53 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:12.0 00:09:36.609 18:59:53 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:12.0 00:09:36.609 18:59:53 nvme_scc -- scripts/common.sh@18 -- # local i 00:09:36.609 18:59:53 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:09:36.609 18:59:53 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:36.609 18:59:53 nvme_scc -- scripts/common.sh@27 -- # return 0 00:09:36.609 18:59:53 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme2 00:09:36.609 18:59:53 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2 00:09:36.609 18:59:53 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2 reg val 00:09:36.609 18:59:53 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:36.609 18:59:53 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2=()' 00:09:36.609 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.609 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.609 18:59:53 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2 00:09:36.609 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:36.609 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.609 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.609 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:36.609 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vid]="0x1b36"' 00:09:36.609 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vid]=0x1b36 00:09:36.609 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.609 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.609 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:36.609 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ssvid]="0x1af4"' 00:09:36.609 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ssvid]=0x1af4 00:09:36.609 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.609 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.609 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12342 ]] 00:09:36.609 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sn]="12342 "' 00:09:36.609 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sn]='12342 ' 00:09:36.609 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.609 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.609 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:36.610 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mn]="QEMU NVMe Ctrl "' 00:09:36.610 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mn]='QEMU NVMe Ctrl ' 00:09:36.610 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.610 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.610 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:36.610 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme2[fr]="8.0.0 "' 00:09:36.610 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fr]='8.0.0 ' 00:09:36.610 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.610 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.610 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:36.610 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rab]="6"' 00:09:36.610 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rab]=6 00:09:36.610 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.610 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.610 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:36.610 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ieee]="525400"' 00:09:36.610 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ieee]=525400 00:09:36.610 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.610 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.610 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.610 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cmic]="0"' 00:09:36.610 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cmic]=0 00:09:36.610 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.610 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.610 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:36.610 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mdts]="7"' 00:09:36.610 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mdts]=7 00:09:36.610 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.610 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.610 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.610 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cntlid]="0"' 00:09:36.610 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cntlid]=0 00:09:36.610 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.610 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.610 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:36.610 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ver]="0x10400"' 00:09:36.610 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ver]=0x10400 00:09:36.610 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.610 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.610 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.610 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3r]="0"' 00:09:36.610 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rtd3r]=0 00:09:36.610 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.610 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.610 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.610 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3e]="0"' 00:09:36.610 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rtd3e]=0 00:09:36.610 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.610 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.610 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:36.610 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oaes]="0x100"' 00:09:36.610 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oaes]=0x100 00:09:36.610 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
00:09:36.610 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.610 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:09:36.610 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ctratt]="0x8000"' 00:09:36.610 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ctratt]=0x8000 00:09:36.610 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.610 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.610 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.610 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rrls]="0"' 00:09:36.610 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rrls]=0 00:09:36.610 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.610 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.610 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:36.610 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cntrltype]="1"' 00:09:36.610 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cntrltype]=1 00:09:36.610 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.610 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.610 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:36.610 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:36.610 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fguid]=00000000-0000-0000-0000-000000000000 00:09:36.610 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.610 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.610 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.610 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt1]="0"' 00:09:36.610 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt1]=0 00:09:36.610 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.610 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.610 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.610 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt2]="0"' 00:09:36.610 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt2]=0 00:09:36.610 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.610 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.610 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.610 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt3]="0"' 00:09:36.610 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt3]=0 00:09:36.610 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.610 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.610 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.610 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nvmsr]="0"' 00:09:36.610 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nvmsr]=0 00:09:36.610 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.610 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.610 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.610 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vwci]="0"' 00:09:36.610 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vwci]=0 00:09:36.610 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.610 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 
00:09:36.610 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.610 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mec]="0"' 00:09:36.610 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mec]=0 00:09:36.610 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.610 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.610 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:36.610 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oacs]="0x12a"' 00:09:36.610 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oacs]=0x12a 00:09:36.610 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.610 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.610 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:36.610 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[acl]="3"' 00:09:36.610 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2[acl]=3 00:09:36.610 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.610 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.610 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:36.610 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[aerl]="3"' 00:09:36.610 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2[aerl]=3 00:09:36.610 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.610 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.610 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:36.610 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[frmw]="0x3"' 00:09:36.610 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2[frmw]=0x3 00:09:36.610 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.610 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.610 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:36.610 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[lpa]="0x7"' 00:09:36.610 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2[lpa]=0x7 00:09:36.610 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.610 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.610 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.610 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[elpe]="0"' 00:09:36.610 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2[elpe]=0 00:09:36.610 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.610 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.610 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.610 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[npss]="0"' 00:09:36.610 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2[npss]=0 00:09:36.610 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.610 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.610 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.610 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[avscc]="0"' 00:09:36.610 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2[avscc]=0 00:09:36.610 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.610 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.610 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.610 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[apsta]="0"' 00:09:36.610 18:59:53 nvme_scc -- 
nvme/functions.sh@23 -- # nvme2[apsta]=0 00:09:36.610 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.610 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.610 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:36.610 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[wctemp]="343"' 00:09:36.610 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2[wctemp]=343 00:09:36.610 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.610 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.610 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:36.611 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cctemp]="373"' 00:09:36.611 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cctemp]=373 00:09:36.611 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.611 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.611 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.611 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mtfa]="0"' 00:09:36.611 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mtfa]=0 00:09:36.611 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.611 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.611 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.611 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmpre]="0"' 00:09:36.611 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmpre]=0 00:09:36.611 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.611 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.611 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.611 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmmin]="0"' 00:09:36.611 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmmin]=0 00:09:36.611 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.611 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.611 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.611 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[tnvmcap]="0"' 00:09:36.611 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2[tnvmcap]=0 00:09:36.611 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.611 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.611 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.611 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[unvmcap]="0"' 00:09:36.611 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2[unvmcap]=0 00:09:36.611 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.611 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.611 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.611 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rpmbs]="0"' 00:09:36.611 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rpmbs]=0 00:09:36.611 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.611 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.611 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.611 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[edstt]="0"' 00:09:36.611 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2[edstt]=0 00:09:36.611 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.611 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 
00:09:36.611 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.611 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[dsto]="0"' 00:09:36.611 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2[dsto]=0 00:09:36.611 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.611 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.611 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.611 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fwug]="0"' 00:09:36.611 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fwug]=0 00:09:36.611 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.611 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.611 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.611 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[kas]="0"' 00:09:36.611 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2[kas]=0 00:09:36.611 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.611 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.611 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.611 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hctma]="0"' 00:09:36.611 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hctma]=0 00:09:36.611 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.611 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.611 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.611 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mntmt]="0"' 00:09:36.611 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mntmt]=0 00:09:36.611 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.611 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.611 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.611 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mxtmt]="0"' 00:09:36.611 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mxtmt]=0 00:09:36.611 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.611 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.611 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.611 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sanicap]="0"' 00:09:36.611 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sanicap]=0 00:09:36.611 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.611 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.611 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.611 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmminds]="0"' 00:09:36.611 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmminds]=0 00:09:36.611 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.611 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.611 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.611 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmmaxd]="0"' 00:09:36.611 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmmaxd]=0 00:09:36.611 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.611 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.611 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.611 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nsetidmax]="0"' 00:09:36.611 18:59:53 nvme_scc -- 
nvme/functions.sh@23 -- # nvme2[nsetidmax]=0 00:09:36.611 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.611 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.611 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.611 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[endgidmax]="0"' 00:09:36.611 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2[endgidmax]=0 00:09:36.611 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.611 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.611 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.611 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anatt]="0"' 00:09:36.611 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anatt]=0 00:09:36.611 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.611 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.611 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.611 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anacap]="0"' 00:09:36.611 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anacap]=0 00:09:36.611 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.611 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.611 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.611 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anagrpmax]="0"' 00:09:36.611 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anagrpmax]=0 00:09:36.611 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.611 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.611 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.611 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nanagrpid]="0"' 00:09:36.611 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nanagrpid]=0 00:09:36.611 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.611 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.611 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.611 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[pels]="0"' 00:09:36.611 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2[pels]=0 00:09:36.611 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.611 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.611 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.611 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[domainid]="0"' 00:09:36.611 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2[domainid]=0 00:09:36.611 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.611 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.611 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.611 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[megcap]="0"' 00:09:36.611 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2[megcap]=0 00:09:36.611 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.611 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.611 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:36.611 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sqes]="0x66"' 00:09:36.611 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sqes]=0x66 00:09:36.611 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.611 18:59:53 nvme_scc -- nvme/functions.sh@21 
-- # read -r reg val 00:09:36.611 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:36.611 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cqes]="0x44"' 00:09:36.611 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cqes]=0x44 00:09:36.611 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.611 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.611 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.611 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxcmd]="0"' 00:09:36.611 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxcmd]=0 00:09:36.612 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.612 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.612 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:36.612 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nn]="256"' 00:09:36.612 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nn]=256 00:09:36.612 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.612 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.612 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:36.612 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oncs]="0x15d"' 00:09:36.612 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oncs]=0x15d 00:09:36.612 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.612 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.612 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.612 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fuses]="0"' 00:09:36.612 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fuses]=0 00:09:36.612 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.612 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.612 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.612 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fna]="0"' 00:09:36.612 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fna]=0 00:09:36.612 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.612 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.612 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:36.612 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vwc]="0x7"' 00:09:36.612 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vwc]=0x7 00:09:36.612 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.612 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.612 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.612 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[awun]="0"' 00:09:36.612 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2[awun]=0 00:09:36.612 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.612 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.612 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.612 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[awupf]="0"' 00:09:36.612 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2[awupf]=0 00:09:36.612 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.612 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.612 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.612 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[icsvscc]="0"' 00:09:36.612 
18:59:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2[icsvscc]=0 00:09:36.612 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.612 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.612 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.612 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nwpc]="0"' 00:09:36.612 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nwpc]=0 00:09:36.612 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.612 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.612 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.612 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[acwu]="0"' 00:09:36.612 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2[acwu]=0 00:09:36.612 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.612 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.612 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:36.612 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ocfs]="0x3"' 00:09:36.612 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ocfs]=0x3 00:09:36.612 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.612 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.612 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:36.612 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sgls]="0x1"' 00:09:36.612 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sgls]=0x1 00:09:36.612 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.612 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.612 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.612 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mnan]="0"' 00:09:36.612 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mnan]=0 00:09:36.612 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.612 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.612 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.612 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxdna]="0"' 00:09:36.612 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxdna]=0 00:09:36.612 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.612 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.612 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.612 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxcna]="0"' 00:09:36.612 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxcna]=0 00:09:36.612 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.612 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.612 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12342 ]] 00:09:36.612 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[subnqn]="nqn.2019-08.org.qemu:12342"' 00:09:36.612 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12342 00:09:36.612 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.612 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.612 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.612 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ioccsz]="0"' 00:09:36.612 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0 00:09:36.612 18:59:53 nvme_scc -- nvme/functions.sh@21 -- 
# IFS=: 00:09:36.612 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.612 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.612 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[iorcsz]="0"' 00:09:36.612 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2[iorcsz]=0 00:09:36.612 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.612 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.612 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.612 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[icdoff]="0"' 00:09:36.612 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2[icdoff]=0 00:09:36.612 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.612 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.612 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.612 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fcatt]="0"' 00:09:36.612 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fcatt]=0 00:09:36.612 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.612 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.612 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.612 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[msdbd]="0"' 00:09:36.612 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2[msdbd]=0 00:09:36.612 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.612 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.612 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.612 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ofcs]="0"' 00:09:36.612 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ofcs]=0 00:09:36.612 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.612 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.612 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:36.612 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:36.612 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:36.612 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.612 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.612 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:36.612 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:36.612 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:36.612 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.612 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.612 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:36.612 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[active_power_workload]="-"' 00:09:36.612 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=- 00:09:36.612 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.612 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.612 18:59:53 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns 00:09:36.612 18:59:53 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:36.612 
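At functions.sh@47-63 the trace steps into the next controller: test /sys/class/nvme/nvme2, gate it on pci_can_use, run id-ctrl against it, then walk both its character (ng2n1) and block (nvme2n1) namespace nodes with the extglob pattern shown at line 54. A sketch of that outer loop, reconstructed from those call sites, follows; the wrapper function name, the array declarations, and the way $pci is derived from sysfs are assumptions, and the trace only shows pci_can_use's allow/block checks passing trivially, not its body.

    # Sketch of the enumeration driving functions.sh@47-63 above.
    # The loop body, the @("ng..."|"...n")* namespace glob, and the
    # bookkeeping arrays (ctrls, nvmes, bdfs, ordered_ctrls) appear
    # verbatim in the trace; scan_nvme_ctrls, the declares, and the
    # derivation of $pci are assumptions.
    shopt -s extglob nullglob
    declare -A ctrls nvmes bdfs
    declare -a ordered_ctrls

    scan_nvme_ctrls() {
        local ctrl pci ctrl_dev ns ns_dev
        for ctrl in /sys/class/nvme/nvme*; do
            [[ -e $ctrl ]] || continue
            pci=$(readlink -f "$ctrl/device") && pci=${pci##*/}  # assumed: -> 0000:00:12.0
            pci_can_use "$pci" || continue     # skip blocked / not-allowed devices
            ctrl_dev=${ctrl##*/}               # e.g. nvme2
            nvme_get "$ctrl_dev" id-ctrl "/dev/$ctrl_dev"
            local -n _ctrl_ns=${ctrl_dev}_ns
            # Character (ng2n1) and block (nvme2n1) nodes map to the same
            # namespace id, so the block-device entry written last wins:
            for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do
                [[ -e $ns ]] || continue
                ns_dev=${ns##*/}
                nvme_get "$ns_dev" id-ns "/dev/$ns_dev"
                _ctrl_ns[${ns##*n}]=$ns_dev
            done
            ctrls["$ctrl_dev"]=$ctrl_dev
            nvmes["$ctrl_dev"]=${ctrl_dev}_ns
            bdfs["$ctrl_dev"]=$pci
            ordered_ctrls[${ctrl_dev/nvme/}]=$ctrl_dev
        done
    }

With the scan done, lookups such as ${bdfs[nvme1]} (0000:00:10.0 in the trace above) and the per-namespace arrays populated by nvme_get are presumably what the rest of the nvme_scc test consumes.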
18:59:53 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n1 ]] 00:09:36.612 18:59:53 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng2n1 00:09:36.612 18:59:53 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng2n1 id-ns /dev/ng2n1 00:09:36.612 18:59:53 nvme_scc -- nvme/functions.sh@17 -- # local ref=ng2n1 reg val 00:09:36.612 18:59:53 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:36.612 18:59:53 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng2n1=()' 00:09:36.612 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.612 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.612 18:59:53 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n1 00:09:36.612 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:36.612 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.612 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.612 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:36.612 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nsze]="0x100000"' 00:09:36.612 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nsze]=0x100000 00:09:36.612 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.612 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.612 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:36.612 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[ncap]="0x100000"' 00:09:36.612 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[ncap]=0x100000 00:09:36.612 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.612 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.612 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:36.612 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nuse]="0x100000"' 00:09:36.612 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nuse]=0x100000 00:09:36.612 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.613 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.613 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:36.613 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nsfeat]="0x14"' 00:09:36.613 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nsfeat]=0x14 00:09:36.613 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.613 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.613 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:36.613 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nlbaf]="7"' 00:09:36.613 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nlbaf]=7 00:09:36.613 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.613 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.613 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:36.613 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[flbas]="0x4"' 00:09:36.613 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[flbas]=0x4 00:09:36.613 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.613 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.613 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:36.613 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[mc]="0x3"' 00:09:36.613 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[mc]=0x3 00:09:36.613 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
00:09:36.613 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.613 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:36.613 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[dpc]="0x1f"' 00:09:36.613 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[dpc]=0x1f 00:09:36.613 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.613 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.613 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.613 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[dps]="0"' 00:09:36.613 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[dps]=0 00:09:36.613 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.613 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.613 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.613 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nmic]="0"' 00:09:36.613 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nmic]=0 00:09:36.613 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.613 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.613 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.613 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[rescap]="0"' 00:09:36.613 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[rescap]=0 00:09:36.613 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.613 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.613 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.613 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[fpi]="0"' 00:09:36.613 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[fpi]=0 00:09:36.613 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.613 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.613 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:36.613 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[dlfeat]="1"' 00:09:36.613 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[dlfeat]=1 00:09:36.613 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.613 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.613 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.613 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nawun]="0"' 00:09:36.613 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nawun]=0 00:09:36.613 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.613 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.613 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.613 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nawupf]="0"' 00:09:36.613 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nawupf]=0 00:09:36.613 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.613 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.613 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.613 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nacwu]="0"' 00:09:36.613 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nacwu]=0 00:09:36.613 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.613 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.613 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.613 18:59:53 nvme_scc -- nvme/functions.sh@23 -- 
# eval 'ng2n1[nabsn]="0"' 00:09:36.613 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nabsn]=0 00:09:36.613 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.613 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.613 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.613 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nabo]="0"' 00:09:36.613 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nabo]=0 00:09:36.613 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.613 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.613 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.613 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nabspf]="0"' 00:09:36.613 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nabspf]=0 00:09:36.613 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.613 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.613 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.613 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[noiob]="0"' 00:09:36.613 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[noiob]=0 00:09:36.613 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.613 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.613 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.613 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nvmcap]="0"' 00:09:36.613 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nvmcap]=0 00:09:36.613 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.613 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.613 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.613 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[npwg]="0"' 00:09:36.613 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[npwg]=0 00:09:36.613 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.613 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.613 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.613 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[npwa]="0"' 00:09:36.613 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[npwa]=0 00:09:36.613 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.613 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.613 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.613 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[npdg]="0"' 00:09:36.613 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[npdg]=0 00:09:36.613 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.613 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.613 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.613 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[npda]="0"' 00:09:36.613 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[npda]=0 00:09:36.613 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.613 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.613 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.613 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nows]="0"' 00:09:36.613 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nows]=0 00:09:36.613 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.613 18:59:53 nvme_scc -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:36.613 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:36.613 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[mssrl]="128"' 00:09:36.613 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[mssrl]=128 00:09:36.613 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.613 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.613 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:36.613 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[mcl]="128"' 00:09:36.613 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[mcl]=128 00:09:36.613 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.613 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.613 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:36.613 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[msrc]="127"' 00:09:36.613 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[msrc]=127 00:09:36.613 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.613 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.613 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.613 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nulbaf]="0"' 00:09:36.613 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nulbaf]=0 00:09:36.613 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.613 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.613 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.613 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[anagrpid]="0"' 00:09:36.613 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[anagrpid]=0 00:09:36.613 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.613 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.613 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.613 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nsattr]="0"' 00:09:36.613 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nsattr]=0 00:09:36.613 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.613 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.613 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.613 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nvmsetid]="0"' 00:09:36.613 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nvmsetid]=0 00:09:36.613 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.613 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.613 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.613 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[endgid]="0"' 00:09:36.613 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[endgid]=0 00:09:36.613 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.613 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.613 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:36.613 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nguid]="00000000000000000000000000000000"' 00:09:36.613 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nguid]=00000000000000000000000000000000 00:09:36.614 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.614 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.614 18:59:53 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:36.614 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[eui64]="0000000000000000"' 00:09:36.614 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[eui64]=0000000000000000 00:09:36.614 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.614 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.614 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:36.614 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:36.614 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:36.614 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.614 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.614 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:36.614 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:36.614 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:36.614 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.614 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.614 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:36.614 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:36.614 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:36.614 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.614 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.614 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:36.614 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:36.614 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:36.614 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.614 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.614 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:36.614 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:36.614 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:36.614 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.614 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.614 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:36.614 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:36.614 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:36.614 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.614 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.614 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:36.614 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:36.614 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:36.614 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.614 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.614 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:36.614 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 
'ng2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:36.614 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:36.614 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.614 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.614 18:59:53 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n1 00:09:36.614 18:59:53 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:36.614 18:59:53 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n2 ]] 00:09:36.614 18:59:53 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng2n2 00:09:36.614 18:59:53 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng2n2 id-ns /dev/ng2n2 00:09:36.614 18:59:53 nvme_scc -- nvme/functions.sh@17 -- # local ref=ng2n2 reg val 00:09:36.614 18:59:53 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:36.614 18:59:53 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng2n2=()' 00:09:36.614 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.614 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.614 18:59:53 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n2 00:09:36.614 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:36.614 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.614 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.614 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:36.614 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nsze]="0x100000"' 00:09:36.614 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nsze]=0x100000 00:09:36.614 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.614 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.614 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:36.614 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[ncap]="0x100000"' 00:09:36.614 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[ncap]=0x100000 00:09:36.614 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.614 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.614 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:36.614 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nuse]="0x100000"' 00:09:36.614 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nuse]=0x100000 00:09:36.614 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.614 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.614 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:36.614 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nsfeat]="0x14"' 00:09:36.614 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nsfeat]=0x14 00:09:36.614 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.614 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.614 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:36.614 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nlbaf]="7"' 00:09:36.614 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nlbaf]=7 00:09:36.614 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.614 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.614 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:36.614 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[flbas]="0x4"' 00:09:36.614 18:59:53 nvme_scc -- 
nvme/functions.sh@23 -- # ng2n2[flbas]=0x4 00:09:36.614 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.614 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.614 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:36.614 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[mc]="0x3"' 00:09:36.614 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[mc]=0x3 00:09:36.614 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.614 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.614 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:36.614 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[dpc]="0x1f"' 00:09:36.614 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[dpc]=0x1f 00:09:36.614 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.614 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.614 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.614 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[dps]="0"' 00:09:36.614 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[dps]=0 00:09:36.614 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.614 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.614 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.614 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nmic]="0"' 00:09:36.614 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nmic]=0 00:09:36.614 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.614 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.614 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.614 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[rescap]="0"' 00:09:36.614 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[rescap]=0 00:09:36.614 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.614 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.614 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.614 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[fpi]="0"' 00:09:36.614 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[fpi]=0 00:09:36.614 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.614 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.614 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:36.614 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[dlfeat]="1"' 00:09:36.614 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[dlfeat]=1 00:09:36.614 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.614 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.614 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.614 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nawun]="0"' 00:09:36.614 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nawun]=0 00:09:36.614 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.614 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.614 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.614 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nawupf]="0"' 00:09:36.614 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nawupf]=0 00:09:36.614 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.614 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.614 
18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.614 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nacwu]="0"' 00:09:36.614 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nacwu]=0 00:09:36.614 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.614 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.614 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.614 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nabsn]="0"' 00:09:36.614 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nabsn]=0 00:09:36.614 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.614 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.614 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.614 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nabo]="0"' 00:09:36.614 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nabo]=0 00:09:36.614 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.614 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.615 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.615 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nabspf]="0"' 00:09:36.615 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nabspf]=0 00:09:36.615 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.615 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.615 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.615 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[noiob]="0"' 00:09:36.615 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[noiob]=0 00:09:36.615 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.615 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.615 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.615 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nvmcap]="0"' 00:09:36.615 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nvmcap]=0 00:09:36.615 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.615 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.615 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.615 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[npwg]="0"' 00:09:36.615 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[npwg]=0 00:09:36.615 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.615 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.615 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.615 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[npwa]="0"' 00:09:36.615 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[npwa]=0 00:09:36.615 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.615 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.615 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.615 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[npdg]="0"' 00:09:36.615 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[npdg]=0 00:09:36.615 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.615 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.615 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.615 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[npda]="0"' 00:09:36.615 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # 
ng2n2[npda]=0 00:09:36.615 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.615 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.615 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.615 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nows]="0"' 00:09:36.615 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nows]=0 00:09:36.615 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.615 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.615 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:36.615 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[mssrl]="128"' 00:09:36.615 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[mssrl]=128 00:09:36.615 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.615 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.615 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:36.615 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[mcl]="128"' 00:09:36.615 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[mcl]=128 00:09:36.615 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.615 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.615 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:36.615 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[msrc]="127"' 00:09:36.615 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[msrc]=127 00:09:36.615 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.615 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.615 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.615 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nulbaf]="0"' 00:09:36.615 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nulbaf]=0 00:09:36.615 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.615 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.615 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.615 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[anagrpid]="0"' 00:09:36.615 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[anagrpid]=0 00:09:36.615 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.615 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.615 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.615 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nsattr]="0"' 00:09:36.615 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nsattr]=0 00:09:36.615 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.615 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.615 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.615 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nvmsetid]="0"' 00:09:36.615 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nvmsetid]=0 00:09:36.615 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.615 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.615 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.615 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[endgid]="0"' 00:09:36.615 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[endgid]=0 00:09:36.615 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.615 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.615 18:59:53 
nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:36.615 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nguid]="00000000000000000000000000000000"' 00:09:36.615 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nguid]=00000000000000000000000000000000 00:09:36.615 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.615 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.615 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:36.615 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[eui64]="0000000000000000"' 00:09:36.615 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[eui64]=0000000000000000 00:09:36.615 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.615 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.615 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:36.615 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:36.615 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:36.615 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.615 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.615 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:36.615 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:36.615 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:36.615 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.615 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.615 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:36.615 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:36.615 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:36.615 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.615 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.615 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:36.615 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:36.615 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:36.615 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.615 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.615 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:36.615 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:36.615 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:36.615 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.615 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.615 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:36.615 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:36.615 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:36.615 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.615 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.615 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:36.615 18:59:53 
nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:36.615 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:36.615 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.615 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.615 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:36.615 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:36.615 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:36.615 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.615 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.615 18:59:53 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n2 00:09:36.615 18:59:53 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:36.615 18:59:53 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n3 ]] 00:09:36.615 18:59:53 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng2n3 00:09:36.615 18:59:53 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng2n3 id-ns /dev/ng2n3 00:09:36.615 18:59:53 nvme_scc -- nvme/functions.sh@17 -- # local ref=ng2n3 reg val 00:09:36.615 18:59:53 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:36.615 18:59:53 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng2n3=()' 00:09:36.615 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.615 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.615 18:59:53 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n3 00:09:36.615 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:36.615 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.615 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.615 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:36.615 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nsze]="0x100000"' 00:09:36.615 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nsze]=0x100000 00:09:36.615 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.615 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.615 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:36.615 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[ncap]="0x100000"' 00:09:36.615 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[ncap]=0x100000 00:09:36.616 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.616 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.616 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:36.616 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nuse]="0x100000"' 00:09:36.616 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nuse]=0x100000 00:09:36.616 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.616 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.616 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:36.616 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nsfeat]="0x14"' 00:09:36.616 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nsfeat]=0x14 00:09:36.616 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.616 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.616 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:36.616 18:59:53 
nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nlbaf]="7"' 00:09:36.616 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nlbaf]=7 00:09:36.616 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.616 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.616 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:36.616 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[flbas]="0x4"' 00:09:36.616 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[flbas]=0x4 00:09:36.616 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.616 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.616 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:36.616 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[mc]="0x3"' 00:09:36.616 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[mc]=0x3 00:09:36.616 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.616 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.616 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:36.616 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[dpc]="0x1f"' 00:09:36.616 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[dpc]=0x1f 00:09:36.616 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.616 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.616 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.616 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[dps]="0"' 00:09:36.616 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[dps]=0 00:09:36.616 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.616 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.616 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.616 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nmic]="0"' 00:09:36.616 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nmic]=0 00:09:36.616 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.616 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.616 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.616 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[rescap]="0"' 00:09:36.616 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[rescap]=0 00:09:36.616 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.616 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.616 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.616 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[fpi]="0"' 00:09:36.616 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[fpi]=0 00:09:36.616 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.616 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.616 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:36.616 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[dlfeat]="1"' 00:09:36.616 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[dlfeat]=1 00:09:36.616 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.616 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.616 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.616 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nawun]="0"' 00:09:36.616 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nawun]=0 00:09:36.616 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # 
IFS=: 00:09:36.616 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.616 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.616 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nawupf]="0"' 00:09:36.616 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nawupf]=0 00:09:36.616 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.616 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.616 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.616 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nacwu]="0"' 00:09:36.616 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nacwu]=0 00:09:36.616 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.616 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.616 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.616 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nabsn]="0"' 00:09:36.616 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nabsn]=0 00:09:36.616 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.616 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.616 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.616 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nabo]="0"' 00:09:36.616 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nabo]=0 00:09:36.616 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.616 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.616 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.616 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nabspf]="0"' 00:09:36.616 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nabspf]=0 00:09:36.616 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.616 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.616 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.616 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[noiob]="0"' 00:09:36.616 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[noiob]=0 00:09:36.616 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.616 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.616 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.616 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nvmcap]="0"' 00:09:36.616 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nvmcap]=0 00:09:36.616 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.616 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.616 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.616 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[npwg]="0"' 00:09:36.616 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[npwg]=0 00:09:36.616 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.616 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.616 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.616 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[npwa]="0"' 00:09:36.616 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[npwa]=0 00:09:36.616 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.616 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.616 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.616 18:59:53 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'ng2n3[npdg]="0"' 00:09:36.616 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[npdg]=0 00:09:36.616 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.616 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.616 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.616 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[npda]="0"' 00:09:36.616 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[npda]=0 00:09:36.616 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.616 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.616 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.616 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nows]="0"' 00:09:36.616 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nows]=0 00:09:36.616 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.616 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.616 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:36.616 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[mssrl]="128"' 00:09:36.616 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[mssrl]=128 00:09:36.616 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.616 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.616 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:36.616 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[mcl]="128"' 00:09:36.616 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[mcl]=128 00:09:36.616 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.616 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.616 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:36.616 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[msrc]="127"' 00:09:36.616 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[msrc]=127 00:09:36.616 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.616 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.616 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.616 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nulbaf]="0"' 00:09:36.616 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nulbaf]=0 00:09:36.617 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.617 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.617 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.617 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[anagrpid]="0"' 00:09:36.617 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[anagrpid]=0 00:09:36.617 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.617 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.617 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.617 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nsattr]="0"' 00:09:36.617 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nsattr]=0 00:09:36.617 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.617 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.617 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.617 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nvmsetid]="0"' 00:09:36.617 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nvmsetid]=0 00:09:36.617 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # 
IFS=: 00:09:36.617 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.617 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.617 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[endgid]="0"' 00:09:36.617 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[endgid]=0 00:09:36.617 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.617 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.617 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:36.617 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nguid]="00000000000000000000000000000000"' 00:09:36.617 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nguid]=00000000000000000000000000000000 00:09:36.617 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.617 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.617 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:36.617 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[eui64]="0000000000000000"' 00:09:36.617 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[eui64]=0000000000000000 00:09:36.617 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.617 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.617 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:36.617 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:36.617 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:36.617 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.617 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.617 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:36.617 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:36.617 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:36.617 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.617 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.617 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:36.617 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:36.617 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:36.617 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.617 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.617 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:36.617 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:36.617 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:36.617 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.617 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.617 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:36.617 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:36.617 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:36.617 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.617 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.617 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ 
-n ms:8 lbads:12 rp:0 ]] 00:09:36.617 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:36.617 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:36.617 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.617 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.617 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:36.617 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:36.617 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:36.617 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.617 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.617 18:59:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:36.617 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:36.617 18:59:53 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:36.617 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.617 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.617 18:59:53 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n3 00:09:36.617 18:59:53 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:36.617 18:59:53 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]] 00:09:36.617 18:59:53 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n1 00:09:36.617 18:59:53 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1 00:09:36.617 18:59:53 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n1 reg val 00:09:36.617 18:59:53 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:36.617 18:59:53 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n1=()' 00:09:36.617 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.617 18:59:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.617 18:59:53 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1 00:09:36.617 18:59:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:36.617 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.617 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.617 18:59:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:36.617 18:59:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsze]="0x100000"' 00:09:36.617 18:59:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsze]=0x100000 00:09:36.617 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.617 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.617 18:59:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:36.617 18:59:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[ncap]="0x100000"' 00:09:36.617 18:59:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[ncap]=0x100000 00:09:36.617 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.617 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.617 18:59:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:36.617 18:59:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nuse]="0x100000"' 00:09:36.617 18:59:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nuse]=0x100000 00:09:36.617 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.617 18:59:54 nvme_scc -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:36.617 18:59:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:36.617 18:59:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsfeat]="0x14"' 00:09:36.617 18:59:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsfeat]=0x14 00:09:36.617 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.617 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.617 18:59:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:36.617 18:59:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nlbaf]="7"' 00:09:36.617 18:59:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nlbaf]=7 00:09:36.617 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.617 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.617 18:59:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:36.617 18:59:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[flbas]="0x4"' 00:09:36.617 18:59:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[flbas]=0x4 00:09:36.617 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.617 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.617 18:59:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:36.617 18:59:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mc]="0x3"' 00:09:36.617 18:59:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mc]=0x3 00:09:36.617 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.617 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.617 18:59:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:36.617 18:59:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[dpc]="0x1f"' 00:09:36.617 18:59:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dpc]=0x1f 00:09:36.617 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.617 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.617 18:59:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.617 18:59:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[dps]="0"' 00:09:36.617 18:59:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dps]=0 00:09:36.617 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.617 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.617 18:59:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.617 18:59:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nmic]="0"' 00:09:36.617 18:59:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nmic]=0 00:09:36.617 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.617 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.617 18:59:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.617 18:59:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[rescap]="0"' 00:09:36.617 18:59:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[rescap]=0 00:09:36.617 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.617 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.617 18:59:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.617 18:59:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[fpi]="0"' 00:09:36.617 18:59:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[fpi]=0 00:09:36.617 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.617 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.617 18:59:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:36.617 18:59:54 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme2n1[dlfeat]="1"' 00:09:36.617 18:59:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dlfeat]=1 00:09:36.617 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.617 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.618 18:59:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.618 18:59:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawun]="0"' 00:09:36.618 18:59:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nawun]=0 00:09:36.618 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.618 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.618 18:59:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.618 18:59:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawupf]="0"' 00:09:36.618 18:59:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nawupf]=0 00:09:36.618 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.618 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.618 18:59:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.618 18:59:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nacwu]="0"' 00:09:36.618 18:59:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nacwu]=0 00:09:36.618 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.618 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.618 18:59:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.618 18:59:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabsn]="0"' 00:09:36.618 18:59:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabsn]=0 00:09:36.618 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.618 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.618 18:59:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.618 18:59:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabo]="0"' 00:09:36.618 18:59:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabo]=0 00:09:36.618 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.618 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.618 18:59:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.618 18:59:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabspf]="0"' 00:09:36.618 18:59:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabspf]=0 00:09:36.618 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.618 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.618 18:59:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.618 18:59:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[noiob]="0"' 00:09:36.618 18:59:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[noiob]=0 00:09:36.618 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.618 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.618 18:59:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.618 18:59:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmcap]="0"' 00:09:36.618 18:59:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nvmcap]=0 00:09:36.618 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.618 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.618 18:59:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.618 18:59:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwg]="0"' 00:09:36.618 18:59:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npwg]=0 00:09:36.618 18:59:54 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:09:36.618 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.618 18:59:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.618 18:59:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwa]="0"' 00:09:36.618 18:59:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npwa]=0 00:09:36.618 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.618 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.618 18:59:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.618 18:59:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npdg]="0"' 00:09:36.618 18:59:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npdg]=0 00:09:36.618 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.618 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.618 18:59:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.618 18:59:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npda]="0"' 00:09:36.618 18:59:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npda]=0 00:09:36.618 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.618 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.618 18:59:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.618 18:59:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nows]="0"' 00:09:36.618 18:59:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nows]=0 00:09:36.618 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.618 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.618 18:59:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:36.618 18:59:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mssrl]="128"' 00:09:36.618 18:59:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mssrl]=128 00:09:36.618 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.618 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.618 18:59:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:36.618 18:59:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mcl]="128"' 00:09:36.618 18:59:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mcl]=128 00:09:36.618 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.618 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.618 18:59:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:36.618 18:59:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[msrc]="127"' 00:09:36.618 18:59:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[msrc]=127 00:09:36.618 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.618 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.618 18:59:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.618 18:59:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nulbaf]="0"' 00:09:36.618 18:59:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nulbaf]=0 00:09:36.618 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.618 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.618 18:59:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.618 18:59:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[anagrpid]="0"' 00:09:36.618 18:59:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[anagrpid]=0 00:09:36.618 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.618 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.618 18:59:54 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.618 18:59:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsattr]="0"' 00:09:36.618 18:59:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsattr]=0 00:09:36.618 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.618 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.618 18:59:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.618 18:59:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmsetid]="0"' 00:09:36.618 18:59:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nvmsetid]=0 00:09:36.618 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.618 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.618 18:59:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.618 18:59:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[endgid]="0"' 00:09:36.618 18:59:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[endgid]=0 00:09:36.618 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.618 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.618 18:59:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:36.618 18:59:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nguid]="00000000000000000000000000000000"' 00:09:36.618 18:59:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nguid]=00000000000000000000000000000000 00:09:36.618 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.618 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.618 18:59:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:36.618 18:59:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[eui64]="0000000000000000"' 00:09:36.618 18:59:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[eui64]=0000000000000000 00:09:36.618 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.618 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.618 18:59:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:36.618 18:59:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:36.618 18:59:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:36.618 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.618 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.618 18:59:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:36.618 18:59:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:36.618 18:59:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:36.618 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.618 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.618 18:59:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:36.618 18:59:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:36.618 18:59:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:36.618 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.618 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.618 18:59:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:36.618 18:59:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:36.618 18:59:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf3]='ms:64 
lbads:9 rp:0 ' 00:09:36.618 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.618 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.618 18:59:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:36.618 18:59:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:36.618 18:59:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:36.618 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.618 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.618 18:59:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:36.618 18:59:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:36.618 18:59:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:36.618 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.618 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.618 18:59:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:36.618 18:59:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:36.618 18:59:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:36.618 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.618 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.618 18:59:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:36.619 18:59:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:36.619 18:59:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:36.619 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.619 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.619 18:59:54 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1 00:09:36.619 18:59:54 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:36.619 18:59:54 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n2 ]] 00:09:36.619 18:59:54 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n2 00:09:36.619 18:59:54 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n2 id-ns /dev/nvme2n2 00:09:36.619 18:59:54 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n2 reg val 00:09:36.619 18:59:54 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:36.619 18:59:54 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n2=()' 00:09:36.619 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.619 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.619 18:59:54 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2 00:09:36.619 18:59:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:36.619 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.619 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.619 18:59:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:36.619 18:59:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsze]="0x100000"' 00:09:36.619 18:59:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsze]=0x100000 00:09:36.619 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.619 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.619 18:59:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 
]] 00:09:36.619 18:59:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[ncap]="0x100000"' 00:09:36.619 18:59:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[ncap]=0x100000 00:09:36.619 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.619 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.619 18:59:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:36.619 18:59:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nuse]="0x100000"' 00:09:36.619 18:59:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nuse]=0x100000 00:09:36.619 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.619 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.619 18:59:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:36.619 18:59:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsfeat]="0x14"' 00:09:36.619 18:59:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsfeat]=0x14 00:09:36.619 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.619 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.619 18:59:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:36.619 18:59:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nlbaf]="7"' 00:09:36.619 18:59:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nlbaf]=7 00:09:36.619 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.619 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.619 18:59:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:36.619 18:59:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[flbas]="0x4"' 00:09:36.619 18:59:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[flbas]=0x4 00:09:36.619 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.619 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.619 18:59:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:36.619 18:59:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mc]="0x3"' 00:09:36.619 18:59:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mc]=0x3 00:09:36.619 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.619 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.619 18:59:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:36.619 18:59:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dpc]="0x1f"' 00:09:36.619 18:59:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dpc]=0x1f 00:09:36.619 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.619 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.619 18:59:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.619 18:59:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dps]="0"' 00:09:36.619 18:59:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dps]=0 00:09:36.619 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.619 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.619 18:59:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.619 18:59:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nmic]="0"' 00:09:36.619 18:59:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nmic]=0 00:09:36.619 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.619 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.619 18:59:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.619 18:59:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[rescap]="0"' 00:09:36.619 18:59:54 nvme_scc -- 
nvme/functions.sh@23 -- # nvme2n2[rescap]=0 00:09:36.619 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.619 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.619 18:59:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.619 18:59:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[fpi]="0"' 00:09:36.619 18:59:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[fpi]=0 00:09:36.619 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.619 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.619 18:59:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:36.619 18:59:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dlfeat]="1"' 00:09:36.619 18:59:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dlfeat]=1 00:09:36.619 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.619 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.619 18:59:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.619 18:59:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawun]="0"' 00:09:36.619 18:59:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nawun]=0 00:09:36.619 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.619 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.619 18:59:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.619 18:59:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawupf]="0"' 00:09:36.619 18:59:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nawupf]=0 00:09:36.619 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.619 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.619 18:59:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.619 18:59:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nacwu]="0"' 00:09:36.619 18:59:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nacwu]=0 00:09:36.619 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.619 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.619 18:59:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.619 18:59:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabsn]="0"' 00:09:36.619 18:59:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabsn]=0 00:09:36.619 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.619 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.619 18:59:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.619 18:59:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabo]="0"' 00:09:36.619 18:59:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabo]=0 00:09:36.619 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.619 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.619 18:59:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.619 18:59:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabspf]="0"' 00:09:36.619 18:59:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabspf]=0 00:09:36.619 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.619 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.619 18:59:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.619 18:59:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[noiob]="0"' 00:09:36.619 18:59:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[noiob]=0 00:09:36.619 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.619 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # 
read -r reg val 00:09:36.619 18:59:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.619 18:59:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmcap]="0"' 00:09:36.619 18:59:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nvmcap]=0 00:09:36.619 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.619 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.619 18:59:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.619 18:59:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwg]="0"' 00:09:36.619 18:59:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npwg]=0 00:09:36.619 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.619 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.619 18:59:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.619 18:59:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwa]="0"' 00:09:36.619 18:59:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npwa]=0 00:09:36.619 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.619 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.619 18:59:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.619 18:59:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npdg]="0"' 00:09:36.619 18:59:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npdg]=0 00:09:36.619 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.619 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.619 18:59:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.619 18:59:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npda]="0"' 00:09:36.619 18:59:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npda]=0 00:09:36.619 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.619 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.619 18:59:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.619 18:59:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nows]="0"' 00:09:36.619 18:59:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nows]=0 00:09:36.619 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.619 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.619 18:59:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:36.619 18:59:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mssrl]="128"' 00:09:36.619 18:59:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mssrl]=128 00:09:36.619 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.619 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.619 18:59:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:36.619 18:59:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mcl]="128"' 00:09:36.619 18:59:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mcl]=128 00:09:36.620 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.620 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.620 18:59:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:36.620 18:59:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[msrc]="127"' 00:09:36.620 18:59:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[msrc]=127 00:09:36.620 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.620 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.620 18:59:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.620 18:59:54 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme2n2[nulbaf]="0"' 00:09:36.620 18:59:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nulbaf]=0 00:09:36.620 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.620 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.620 18:59:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.620 18:59:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[anagrpid]="0"' 00:09:36.620 18:59:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[anagrpid]=0 00:09:36.620 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.620 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.620 18:59:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.620 18:59:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsattr]="0"' 00:09:36.620 18:59:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsattr]=0 00:09:36.620 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.620 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.620 18:59:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.620 18:59:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmsetid]="0"' 00:09:36.620 18:59:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nvmsetid]=0 00:09:36.620 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.620 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.620 18:59:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.620 18:59:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[endgid]="0"' 00:09:36.620 18:59:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[endgid]=0 00:09:36.620 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.620 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.620 18:59:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:36.620 18:59:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nguid]="00000000000000000000000000000000"' 00:09:36.620 18:59:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nguid]=00000000000000000000000000000000 00:09:36.620 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.620 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.620 18:59:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:36.620 18:59:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[eui64]="0000000000000000"' 00:09:36.620 18:59:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[eui64]=0000000000000000 00:09:36.620 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.620 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.620 18:59:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:36.620 18:59:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:36.620 18:59:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:36.620 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.620 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.620 18:59:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:36.620 18:59:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:36.620 18:59:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:36.620 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.620 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.620 18:59:54 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:36.620 18:59:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:36.620 18:59:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:36.620 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.620 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.620 18:59:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:36.620 18:59:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:36.620 18:59:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:36.620 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.620 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.620 18:59:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:36.620 18:59:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:36.620 18:59:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:36.620 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.620 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.620 18:59:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:36.620 18:59:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:36.620 18:59:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:36.620 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.620 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.620 18:59:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:36.620 18:59:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:36.620 18:59:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:36.620 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.620 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.620 18:59:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:36.620 18:59:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:36.620 18:59:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:36.620 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.620 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.620 18:59:54 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n2 00:09:36.620 18:59:54 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:36.620 18:59:54 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n3 ]] 00:09:36.620 18:59:54 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n3 00:09:36.620 18:59:54 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n3 id-ns /dev/nvme2n3 00:09:36.620 18:59:54 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n3 reg val 00:09:36.620 18:59:54 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:36.620 18:59:54 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n3=()' 00:09:36.620 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.620 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.620 18:59:54 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3 00:09:36.620 
18:59:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:36.620 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.620 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.620 18:59:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:36.620 18:59:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsze]="0x100000"' 00:09:36.620 18:59:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsze]=0x100000 00:09:36.620 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.620 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.620 18:59:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:36.620 18:59:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[ncap]="0x100000"' 00:09:36.620 18:59:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[ncap]=0x100000 00:09:36.620 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.620 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.620 18:59:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:36.620 18:59:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nuse]="0x100000"' 00:09:36.620 18:59:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nuse]=0x100000 00:09:36.620 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.620 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.620 18:59:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:36.620 18:59:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsfeat]="0x14"' 00:09:36.620 18:59:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsfeat]=0x14 00:09:36.620 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.620 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.620 18:59:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:36.620 18:59:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nlbaf]="7"' 00:09:36.620 18:59:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nlbaf]=7 00:09:36.620 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.620 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.620 18:59:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:36.620 18:59:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[flbas]="0x4"' 00:09:36.620 18:59:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[flbas]=0x4 00:09:36.620 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.620 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.620 18:59:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:36.620 18:59:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mc]="0x3"' 00:09:36.620 18:59:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mc]=0x3 00:09:36.621 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.621 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.621 18:59:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:36.621 18:59:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dpc]="0x1f"' 00:09:36.621 18:59:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dpc]=0x1f 00:09:36.621 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.621 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.621 18:59:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.621 18:59:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dps]="0"' 00:09:36.621 18:59:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dps]=0 00:09:36.621 18:59:54 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:09:36.621 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.621 18:59:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.621 18:59:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nmic]="0"' 00:09:36.621 18:59:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nmic]=0 00:09:36.621 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.621 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.621 18:59:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.621 18:59:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[rescap]="0"' 00:09:36.621 18:59:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[rescap]=0 00:09:36.621 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.621 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.621 18:59:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.621 18:59:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[fpi]="0"' 00:09:36.621 18:59:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[fpi]=0 00:09:36.621 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.621 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.621 18:59:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:36.621 18:59:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dlfeat]="1"' 00:09:36.621 18:59:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dlfeat]=1 00:09:36.621 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.621 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.621 18:59:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.621 18:59:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawun]="0"' 00:09:36.621 18:59:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nawun]=0 00:09:36.621 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.621 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.621 18:59:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.621 18:59:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawupf]="0"' 00:09:36.621 18:59:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nawupf]=0 00:09:36.621 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.621 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.621 18:59:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.621 18:59:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nacwu]="0"' 00:09:36.621 18:59:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nacwu]=0 00:09:36.621 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.621 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.621 18:59:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.621 18:59:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabsn]="0"' 00:09:36.621 18:59:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabsn]=0 00:09:36.621 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.621 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.621 18:59:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.621 18:59:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabo]="0"' 00:09:36.621 18:59:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabo]=0 00:09:36.621 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.621 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.621 18:59:54 nvme_scc -- nvme/functions.sh@22 -- # [[ 
-n 0 ]] 00:09:36.621 18:59:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabspf]="0"' 00:09:36.621 18:59:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabspf]=0 00:09:36.621 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.621 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.621 18:59:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.621 18:59:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[noiob]="0"' 00:09:36.621 18:59:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[noiob]=0 00:09:36.621 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.621 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.621 18:59:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.621 18:59:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmcap]="0"' 00:09:36.621 18:59:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nvmcap]=0 00:09:36.621 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.621 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.621 18:59:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.621 18:59:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwg]="0"' 00:09:36.621 18:59:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npwg]=0 00:09:36.621 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.621 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.621 18:59:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.621 18:59:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwa]="0"' 00:09:36.621 18:59:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npwa]=0 00:09:36.621 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.621 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.621 18:59:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.621 18:59:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npdg]="0"' 00:09:36.621 18:59:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npdg]=0 00:09:36.621 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.621 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.621 18:59:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.621 18:59:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npda]="0"' 00:09:36.621 18:59:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npda]=0 00:09:36.621 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.621 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.621 18:59:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.621 18:59:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nows]="0"' 00:09:36.621 18:59:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nows]=0 00:09:36.621 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.621 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.621 18:59:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:36.621 18:59:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mssrl]="128"' 00:09:36.621 18:59:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mssrl]=128 00:09:36.621 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.621 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.621 18:59:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:36.621 18:59:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mcl]="128"' 00:09:36.621 18:59:54 nvme_scc -- nvme/functions.sh@23 -- # 
nvme2n3[mcl]=128 00:09:36.621 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.621 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.621 18:59:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:36.621 18:59:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[msrc]="127"' 00:09:36.621 18:59:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[msrc]=127 00:09:36.621 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.621 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.621 18:59:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.621 18:59:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nulbaf]="0"' 00:09:36.621 18:59:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nulbaf]=0 00:09:36.621 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.621 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.621 18:59:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.621 18:59:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[anagrpid]="0"' 00:09:36.621 18:59:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[anagrpid]=0 00:09:36.621 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.621 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.621 18:59:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.621 18:59:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsattr]="0"' 00:09:36.621 18:59:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsattr]=0 00:09:36.621 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.621 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.621 18:59:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.621 18:59:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmsetid]="0"' 00:09:36.621 18:59:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nvmsetid]=0 00:09:36.621 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.621 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.621 18:59:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.621 18:59:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[endgid]="0"' 00:09:36.621 18:59:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[endgid]=0 00:09:36.621 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.621 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.621 18:59:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:36.621 18:59:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nguid]="00000000000000000000000000000000"' 00:09:36.621 18:59:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nguid]=00000000000000000000000000000000 00:09:36.621 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.621 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.621 18:59:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:36.621 18:59:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[eui64]="0000000000000000"' 00:09:36.621 18:59:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[eui64]=0000000000000000 00:09:36.621 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.621 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.621 18:59:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:36.621 18:59:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:36.621 18:59:54 nvme_scc -- 
nvme/functions.sh@23 -- # nvme2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:36.621 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.621 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.621 18:59:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:36.621 18:59:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:36.621 18:59:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:36.621 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.622 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.622 18:59:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:36.622 18:59:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:36.622 18:59:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:36.622 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.622 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.622 18:59:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:36.622 18:59:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:36.622 18:59:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:36.622 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.622 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.622 18:59:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:36.622 18:59:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:36.622 18:59:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:36.622 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.622 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.622 18:59:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:36.622 18:59:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:36.622 18:59:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:36.622 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.622 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.622 18:59:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:36.622 18:59:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:36.622 18:59:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:36.622 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.622 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.622 18:59:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:36.622 18:59:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:36.622 18:59:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:36.622 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.622 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.622 18:59:54 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n3 00:09:36.622 18:59:54 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2 00:09:36.622 18:59:54 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns 00:09:36.622 18:59:54 nvme_scc -- 
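(Editor's note on the trace above: it shows nvme/functions.sh walking each namespace of controller nvme2 in turn - nvme2n1, nvme2n2, nvme2n3 - running the nvme-cli "id-ns" command for the device, reading the output line by line with IFS=: and read -r, and eval'ing every "field : value" pair into a global associative array named after the namespace before registering the controller in ctrls/nvmes. The stand-alone sketch below reproduces that parsing pattern only; the names sketch_nvme_get and my_ns are illustrative, it is not the SPDK helper itself, and it assumes the nvme-cli binary and the device node are available.)

#!/usr/bin/env bash
# Sketch only: mirrors the loop visible in the trace (local -gA, IFS=:, read -r reg val,
# eval 'array[field]=value'); not the actual nvme/functions.sh implementation.
sketch_nvme_get() {
    local ref=$1 cmd=$2 dev=$3 reg val
    declare -gA "$ref"        # global associative array, e.g. nvme2n1
    eval "$ref=()"            # start empty
    while IFS=: read -r reg val; do
        reg=${reg//[[:space:]]/}                # "lbaf  4 " -> "lbaf4"
        val=${val#"${val%%[![:space:]]*}"}      # drop the padding after the colon
        [[ -n $reg && -n $val ]] || continue    # skip blank or unparsable lines
        eval "${ref}[\$reg]=\$val"              # e.g. nvme2n1[nsze]=0x100000
    done < <(nvme "$cmd" "$dev")                # assumes nvme-cli is installed
}
# Hypothetical usage: sketch_nvme_get my_ns id-ns /dev/nvme2n1
#                     echo "${my_ns[nsze]} ${my_ns[lbaf4]}"

(End of note; the log continues with the registration of nvme2 and the scan of the next controller.)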
nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:12.0 00:09:36.622 18:59:54 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2 00:09:36.622 18:59:54 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:36.622 18:59:54 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]] 00:09:36.622 18:59:54 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:13.0 00:09:36.622 18:59:54 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:13.0 00:09:36.622 18:59:54 nvme_scc -- scripts/common.sh@18 -- # local i 00:09:36.622 18:59:54 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:09:36.622 18:59:54 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:36.622 18:59:54 nvme_scc -- scripts/common.sh@27 -- # return 0 00:09:36.622 18:59:54 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme3 00:09:36.622 18:59:54 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3 00:09:36.622 18:59:54 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme3 reg val 00:09:36.622 18:59:54 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:36.622 18:59:54 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme3=()' 00:09:36.622 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.622 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.622 18:59:54 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3 00:09:36.622 18:59:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:36.622 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.622 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.622 18:59:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:36.622 18:59:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vid]="0x1b36"' 00:09:36.622 18:59:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36 00:09:36.622 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.622 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.622 18:59:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:36.622 18:59:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"' 00:09:36.622 18:59:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4 00:09:36.622 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.622 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.622 18:59:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12343 ]] 00:09:36.622 18:59:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sn]="12343 "' 00:09:36.622 18:59:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sn]='12343 ' 00:09:36.622 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.622 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.622 18:59:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:36.622 18:59:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl "' 00:09:36.622 18:59:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl ' 00:09:36.622 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.622 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.622 18:59:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:36.622 18:59:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0 "' 00:09:36.622 18:59:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 ' 00:09:36.622 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.622 18:59:54 
nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.622 18:59:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:36.622 18:59:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"' 00:09:36.622 18:59:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rab]=6 00:09:36.622 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.622 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.622 18:59:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:36.622 18:59:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ieee]="525400"' 00:09:36.622 18:59:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ieee]=525400 00:09:36.622 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.622 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.622 18:59:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x2 ]] 00:09:36.622 18:59:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0x2"' 00:09:36.622 18:59:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cmic]=0x2 00:09:36.622 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.622 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.622 18:59:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:36.622 18:59:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"' 00:09:36.622 18:59:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mdts]=7 00:09:36.622 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.622 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.622 18:59:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.622 18:59:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"' 00:09:36.622 18:59:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cntlid]=0 00:09:36.622 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.622 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.622 18:59:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:36.622 18:59:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"' 00:09:36.622 18:59:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400 00:09:36.622 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.622 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.622 18:59:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.622 18:59:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3r]="0"' 00:09:36.622 18:59:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0 00:09:36.622 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.622 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.622 18:59:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.622 18:59:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"' 00:09:36.622 18:59:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0 00:09:36.622 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.622 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.622 18:59:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:36.622 18:59:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"' 00:09:36.622 18:59:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oaes]=0x100 00:09:36.622 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.622 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.622 18:59:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x88010 ]] 00:09:36.622 18:59:54 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x88010"' 00:09:36.622 18:59:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x88010 00:09:36.622 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.622 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.622 18:59:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.622 18:59:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"' 00:09:36.622 18:59:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rrls]=0 00:09:36.622 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.622 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.622 18:59:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:36.622 18:59:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"' 00:09:36.622 18:59:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1 00:09:36.622 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.622 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.622 18:59:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:36.622 18:59:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:36.622 18:59:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 00:09:36.622 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.622 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.622 18:59:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.622 18:59:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"' 00:09:36.622 18:59:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt1]=0 00:09:36.622 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.622 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.623 18:59:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.623 18:59:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt2]="0"' 00:09:36.623 18:59:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt2]=0 00:09:36.623 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.623 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.623 18:59:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.623 18:59:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt3]="0"' 00:09:36.623 18:59:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt3]=0 00:09:36.623 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.623 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.623 18:59:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.623 18:59:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nvmsr]="0"' 00:09:36.623 18:59:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0 00:09:36.623 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.623 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.623 18:59:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.623 18:59:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vwci]="0"' 00:09:36.623 18:59:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vwci]=0 00:09:36.623 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.623 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.623 18:59:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.623 18:59:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mec]="0"' 00:09:36.623 18:59:54 nvme_scc -- 
nvme/functions.sh@23 -- # nvme3[mec]=0 00:09:36.623 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.623 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.623 18:59:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:36.623 18:59:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oacs]="0x12a"' 00:09:36.623 18:59:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a 00:09:36.623 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.623 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.623 18:59:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:36.623 18:59:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[acl]="3"' 00:09:36.623 18:59:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[acl]=3 00:09:36.623 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.623 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.623 18:59:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:36.623 18:59:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[aerl]="3"' 00:09:36.623 18:59:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[aerl]=3 00:09:36.623 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.623 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.623 18:59:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:36.623 18:59:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[frmw]="0x3"' 00:09:36.623 18:59:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[frmw]=0x3 00:09:36.623 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.623 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.623 18:59:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:36.623 18:59:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"' 00:09:36.623 18:59:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7 00:09:36.623 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.623 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.623 18:59:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.623 18:59:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"' 00:09:36.623 18:59:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[elpe]=0 00:09:36.623 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.623 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.623 18:59:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.623 18:59:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[npss]="0"' 00:09:36.623 18:59:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[npss]=0 00:09:36.623 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.623 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.623 18:59:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.623 18:59:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"' 00:09:36.623 18:59:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[avscc]=0 00:09:36.623 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.623 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.623 18:59:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.623 18:59:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"' 00:09:36.623 18:59:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[apsta]=0 00:09:36.623 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.623 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.623 
18:59:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:36.623 18:59:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"' 00:09:36.623 18:59:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 00:09:36.623 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.623 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.623 18:59:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:36.623 18:59:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"' 00:09:36.623 18:59:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cctemp]=373 00:09:36.623 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.623 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.623 18:59:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.623 18:59:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"' 00:09:36.623 18:59:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 00:09:36.623 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.623 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.623 18:59:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.623 18:59:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"' 00:09:36.623 18:59:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmpre]=0 00:09:36.623 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.623 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.623 18:59:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.623 18:59:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmmin]="0"' 00:09:36.623 18:59:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 00:09:36.623 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.623 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.623 18:59:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.623 18:59:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[tnvmcap]="0"' 00:09:36.623 18:59:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 00:09:36.623 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.623 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.623 18:59:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.623 18:59:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 00:09:36.623 18:59:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 00:09:36.623 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.623 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.623 18:59:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.623 18:59:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:09:36.623 18:59:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:09:36.623 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.623 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.623 18:59:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.623 18:59:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:09:36.623 18:59:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:09:36.623 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.623 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.623 18:59:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.623 18:59:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:09:36.623 18:59:54 nvme_scc -- 
nvme/functions.sh@23 -- # nvme3[dsto]=0 00:09:36.623 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.623 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.623 18:59:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.623 18:59:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:09:36.623 18:59:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:09:36.623 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.623 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.623 18:59:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.623 18:59:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:09:36.623 18:59:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:09:36.623 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.623 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.623 18:59:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.623 18:59:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:09:36.623 18:59:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:09:36.623 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.623 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.623 18:59:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.623 18:59:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:09:36.623 18:59:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:09:36.623 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.623 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.623 18:59:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.623 18:59:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:09:36.623 18:59:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:09:36.623 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.623 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.623 18:59:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.623 18:59:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:09:36.623 18:59:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:09:36.623 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.623 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.623 18:59:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.623 18:59:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmminds]="0"' 00:09:36.623 18:59:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:09:36.623 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.623 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.623 18:59:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.623 18:59:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:09:36.623 18:59:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:09:36.623 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.623 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.624 18:59:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.624 18:59:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:09:36.624 18:59:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 00:09:36.624 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.624 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.624 
18:59:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:36.624 18:59:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="1"' 00:09:36.624 18:59:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[endgidmax]=1 00:09:36.624 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.624 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.624 18:59:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.624 18:59:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:09:36.624 18:59:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:09:36.624 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.624 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.624 18:59:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.624 18:59:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:09:36.624 18:59:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:09:36.624 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.624 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.624 18:59:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.624 18:59:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:09:36.624 18:59:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:09:36.624 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.624 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.624 18:59:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.624 18:59:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nanagrpid]="0"' 00:09:36.624 18:59:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:09:36.624 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.624 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.624 18:59:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.624 18:59:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:09:36.624 18:59:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:09:36.624 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.624 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.624 18:59:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.624 18:59:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:09:36.624 18:59:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:09:36.624 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.624 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.624 18:59:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.624 18:59:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 00:09:36.624 18:59:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:09:36.624 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.624 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.624 18:59:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:36.624 18:59:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:09:36.624 18:59:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:09:36.624 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.624 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.624 18:59:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:36.624 18:59:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:09:36.624 
18:59:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:09:36.624 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.624 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.624 18:59:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.624 18:59:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:09:36.624 18:59:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:09:36.624 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.624 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.624 18:59:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:36.624 18:59:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:09:36.624 18:59:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:09:36.624 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.624 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.624 18:59:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:36.624 18:59:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:09:36.624 18:59:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:09:36.624 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.624 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.624 18:59:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.624 18:59:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:09:36.624 18:59:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:09:36.624 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.624 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.624 18:59:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.624 18:59:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:09:36.624 18:59:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:09:36.624 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.624 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.624 18:59:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:36.624 18:59:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:09:36.624 18:59:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:09:36.624 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.624 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.624 18:59:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.624 18:59:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:09:36.624 18:59:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:09:36.624 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.624 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.624 18:59:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.624 18:59:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:09:36.624 18:59:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:09:36.624 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.624 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.624 18:59:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.624 18:59:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:09:36.624 18:59:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[icsvscc]=0 00:09:36.624 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.624 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # read 
-r reg val 00:09:36.624 18:59:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.624 18:59:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:09:36.624 18:59:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:09:36.624 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.624 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.624 18:59:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.624 18:59:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:09:36.624 18:59:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:09:36.624 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.624 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.624 18:59:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:36.624 18:59:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:09:36.624 18:59:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:09:36.624 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.624 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.624 18:59:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:36.624 18:59:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:09:36.624 18:59:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:09:36.624 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.624 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.624 18:59:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.624 18:59:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mnan]="0"' 00:09:36.624 18:59:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:09:36.624 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.624 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.624 18:59:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.624 18:59:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:09:36.624 18:59:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:09:36.624 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.624 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.624 18:59:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.624 18:59:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:09:36.624 18:59:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:09:36.624 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.624 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.624 18:59:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 00:09:36.624 18:59:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:09:36.624 18:59:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:09:36.624 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.624 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.624 18:59:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.624 18:59:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:09:36.624 18:59:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:09:36.624 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.624 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.624 18:59:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.624 18:59:54 
nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:09:36.624 18:59:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:09:36.624 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.624 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.624 18:59:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.624 18:59:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:09:36.624 18:59:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:09:36.624 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.624 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.624 18:59:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.624 18:59:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:09:36.624 18:59:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:09:36.625 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.625 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.625 18:59:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.625 18:59:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:09:36.625 18:59:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:09:36.625 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.625 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.625 18:59:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.625 18:59:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:09:36.625 18:59:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 00:09:36.625 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.625 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.625 18:59:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:36.625 18:59:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:36.625 18:59:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:36.625 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.625 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.625 18:59:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:36.625 18:59:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:36.625 18:59:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:36.625 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.625 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.625 18:59:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:36.625 18:59:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:09:36.625 18:59:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:09:36.625 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.625 18:59:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.625 18:59:54 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 00:09:36.625 18:59:54 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:09:36.625 18:59:54 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns 00:09:36.625 18:59:54 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0 00:09:36.625 18:59:54 nvme_scc -- 
nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:09:36.625 18:59:54 nvme_scc -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:09:36.625 18:59:54 nvme_scc -- nvme/nvme_scc.sh@17 -- # get_ctrl_with_feature scc 00:09:36.625 18:59:54 nvme_scc -- nvme/functions.sh@204 -- # local _ctrls feature=scc 00:09:36.625 18:59:54 nvme_scc -- nvme/functions.sh@206 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:09:36.886 18:59:54 nvme_scc -- nvme/functions.sh@206 -- # get_ctrls_with_feature scc 00:09:36.886 18:59:54 nvme_scc -- nvme/functions.sh@192 -- # (( 4 == 0 )) 00:09:36.886 18:59:54 nvme_scc -- nvme/functions.sh@194 -- # local ctrl feature=scc 00:09:36.886 18:59:54 nvme_scc -- nvme/functions.sh@196 -- # type -t ctrl_has_scc 00:09:36.886 18:59:54 nvme_scc -- nvme/functions.sh@196 -- # [[ function == function ]] 00:09:36.886 18:59:54 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:36.886 18:59:54 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme1 00:09:36.886 18:59:54 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme1 oncs 00:09:36.886 18:59:54 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme1 00:09:36.886 18:59:54 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme1 00:09:36.886 18:59:54 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme1 oncs 00:09:36.886 18:59:54 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=oncs 00:09:36.886 18:59:54 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:09:36.886 18:59:54 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:09:36.886 18:59:54 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:09:36.886 18:59:54 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:09:36.886 18:59:54 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:09:36.886 18:59:54 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:09:36.886 18:59:54 nvme_scc -- nvme/functions.sh@199 -- # echo nvme1 00:09:36.886 18:59:54 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:36.886 18:59:54 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme0 00:09:36.886 18:59:54 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme0 oncs 00:09:36.886 18:59:54 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme0 00:09:36.886 18:59:54 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme0 00:09:36.886 18:59:54 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme0 oncs 00:09:36.886 18:59:54 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=oncs 00:09:36.886 18:59:54 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:09:36.886 18:59:54 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:09:36.886 18:59:54 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:09:36.886 18:59:54 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:09:36.886 18:59:54 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:09:36.886 18:59:54 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:09:36.886 18:59:54 nvme_scc -- nvme/functions.sh@199 -- # echo nvme0 00:09:36.886 18:59:54 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:36.886 18:59:54 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme3 00:09:36.886 18:59:54 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme3 oncs 00:09:36.886 18:59:54 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme3 00:09:36.886 18:59:54 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme3 00:09:36.886 18:59:54 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme3 oncs 
00:09:36.886 18:59:54 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=oncs 00:09:36.886 18:59:54 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme3 ]] 00:09:36.886 18:59:54 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3 00:09:36.886 18:59:54 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:09:36.886 18:59:54 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:09:36.886 18:59:54 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:09:36.886 18:59:54 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:09:36.886 18:59:54 nvme_scc -- nvme/functions.sh@199 -- # echo nvme3 00:09:36.886 18:59:54 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:36.886 18:59:54 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme2 00:09:36.886 18:59:54 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme2 oncs 00:09:36.886 18:59:54 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme2 00:09:36.886 18:59:54 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme2 00:09:36.886 18:59:54 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme2 oncs 00:09:36.886 18:59:54 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=oncs 00:09:36.886 18:59:54 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme2 ]] 00:09:36.886 18:59:54 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2 00:09:36.886 18:59:54 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:09:36.886 18:59:54 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:09:36.886 18:59:54 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:09:36.886 18:59:54 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:09:36.886 18:59:54 nvme_scc -- nvme/functions.sh@199 -- # echo nvme2 00:09:36.886 18:59:54 nvme_scc -- nvme/functions.sh@207 -- # (( 4 > 0 )) 00:09:36.886 18:59:54 nvme_scc -- nvme/functions.sh@208 -- # echo nvme1 00:09:36.886 18:59:54 nvme_scc -- nvme/functions.sh@209 -- # return 0 00:09:36.886 18:59:54 nvme_scc -- nvme/nvme_scc.sh@17 -- # ctrl=nvme1 00:09:36.886 18:59:54 nvme_scc -- nvme/nvme_scc.sh@17 -- # bdf=0000:00:10.0 00:09:36.886 18:59:54 nvme_scc -- nvme/nvme_scc.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:09:37.147 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:37.717 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:09:37.717 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:09:37.717 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:09:37.717 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:09:37.717 18:59:55 nvme_scc -- nvme/nvme_scc.sh@21 -- # run_test nvme_simple_copy /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0' 00:09:37.717 18:59:55 nvme_scc -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:09:37.717 18:59:55 nvme_scc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:37.717 18:59:55 nvme_scc -- common/autotest_common.sh@10 -- # set +x 00:09:37.717 ************************************ 00:09:37.717 START TEST nvme_simple_copy 00:09:37.717 ************************************ 00:09:37.717 18:59:55 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0' 00:09:37.978 Initializing NVMe Controllers 00:09:37.978 Attaching to 0000:00:10.0 00:09:37.978 Controller supports SCC. Attached to 0000:00:10.0 00:09:37.978 Namespace ID: 1 size: 6GB 00:09:37.978 Initialization complete. 
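The capability check traced above reduces to one arithmetic test: ONCS bit 8 (mask 0x100) advertises the Simple Copy command, and every controller here reports oncs=0x15d, so the bit is set for all of them. A minimal standalone sketch of that test, assuming the ONCS value has already been read from nvme id-ctrl as in the trace (it is passed in directly here instead of going through the ctrls[] lookup in nvme/functions.sh):

    # Sketch only: mirrors the (( oncs & 1 << 8 )) test seen in the trace,
    # taking the ONCS value as an argument for illustration.
    ctrl_reports_scc() {
      local oncs=$1                 # e.g. 0x15d, as printed by nvme id-ctrl
      (( oncs & 1 << 8 ))           # bit 8 set => Simple Copy Command supported
    }

    if ctrl_reports_scc 0x15d; then
      echo "controller supports Simple Copy (SCC)"
    fi

With oncs=0x15d the test succeeds for all four controllers, and the harness then takes the first match (nvme1 at 0000:00:10.0) as the target for the simple-copy run shown here.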
00:09:37.978 00:09:37.978 Controller QEMU NVMe Ctrl (12340 ) 00:09:37.978 Controller PCI vendor:6966 PCI subsystem vendor:6900 00:09:37.978 Namespace Block Size:4096 00:09:37.978 Writing LBAs 0 to 63 with Random Data 00:09:37.978 Copied LBAs from 0 - 63 to the Destination LBA 256 00:09:37.978 LBAs matching Written Data: 64 00:09:37.978 00:09:37.978 real 0m0.244s 00:09:37.978 user 0m0.097s 00:09:37.978 sys 0m0.045s 00:09:37.978 ************************************ 00:09:37.978 END TEST nvme_simple_copy 00:09:37.978 ************************************ 00:09:37.978 18:59:55 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:37.978 18:59:55 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@10 -- # set +x 00:09:37.978 ************************************ 00:09:37.978 END TEST nvme_scc 00:09:37.978 ************************************ 00:09:37.978 00:09:37.978 real 0m7.844s 00:09:37.978 user 0m1.139s 00:09:37.978 sys 0m1.415s 00:09:37.978 18:59:55 nvme_scc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:37.978 18:59:55 nvme_scc -- common/autotest_common.sh@10 -- # set +x 00:09:38.239 18:59:55 -- spdk/autotest.sh@219 -- # [[ 0 -eq 1 ]] 00:09:38.239 18:59:55 -- spdk/autotest.sh@222 -- # [[ 0 -eq 1 ]] 00:09:38.239 18:59:55 -- spdk/autotest.sh@225 -- # [[ '' -eq 1 ]] 00:09:38.239 18:59:55 -- spdk/autotest.sh@228 -- # [[ 1 -eq 1 ]] 00:09:38.239 18:59:55 -- spdk/autotest.sh@229 -- # run_test nvme_fdp test/nvme/nvme_fdp.sh 00:09:38.239 18:59:55 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:38.239 18:59:55 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:38.239 18:59:55 -- common/autotest_common.sh@10 -- # set +x 00:09:38.239 ************************************ 00:09:38.239 START TEST nvme_fdp 00:09:38.239 ************************************ 00:09:38.239 18:59:55 nvme_fdp -- common/autotest_common.sh@1129 -- # test/nvme/nvme_fdp.sh 00:09:38.239 * Looking for test storage... 00:09:38.239 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:09:38.239 18:59:55 nvme_fdp -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:38.239 18:59:55 nvme_fdp -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:38.239 18:59:55 nvme_fdp -- common/autotest_common.sh@1711 -- # lcov --version 00:09:38.239 18:59:55 nvme_fdp -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:38.239 18:59:55 nvme_fdp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:38.239 18:59:55 nvme_fdp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:38.239 18:59:55 nvme_fdp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:38.239 18:59:55 nvme_fdp -- scripts/common.sh@336 -- # IFS=.-: 00:09:38.239 18:59:55 nvme_fdp -- scripts/common.sh@336 -- # read -ra ver1 00:09:38.239 18:59:55 nvme_fdp -- scripts/common.sh@337 -- # IFS=.-: 00:09:38.239 18:59:55 nvme_fdp -- scripts/common.sh@337 -- # read -ra ver2 00:09:38.239 18:59:55 nvme_fdp -- scripts/common.sh@338 -- # local 'op=<' 00:09:38.239 18:59:55 nvme_fdp -- scripts/common.sh@340 -- # ver1_l=2 00:09:38.239 18:59:55 nvme_fdp -- scripts/common.sh@341 -- # ver2_l=1 00:09:38.239 18:59:55 nvme_fdp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:38.239 18:59:55 nvme_fdp -- scripts/common.sh@344 -- # case "$op" in 00:09:38.239 18:59:55 nvme_fdp -- scripts/common.sh@345 -- # : 1 00:09:38.239 18:59:55 nvme_fdp -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:38.239 18:59:55 nvme_fdp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:38.239 18:59:55 nvme_fdp -- scripts/common.sh@365 -- # decimal 1 00:09:38.239 18:59:55 nvme_fdp -- scripts/common.sh@353 -- # local d=1 00:09:38.239 18:59:55 nvme_fdp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:38.239 18:59:55 nvme_fdp -- scripts/common.sh@355 -- # echo 1 00:09:38.239 18:59:55 nvme_fdp -- scripts/common.sh@365 -- # ver1[v]=1 00:09:38.239 18:59:55 nvme_fdp -- scripts/common.sh@366 -- # decimal 2 00:09:38.239 18:59:55 nvme_fdp -- scripts/common.sh@353 -- # local d=2 00:09:38.239 18:59:55 nvme_fdp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:38.239 18:59:55 nvme_fdp -- scripts/common.sh@355 -- # echo 2 00:09:38.239 18:59:55 nvme_fdp -- scripts/common.sh@366 -- # ver2[v]=2 00:09:38.239 18:59:55 nvme_fdp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:38.239 18:59:55 nvme_fdp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:38.239 18:59:55 nvme_fdp -- scripts/common.sh@368 -- # return 0 00:09:38.239 18:59:55 nvme_fdp -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:38.239 18:59:55 nvme_fdp -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:38.239 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:38.239 --rc genhtml_branch_coverage=1 00:09:38.239 --rc genhtml_function_coverage=1 00:09:38.239 --rc genhtml_legend=1 00:09:38.239 --rc geninfo_all_blocks=1 00:09:38.239 --rc geninfo_unexecuted_blocks=1 00:09:38.239 00:09:38.239 ' 00:09:38.239 18:59:55 nvme_fdp -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:38.239 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:38.239 --rc genhtml_branch_coverage=1 00:09:38.239 --rc genhtml_function_coverage=1 00:09:38.239 --rc genhtml_legend=1 00:09:38.239 --rc geninfo_all_blocks=1 00:09:38.239 --rc geninfo_unexecuted_blocks=1 00:09:38.239 00:09:38.239 ' 00:09:38.239 18:59:55 nvme_fdp -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:38.239 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:38.239 --rc genhtml_branch_coverage=1 00:09:38.239 --rc genhtml_function_coverage=1 00:09:38.239 --rc genhtml_legend=1 00:09:38.239 --rc geninfo_all_blocks=1 00:09:38.239 --rc geninfo_unexecuted_blocks=1 00:09:38.239 00:09:38.239 ' 00:09:38.239 18:59:55 nvme_fdp -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:38.239 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:38.239 --rc genhtml_branch_coverage=1 00:09:38.239 --rc genhtml_function_coverage=1 00:09:38.239 --rc genhtml_legend=1 00:09:38.239 --rc geninfo_all_blocks=1 00:09:38.239 --rc geninfo_unexecuted_blocks=1 00:09:38.239 00:09:38.239 ' 00:09:38.239 18:59:55 nvme_fdp -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:09:38.239 18:59:55 nvme_fdp -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:09:38.239 18:59:55 nvme_fdp -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:09:38.239 18:59:55 nvme_fdp -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:09:38.239 18:59:55 nvme_fdp -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:38.239 18:59:55 nvme_fdp -- scripts/common.sh@15 -- # shopt -s extglob 00:09:38.239 18:59:55 nvme_fdp -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:38.239 18:59:55 nvme_fdp -- scripts/common.sh@552 -- # [[ -e 
/etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:38.239 18:59:55 nvme_fdp -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:38.239 18:59:55 nvme_fdp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:38.239 18:59:55 nvme_fdp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:38.239 18:59:55 nvme_fdp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:38.239 18:59:55 nvme_fdp -- paths/export.sh@5 -- # export PATH 00:09:38.239 18:59:55 nvme_fdp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:38.239 18:59:55 nvme_fdp -- nvme/functions.sh@10 -- # ctrls=() 00:09:38.239 18:59:55 nvme_fdp -- nvme/functions.sh@10 -- # declare -A ctrls 00:09:38.239 18:59:55 nvme_fdp -- nvme/functions.sh@11 -- # nvmes=() 00:09:38.239 18:59:55 nvme_fdp -- nvme/functions.sh@11 -- # declare -A nvmes 00:09:38.239 18:59:55 nvme_fdp -- nvme/functions.sh@12 -- # bdfs=() 00:09:38.239 18:59:55 nvme_fdp -- nvme/functions.sh@12 -- # declare -A bdfs 00:09:38.239 18:59:55 nvme_fdp -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:09:38.239 18:59:55 nvme_fdp -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:09:38.239 18:59:55 nvme_fdp -- nvme/functions.sh@14 -- # nvme_name= 00:09:38.239 18:59:55 nvme_fdp -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:38.239 18:59:55 nvme_fdp -- nvme/nvme_fdp.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:09:38.500 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:38.761 Waiting for block devices as requested 00:09:38.761 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:09:38.761 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:09:39.021 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:09:39.021 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:09:44.322 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:09:44.322 19:00:01 nvme_fdp -- nvme/nvme_fdp.sh@12 -- # scan_nvme_ctrls 00:09:44.322 19:00:01 nvme_fdp 
-- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:09:44.322 19:00:01 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:44.322 19:00:01 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:09:44.322 19:00:01 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:11.0 00:09:44.323 19:00:01 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:11.0 00:09:44.323 19:00:01 nvme_fdp -- scripts/common.sh@18 -- # local i 00:09:44.323 19:00:01 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:09:44.323 19:00:01 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:44.323 19:00:01 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:09:44.323 19:00:01 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:09:44.323 19:00:01 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:09:44.323 19:00:01 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:09:44.323 19:00:01 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:44.323 19:00:01 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:09:44.323 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.323 19:00:01 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:09:44.323 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.323 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:44.323 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.323 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.323 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:44.323 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:09:44.323 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 00:09:44.323 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.323 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.323 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:44.323 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:09:44.323 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:09:44.323 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.323 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.323 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 00:09:44.323 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12341 "' 00:09:44.323 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sn]='12341 ' 00:09:44.323 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.323 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.323 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:44.323 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:09:44.323 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:09:44.323 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.323 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.323 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:44.323 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:09:44.323 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:09:44.323 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.323 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.323 19:00:01 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:44.323 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:09:44.323 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:09:44.323 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.323 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.323 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:44.323 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:09:44.323 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:09:44.323 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.323 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.323 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.323 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 00:09:44.323 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cmic]=0 00:09:44.323 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.323 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.323 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:44.323 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:09:44.323 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:09:44.323 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.323 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.323 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.323 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:09:44.323 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:09:44.323 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.323 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.323 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:44.323 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:09:44.323 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:09:44.323 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.323 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.323 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.323 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:09:44.323 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:09:44.323 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.323 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.323 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.323 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:09:44.323 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:09:44.323 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.323 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.323 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:44.323 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:09:44.323 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:09:44.323 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.323 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.323 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:09:44.323 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"' 00:09:44.323 19:00:01 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000 00:09:44.323 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.323 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.323 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.323 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:09:44.323 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:09:44.323 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.323 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.323 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:44.323 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:09:44.323 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:09:44.323 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.323 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.323 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:44.323 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:44.323 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:09:44.323 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.323 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.323 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.323 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:09:44.323 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:09:44.323 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.323 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.323 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.323 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:09:44.323 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:09:44.323 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.323 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.323 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.323 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:09:44.323 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:09:44.323 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.323 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.323 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.323 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:09:44.323 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:09:44.323 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.323 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.323 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.323 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:09:44.323 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:09:44.323 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.323 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.323 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.323 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:09:44.323 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:09:44.323 19:00:01 nvme_fdp -- nvme/functions.sh@21 
-- # IFS=: 00:09:44.323 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.323 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:44.323 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:09:44.323 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:09:44.323 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.323 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.323 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:44.323 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:09:44.323 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:09:44.323 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.323 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.323 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:44.323 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:09:44.323 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:09:44.323 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.323 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.324 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:44.324 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:09:44.324 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:09:44.324 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.324 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.324 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:44.324 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:09:44.324 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 00:09:44.324 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.324 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.324 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.324 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:09:44.324 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:09:44.324 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.324 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.324 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.324 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:09:44.324 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:09:44.324 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.324 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.324 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.324 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:09:44.324 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:09:44.324 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.324 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.324 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.324 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:09:44.324 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:09:44.324 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.324 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.324 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:44.324 19:00:01 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:09:44.324 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:09:44.324 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.324 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.324 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:44.324 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:09:44.324 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:09:44.324 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.324 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.324 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.324 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:09:44.324 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:09:44.324 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.324 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.324 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.324 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:09:44.324 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:09:44.324 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.324 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.324 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.324 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:09:44.324 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:09:44.324 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.324 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.324 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.324 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:09:44.324 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:09:44.324 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.324 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.324 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.324 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:09:44.324 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:09:44.324 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.324 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.324 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.324 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:09:44.324 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:09:44.324 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.324 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.324 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.324 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:09:44.324 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:09:44.324 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.324 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.324 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.324 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:09:44.324 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:09:44.324 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # 
IFS=: 00:09:44.324 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.324 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.324 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:09:44.324 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:09:44.324 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.324 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.324 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.324 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:09:44.324 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:09:44.324 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.324 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.324 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.324 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:09:44.324 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:09:44.324 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.324 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.324 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.324 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:09:44.324 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:09:44.324 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.324 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.324 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.324 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:09:44.324 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:09:44.324 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.324 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.324 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.324 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:09:44.324 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:09:44.324 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.324 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.324 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.324 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:09:44.324 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:09:44.324 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.324 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.324 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.324 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:09:44.324 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:09:44.324 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.324 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.324 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.324 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:09:44.324 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:09:44.324 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.324 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.324 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.324 19:00:01 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"' 00:09:44.324 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0 00:09:44.324 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.324 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.324 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.324 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:09:44.324 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:09:44.324 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.324 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.324 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.324 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:09:44.324 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:09:44.324 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.324 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.324 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.324 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:09:44.324 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:09:44.324 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.324 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.324 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.324 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:09:44.324 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:09:44.324 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.324 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.324 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.324 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:09:44.324 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:09:44.324 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.325 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.325 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.325 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:09:44.325 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:09:44.325 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.325 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.325 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.325 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:09:44.325 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:09:44.325 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.325 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.325 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:44.325 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:09:44.325 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:09:44.325 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.325 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.325 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:44.325 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:09:44.325 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:09:44.325 19:00:01 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:09:44.325 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.325 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.325 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:09:44.325 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:09:44.325 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.325 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.325 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:44.325 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:09:44.325 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:09:44.325 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.325 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.325 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:44.325 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:09:44.325 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:09:44.325 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.325 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.325 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.325 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:09:44.325 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:09:44.325 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.325 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.325 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.325 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:09:44.325 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fna]=0 00:09:44.325 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.325 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.325 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:44.325 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:09:44.325 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:09:44.325 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.325 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.325 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.325 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 00:09:44.325 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:09:44.325 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.325 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.325 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.325 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:09:44.325 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:09:44.325 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.325 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.325 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.325 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:09:44.325 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:09:44.325 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.325 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.325 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.325 
19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:09:44.325 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:09:44.325 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.325 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.325 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.325 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:09:44.325 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:09:44.325 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.325 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.325 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:44.325 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:09:44.325 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:09:44.325 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.325 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.325 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:44.325 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:09:44.325 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:09:44.325 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.325 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.325 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.325 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:09:44.325 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:09:44.325 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.325 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.325 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.325 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:09:44.325 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:09:44.325 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.325 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.325 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.325 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:09:44.325 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:09:44.325 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.325 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.325 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12341 ]] 00:09:44.325 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:12341"' 00:09:44.325 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12341 00:09:44.325 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.325 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.325 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.325 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:09:44.325 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:09:44.325 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.325 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.325 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.325 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:09:44.325 19:00:01 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:09:44.325 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.325 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.325 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.325 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:09:44.325 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:09:44.325 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.325 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.325 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.325 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:09:44.325 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:09:44.325 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.325 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.325 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.325 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:09:44.325 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:09:44.325 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.325 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.325 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.325 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:09:44.325 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:09:44.325 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.325 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.325 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:44.325 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:44.325 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:44.325 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.325 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.325 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:44.325 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:44.325 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:44.325 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.325 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.325 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:44.325 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:09:44.325 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:09:44.325 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.325 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.325 19:00:01 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:09:44.325 19:00:01 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:44.325 19:00:01 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/ng0n1 ]] 00:09:44.325 19:00:01 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng0n1 00:09:44.326 19:00:01 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng0n1 id-ns /dev/ng0n1 00:09:44.326 19:00:01 
nvme_fdp -- nvme/functions.sh@17 -- # local ref=ng0n1 reg val 00:09:44.326 19:00:01 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:44.326 19:00:01 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng0n1=()' 00:09:44.326 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.326 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.326 19:00:01 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng0n1 00:09:44.326 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:44.326 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.326 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.326 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:44.326 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nsze]="0x140000"' 00:09:44.326 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nsze]=0x140000 00:09:44.326 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.326 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.326 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:44.326 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[ncap]="0x140000"' 00:09:44.326 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[ncap]=0x140000 00:09:44.326 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.326 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.326 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:44.326 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nuse]="0x140000"' 00:09:44.326 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nuse]=0x140000 00:09:44.326 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.326 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.326 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:44.326 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nsfeat]="0x14"' 00:09:44.326 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nsfeat]=0x14 00:09:44.326 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.326 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.326 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:44.326 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nlbaf]="7"' 00:09:44.326 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nlbaf]=7 00:09:44.326 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.326 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.326 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:44.326 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[flbas]="0x4"' 00:09:44.326 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[flbas]=0x4 00:09:44.326 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.326 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.326 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:44.326 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[mc]="0x3"' 00:09:44.326 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[mc]=0x3 00:09:44.326 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.326 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.326 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:44.326 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[dpc]="0x1f"' 00:09:44.326 19:00:01 nvme_fdp -- 
nvme/functions.sh@23 -- # ng0n1[dpc]=0x1f 00:09:44.326 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.326 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.326 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.326 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[dps]="0"' 00:09:44.326 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[dps]=0 00:09:44.326 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.326 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.326 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.326 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nmic]="0"' 00:09:44.326 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nmic]=0 00:09:44.326 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.326 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.326 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.326 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[rescap]="0"' 00:09:44.326 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[rescap]=0 00:09:44.326 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.326 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.326 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.326 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[fpi]="0"' 00:09:44.326 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[fpi]=0 00:09:44.326 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.326 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.326 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:44.326 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[dlfeat]="1"' 00:09:44.326 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[dlfeat]=1 00:09:44.326 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.326 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.326 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.326 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nawun]="0"' 00:09:44.326 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nawun]=0 00:09:44.326 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.326 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.326 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.326 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nawupf]="0"' 00:09:44.326 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nawupf]=0 00:09:44.326 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.326 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.326 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.326 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nacwu]="0"' 00:09:44.326 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nacwu]=0 00:09:44.326 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.326 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.326 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.326 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nabsn]="0"' 00:09:44.326 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nabsn]=0 00:09:44.326 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.326 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.326 19:00:01 
nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.326 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nabo]="0"' 00:09:44.326 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nabo]=0 00:09:44.326 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.326 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.326 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.326 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nabspf]="0"' 00:09:44.326 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nabspf]=0 00:09:44.326 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.326 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.326 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.326 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[noiob]="0"' 00:09:44.326 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[noiob]=0 00:09:44.326 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.326 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.326 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.326 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nvmcap]="0"' 00:09:44.326 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nvmcap]=0 00:09:44.326 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.326 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.326 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.326 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[npwg]="0"' 00:09:44.326 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[npwg]=0 00:09:44.326 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.326 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.326 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.326 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[npwa]="0"' 00:09:44.326 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[npwa]=0 00:09:44.326 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.326 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.326 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.326 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[npdg]="0"' 00:09:44.326 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[npdg]=0 00:09:44.326 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.326 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.326 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.326 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[npda]="0"' 00:09:44.326 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[npda]=0 00:09:44.326 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.326 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.326 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.326 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nows]="0"' 00:09:44.326 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nows]=0 00:09:44.326 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.326 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.326 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:44.326 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[mssrl]="128"' 00:09:44.326 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[mssrl]=128 
00:09:44.326 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.326 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.326 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:44.326 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[mcl]="128"' 00:09:44.326 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[mcl]=128 00:09:44.326 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.326 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.326 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:44.326 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[msrc]="127"' 00:09:44.326 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[msrc]=127 00:09:44.326 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.326 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.326 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.326 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nulbaf]="0"' 00:09:44.327 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nulbaf]=0 00:09:44.327 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.327 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.327 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.327 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[anagrpid]="0"' 00:09:44.327 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[anagrpid]=0 00:09:44.327 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.327 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.327 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.327 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nsattr]="0"' 00:09:44.327 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nsattr]=0 00:09:44.327 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.327 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.327 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.327 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nvmsetid]="0"' 00:09:44.327 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nvmsetid]=0 00:09:44.327 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.327 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.327 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.327 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[endgid]="0"' 00:09:44.327 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[endgid]=0 00:09:44.327 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.327 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.327 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:44.327 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nguid]="00000000000000000000000000000000"' 00:09:44.327 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nguid]=00000000000000000000000000000000 00:09:44.327 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.327 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.327 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:44.327 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[eui64]="0000000000000000"' 00:09:44.327 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[eui64]=0000000000000000 00:09:44.327 19:00:01 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:09:44.327 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.327 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:44.327 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:44.327 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:44.327 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.327 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.327 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:44.327 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:44.327 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:44.327 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.327 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.327 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:44.327 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:44.327 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:44.327 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.327 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.327 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:44.327 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:44.327 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:44.327 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.327 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.327 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:44.327 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:44.327 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:44.327 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.327 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.327 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:44.327 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:44.327 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:44.327 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.327 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.327 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:44.327 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:44.327 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:44.327 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.327 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.327 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:44.327 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:44.327 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:44.327 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.327 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
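[editor's note] The repeated IFS=: / read -r / eval records above are one full pass of the nvme_get helper in nvme/functions.sh: it runs `nvme id-ctrl` or `nvme id-ns`, splits each "reg : val" output line on the first ':', and stores the pair in a global associative array named after the device (nvme0, ng0n1, nvme0n1, ...). A minimal sketch, reconstructed from these trace records rather than copied from SPDK's source (the real helper also hard-codes the /usr/local/src/nvme-cli/nvme binary, which is omitted here):

  # Parse `nvme id-*` human-readable output into a named global assoc array.
  nvme_get() {
    local ref=$1 reg val
    shift
    local -gA "$ref=()"            # e.g. declare a global assoc array ng0n1=()
    while IFS=: read -r reg val; do
      reg=${reg//[[:space:]]/}     # "lbaf  0 " -> "lbaf0", "nsze   " -> "nsze"
      val=${val# }                 # drop the single space after the ':'
      # keep only populated fields, mirroring the [[ -n ... ]] guards above
      [[ -n $reg && -n $val ]] && eval "${ref}[${reg}]=\"${val}\""
    done < <("$@")                 # e.g. nvme id-ns /dev/ng0n1
  }

  # usage (assumes nvme-cli is installed and /dev/ng0n1 exists):
  #   nvme_get ng0n1 nvme id-ns /dev/ng0n1
  #   echo "${ng0n1[nsze]}"        # -> 0x140000 on the device in this log

Note that values containing colons (e.g. subnqn=nqn.2019-08.org.qemu:12341 above) survive intact because read only splits at the first ':' when given two variables.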
00:09:44.327 19:00:01 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng0n1 00:09:44.327 19:00:01 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:44.327 19:00:01 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]] 00:09:44.327 19:00:01 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme0n1 00:09:44.327 19:00:01 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1 00:09:44.327 19:00:01 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val 00:09:44.327 19:00:01 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:44.327 19:00:01 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()' 00:09:44.327 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.327 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.327 19:00:01 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1 00:09:44.327 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:44.327 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.327 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.327 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:44.327 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsze]="0x140000"' 00:09:44.327 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x140000 00:09:44.327 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.327 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.327 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:44.327 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[ncap]="0x140000"' 00:09:44.327 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x140000 00:09:44.327 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.327 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.327 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:44.327 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nuse]="0x140000"' 00:09:44.327 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x140000 00:09:44.327 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.327 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.327 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:44.327 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsfeat]="0x14"' 00:09:44.327 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0x14 00:09:44.327 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.327 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.327 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:44.327 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nlbaf]="7"' 00:09:44.327 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nlbaf]=7 00:09:44.327 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.327 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.327 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:44.327 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[flbas]="0x4"' 00:09:44.327 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[flbas]=0x4 00:09:44.327 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.327 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.327 19:00:01 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:44.327 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mc]="0x3"' 00:09:44.328 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mc]=0x3 00:09:44.328 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.328 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.328 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:44.328 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dpc]="0x1f"' 00:09:44.328 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dpc]=0x1f 00:09:44.328 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.328 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.328 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.328 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dps]="0"' 00:09:44.328 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dps]=0 00:09:44.328 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.328 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.328 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.328 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nmic]="0"' 00:09:44.328 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nmic]=0 00:09:44.328 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.328 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.328 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.328 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[rescap]="0"' 00:09:44.328 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[rescap]=0 00:09:44.328 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.328 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.328 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.328 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[fpi]="0"' 00:09:44.328 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[fpi]=0 00:09:44.328 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.328 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.328 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:44.328 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dlfeat]="1"' 00:09:44.328 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dlfeat]=1 00:09:44.328 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.328 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.328 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.328 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawun]="0"' 00:09:44.328 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nawun]=0 00:09:44.328 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.328 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.328 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.328 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawupf]="0"' 00:09:44.328 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nawupf]=0 00:09:44.328 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.328 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.328 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.328 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nacwu]="0"' 00:09:44.328 19:00:01 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme0n1[nacwu]=0 00:09:44.328 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.328 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.328 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.328 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabsn]="0"' 00:09:44.328 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabsn]=0 00:09:44.328 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.328 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.328 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.328 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabo]="0"' 00:09:44.328 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabo]=0 00:09:44.328 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.328 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.328 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.328 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabspf]="0"' 00:09:44.328 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabspf]=0 00:09:44.328 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.328 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.328 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.328 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[noiob]="0"' 00:09:44.328 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[noiob]=0 00:09:44.328 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.328 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.328 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.328 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmcap]="0"' 00:09:44.328 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nvmcap]=0 00:09:44.328 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.328 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.328 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.328 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwg]="0"' 00:09:44.328 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npwg]=0 00:09:44.328 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.328 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.328 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.328 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwa]="0"' 00:09:44.328 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npwa]=0 00:09:44.328 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.328 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.328 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.328 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npdg]="0"' 00:09:44.328 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npdg]=0 00:09:44.328 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.328 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.328 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.328 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npda]="0"' 00:09:44.328 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npda]=0 00:09:44.328 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.328 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r 
reg val 00:09:44.328 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.328 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nows]="0"' 00:09:44.328 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nows]=0 00:09:44.328 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.328 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.328 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:44.328 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mssrl]="128"' 00:09:44.328 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mssrl]=128 00:09:44.328 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.328 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.328 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:44.328 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mcl]="128"' 00:09:44.328 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mcl]=128 00:09:44.328 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.328 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.328 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:44.328 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[msrc]="127"' 00:09:44.328 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[msrc]=127 00:09:44.328 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.328 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.328 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.328 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nulbaf]="0"' 00:09:44.328 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0 00:09:44.328 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.328 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.328 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.328 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[anagrpid]="0"' 00:09:44.328 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0 00:09:44.328 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.328 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.328 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.328 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsattr]="0"' 00:09:44.328 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0 00:09:44.328 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.328 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.328 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.328 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmsetid]="0"' 00:09:44.328 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0 00:09:44.328 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.328 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.328 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.328 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[endgid]="0"' 00:09:44.328 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[endgid]=0 00:09:44.328 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.328 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.328 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:44.328 19:00:01 nvme_fdp 
-- nvme/functions.sh@23 -- # eval 'nvme0n1[nguid]="00000000000000000000000000000000"' 00:09:44.328 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nguid]=00000000000000000000000000000000 00:09:44.328 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.328 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.328 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:44.328 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[eui64]="0000000000000000"' 00:09:44.328 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[eui64]=0000000000000000 00:09:44.328 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.328 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.328 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:44.328 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:44.328 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:44.328 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.328 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.328 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:44.328 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:44.329 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:44.329 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.329 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.329 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:44.329 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:44.329 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:44.329 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.329 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.329 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:44.329 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:44.329 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:44.329 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.329 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.329 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:44.329 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:44.329 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:44.329 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.329 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.329 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:44.329 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:44.329 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:44.329 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.329 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.329 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:44.329 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf6]="ms:16 lbads:12 rp:0 
"' 00:09:44.329 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:44.329 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.329 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.329 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:44.329 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:44.329 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:44.329 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.329 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.329 19:00:01 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1 00:09:44.329 19:00:01 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 00:09:44.329 19:00:01 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:09:44.329 19:00:01 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:11.0 00:09:44.329 19:00:01 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:09:44.329 19:00:01 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:44.329 19:00:01 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]] 00:09:44.329 19:00:01 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:10.0 00:09:44.329 19:00:01 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:10.0 00:09:44.329 19:00:01 nvme_fdp -- scripts/common.sh@18 -- # local i 00:09:44.329 19:00:01 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:09:44.329 19:00:01 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:44.329 19:00:01 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:09:44.329 19:00:01 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme1 00:09:44.329 19:00:01 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1 00:09:44.329 19:00:01 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme1 reg val 00:09:44.329 19:00:01 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:44.329 19:00:01 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme1=()' 00:09:44.329 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.329 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.329 19:00:01 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1 00:09:44.329 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:44.329 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.329 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.329 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:44.329 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vid]="0x1b36"' 00:09:44.329 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36 00:09:44.329 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.329 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.329 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:44.329 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ssvid]="0x1af4"' 00:09:44.329 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4 00:09:44.329 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.329 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.329 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:09:44.329 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme1[sn]="12340 "' 00:09:44.329 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sn]='12340 ' 00:09:44.329 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.329 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.329 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:44.329 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl "' 00:09:44.329 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl ' 00:09:44.329 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.329 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.329 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:44.329 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fr]="8.0.0 "' 00:09:44.329 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 ' 00:09:44.329 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.329 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.329 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:44.329 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rab]="6"' 00:09:44.329 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rab]=6 00:09:44.329 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.329 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.329 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:44.329 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ieee]="525400"' 00:09:44.329 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ieee]=525400 00:09:44.329 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.329 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.329 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.329 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cmic]="0"' 00:09:44.329 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cmic]=0 00:09:44.329 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.329 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.329 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:44.329 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mdts]="7"' 00:09:44.329 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mdts]=7 00:09:44.329 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.329 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.329 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.329 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cntlid]="0"' 00:09:44.329 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cntlid]=0 00:09:44.329 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.329 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.329 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:44.329 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ver]="0x10400"' 00:09:44.329 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400 00:09:44.329 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.329 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.329 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.329 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3r]="0"' 00:09:44.329 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0 00:09:44.329 19:00:01 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:09:44.329 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.329 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.329 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3e]="0"' 00:09:44.329 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0 00:09:44.329 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.329 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.329 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:44.329 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oaes]="0x100"' 00:09:44.329 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100 00:09:44.329 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.329 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.329 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:09:44.329 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ctratt]="0x8000"' 00:09:44.329 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000 00:09:44.329 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.329 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.329 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.329 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rrls]="0"' 00:09:44.329 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rrls]=0 00:09:44.329 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.329 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.329 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:44.329 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cntrltype]="1"' 00:09:44.329 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1 00:09:44.329 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.329 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.329 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:44.329 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:44.329 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000 00:09:44.329 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.329 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.329 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.329 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt1]="0"' 00:09:44.330 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt1]=0 00:09:44.330 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.330 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.330 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.330 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt2]="0"' 00:09:44.330 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt2]=0 00:09:44.330 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.330 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.330 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.330 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt3]="0"' 00:09:44.330 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt3]=0 00:09:44.330 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.330 19:00:01 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:44.330 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.330 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nvmsr]="0"' 00:09:44.330 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0 00:09:44.330 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.330 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.330 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.330 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vwci]="0"' 00:09:44.330 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vwci]=0 00:09:44.330 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.330 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.330 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.330 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mec]="0"' 00:09:44.330 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mec]=0 00:09:44.330 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.330 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.330 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:44.330 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oacs]="0x12a"' 00:09:44.330 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a 00:09:44.330 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.330 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.330 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:44.330 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[acl]="3"' 00:09:44.330 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[acl]=3 00:09:44.330 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.330 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.330 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:44.330 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[aerl]="3"' 00:09:44.330 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[aerl]=3 00:09:44.330 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.330 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.330 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:44.330 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[frmw]="0x3"' 00:09:44.330 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3 00:09:44.330 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.330 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.330 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:44.330 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[lpa]="0x7"' 00:09:44.330 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7 00:09:44.330 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.330 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.330 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.330 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[elpe]="0"' 00:09:44.330 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[elpe]=0 00:09:44.330 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.330 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.330 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.330 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[npss]="0"' 
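The trace above repeats one small pattern per Identify field: read splits each "reg : val" line on IFS=:, the [[ -n ... ]] guard skips fields that came back empty, and eval stores the pair into a per-device associative array (here nvme1). A minimal standalone sketch of that pattern, assuming an id-ctrl-style "key : value" stream on stdin; parse_id and ctrl are hypothetical names for illustration, not the actual nvme/functions.sh helpers:

    #!/usr/bin/env bash
    # Sketch of the parse loop seen in the trace: split "reg : val"
    # lines on ':' and store each pair in an associative array.
    declare -A ctrl=()

    parse_id() {
        local reg val
        while IFS=: read -r reg val; do
            reg=${reg//[[:space:]]/}               # drop padding around the key
            val=${val#"${val%%[![:space:]]*}"}     # drop leading spaces from the value
            [[ -n $reg && -n $val ]] || continue   # skip blank or malformed lines
            ctrl[$reg]=$val
        done
    }

    # Process substitution keeps parse_id in the current shell, so the
    # array survives (a plain pipe would run it in a subshell).
    parse_id < <(printf '%s\n' \
        'sn   : 12340' \
        'mn   : QEMU NVMe Ctrl' \
        'mdts : 7' \
        'oacs : 0x12a')

    echo "mdts=${ctrl[mdts]} oacs=${ctrl[oacs]}"

Unlike this sketch, the traced script stores values verbatim, which is why fixed-width fields such as sn and mn land in the array with their trailing padding intact ('12340 ', 'QEMU NVMe Ctrl ').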
00:09:44.330 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[npss]=0 00:09:44.330 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.330 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.330 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.330 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[avscc]="0"' 00:09:44.330 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[avscc]=0 00:09:44.330 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.330 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.330 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.330 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[apsta]="0"' 00:09:44.330 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[apsta]=0 00:09:44.330 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.330 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.330 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:44.330 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[wctemp]="343"' 00:09:44.330 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[wctemp]=343 00:09:44.330 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.330 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.330 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:44.330 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cctemp]="373"' 00:09:44.330 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cctemp]=373 00:09:44.330 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.330 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.330 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.330 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mtfa]="0"' 00:09:44.330 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mtfa]=0 00:09:44.330 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.330 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.330 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.330 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmpre]="0"' 00:09:44.330 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmpre]=0 00:09:44.330 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.330 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.330 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.330 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmmin]="0"' 00:09:44.330 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmmin]=0 00:09:44.330 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.330 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.330 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.330 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[tnvmcap]="0"' 00:09:44.330 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[tnvmcap]=0 00:09:44.330 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.330 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.330 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.330 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[unvmcap]="0"' 00:09:44.330 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0 00:09:44.330 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.330 19:00:01 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:44.330 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.330 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rpmbs]="0"' 00:09:44.330 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0 00:09:44.330 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.330 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.330 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.330 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[edstt]="0"' 00:09:44.330 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[edstt]=0 00:09:44.330 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.330 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.330 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.330 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[dsto]="0"' 00:09:44.330 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[dsto]=0 00:09:44.330 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.330 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.330 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.330 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fwug]="0"' 00:09:44.330 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fwug]=0 00:09:44.330 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.330 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.330 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.330 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[kas]="0"' 00:09:44.330 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[kas]=0 00:09:44.330 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.330 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.330 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.330 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hctma]="0"' 00:09:44.330 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hctma]=0 00:09:44.330 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.330 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.330 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.330 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mntmt]="0"' 00:09:44.330 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mntmt]=0 00:09:44.330 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.330 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.330 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.330 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mxtmt]="0"' 00:09:44.330 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0 00:09:44.330 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.330 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.330 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.330 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sanicap]="0"' 00:09:44.330 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sanicap]=0 00:09:44.330 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.330 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.330 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.330 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmminds]="0"' 
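Among the values captured just above, wctemp=343 and cctemp=373 are in kelvins, the unit NVMe uses for its warning and critical composite-temperature thresholds (69.85 and 99.85 degrees Celsius here). A one-liner for the conversion, using integer-scaled arithmetic since bash has no floats:

    # Kelvin -> degrees Celsius (K*100 - 27315), printed with two decimals.
    k_to_c() { local cc=$(( $1 * 100 - 27315 )); printf '%d.%02d\n' $((cc / 100)) $((cc % 100)); }
    k_to_c 343   # 69.85
    k_to_c 373   # 99.85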
00:09:44.330 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmminds]=0 00:09:44.330 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.330 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.331 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.331 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmmaxd]="0"' 00:09:44.331 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0 00:09:44.331 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.331 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.331 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.331 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nsetidmax]="0"' 00:09:44.331 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0 00:09:44.331 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.331 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.331 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.331 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[endgidmax]="0"' 00:09:44.331 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0 00:09:44.331 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.331 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.331 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.331 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anatt]="0"' 00:09:44.331 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anatt]=0 00:09:44.331 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.331 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.331 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.331 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anacap]="0"' 00:09:44.331 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anacap]=0 00:09:44.331 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.331 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.331 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.331 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anagrpmax]="0"' 00:09:44.331 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0 00:09:44.331 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.331 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.331 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.331 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nanagrpid]="0"' 00:09:44.331 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0 00:09:44.331 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.331 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.331 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.331 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[pels]="0"' 00:09:44.331 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[pels]=0 00:09:44.331 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.331 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.331 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.331 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[domainid]="0"' 00:09:44.331 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[domainid]=0 00:09:44.331 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.331 19:00:01 
nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.331 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.331 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[megcap]="0"' 00:09:44.331 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[megcap]=0 00:09:44.331 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.331 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.331 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:44.331 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sqes]="0x66"' 00:09:44.331 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66 00:09:44.331 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.331 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.331 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:44.331 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cqes]="0x44"' 00:09:44.331 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44 00:09:44.331 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.331 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.331 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.331 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxcmd]="0"' 00:09:44.331 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0 00:09:44.331 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.331 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.331 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:44.331 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nn]="256"' 00:09:44.331 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nn]=256 00:09:44.331 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.331 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.331 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:44.331 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oncs]="0x15d"' 00:09:44.331 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d 00:09:44.331 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.331 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.331 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.331 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fuses]="0"' 00:09:44.331 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fuses]=0 00:09:44.331 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.331 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.331 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.331 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fna]="0"' 00:09:44.331 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fna]=0 00:09:44.331 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.331 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.331 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:44.331 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vwc]="0x7"' 00:09:44.331 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7 00:09:44.331 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.331 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.331 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.331 19:00:01 nvme_fdp -- nvme/functions.sh@23 
-- # eval 'nvme1[awun]="0"' 00:09:44.331 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[awun]=0 00:09:44.331 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.331 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.331 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.331 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[awupf]="0"' 00:09:44.331 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[awupf]=0 00:09:44.331 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.331 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.331 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.331 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[icsvscc]="0"' 00:09:44.331 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0 00:09:44.331 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.331 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.331 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.331 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nwpc]="0"' 00:09:44.331 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nwpc]=0 00:09:44.331 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.331 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.331 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.331 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[acwu]="0"' 00:09:44.331 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[acwu]=0 00:09:44.331 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.331 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.331 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:44.331 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ocfs]="0x3"' 00:09:44.331 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3 00:09:44.331 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.331 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.331 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:44.331 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sgls]="0x1"' 00:09:44.331 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1 00:09:44.331 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.331 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.331 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.331 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mnan]="0"' 00:09:44.331 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mnan]=0 00:09:44.331 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.331 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.331 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.331 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxdna]="0"' 00:09:44.331 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxdna]=0 00:09:44.331 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.331 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.331 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.331 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxcna]="0"' 00:09:44.331 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxcna]=0 00:09:44.331 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.331 19:00:01 
nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.331 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:09:44.331 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[subnqn]="nqn.2019-08.org.qemu:12340"' 00:09:44.331 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12340 00:09:44.331 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.331 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.331 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.331 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ioccsz]="0"' 00:09:44.331 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0 00:09:44.331 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.331 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.331 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.331 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[iorcsz]="0"' 00:09:44.331 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0 00:09:44.331 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.331 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.331 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.331 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[icdoff]="0"' 00:09:44.332 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[icdoff]=0 00:09:44.332 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.332 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.332 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.332 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fcatt]="0"' 00:09:44.332 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fcatt]=0 00:09:44.332 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.332 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.332 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.332 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[msdbd]="0"' 00:09:44.332 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[msdbd]=0 00:09:44.332 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.332 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.332 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.332 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ofcs]="0"' 00:09:44.332 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ofcs]=0 00:09:44.332 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.332 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.332 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:44.332 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:44.332 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:44.332 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.332 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.332 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:44.332 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:44.332 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rwt]='0 
rwl:0 idle_power:- active_power:-' 00:09:44.332 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.332 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.332 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:44.332 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[active_power_workload]="-"' 00:09:44.332 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=- 00:09:44.332 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.332 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.332 19:00:01 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns 00:09:44.332 19:00:01 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:44.332 19:00:01 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/ng1n1 ]] 00:09:44.332 19:00:01 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng1n1 00:09:44.332 19:00:01 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng1n1 id-ns /dev/ng1n1 00:09:44.332 19:00:01 nvme_fdp -- nvme/functions.sh@17 -- # local ref=ng1n1 reg val 00:09:44.332 19:00:01 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:44.332 19:00:01 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng1n1=()' 00:09:44.332 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.332 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.332 19:00:01 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng1n1 00:09:44.332 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:44.332 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.332 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.332 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:44.332 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nsze]="0x17a17a"' 00:09:44.332 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nsze]=0x17a17a 00:09:44.332 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.332 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.332 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:44.332 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[ncap]="0x17a17a"' 00:09:44.332 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[ncap]=0x17a17a 00:09:44.332 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.332 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.332 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:44.332 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nuse]="0x17a17a"' 00:09:44.332 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nuse]=0x17a17a 00:09:44.332 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.332 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.332 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:44.332 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nsfeat]="0x14"' 00:09:44.332 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nsfeat]=0x14 00:09:44.332 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.332 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.332 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:44.332 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nlbaf]="7"' 00:09:44.332 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nlbaf]=7 00:09:44.332 19:00:01 
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.332 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.332 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:44.332 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[flbas]="0x7"' 00:09:44.332 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[flbas]=0x7 00:09:44.332 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.332 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.332 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:44.332 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[mc]="0x3"' 00:09:44.332 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[mc]=0x3 00:09:44.332 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.332 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.332 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:44.332 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[dpc]="0x1f"' 00:09:44.332 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[dpc]=0x1f 00:09:44.332 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.332 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.332 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.332 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[dps]="0"' 00:09:44.332 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[dps]=0 00:09:44.332 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.332 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.332 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.332 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nmic]="0"' 00:09:44.332 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nmic]=0 00:09:44.332 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.332 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.332 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.332 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[rescap]="0"' 00:09:44.332 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[rescap]=0 00:09:44.332 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.332 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.332 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.332 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[fpi]="0"' 00:09:44.332 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[fpi]=0 00:09:44.332 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.332 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.332 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:44.332 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[dlfeat]="1"' 00:09:44.332 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[dlfeat]=1 00:09:44.332 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.332 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.332 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.332 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nawun]="0"' 00:09:44.332 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nawun]=0 00:09:44.332 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.332 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.332 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
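Just before this id-ns dump, the trace walked /sys/class/nvme/nvme1 with the extglob pattern @("ng${ctrl##*nvme}"|"${ctrl##*/}n")*, which matches both the generic node (ng1n1) and the block node (nvme1n1) of each namespace, indexing them by ${ns##*n}. A sketch of that enumeration, assuming a PCIe controller at /sys/class/nvme/nvme1 (names follow the trace, logic simplified):

    #!/usr/bin/env bash
    # Enumerate a controller's namespace nodes the way the trace does:
    # ngXnY (generic) and nvmeXnY (block) under /sys/class/nvme/nvmeX.
    shopt -s extglob nullglob        # @(...) needs extglob

    ctrl=/sys/class/nvme/nvme1       # assumed controller path
    declare -A ctrl_ns=()

    for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do
        [[ -e $ns ]] || continue
        ns_dev=${ns##*/}             # ng1n1 or nvme1n1
        nsid=${ns_dev##*n}           # text after the last 'n': the NSID
        ctrl_ns[$nsid]=$ns_dev       # later match overwrites the earlier one
        echo "nsid=$nsid dev=$ns_dev"
    done

Indexing by NSID collapses the two sysfs entries for the same namespace into one slot, with whichever node sorts last (here nvme1n1) winning, matching the _ctrl_ns[${ns##*n}]=... assignments in the trace.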
00:09:44.332 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nawupf]="0"' 00:09:44.332 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nawupf]=0 00:09:44.332 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.332 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.332 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.332 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nacwu]="0"' 00:09:44.332 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nacwu]=0 00:09:44.332 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.332 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.332 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.332 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nabsn]="0"' 00:09:44.332 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nabsn]=0 00:09:44.332 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.332 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.332 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.332 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nabo]="0"' 00:09:44.332 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nabo]=0 00:09:44.332 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.333 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.333 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.333 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nabspf]="0"' 00:09:44.333 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nabspf]=0 00:09:44.333 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.333 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.333 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.333 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[noiob]="0"' 00:09:44.333 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[noiob]=0 00:09:44.333 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.333 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.333 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.333 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nvmcap]="0"' 00:09:44.333 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nvmcap]=0 00:09:44.333 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.333 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.333 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.333 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[npwg]="0"' 00:09:44.333 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[npwg]=0 00:09:44.333 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.333 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.333 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.333 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[npwa]="0"' 00:09:44.333 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[npwa]=0 00:09:44.333 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.333 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.333 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.333 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[npdg]="0"' 00:09:44.333 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[npdg]=0 00:09:44.333 19:00:01 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:09:44.333 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.333 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.333 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[npda]="0"' 00:09:44.333 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[npda]=0 00:09:44.333 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.333 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.333 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.333 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nows]="0"' 00:09:44.333 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nows]=0 00:09:44.333 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.333 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.333 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:44.333 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[mssrl]="128"' 00:09:44.333 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[mssrl]=128 00:09:44.333 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.333 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.333 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:44.333 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[mcl]="128"' 00:09:44.333 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[mcl]=128 00:09:44.333 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.333 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.333 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:44.333 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[msrc]="127"' 00:09:44.333 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[msrc]=127 00:09:44.333 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.333 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.333 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.333 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nulbaf]="0"' 00:09:44.333 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nulbaf]=0 00:09:44.333 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.333 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.333 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.333 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[anagrpid]="0"' 00:09:44.333 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[anagrpid]=0 00:09:44.333 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.333 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.333 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.333 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nsattr]="0"' 00:09:44.333 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nsattr]=0 00:09:44.333 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.333 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.333 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.333 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nvmsetid]="0"' 00:09:44.333 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nvmsetid]=0 00:09:44.333 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.333 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.333 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
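The lbaf0 through lbaf7 strings recorded next encode each LBA format's metadata size (ms, bytes per block), data-size exponent (lbads, block size = 2^lbads bytes) and relative performance (rp); with flbas=0x7 captured above, format 7 (ms:64 lbads:12, i.e. 4096-byte blocks plus 64 bytes of metadata) is the one flagged "(in use)". A small decoder for one such descriptor string, written against the field layout shown in this trace:

    #!/usr/bin/env bash
    # Decode an "ms:.. lbads:.. rp:.." LBA-format descriptor: block
    # size is 2^lbads bytes, ms is metadata bytes carried per block.
    decode_lbaf() {
        local desc=$1 re='ms:([0-9]+) +lbads:([0-9]+) +rp:([0-9]+)'
        [[ $desc =~ $re ]] || return 1
        echo "block=$((1 << BASH_REMATCH[2]))B meta=${BASH_REMATCH[1]}B rp=${BASH_REMATCH[3]}"
    }

    decode_lbaf 'ms:64 lbads:12 rp:0 (in use)'   # block=4096B meta=64B rp=0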
00:09:44.333 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[endgid]="0"' 00:09:44.333 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[endgid]=0 00:09:44.333 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.333 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.333 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:44.333 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nguid]="00000000000000000000000000000000"' 00:09:44.333 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nguid]=00000000000000000000000000000000 00:09:44.333 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.333 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.333 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:44.333 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[eui64]="0000000000000000"' 00:09:44.333 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[eui64]=0000000000000000 00:09:44.333 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.333 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.333 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:44.333 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:44.333 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:44.333 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.333 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.333 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:44.333 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:44.333 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:44.333 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.333 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.333 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:44.333 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:44.333 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:44.333 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.333 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.333 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:44.333 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:44.333 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:44.333 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.333 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.333 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:09:44.333 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:09:44.333 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:09:44.333 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.333 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.333 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:44.333 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:44.333 19:00:01 nvme_fdp -- 
nvme/functions.sh@23 -- # ng1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:44.333 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.333 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.333 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:44.333 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:44.333 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:44.333 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.333 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.333 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:09:44.333 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:09:44.333 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:09:44.333 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.333 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.333 19:00:01 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng1n1 00:09:44.333 19:00:01 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:44.333 19:00:01 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]] 00:09:44.333 19:00:01 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme1n1 00:09:44.333 19:00:01 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1 00:09:44.333 19:00:01 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme1n1 reg val 00:09:44.333 19:00:01 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:44.333 19:00:01 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()' 00:09:44.333 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.333 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.333 19:00:01 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1 00:09:44.333 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:44.333 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.333 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.333 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:44.333 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsze]="0x17a17a"' 00:09:44.333 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsze]=0x17a17a 00:09:44.333 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.333 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.334 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:44.334 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[ncap]="0x17a17a"' 00:09:44.334 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[ncap]=0x17a17a 00:09:44.334 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.334 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.334 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:44.334 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nuse]="0x17a17a"' 00:09:44.334 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nuse]=0x17a17a 00:09:44.334 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.334 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.334 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:44.334 19:00:01 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme1n1[nsfeat]="0x14"' 00:09:44.334 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsfeat]=0x14 00:09:44.334 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.334 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.334 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:44.334 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nlbaf]="7"' 00:09:44.334 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nlbaf]=7 00:09:44.334 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.334 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.334 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:44.334 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[flbas]="0x7"' 00:09:44.334 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[flbas]=0x7 00:09:44.334 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.334 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.334 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:44.334 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mc]="0x3"' 00:09:44.334 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mc]=0x3 00:09:44.334 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.334 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.334 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:44.334 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dpc]="0x1f"' 00:09:44.334 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dpc]=0x1f 00:09:44.334 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.334 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.334 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.334 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dps]="0"' 00:09:44.334 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dps]=0 00:09:44.334 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.334 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.334 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.334 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nmic]="0"' 00:09:44.334 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nmic]=0 00:09:44.334 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.334 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.334 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.334 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[rescap]="0"' 00:09:44.334 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[rescap]=0 00:09:44.334 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.334 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.334 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.334 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[fpi]="0"' 00:09:44.334 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[fpi]=0 00:09:44.334 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.334 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.334 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:44.334 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dlfeat]="1"' 00:09:44.334 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dlfeat]=1 00:09:44.334 19:00:01 
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.334 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.334 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.334 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawun]="0"' 00:09:44.334 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nawun]=0 00:09:44.334 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.334 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.334 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.334 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawupf]="0"' 00:09:44.334 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nawupf]=0 00:09:44.334 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.334 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.334 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.334 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nacwu]="0"' 00:09:44.334 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nacwu]=0 00:09:44.334 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.334 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.334 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.334 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabsn]="0"' 00:09:44.334 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabsn]=0 00:09:44.334 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.334 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.334 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.334 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabo]="0"' 00:09:44.334 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabo]=0 00:09:44.334 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.334 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.334 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.334 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabspf]="0"' 00:09:44.334 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabspf]=0 00:09:44.334 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.334 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.334 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.334 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[noiob]="0"' 00:09:44.334 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[noiob]=0 00:09:44.334 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.334 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.334 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.334 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmcap]="0"' 00:09:44.334 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nvmcap]=0 00:09:44.334 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.334 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.334 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.334 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwg]="0"' 00:09:44.334 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npwg]=0 00:09:44.334 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.334 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.334 19:00:01 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.334 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwa]="0"' 00:09:44.334 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npwa]=0 00:09:44.334 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.334 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.334 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.334 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npdg]="0"' 00:09:44.334 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npdg]=0 00:09:44.334 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.334 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.334 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.334 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npda]="0"' 00:09:44.334 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npda]=0 00:09:44.334 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.334 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.334 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.334 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nows]="0"' 00:09:44.334 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nows]=0 00:09:44.334 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.334 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.334 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:44.334 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mssrl]="128"' 00:09:44.334 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mssrl]=128 00:09:44.334 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.334 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.334 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:44.334 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mcl]="128"' 00:09:44.334 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mcl]=128 00:09:44.334 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.334 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.334 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:44.334 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[msrc]="127"' 00:09:44.334 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[msrc]=127 00:09:44.334 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.334 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.334 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.334 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nulbaf]="0"' 00:09:44.334 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nulbaf]=0 00:09:44.334 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.334 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.334 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.334 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[anagrpid]="0"' 00:09:44.334 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[anagrpid]=0 00:09:44.334 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.334 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.334 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.334 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsattr]="0"' 00:09:44.334 19:00:01 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme1n1[nsattr]=0 00:09:44.334 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.334 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.334 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.334 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmsetid]="0"' 00:09:44.335 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nvmsetid]=0 00:09:44.335 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.335 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.335 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.335 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[endgid]="0"' 00:09:44.335 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[endgid]=0 00:09:44.335 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.335 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.335 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:44.335 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nguid]="00000000000000000000000000000000"' 00:09:44.335 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nguid]=00000000000000000000000000000000 00:09:44.335 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.335 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.335 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:44.335 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[eui64]="0000000000000000"' 00:09:44.335 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[eui64]=0000000000000000 00:09:44.335 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.335 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.335 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:44.335 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:44.335 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:44.335 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.335 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.335 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:44.335 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:44.335 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:44.335 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.335 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.335 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:44.335 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:44.335 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:44.335 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.335 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.335 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:44.335 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:44.335 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:44.335 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.335 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
00:09:44.335 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # [xtrace condensed] nvme1n1: lbaf4='ms:0 lbads:12 rp:0' lbaf5='ms:8 lbads:12 rp:0' lbaf6='ms:16 lbads:12 rp:0' lbaf7='ms:64 lbads:12 rp:0 (in use)'
00:09:44.335 19:00:01 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[1]=nvme1n1; registered controller: ctrls[nvme1]=nvme1 nvmes[nvme1]=nvme1_ns bdfs[nvme1]=0000:00:10.0 ordered_ctrls[1]=nvme1
00:09:44.335 19:00:01 nvme_fdp -- nvme/functions.sh@47 -- # next controller: /sys/class/nvme/nvme2 (pci 0000:00:12.0, pci_can_use -> 0); ctrl_dev=nvme2; nvme_get nvme2 id-ctrl /dev/nvme2
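The trace above and below is the nvme_get pattern from nvme/functions.sh: enumerate controllers under /sys/class/nvme, run nvme-cli against each device node, and fold the "field : value" output into a bash associative array via eval. A condensed sketch of that shape (illustrative, not the SPDK source; the helper name parse_id_output is made up, and reading the PCI BDF from the sysfs `address` attribute is an assumption for PCIe controllers):

#!/usr/bin/env bash
declare -A ctrls bdfs

parse_id_output() { # $1 = target array name; remaining args = command to run
    local ref=$1 reg val
    shift
    # nvme-cli prints one "field : value" record per line; split on the
    # first ':' -- the last read variable keeps any further colons intact.
    while IFS=: read -r reg val; do
        reg=${reg//[[:space:]]/}          # field name with padding stripped
        val=${val# }                      # drop the separator's single space
        [[ -n $reg && -n $val ]] && eval "${ref}[\$reg]=\$val"
    done < <("$@")
}

for ctrl in /sys/class/nvme/nvme*; do
    [[ -e $ctrl ]] || continue
    dev=${ctrl##*/}                                # e.g. nvme2
    pci=$(cat "$ctrl/address" 2>/dev/null)         # PCI BDF (assumed attribute)
    declare -gA "$dev=()"                          # e.g. nvme2=()
    parse_id_output "$dev" nvme id-ctrl "/dev/$dev"
    ctrls[$dev]=$dev
    bdfs[$dev]=$pci
done

Note how eval is what lets a single function populate differently named global arrays (nvme1, nvme2, ng2n1, ...), which is why the trace repeats the same @22/@23 lines for every register.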
00:09:44.335 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # [xtrace condensed] nvme2 id-ctrl: vid=0x1b36 ssvid=0x1af4 sn='12342 ' mn='QEMU NVMe Ctrl ' fr='8.0.0 ' rab=6 ieee=525400 cmic=0 mdts=7 cntlid=0 ver=0x10400 rtd3r=0 rtd3e=0 oaes=0x100 ctratt=0x8000 rrls=0 cntrltype=1 fguid=00000000-0000-0000-0000-000000000000 crdt1=0 crdt2=0 crdt3=0 nvmsr=0 vwci=0 mec=0 oacs=0x12a acl=3 aerl=3 frmw=0x3 lpa=0x7 elpe=0 npss=0 avscc=0 apsta=0 wctemp=343 cctemp=373 mtfa=0 hmpre=0
00:09:44.336 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # [xtrace condensed] nvme2 id-ctrl: hmmin=0 tnvmcap=0 unvmcap=0 rpmbs=0 edstt=0 dsto=0 fwug=0 kas=0 hctma=0 mntmt=0 mxtmt=0 sanicap=0 hmminds=0 hmmaxd=0 nsetidmax=0 endgidmax=0 anatt=0 anacap=0 anagrpmax=0 nanagrpid=0 pels=0 domainid=0 megcap=0 sqes=0x66 cqes=0x44 maxcmd=0 nn=256 oncs=0x15d
00:09:44.337 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # [xtrace condensed] nvme2 id-ctrl: fuses=0 fna=0 vwc=0x7 awun=0 awupf=0 icsvscc=0 nwpc=0 acwu=0 ocfs=0x3 sgls=0x1 mnan=0 maxdna=0 maxcna=0 subnqn=nqn.2019-08.org.qemu:12342 ioccsz=0 iorcsz=0 icdoff=0 fcatt=0 msdbd=0 ofcs=0 ps0='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' rwt='0 rwl:0 idle_power:- active_power:-' active_power_workload='-'
00:09:44.338 19:00:01 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns; namespace scan for nvme2: found /sys/class/nvme/nvme2/ng2n1; ns_dev=ng2n1; nvme_get ng2n1 id-ns /dev/ng2n1
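The namespace scan at functions.sh@54 uses an extglob pattern so a single loop matches both the generic character-device nodes (ng2n1, ng2n2, ...) and the block-device nodes (nvme2n1, ...) under the controller's sysfs directory. A runnable sketch of just that globbing step, assuming the same sysfs layout as the test VM:

shopt -s extglob nullglob
ctrl=/sys/class/nvme/nvme2
# ${ctrl##*nvme} -> "2", ${ctrl##*/} -> "nvme2", so the pattern expands to
# /sys/class/nvme/nvme2/@("ng2"|"nvme2n")* and matches ng2n1, ng2n2, nvme2n1, ...
for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do
    ns_dev=${ns##*/}
    echo "namespace node: $ns_dev"
    # the traced script then runs: nvme id-ns "/dev/$ns_dev" and parses it
done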
00:09:44.338 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # [xtrace condensed] ng2n1 id-ns: nsze=0x100000 ncap=0x100000 nuse=0x100000 nsfeat=0x14 nlbaf=7 flbas=0x4 mc=0x3 dpc=0x1f dps=0 nmic=0 rescap=0 fpi=0 dlfeat=1 nawun=0 nawupf=0 nacwu=0 nabsn=0 nabo=0 nabspf=0 noiob=0 nvmcap=0 npwg=0 npwa=0 npdg=0 npda=0 nows=0 mssrl=128 mcl=128 msrc=127 nulbaf=0 anagrpid=0 nsattr=0 nvmsetid=0 endgid=0 nguid=00000000000000000000000000000000 eui64=0000000000000000
00:09:44.339 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # [xtrace condensed] ng2n1 LBA formats: lbaf0='ms:0 lbads:9 rp:0' lbaf1='ms:8 lbads:9 rp:0' lbaf2='ms:16 lbads:9 rp:0' lbaf3='ms:64 lbads:9 rp:0' lbaf4='ms:0 lbads:12 rp:0 (in use)' lbaf5='ms:8 lbads:12 rp:0' lbaf6='ms:16 lbads:12 rp:0' lbaf7='ms:64 lbads:12 rp:0'
00:09:44.340 19:00:01 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[1]=ng2n1; namespace scan continues: found /sys/class/nvme/nvme2/ng2n2; ns_dev=ng2n2; nvme_get ng2n2 id-ns /dev/ng2n2
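Putting the parsed ng2n1 fields together: nsze=0x100000 LBAs with lbaf4 in use (lbads:12, i.e. 4096-byte blocks) works out to a 4 GiB namespace. A quick check in bash arithmetic, using the values from the trace above:

nsze=0x100000    # namespace size in LBAs, from ng2n1[nsze]
lbads=12         # lbaf4 (in use) -> 4096-byte data blocks
bytes=$(( nsze * (1 << lbads) ))
echo "$bytes bytes = $(( bytes >> 30 )) GiB"   # 4294967296 bytes = 4 GiB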
00:09:44.340 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # [xtrace condensed] ng2n2 id-ns: nsze=0x100000 ncap=0x100000 nuse=0x100000 nsfeat=0x14 nlbaf=7 flbas=0x4 mc=0x3 dpc=0x1f dps=0 nmic=0 rescap=0 fpi=0 dlfeat=1 nawun=0 nawupf=0 nacwu=0 nabsn=0 nabo=0 nabspf=0 noiob=0 nvmcap=0 npwg=0 npwa=0 npdg=0 npda=0 nows=0 mssrl=128 mcl=128 msrc=127
19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.341 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nulbaf]="0"' 00:09:44.341 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nulbaf]=0 00:09:44.341 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.341 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.341 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.341 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[anagrpid]="0"' 00:09:44.341 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[anagrpid]=0 00:09:44.341 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.341 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.341 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.341 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nsattr]="0"' 00:09:44.341 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nsattr]=0 00:09:44.341 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.341 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.341 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.341 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nvmsetid]="0"' 00:09:44.341 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nvmsetid]=0 00:09:44.341 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.341 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.341 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.341 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[endgid]="0"' 00:09:44.341 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[endgid]=0 00:09:44.341 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.341 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.341 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:44.341 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nguid]="00000000000000000000000000000000"' 00:09:44.341 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nguid]=00000000000000000000000000000000 00:09:44.341 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.341 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.341 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:44.341 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[eui64]="0000000000000000"' 00:09:44.341 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[eui64]=0000000000000000 00:09:44.341 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.341 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.341 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:44.341 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:44.341 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:44.341 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.341 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.341 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:44.341 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:44.341 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:44.341 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.341 19:00:01 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:44.341 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:44.341 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:44.341 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:44.341 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.341 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.341 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:44.341 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:44.341 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:44.341 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.341 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.341 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:44.341 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:44.341 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:44.341 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.341 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.341 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:44.341 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:44.341 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:44.341 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.341 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.341 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:44.341 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:44.341 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:44.341 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.341 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.341 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:44.341 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:44.341 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:44.341 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.341 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.341 19:00:01 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n2 00:09:44.341 19:00:01 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:44.341 19:00:01 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n3 ]] 00:09:44.341 19:00:01 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng2n3 00:09:44.341 19:00:01 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng2n3 id-ns /dev/ng2n3 00:09:44.341 19:00:01 nvme_fdp -- nvme/functions.sh@17 -- # local ref=ng2n3 reg val 00:09:44.341 19:00:01 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:44.341 19:00:01 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng2n3=()' 00:09:44.341 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.341 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.341 19:00:01 nvme_fdp -- nvme/functions.sh@16 -- # 
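For readability, here is a minimal bash sketch of the nvme_get cycle the trace above keeps repeating (functions.sh@16-23): split each "field : value" line of nvme-cli's id-ns output on the colon and eval the pair into a global associative array named by the first argument. The array name, eval pattern, and call shape are copied from the trace; the whitespace trimming and exact argument handling are assumptions, so treat this as an approximation rather than the canonical SPDK source.

    #!/usr/bin/env bash
    # Sketch of the nvme_get parse cycle visible in the trace above.
    nvme_get() {
        local ref=$1 reg val          # functions.sh@17: ref=ng2n2, ng2n3, ...
        shift                         # functions.sh@18
        local -gA "$ref=()"           # functions.sh@20: global assoc array
        while IFS=: read -r reg val; do           # functions.sh@21
            [[ -n $val ]] || continue             # @22: skip the banner line
            reg=${reg//[[:space:]]/}              # trimming details are assumed
            val=${val# }
            eval "${ref}[$reg]=\"$val\""          # @23: e.g. ng2n2[nsze]="0x100000"
        done < <(/usr/local/src/nvme-cli/nvme "$@")   # @16: nvme id-ns /dev/ng2n2
    }
    # Usage mirroring the trace:
    #   nvme_get ng2n2 id-ns /dev/ng2n2; echo "${ng2n2[nsze]}"   # -> 0x100000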
[trace condensed] the same read/eval cycle fills ng2n3; every field matches ng2n2 above (nsze=ncap=nuse=0x100000, nsfeat=0x14, nlbaf=7, flbas=0x4, mc=0x3, dpc=0x1f, dlfeat=1, mssrl=128, mcl=128, msrc=127, remaining registers 0, all-zero nguid/eui64, and the identical lbaf0-lbaf7 table with lbaf4 in use)
00:09:44.611 19:00:01 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n3
00:09:44.611 19:00:01 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*
00:09:44.611 19:00:01 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]]
00:09:44.611 19:00:01 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n1
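The surrounding loop (functions.sh@54-58) is worth restating compactly: one extglob alternation globs both the generic-char nodes (ng2nN) and the block nodes (nvme2nN) of the controller, and each parsed array is indexed by namespace number. A self-contained sketch, with the glob and assignments copied from the trace and the setup lines (shopt, ctrl, declare) assumed:

    #!/usr/bin/env bash
    # Namespace enumeration as reconstructed from functions.sh@54-58.
    shopt -s extglob nullglob          # assumed: @(...) patterns need extglob
    ctrl=/sys/class/nvme/nvme2         # controller sysfs dir seen in the trace
    declare -A _ctrl_ns
    # "ng${ctrl##*nvme}" -> ng2, "${ctrl##*/}n" -> nvme2n: match ng2n* and nvme2n*
    for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do
        [[ -e $ns ]] || continue                  # functions.sh@55
        ns_dev=${ns##*/}                          # functions.sh@56: ng2n3, nvme2n1, ...
        nvme_get "$ns_dev" id-ns "/dev/$ns_dev"   # functions.sh@57 (sketch above)
        _ctrl_ns[${ns##*n}]=$ns_dev               # functions.sh@58: keyed by ns number
    done
    # As the trace order shows, ng2nN sorts before nvme2nN, and both share a
    # namespace number, so the block node ends up as the surviving _ctrl_ns entry.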
00:09:44.611 19:00:01 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1
00:09:44.611 19:00:01 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1
[trace condensed] nvme2n1, the block-device node for namespace 1, parses to the same register set as the ng* generic nodes above (nsze=ncap=nuse=0x100000, nsfeat=0x14, nlbaf=7, flbas=0x4, mc=0x3, dpc=0x1f, dlfeat=1, mssrl=128, mcl=128, msrc=127, remaining registers 0, all-zero nguid/eui64, identical lbaf0-lbaf7 table with lbaf4 in use)
00:09:44.612 19:00:01 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1
00:09:44.612 19:00:01 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*
00:09:44.612 19:00:01 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n2 ]]
00:09:44.612 19:00:01 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n2
00:09:44.612 19:00:01 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n2 id-ns /dev/nvme2n2
00:09:44.612 19:00:01 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2
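A worked example of what these registers mean in bytes: flbas=0x4 selects LBA format 4, whose descriptor reads lbads:12 (2^12 = 4096-byte blocks), and nsze=0x100000 blocks therefore give a 4 GiB namespace. The snippet below is a hypothetical helper, not part of the test; only the field values are copied from the trace.

    #!/usr/bin/env bash
    # Hypothetical: derive the namespace size from the fields stored above.
    declare -A nvme2n1=([nsze]=0x100000 [flbas]=0x4 [lbaf4]='ms:0 lbads:12 rp:0 (in use)')
    fmt=$(( ${nvme2n1[flbas]} & 0xf ))            # low nibble picks the LBA format
    lbaf=${nvme2n1[lbaf$fmt]}                     # -> 'ms:0 lbads:12 rp:0 (in use)'
    lbads=${lbaf#*lbads:}; lbads=${lbads%% *}     # extract the lbads exponent, 12
    echo "$(( ${nvme2n1[nsze]} * (1 << lbads) )) bytes"   # 4294967296 bytes = 4 GiB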
[trace condensed] nvme2n2 parses identically (nsze=ncap=nuse=0x100000, nsfeat=0x14, nlbaf=7, flbas=0x4, mc=0x3, dpc=0x1f, dlfeat=1, mssrl=128, mcl=128, msrc=127, remaining registers 0, all-zero nguid/eui64); the trace records lbaf0 through lbaf4, with lbaf4 'ms:0 lbads:12 rp:0 (in use)', and continues through the remaining LBA formats:
nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.614 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:44.614 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:44.614 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:44.614 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.614 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.614 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:44.614 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:44.614 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:44.614 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.614 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.614 19:00:01 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n2 00:09:44.614 19:00:01 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:44.614 19:00:01 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n3 ]] 00:09:44.614 19:00:01 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n3 00:09:44.614 19:00:01 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n3 id-ns /dev/nvme2n3 00:09:44.614 19:00:01 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n3 reg val 00:09:44.614 19:00:01 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:44.614 19:00:01 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n3=()' 00:09:44.614 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.614 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.614 19:00:01 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3 00:09:44.614 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:44.614 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.614 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.614 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:44.614 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsze]="0x100000"' 00:09:44.614 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsze]=0x100000 00:09:44.614 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.614 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.614 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:44.614 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[ncap]="0x100000"' 00:09:44.614 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[ncap]=0x100000 00:09:44.614 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.614 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.614 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:44.614 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nuse]="0x100000"' 00:09:44.614 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nuse]=0x100000 00:09:44.614 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.614 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.614 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:44.614 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsfeat]="0x14"' 00:09:44.614 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsfeat]=0x14 00:09:44.614 19:00:01 
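[Annotation] The trace above shows the shape of the nvme_get helper: it declares a global associative array named after the device (local -gA 'nvme2n3=()'), runs nvme-cli (here /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3), and evals every "reg : val" pair of the output into the array, skipping lines with an empty value. A minimal sketch reconstructed from the trace (hypothetical; whitespace handling in the real functions.sh may differ):

    # Sketch of nvme_get as implied by the trace; not the verbatim script.
    nvme_get() {
        local ref=$1 reg val
        shift
        local -gA "$ref=()"                  # e.g. declare -gA nvme2n3=()
        while IFS=: read -r reg val; do
            [[ -n $val ]] || continue        # the [[ -n ... ]] guard in the trace
            reg=${reg//[[:space:]]/}         # key such as flbas, mc, dpc
            eval "${ref}[${reg}]=\"${val# }\""   # e.g. nvme2n3[flbas]="0x4"
        done < <("$@")                       # e.g. nvme id-ns /dev/nvme2n3
    }

Called as nvme_get nvme2n3 id-ns /dev/nvme2n3; afterwards fields are reachable as ${nvme2n3[nsze]}, ${nvme2n3[flbas]}, and so on.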
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.614 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.614 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:44.614 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nlbaf]="7"' 00:09:44.614 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nlbaf]=7 00:09:44.614 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.614 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.614 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:44.614 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[flbas]="0x4"' 00:09:44.614 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[flbas]=0x4 00:09:44.614 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.614 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.614 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:44.614 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mc]="0x3"' 00:09:44.614 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mc]=0x3 00:09:44.614 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.614 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.614 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:44.614 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dpc]="0x1f"' 00:09:44.614 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dpc]=0x1f 00:09:44.614 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.614 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.614 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.614 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dps]="0"' 00:09:44.614 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dps]=0 00:09:44.614 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.614 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.614 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.614 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nmic]="0"' 00:09:44.614 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nmic]=0 00:09:44.614 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.614 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.614 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.614 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[rescap]="0"' 00:09:44.614 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[rescap]=0 00:09:44.614 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.614 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.614 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.614 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[fpi]="0"' 00:09:44.614 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[fpi]=0 00:09:44.614 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.614 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.614 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:44.614 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dlfeat]="1"' 00:09:44.614 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dlfeat]=1 00:09:44.614 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.614 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.614 19:00:01 nvme_fdp -- 
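[Annotation] Worth decoding: flbas=0x4 means LBA format 4 is the active one, and the lbaf4 descriptor printed for each namespace ("ms:0 lbads:12 rp:0 (in use)") confirms it. lbads is log2 of the data block size, so these namespaces use 4096-byte blocks with no per-block metadata. A quick arithmetic check (hypothetical snippet, not part of the test):

    # FLBAS low nibble indexes the active LBA format (0x4 -> lbaf4);
    # LBADS is log2(block size), so lbads:12 -> 4096-byte blocks.
    flbas=0x4
    lbads=12
    echo "active lbaf: $((flbas & 0xf)), block size: $((1 << lbads)) bytes"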
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.614 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawun]="0"' 00:09:44.614 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nawun]=0 00:09:44.614 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.614 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.614 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.614 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawupf]="0"' 00:09:44.614 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nawupf]=0 00:09:44.614 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.614 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.614 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.614 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nacwu]="0"' 00:09:44.614 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nacwu]=0 00:09:44.614 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.614 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.614 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.614 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabsn]="0"' 00:09:44.614 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabsn]=0 00:09:44.614 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.614 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.614 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.614 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabo]="0"' 00:09:44.614 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabo]=0 00:09:44.614 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.614 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.614 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.614 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabspf]="0"' 00:09:44.614 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabspf]=0 00:09:44.614 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.615 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.615 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.615 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[noiob]="0"' 00:09:44.615 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[noiob]=0 00:09:44.615 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.615 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.615 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.615 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmcap]="0"' 00:09:44.615 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nvmcap]=0 00:09:44.615 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.615 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.615 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.615 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwg]="0"' 00:09:44.615 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npwg]=0 00:09:44.615 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.615 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.615 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.615 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwa]="0"' 00:09:44.615 19:00:01 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme2n3[npwa]=0 00:09:44.615 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.615 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.615 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.615 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npdg]="0"' 00:09:44.615 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npdg]=0 00:09:44.615 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.615 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.615 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.615 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npda]="0"' 00:09:44.615 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npda]=0 00:09:44.615 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.615 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.615 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.615 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nows]="0"' 00:09:44.615 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nows]=0 00:09:44.615 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.615 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.615 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:44.615 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mssrl]="128"' 00:09:44.615 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mssrl]=128 00:09:44.615 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.615 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.615 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:44.615 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mcl]="128"' 00:09:44.615 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mcl]=128 00:09:44.615 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.615 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.615 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:44.615 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[msrc]="127"' 00:09:44.615 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[msrc]=127 00:09:44.615 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.615 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.615 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.615 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nulbaf]="0"' 00:09:44.615 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nulbaf]=0 00:09:44.615 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.615 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.615 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.615 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[anagrpid]="0"' 00:09:44.615 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[anagrpid]=0 00:09:44.615 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.615 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.615 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.615 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsattr]="0"' 00:09:44.615 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsattr]=0 00:09:44.615 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.615 19:00:01 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:44.615 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.615 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmsetid]="0"' 00:09:44.615 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nvmsetid]=0 00:09:44.615 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.615 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.615 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.615 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[endgid]="0"' 00:09:44.615 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[endgid]=0 00:09:44.615 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.615 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.615 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:44.615 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nguid]="00000000000000000000000000000000"' 00:09:44.615 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nguid]=00000000000000000000000000000000 00:09:44.615 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.615 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.615 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:44.615 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[eui64]="0000000000000000"' 00:09:44.615 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[eui64]=0000000000000000 00:09:44.615 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.615 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.615 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:44.615 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:44.615 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:44.615 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.615 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.615 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:44.615 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:44.615 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:44.615 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.615 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.615 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:44.615 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:44.615 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:44.615 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.615 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.615 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:44.615 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:44.615 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:44.615 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.615 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.615 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:44.615 19:00:01 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:44.615 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:44.615 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.615 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.615 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:44.615 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:44.615 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:44.615 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.615 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.615 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:44.615 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:44.615 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:44.615 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.615 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.615 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:44.616 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:44.616 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:44.616 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.616 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.616 19:00:01 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n3 00:09:44.616 19:00:01 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2 00:09:44.616 19:00:01 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns 00:09:44.616 19:00:01 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:12.0 00:09:44.616 19:00:01 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2 00:09:44.616 19:00:01 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:44.616 19:00:01 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]] 00:09:44.616 19:00:01 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:13.0 00:09:44.616 19:00:01 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:13.0 00:09:44.616 19:00:01 nvme_fdp -- scripts/common.sh@18 -- # local i 00:09:44.616 19:00:01 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:09:44.616 19:00:01 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:44.616 19:00:01 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:09:44.616 19:00:01 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme3 00:09:44.616 19:00:01 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3 00:09:44.616 19:00:01 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme3 reg val 00:09:44.616 19:00:01 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:44.616 19:00:01 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme3=()' 00:09:44.616 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.616 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.616 19:00:01 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3 00:09:44.616 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:44.616 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.616 19:00:01 nvme_fdp -- 
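[Annotation] Here the trace finishes nvme2's last namespace and registers the controller in the script's bookkeeping arrays (functions.sh@60-63) before the outer loop advances to nvme3 at PCI 0000:00:13.0. The pattern, with array names taken directly from the trace (sketch):

    # Per-controller bookkeeping, as seen at functions.sh@60-63:
    ctrls["$ctrl_dev"]=$ctrl_dev                 # ctrls[nvme2]=nvme2 (name of the id-ctrl array)
    nvmes["$ctrl_dev"]=${ctrl_dev}_ns            # name of the namespace map, e.g. nvme2_ns
    bdfs["$ctrl_dev"]=$pci                       # PCI address, e.g. 0000:00:12.0
    ordered_ctrls[${ctrl_dev/nvme/}]=$ctrl_dev   # indexed slot, e.g. slot 2 -> nvme2

The indirection matters: ctrls and nvmes store array *names*, which later helpers dereference with bash namerefs (local -n), as the end of this section shows.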
nvme/functions.sh@21 -- # read -r reg val 00:09:44.616 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:44.616 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vid]="0x1b36"' 00:09:44.616 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36 00:09:44.616 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.616 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.616 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:44.616 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"' 00:09:44.616 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4 00:09:44.616 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.616 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.616 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12343 ]] 00:09:44.616 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sn]="12343 "' 00:09:44.616 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sn]='12343 ' 00:09:44.616 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.616 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.616 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:44.616 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl "' 00:09:44.616 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl ' 00:09:44.616 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.616 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.616 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:44.616 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0 "' 00:09:44.616 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 ' 00:09:44.616 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.616 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.616 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:44.616 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"' 00:09:44.616 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rab]=6 00:09:44.616 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.616 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.616 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:44.616 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ieee]="525400"' 00:09:44.616 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ieee]=525400 00:09:44.616 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.616 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.616 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x2 ]] 00:09:44.616 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0x2"' 00:09:44.616 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cmic]=0x2 00:09:44.616 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.616 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.616 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:44.616 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"' 00:09:44.616 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mdts]=7 00:09:44.616 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.616 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.616 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
00:09:44.616 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"' 00:09:44.616 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cntlid]=0 00:09:44.616 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.616 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.616 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:44.616 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"' 00:09:44.616 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400 00:09:44.616 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.616 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.616 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.616 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3r]="0"' 00:09:44.616 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0 00:09:44.616 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.616 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.616 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.616 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"' 00:09:44.616 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0 00:09:44.616 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.616 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.616 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:44.616 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"' 00:09:44.616 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oaes]=0x100 00:09:44.616 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.616 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.616 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x88010 ]] 00:09:44.616 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x88010"' 00:09:44.616 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x88010 00:09:44.616 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.616 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.616 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.616 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"' 00:09:44.616 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rrls]=0 00:09:44.616 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.616 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.616 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:44.616 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"' 00:09:44.616 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1 00:09:44.616 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.616 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.616 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:44.616 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:44.616 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 00:09:44.616 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.616 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.616 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.616 19:00:01 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"' 00:09:44.616 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt1]=0 00:09:44.616 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.616 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.616 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.616 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt2]="0"' 00:09:44.616 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt2]=0 00:09:44.616 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.616 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.616 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.616 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt3]="0"' 00:09:44.616 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt3]=0 00:09:44.616 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.616 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.616 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.616 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nvmsr]="0"' 00:09:44.616 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0 00:09:44.616 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.616 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.616 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.616 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vwci]="0"' 00:09:44.616 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vwci]=0 00:09:44.616 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.616 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.616 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.616 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mec]="0"' 00:09:44.616 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mec]=0 00:09:44.616 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.616 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.616 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:44.616 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oacs]="0x12a"' 00:09:44.616 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a 00:09:44.616 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.616 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.616 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:44.616 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[acl]="3"' 00:09:44.617 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[acl]=3 00:09:44.617 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.617 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.617 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:44.617 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[aerl]="3"' 00:09:44.617 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[aerl]=3 00:09:44.617 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.617 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.617 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:44.617 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[frmw]="0x3"' 00:09:44.617 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[frmw]=0x3 00:09:44.617 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.617 
19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.617 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:44.617 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"' 00:09:44.617 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7 00:09:44.617 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.617 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.617 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.617 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"' 00:09:44.617 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[elpe]=0 00:09:44.617 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.617 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.617 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.617 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[npss]="0"' 00:09:44.617 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[npss]=0 00:09:44.617 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.617 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.617 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.617 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"' 00:09:44.617 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[avscc]=0 00:09:44.617 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.617 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.617 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.617 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"' 00:09:44.617 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[apsta]=0 00:09:44.617 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.617 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.617 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:44.617 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"' 00:09:44.617 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 00:09:44.617 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.617 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.617 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:44.617 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"' 00:09:44.617 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cctemp]=373 00:09:44.617 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.617 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.617 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.617 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"' 00:09:44.617 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 00:09:44.617 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.617 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.617 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.617 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"' 00:09:44.617 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmpre]=0 00:09:44.617 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.617 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.617 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.617 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # 
eval 'nvme3[hmmin]="0"' 00:09:44.617 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 00:09:44.617 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.617 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.617 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.617 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[tnvmcap]="0"' 00:09:44.617 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 00:09:44.617 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.617 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.617 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.617 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 00:09:44.617 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 00:09:44.617 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.617 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.617 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.617 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:09:44.617 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:09:44.617 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.617 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.617 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.617 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:09:44.617 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:09:44.617 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.617 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.617 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.617 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:09:44.617 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[dsto]=0 00:09:44.617 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.617 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.617 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.617 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:09:44.617 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:09:44.617 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.617 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.617 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.617 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:09:44.617 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:09:44.617 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.617 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.617 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.617 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:09:44.617 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:09:44.617 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.617 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.617 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.617 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:09:44.617 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:09:44.617 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.617 19:00:01 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:44.617 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.617 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:09:44.617 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:09:44.617 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.617 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.617 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.617 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:09:44.617 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:09:44.617 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.617 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.617 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.617 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmminds]="0"' 00:09:44.617 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:09:44.617 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.617 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.617 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.617 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:09:44.617 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:09:44.617 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.617 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.617 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.617 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:09:44.617 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 00:09:44.617 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.617 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.617 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:44.617 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="1"' 00:09:44.617 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[endgidmax]=1 00:09:44.617 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.617 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.617 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.617 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:09:44.617 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:09:44.617 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.617 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.617 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.617 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:09:44.617 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:09:44.617 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.617 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.617 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.617 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:09:44.617 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:09:44.617 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.617 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.617 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.617 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- 
# eval 'nvme3[nanagrpid]="0"' 00:09:44.617 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:09:44.617 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.617 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.618 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.618 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:09:44.618 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:09:44.618 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.618 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.618 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.618 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:09:44.618 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:09:44.618 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.618 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.618 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.618 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 00:09:44.618 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:09:44.618 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.618 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.618 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:44.618 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:09:44.618 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:09:44.618 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.618 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.618 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:44.618 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:09:44.618 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:09:44.618 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.618 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.618 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.618 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:09:44.618 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:09:44.618 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.618 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.618 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:44.618 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:09:44.618 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:09:44.618 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.618 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.618 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:44.618 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:09:44.618 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:09:44.618 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.618 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.618 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.618 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:09:44.618 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:09:44.618 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 
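[Annotation] Two id-ctrl fields parsed here are packed nibble pairs: sqes=0x66 and cqes=0x44 encode the maximum (high nibble) and required (low nibble) queue entry sizes as powers of two, i.e. the standard 64-byte submission and 16-byte completion entries. Decoded (hypothetical snippet):

    # SQES/CQES: high nibble = log2(max entry size), low nibble = log2(required).
    sqes=0x66; cqes=0x44
    echo "SQ entry: $((1 << (sqes & 0xf))) bytes (max $((1 << (sqes >> 4))))"
    echo "CQ entry: $((1 << (cqes & 0xf))) bytes (max $((1 << (cqes >> 4))))"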
00:09:44.618 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.618 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.618 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:09:44.618 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:09:44.618 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.618 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.618 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:44.618 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:09:44.618 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:09:44.618 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.618 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.618 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.618 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:09:44.618 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:09:44.618 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.618 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.618 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.618 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:09:44.618 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:09:44.618 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.618 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.618 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.618 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:09:44.618 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[icsvscc]=0 00:09:44.618 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.618 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.618 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.618 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:09:44.618 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:09:44.618 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.618 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.618 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.618 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:09:44.618 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:09:44.618 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.618 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.618 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:44.618 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:09:44.618 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:09:44.618 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.618 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.618 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:44.618 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:09:44.618 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:09:44.618 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.618 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.618 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.618 19:00:01 nvme_fdp -- nvme/functions.sh@23 
-- # eval 'nvme3[mnan]="0"' 00:09:44.618 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:09:44.618 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.618 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.618 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.618 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:09:44.618 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:09:44.618 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.618 19:00:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.618 19:00:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.618 19:00:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:09:44.618 19:00:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:09:44.618 19:00:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.618 19:00:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.618 19:00:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 00:09:44.618 19:00:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:09:44.618 19:00:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:09:44.618 19:00:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.618 19:00:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.618 19:00:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.618 19:00:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:09:44.618 19:00:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:09:44.618 19:00:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.618 19:00:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.618 19:00:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.618 19:00:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:09:44.618 19:00:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:09:44.618 19:00:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.618 19:00:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.618 19:00:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.618 19:00:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:09:44.618 19:00:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:09:44.618 19:00:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.618 19:00:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.618 19:00:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.618 19:00:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:09:44.618 19:00:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:09:44.618 19:00:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.618 19:00:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.618 19:00:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.618 19:00:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:09:44.618 19:00:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:09:44.618 19:00:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.618 19:00:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.618 19:00:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.618 19:00:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:09:44.618 19:00:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 
00:09:44.618 19:00:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.618 19:00:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.618 19:00:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:44.618 19:00:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:44.618 19:00:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:44.618 19:00:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.618 19:00:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.618 19:00:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:44.618 19:00:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:44.618 19:00:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:44.618 19:00:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.618 19:00:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.618 19:00:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:44.618 19:00:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:09:44.618 19:00:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:09:44.618 19:00:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:44.618 19:00:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.619 19:00:02 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 00:09:44.619 19:00:02 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:09:44.619 19:00:02 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns 00:09:44.619 19:00:02 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0 00:09:44.619 19:00:02 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:09:44.619 19:00:02 nvme_fdp -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:09:44.619 19:00:02 nvme_fdp -- nvme/nvme_fdp.sh@13 -- # get_ctrl_with_feature fdp 00:09:44.619 19:00:02 nvme_fdp -- nvme/functions.sh@204 -- # local _ctrls feature=fdp 00:09:44.619 19:00:02 nvme_fdp -- nvme/functions.sh@206 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:09:44.619 19:00:02 nvme_fdp -- nvme/functions.sh@206 -- # get_ctrls_with_feature fdp 00:09:44.619 19:00:02 nvme_fdp -- nvme/functions.sh@192 -- # (( 4 == 0 )) 00:09:44.619 19:00:02 nvme_fdp -- nvme/functions.sh@194 -- # local ctrl feature=fdp 00:09:44.619 19:00:02 nvme_fdp -- nvme/functions.sh@196 -- # type -t ctrl_has_fdp 00:09:44.619 19:00:02 nvme_fdp -- nvme/functions.sh@196 -- # [[ function == function ]] 00:09:44.619 19:00:02 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:44.619 19:00:02 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme1 00:09:44.619 19:00:02 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme1 ctratt 00:09:44.619 19:00:02 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme1 00:09:44.619 19:00:02 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme1 00:09:44.619 19:00:02 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme1 ctratt 00:09:44.619 19:00:02 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=ctratt 00:09:44.619 19:00:02 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:09:44.619 19:00:02 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:09:44.619 19:00:02 nvme_fdp -- nvme/functions.sh@75 
-- # [[ -n 0x8000 ]] 00:09:44.619 19:00:02 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:09:44.619 19:00:02 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:09:44.619 19:00:02 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:09:44.619 19:00:02 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:44.619 19:00:02 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme0 00:09:44.619 19:00:02 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme0 ctratt 00:09:44.619 19:00:02 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme0 00:09:44.619 19:00:02 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme0 00:09:44.619 19:00:02 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme0 ctratt 00:09:44.619 19:00:02 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=ctratt 00:09:44.619 19:00:02 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:09:44.619 19:00:02 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:09:44.619 19:00:02 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:09:44.619 19:00:02 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:09:44.619 19:00:02 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:09:44.619 19:00:02 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:09:44.619 19:00:02 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:44.619 19:00:02 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme3 00:09:44.619 19:00:02 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme3 ctratt 00:09:44.619 19:00:02 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme3 00:09:44.619 19:00:02 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme3 00:09:44.619 19:00:02 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme3 ctratt 00:09:44.619 19:00:02 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=ctratt 00:09:44.619 19:00:02 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme3 ]] 00:09:44.619 19:00:02 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3 00:09:44.619 19:00:02 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x88010 ]] 00:09:44.619 19:00:02 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x88010 00:09:44.619 19:00:02 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x88010 00:09:44.619 19:00:02 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:09:44.619 19:00:02 nvme_fdp -- nvme/functions.sh@199 -- # echo nvme3 00:09:44.619 19:00:02 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:44.619 19:00:02 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme2 00:09:44.619 19:00:02 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme2 ctratt 00:09:44.619 19:00:02 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme2 00:09:44.619 19:00:02 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme2 00:09:44.619 19:00:02 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme2 ctratt 00:09:44.619 19:00:02 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=ctratt 00:09:44.619 19:00:02 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme2 ]] 00:09:44.619 19:00:02 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2 00:09:44.619 19:00:02 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:09:44.619 19:00:02 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:09:44.619 19:00:02 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:09:44.619 19:00:02 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:09:44.619 19:00:02 nvme_fdp -- nvme/functions.sh@207 -- 
# (( 1 > 0 )) 00:09:44.619 19:00:02 nvme_fdp -- nvme/functions.sh@208 -- # echo nvme3 00:09:44.619 19:00:02 nvme_fdp -- nvme/functions.sh@209 -- # return 0 00:09:44.619 19:00:02 nvme_fdp -- nvme/nvme_fdp.sh@13 -- # ctrl=nvme3 00:09:44.619 19:00:02 nvme_fdp -- nvme/nvme_fdp.sh@14 -- # bdf=0000:00:13.0 00:09:44.619 19:00:02 nvme_fdp -- nvme/nvme_fdp.sh@16 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:09:45.190 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:45.453 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:09:45.453 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:09:45.713 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:09:45.713 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:09:45.713 19:00:03 nvme_fdp -- nvme/nvme_fdp.sh@18 -- # run_test nvme_flexible_data_placement /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0' 00:09:45.713 19:00:03 nvme_fdp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:09:45.713 19:00:03 nvme_fdp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:45.713 19:00:03 nvme_fdp -- common/autotest_common.sh@10 -- # set +x 00:09:45.713 ************************************ 00:09:45.713 START TEST nvme_flexible_data_placement 00:09:45.713 ************************************ 00:09:45.713 19:00:03 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0' 00:09:45.975 Initializing NVMe Controllers 00:09:45.975 Attaching to 0000:00:13.0 00:09:45.975 Controller supports FDP Attached to 0000:00:13.0 00:09:45.975 Namespace ID: 1 Endurance Group ID: 1 00:09:45.975 Initialization complete. 
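The controller selection that just ran hinges on a single bit: get_ctrls_with_feature fdp keeps a controller only when bit 19 (Flexible Data Placement) of its CTRATT word is set, which is why ctratt=0x8000 (bit 15 only) skips nvme0/nvme1/nvme2 while ctratt=0x88010 selects nvme3. The test, condensed around the get_ctratt helper the trace shows:

  ctrl_has_fdp() {
    local ctrl=$1 ctratt
    ctratt=$(get_ctratt "$ctrl")   # 0x8000 for nvme0/1/2, 0x88010 for nvme3 above
    (( ctratt & 1 << 19 ))         # exit 0 (FDP supported) only when bit 19 is set
  }
  ctrl_has_fdp nvme3 && echo nvme3 # -> nvme3, the controller handed to the fdp test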
00:09:45.975 00:09:45.975 ================================== 00:09:45.975 == FDP tests for Namespace: #01 == 00:09:45.975 ================================== 00:09:45.975 00:09:45.975 Get Feature: FDP: 00:09:45.975 ================= 00:09:45.975 Enabled: Yes 00:09:45.975 FDP configuration Index: 0 00:09:45.975 00:09:45.975 FDP configurations log page 00:09:45.975 =========================== 00:09:45.975 Number of FDP configurations: 1 00:09:45.975 Version: 0 00:09:45.975 Size: 112 00:09:45.975 FDP Configuration Descriptor: 0 00:09:45.975 Descriptor Size: 96 00:09:45.975 Reclaim Group Identifier format: 2 00:09:45.975 FDP Volatile Write Cache: Not Present 00:09:45.975 FDP Configuration: Valid 00:09:45.975 Vendor Specific Size: 0 00:09:45.975 Number of Reclaim Groups: 2 00:09:45.975 Number of Reclaim Unit Handles: 8 00:09:45.975 Max Placement Identifiers: 128 00:09:45.975 Number of Namespaces Supported: 256 00:09:45.975 Reclaim unit Nominal Size: 6000000 bytes 00:09:45.975 Estimated Reclaim Unit Time Limit: Not Reported 00:09:45.975 RUH Desc #000: RUH Type: Initially Isolated 00:09:45.975 RUH Desc #001: RUH Type: Initially Isolated 00:09:45.975 RUH Desc #002: RUH Type: Initially Isolated 00:09:45.975 RUH Desc #003: RUH Type: Initially Isolated 00:09:45.975 RUH Desc #004: RUH Type: Initially Isolated 00:09:45.975 RUH Desc #005: RUH Type: Initially Isolated 00:09:45.975 RUH Desc #006: RUH Type: Initially Isolated 00:09:45.975 RUH Desc #007: RUH Type: Initially Isolated 00:09:45.975 00:09:45.975 FDP reclaim unit handle usage log page 00:09:45.975 ====================================== 00:09:45.975 Number of Reclaim Unit Handles: 8 00:09:45.975 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:09:45.975 RUH Usage Desc #001: RUH Attributes: Unused 00:09:45.975 RUH Usage Desc #002: RUH Attributes: Unused 00:09:45.975 RUH Usage Desc #003: RUH Attributes: Unused 00:09:45.975 RUH Usage Desc #004: RUH Attributes: Unused 00:09:45.975 RUH Usage Desc #005: RUH Attributes: Unused 00:09:45.975 RUH Usage Desc #006: RUH Attributes: Unused 00:09:45.975 RUH Usage Desc #007: RUH Attributes: Unused 00:09:45.975 00:09:45.975 FDP statistics log page 00:09:45.975 ======================= 00:09:45.975 Host bytes with metadata written: 1982636032 00:09:45.975 Media bytes with metadata written: 1982955520 00:09:45.975 Media bytes erased: 0 00:09:45.975 00:09:45.975 FDP Reclaim unit handle status 00:09:45.975 ============================== 00:09:45.975 Number of RUHS descriptors: 2 00:09:45.975 RUHS Desc: #0000 PID: 0x0000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x0000000000001d36 00:09:45.975 RUHS Desc: #0001 PID: 0x4000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x0000000000006000 00:09:45.975 00:09:45.975 FDP write on placement id: 0 success 00:09:45.975 00:09:45.975 Set Feature: Enabling FDP events on Placement handle: #0 Success 00:09:45.975 00:09:45.975 IO mgmt send: RUH update for Placement ID: #0 Success 00:09:45.975 00:09:45.975 Get Feature: FDP Events for Placement handle: #0 00:09:45.975 ======================== 00:09:45.975 Number of FDP Events: 6 00:09:45.975 FDP Event: #0 Type: RU Not Written to Capacity Enabled: Yes 00:09:45.976 FDP Event: #1 Type: RU Time Limit Exceeded Enabled: Yes 00:09:45.976 FDP Event: #2 Type: Ctrlr Reset Modified RUH's Enabled: Yes 00:09:45.976 FDP Event: #3 Type: Invalid Placement Identifier Enabled: Yes 00:09:45.976 FDP Event: #4 Type: Media Reallocated Enabled: No 00:09:45.976 FDP Event: #5 Type: Implicitly modified RUH Enabled: No 00:09:45.976 00:09:45.976 FDP events log
page 00:09:45.976 =================== 00:09:45.976 Number of FDP events: 1 00:09:45.976 FDP Event #0: 00:09:45.976 Event Type: RU Not Written to Capacity 00:09:45.976 Placement Identifier: Valid 00:09:45.976 NSID: Valid 00:09:45.976 Location: Valid 00:09:45.976 Placement Identifier: 0 00:09:45.976 Event Timestamp: 5 00:09:45.976 Namespace Identifier: 1 00:09:45.976 Reclaim Group Identifier: 0 00:09:45.976 Reclaim Unit Handle Identifier: 0 00:09:45.976 00:09:45.976 FDP test passed 00:09:45.976 00:09:45.976 ************************************ 00:09:45.976 END TEST nvme_flexible_data_placement 00:09:45.976 ************************************ 00:09:45.976 real 0m0.238s 00:09:45.976 user 0m0.075s 00:09:45.976 sys 0m0.061s 00:09:45.976 19:00:03 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:45.976 19:00:03 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@10 -- # set +x 00:09:45.976 ************************************ 00:09:45.976 END TEST nvme_fdp 00:09:45.976 ************************************ 00:09:45.976 00:09:45.976 real 0m7.852s 00:09:45.976 user 0m1.150s 00:09:45.976 sys 0m1.389s 00:09:45.976 19:00:03 nvme_fdp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:45.976 19:00:03 nvme_fdp -- common/autotest_common.sh@10 -- # set +x 00:09:45.976 19:00:03 -- spdk/autotest.sh@232 -- # [[ '' -eq 1 ]] 00:09:45.976 19:00:03 -- spdk/autotest.sh@236 -- # run_test nvme_rpc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:09:45.976 19:00:03 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:45.976 19:00:03 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:45.976 19:00:03 -- common/autotest_common.sh@10 -- # set +x 00:09:45.976 ************************************ 00:09:45.976 START TEST nvme_rpc 00:09:45.976 ************************************ 00:09:45.976 19:00:03 nvme_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:09:46.237 * Looking for test storage... 
00:09:46.237 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:09:46.238 19:00:03 nvme_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:46.238 19:00:03 nvme_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:09:46.238 19:00:03 nvme_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:46.238 19:00:03 nvme_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:46.238 19:00:03 nvme_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:46.238 19:00:03 nvme_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:46.238 19:00:03 nvme_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:46.238 19:00:03 nvme_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:09:46.238 19:00:03 nvme_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:09:46.238 19:00:03 nvme_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:09:46.238 19:00:03 nvme_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:09:46.238 19:00:03 nvme_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:09:46.238 19:00:03 nvme_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:09:46.238 19:00:03 nvme_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:09:46.238 19:00:03 nvme_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:46.238 19:00:03 nvme_rpc -- scripts/common.sh@344 -- # case "$op" in 00:09:46.238 19:00:03 nvme_rpc -- scripts/common.sh@345 -- # : 1 00:09:46.238 19:00:03 nvme_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:46.238 19:00:03 nvme_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:46.238 19:00:03 nvme_rpc -- scripts/common.sh@365 -- # decimal 1 00:09:46.238 19:00:03 nvme_rpc -- scripts/common.sh@353 -- # local d=1 00:09:46.238 19:00:03 nvme_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:46.238 19:00:03 nvme_rpc -- scripts/common.sh@355 -- # echo 1 00:09:46.238 19:00:03 nvme_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:09:46.238 19:00:03 nvme_rpc -- scripts/common.sh@366 -- # decimal 2 00:09:46.238 19:00:03 nvme_rpc -- scripts/common.sh@353 -- # local d=2 00:09:46.238 19:00:03 nvme_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:46.238 19:00:03 nvme_rpc -- scripts/common.sh@355 -- # echo 2 00:09:46.238 19:00:03 nvme_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:09:46.238 19:00:03 nvme_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:46.238 19:00:03 nvme_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:46.238 19:00:03 nvme_rpc -- scripts/common.sh@368 -- # return 0 00:09:46.238 19:00:03 nvme_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:46.238 19:00:03 nvme_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:46.238 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:46.238 --rc genhtml_branch_coverage=1 00:09:46.238 --rc genhtml_function_coverage=1 00:09:46.238 --rc genhtml_legend=1 00:09:46.238 --rc geninfo_all_blocks=1 00:09:46.238 --rc geninfo_unexecuted_blocks=1 00:09:46.238 00:09:46.238 ' 00:09:46.238 19:00:03 nvme_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:46.238 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:46.238 --rc genhtml_branch_coverage=1 00:09:46.238 --rc genhtml_function_coverage=1 00:09:46.238 --rc genhtml_legend=1 00:09:46.238 --rc geninfo_all_blocks=1 00:09:46.238 --rc geninfo_unexecuted_blocks=1 00:09:46.238 00:09:46.238 ' 00:09:46.238 19:00:03 nvme_rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:09:46.238 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:46.238 --rc genhtml_branch_coverage=1 00:09:46.238 --rc genhtml_function_coverage=1 00:09:46.238 --rc genhtml_legend=1 00:09:46.238 --rc geninfo_all_blocks=1 00:09:46.238 --rc geninfo_unexecuted_blocks=1 00:09:46.238 00:09:46.238 ' 00:09:46.238 19:00:03 nvme_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:46.238 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:46.238 --rc genhtml_branch_coverage=1 00:09:46.238 --rc genhtml_function_coverage=1 00:09:46.238 --rc genhtml_legend=1 00:09:46.238 --rc geninfo_all_blocks=1 00:09:46.238 --rc geninfo_unexecuted_blocks=1 00:09:46.238 00:09:46.238 ' 00:09:46.238 19:00:03 nvme_rpc -- nvme/nvme_rpc.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:46.238 19:00:03 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # get_first_nvme_bdf 00:09:46.238 19:00:03 nvme_rpc -- common/autotest_common.sh@1509 -- # bdfs=() 00:09:46.238 19:00:03 nvme_rpc -- common/autotest_common.sh@1509 -- # local bdfs 00:09:46.238 19:00:03 nvme_rpc -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:09:46.238 19:00:03 nvme_rpc -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:09:46.238 19:00:03 nvme_rpc -- common/autotest_common.sh@1498 -- # bdfs=() 00:09:46.238 19:00:03 nvme_rpc -- common/autotest_common.sh@1498 -- # local bdfs 00:09:46.238 19:00:03 nvme_rpc -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:09:46.238 19:00:03 nvme_rpc -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:09:46.238 19:00:03 nvme_rpc -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:09:46.238 19:00:03 nvme_rpc -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:09:46.238 19:00:03 nvme_rpc -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:09:46.238 19:00:03 nvme_rpc -- common/autotest_common.sh@1512 -- # echo 0000:00:10.0 00:09:46.238 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:46.238 19:00:03 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # bdf=0000:00:10.0 00:09:46.238 19:00:03 nvme_rpc -- nvme/nvme_rpc.sh@16 -- # spdk_tgt_pid=77201 00:09:46.238 19:00:03 nvme_rpc -- nvme/nvme_rpc.sh@17 -- # trap 'kill -9 ${spdk_tgt_pid}; exit 1' SIGINT SIGTERM EXIT 00:09:46.238 19:00:03 nvme_rpc -- nvme/nvme_rpc.sh@19 -- # waitforlisten 77201 00:09:46.238 19:00:03 nvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 77201 ']' 00:09:46.238 19:00:03 nvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:46.238 19:00:03 nvme_rpc -- nvme/nvme_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:09:46.238 19:00:03 nvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:46.238 19:00:03 nvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:46.238 19:00:03 nvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:46.238 19:00:03 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:46.238 [2024-12-05 19:00:03.773530] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 22.11.4 initialization... 
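Before the target comes up, nvme_rpc.sh picks its bus/device/function the same way every test in this run does: gen_nvme.sh emits a JSON config describing all attached controllers and jq pulls out the transport addresses. Condensed from the trace, with this run's repo layout:

  rootdir=/home/vagrant/spdk_repo/spdk
  bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
  (( ${#bdfs[@]} > 0 )) || exit 1    # four controllers were found in this run
  echo "${bdfs[0]}"                  # -> 0000:00:10.0, the bdf nvme_rpc.sh drives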
00:09:46.238 [2024-12-05 19:00:03.773650] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77201 ] 00:09:46.500 [2024-12-05 19:00:03.918699] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:46.500 [2024-12-05 19:00:03.940690] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:46.501 [2024-12-05 19:00:03.940732] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:47.073 19:00:04 nvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:47.073 19:00:04 nvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:09:47.073 19:00:04 nvme_rpc -- nvme/nvme_rpc.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0 00:09:47.334 Nvme0n1 00:09:47.334 19:00:04 nvme_rpc -- nvme/nvme_rpc.sh@27 -- # '[' -f non_existing_file ']' 00:09:47.334 19:00:04 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_apply_firmware non_existing_file Nvme0n1 00:09:47.594 request: 00:09:47.594 { 00:09:47.594 "bdev_name": "Nvme0n1", 00:09:47.594 "filename": "non_existing_file", 00:09:47.594 "method": "bdev_nvme_apply_firmware", 00:09:47.594 "req_id": 1 00:09:47.594 } 00:09:47.594 Got JSON-RPC error response 00:09:47.594 response: 00:09:47.594 { 00:09:47.594 "code": -32603, 00:09:47.594 "message": "open file failed." 00:09:47.594 } 00:09:47.594 19:00:05 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # rv=1 00:09:47.594 19:00:05 nvme_rpc -- nvme/nvme_rpc.sh@33 -- # '[' -z 1 ']' 00:09:47.594 19:00:05 nvme_rpc -- nvme/nvme_rpc.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:09:47.855 19:00:05 nvme_rpc -- nvme/nvme_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:09:47.855 19:00:05 nvme_rpc -- nvme/nvme_rpc.sh@40 -- # killprocess 77201 00:09:47.855 19:00:05 nvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 77201 ']' 00:09:47.855 19:00:05 nvme_rpc -- common/autotest_common.sh@958 -- # kill -0 77201 00:09:47.855 19:00:05 nvme_rpc -- common/autotest_common.sh@959 -- # uname 00:09:47.855 19:00:05 nvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:47.855 19:00:05 nvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77201 00:09:47.855 killing process with pid 77201 00:09:47.855 19:00:05 nvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:47.855 19:00:05 nvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:47.855 19:00:05 nvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77201' 00:09:47.855 19:00:05 nvme_rpc -- common/autotest_common.sh@973 -- # kill 77201 00:09:47.855 19:00:05 nvme_rpc -- common/autotest_common.sh@978 -- # wait 77201 00:09:48.142 ************************************ 00:09:48.142 END TEST nvme_rpc 00:09:48.142 ************************************ 00:09:48.142 00:09:48.142 real 0m2.056s 00:09:48.142 user 0m4.048s 00:09:48.142 sys 0m0.467s 00:09:48.142 19:00:05 nvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:48.142 19:00:05 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:48.142 19:00:05 -- spdk/autotest.sh@237 -- # run_test nvme_rpc_timeouts /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:09:48.142 19:00:05 -- common/autotest_common.sh@1105 -- # '[' 2 -le 
1 ']' 00:09:48.142 19:00:05 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:48.142 19:00:05 -- common/autotest_common.sh@10 -- # set +x 00:09:48.142 ************************************ 00:09:48.142 START TEST nvme_rpc_timeouts 00:09:48.142 ************************************ 00:09:48.142 19:00:05 nvme_rpc_timeouts -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:09:48.142 * Looking for test storage... 00:09:48.142 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:09:48.142 19:00:05 nvme_rpc_timeouts -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:48.142 19:00:05 nvme_rpc_timeouts -- common/autotest_common.sh@1711 -- # lcov --version 00:09:48.142 19:00:05 nvme_rpc_timeouts -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:48.403 19:00:05 nvme_rpc_timeouts -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:48.403 19:00:05 nvme_rpc_timeouts -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:48.403 19:00:05 nvme_rpc_timeouts -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:48.403 19:00:05 nvme_rpc_timeouts -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:48.403 19:00:05 nvme_rpc_timeouts -- scripts/common.sh@336 -- # IFS=.-: 00:09:48.403 19:00:05 nvme_rpc_timeouts -- scripts/common.sh@336 -- # read -ra ver1 00:09:48.403 19:00:05 nvme_rpc_timeouts -- scripts/common.sh@337 -- # IFS=.-: 00:09:48.403 19:00:05 nvme_rpc_timeouts -- scripts/common.sh@337 -- # read -ra ver2 00:09:48.403 19:00:05 nvme_rpc_timeouts -- scripts/common.sh@338 -- # local 'op=<' 00:09:48.403 19:00:05 nvme_rpc_timeouts -- scripts/common.sh@340 -- # ver1_l=2 00:09:48.403 19:00:05 nvme_rpc_timeouts -- scripts/common.sh@341 -- # ver2_l=1 00:09:48.403 19:00:05 nvme_rpc_timeouts -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:48.403 19:00:05 nvme_rpc_timeouts -- scripts/common.sh@344 -- # case "$op" in 00:09:48.403 19:00:05 nvme_rpc_timeouts -- scripts/common.sh@345 -- # : 1 00:09:48.403 19:00:05 nvme_rpc_timeouts -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:48.403 19:00:05 nvme_rpc_timeouts -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:48.403 19:00:05 nvme_rpc_timeouts -- scripts/common.sh@365 -- # decimal 1 00:09:48.403 19:00:05 nvme_rpc_timeouts -- scripts/common.sh@353 -- # local d=1 00:09:48.403 19:00:05 nvme_rpc_timeouts -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:48.403 19:00:05 nvme_rpc_timeouts -- scripts/common.sh@355 -- # echo 1 00:09:48.403 19:00:05 nvme_rpc_timeouts -- scripts/common.sh@365 -- # ver1[v]=1 00:09:48.403 19:00:05 nvme_rpc_timeouts -- scripts/common.sh@366 -- # decimal 2 00:09:48.403 19:00:05 nvme_rpc_timeouts -- scripts/common.sh@353 -- # local d=2 00:09:48.403 19:00:05 nvme_rpc_timeouts -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:48.403 19:00:05 nvme_rpc_timeouts -- scripts/common.sh@355 -- # echo 2 00:09:48.403 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:09:48.403 19:00:05 nvme_rpc_timeouts -- scripts/common.sh@366 -- # ver2[v]=2 00:09:48.403 19:00:05 nvme_rpc_timeouts -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:48.403 19:00:05 nvme_rpc_timeouts -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:48.403 19:00:05 nvme_rpc_timeouts -- scripts/common.sh@368 -- # return 0 00:09:48.403 19:00:05 nvme_rpc_timeouts -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:48.403 19:00:05 nvme_rpc_timeouts -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:48.403 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:48.403 --rc genhtml_branch_coverage=1 00:09:48.403 --rc genhtml_function_coverage=1 00:09:48.403 --rc genhtml_legend=1 00:09:48.403 --rc geninfo_all_blocks=1 00:09:48.403 --rc geninfo_unexecuted_blocks=1 00:09:48.403 00:09:48.403 ' 00:09:48.403 19:00:05 nvme_rpc_timeouts -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:48.403 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:48.403 --rc genhtml_branch_coverage=1 00:09:48.403 --rc genhtml_function_coverage=1 00:09:48.403 --rc genhtml_legend=1 00:09:48.403 --rc geninfo_all_blocks=1 00:09:48.403 --rc geninfo_unexecuted_blocks=1 00:09:48.403 00:09:48.403 ' 00:09:48.403 19:00:05 nvme_rpc_timeouts -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:48.403 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:48.403 --rc genhtml_branch_coverage=1 00:09:48.403 --rc genhtml_function_coverage=1 00:09:48.403 --rc genhtml_legend=1 00:09:48.403 --rc geninfo_all_blocks=1 00:09:48.403 --rc geninfo_unexecuted_blocks=1 00:09:48.403 00:09:48.403 ' 00:09:48.403 19:00:05 nvme_rpc_timeouts -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:48.403 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:48.403 --rc genhtml_branch_coverage=1 00:09:48.403 --rc genhtml_function_coverage=1 00:09:48.403 --rc genhtml_legend=1 00:09:48.403 --rc geninfo_all_blocks=1 00:09:48.403 --rc geninfo_unexecuted_blocks=1 00:09:48.403 00:09:48.403 ' 00:09:48.403 19:00:05 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@19 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:48.403 19:00:05 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@21 -- # tmpfile_default_settings=/tmp/settings_default_77249 00:09:48.403 19:00:05 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@22 -- # tmpfile_modified_settings=/tmp/settings_modified_77249 00:09:48.403 19:00:05 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@25 -- # spdk_tgt_pid=77287 00:09:48.403 19:00:05 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@26 -- # trap 'kill -9 ${spdk_tgt_pid}; rm -f ${tmpfile_default_settings} ${tmpfile_modified_settings} ; exit 1' SIGINT SIGTERM EXIT 00:09:48.403 19:00:05 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@27 -- # waitforlisten 77287 00:09:48.403 19:00:05 nvme_rpc_timeouts -- common/autotest_common.sh@835 -- # '[' -z 77287 ']' 00:09:48.403 19:00:05 nvme_rpc_timeouts -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:48.403 19:00:05 nvme_rpc_timeouts -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:48.403 19:00:05 nvme_rpc_timeouts -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
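As in the nvme_rpc run above, the timeouts test launches spdk_tgt on a two-core mask and blocks until the RPC socket answers. A sketch of that startup handshake; the polling loop is an assumption standing in for autotest_common.sh's waitforlisten, not its actual body:

  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 &   # reactors on cores 0 and 1
  spdk_tgt_pid=$!
  until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock \
      rpc_get_methods > /dev/null 2>&1; do
    sleep 0.1   # keep polling until the target listens on spdk.sock
  done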
00:09:48.403 19:00:05 nvme_rpc_timeouts -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:48.403 19:00:05 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:09:48.403 19:00:05 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:09:48.403 [2024-12-05 19:00:05.824693] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 22.11.4 initialization... 00:09:48.403 [2024-12-05 19:00:05.824965] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77287 ] 00:09:48.664 [2024-12-05 19:00:05.969976] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:48.664 [2024-12-05 19:00:05.999995] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:48.664 [2024-12-05 19:00:06.000050] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:49.237 Checking default timeout settings: 00:09:49.237 19:00:06 nvme_rpc_timeouts -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:49.237 19:00:06 nvme_rpc_timeouts -- common/autotest_common.sh@868 -- # return 0 00:09:49.237 19:00:06 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@29 -- # echo Checking default timeout settings: 00:09:49.237 19:00:06 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:09:49.497 Making settings changes with rpc: 00:09:49.497 19:00:07 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@32 -- # echo Making settings changes with rpc: 00:09:49.497 19:00:07 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_set_options --timeout-us=12000000 --timeout-admin-us=24000000 --action-on-timeout=abort 00:09:49.757 Check default vs. modified settings: 00:09:49.757 19:00:07 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@36 -- # echo Check default vs. modified settings: 00:09:49.758 19:00:07 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:09:50.018 19:00:07 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@38 -- # settings_to_check='action_on_timeout timeout_us timeout_admin_us' 00:09:50.018 19:00:07 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:09:50.018 19:00:07 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep action_on_timeout /tmp/settings_default_77249 00:09:50.018 19:00:07 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:09:50.018 19:00:07 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:09:50.018 19:00:07 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=none 00:09:50.284 19:00:07 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep action_on_timeout /tmp/settings_modified_77249 00:09:50.284 19:00:07 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:09:50.284 19:00:07 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:09:50.284 Setting action_on_timeout is changed as expected. 
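The three checks that follow share one mechanism: save_config is dumped once before and once after bdev_nvme_set_options, then each setting is grepped out of both JSON dumps and compared. Condensed, with this run's tmp file names and the grep tightened to the quoted key so timeout_us does not also match timeout_admin_us:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc save_config > /tmp/settings_default_77249
  $rpc bdev_nvme_set_options --timeout-us=12000000 --timeout-admin-us=24000000 \
      --action-on-timeout=abort
  $rpc save_config > /tmp/settings_modified_77249
  for setting in action_on_timeout timeout_us timeout_admin_us; do
    before=$(grep "\"$setting\"" /tmp/settings_default_77249 | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
    after=$(grep "\"$setting\"" /tmp/settings_modified_77249 | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
    [[ $before != "$after" ]] && echo "Setting $setting is changed as expected."
  done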
00:09:50.284 19:00:07 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=abort 00:09:50.284 19:00:07 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' none == abort ']' 00:09:50.284 19:00:07 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting action_on_timeout is changed as expected. 00:09:50.284 19:00:07 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:09:50.284 19:00:07 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_us /tmp/settings_default_77249 00:09:50.284 19:00:07 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:09:50.284 19:00:07 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:09:50.284 19:00:07 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:09:50.284 19:00:07 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_us /tmp/settings_modified_77249 00:09:50.284 19:00:07 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:09:50.284 19:00:07 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:09:50.284 Setting timeout_us is changed as expected. 00:09:50.284 19:00:07 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=12000000 00:09:50.284 19:00:07 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 12000000 ']' 00:09:50.284 19:00:07 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_us is changed as expected. 00:09:50.284 19:00:07 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:09:50.284 19:00:07 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:09:50.284 19:00:07 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_admin_us /tmp/settings_default_77249 00:09:50.284 19:00:07 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:09:50.284 19:00:07 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:09:50.284 19:00:07 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_admin_us /tmp/settings_modified_77249 00:09:50.284 19:00:07 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:09:50.284 19:00:07 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:09:50.284 Setting timeout_admin_us is changed as expected. 00:09:50.284 19:00:07 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=24000000 00:09:50.284 19:00:07 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 24000000 ']' 00:09:50.284 19:00:07 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_admin_us is changed as expected. 
00:09:50.284 19:00:07 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@52 -- # trap - SIGINT SIGTERM EXIT 00:09:50.284 19:00:07 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@53 -- # rm -f /tmp/settings_default_77249 /tmp/settings_modified_77249 00:09:50.284 19:00:07 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@54 -- # killprocess 77287 00:09:50.284 19:00:07 nvme_rpc_timeouts -- common/autotest_common.sh@954 -- # '[' -z 77287 ']' 00:09:50.284 19:00:07 nvme_rpc_timeouts -- common/autotest_common.sh@958 -- # kill -0 77287 00:09:50.284 19:00:07 nvme_rpc_timeouts -- common/autotest_common.sh@959 -- # uname 00:09:50.284 19:00:07 nvme_rpc_timeouts -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:50.284 19:00:07 nvme_rpc_timeouts -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77287 00:09:50.284 killing process with pid 77287 00:09:50.284 19:00:07 nvme_rpc_timeouts -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:50.284 19:00:07 nvme_rpc_timeouts -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:50.284 19:00:07 nvme_rpc_timeouts -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77287' 00:09:50.284 19:00:07 nvme_rpc_timeouts -- common/autotest_common.sh@973 -- # kill 77287 00:09:50.284 19:00:07 nvme_rpc_timeouts -- common/autotest_common.sh@978 -- # wait 77287 00:09:50.546 RPC TIMEOUT SETTING TEST PASSED. 00:09:50.546 19:00:08 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@56 -- # echo RPC TIMEOUT SETTING TEST PASSED. 00:09:50.546 00:09:50.546 real 0m2.396s 00:09:50.546 user 0m4.751s 00:09:50.546 sys 0m0.565s 00:09:50.546 19:00:08 nvme_rpc_timeouts -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:50.546 ************************************ 00:09:50.546 END TEST nvme_rpc_timeouts 00:09:50.546 ************************************ 00:09:50.546 19:00:08 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:09:50.546 19:00:08 -- spdk/autotest.sh@239 -- # uname -s 00:09:50.546 19:00:08 -- spdk/autotest.sh@239 -- # '[' Linux = Linux ']' 00:09:50.546 19:00:08 -- spdk/autotest.sh@240 -- # run_test sw_hotplug /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:09:50.546 19:00:08 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:50.546 19:00:08 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:50.546 19:00:08 -- common/autotest_common.sh@10 -- # set +x 00:09:50.546 ************************************ 00:09:50.546 START TEST sw_hotplug 00:09:50.546 ************************************ 00:09:50.546 19:00:08 sw_hotplug -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:09:50.807 * Looking for test storage... 
00:09:50.807 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:09:50.807 19:00:08 sw_hotplug -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:50.807 19:00:08 sw_hotplug -- common/autotest_common.sh@1711 -- # lcov --version 00:09:50.807 19:00:08 sw_hotplug -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:50.807 19:00:08 sw_hotplug -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:50.807 19:00:08 sw_hotplug -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:50.807 19:00:08 sw_hotplug -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:50.807 19:00:08 sw_hotplug -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:50.807 19:00:08 sw_hotplug -- scripts/common.sh@336 -- # IFS=.-: 00:09:50.807 19:00:08 sw_hotplug -- scripts/common.sh@336 -- # read -ra ver1 00:09:50.807 19:00:08 sw_hotplug -- scripts/common.sh@337 -- # IFS=.-: 00:09:50.807 19:00:08 sw_hotplug -- scripts/common.sh@337 -- # read -ra ver2 00:09:50.807 19:00:08 sw_hotplug -- scripts/common.sh@338 -- # local 'op=<' 00:09:50.807 19:00:08 sw_hotplug -- scripts/common.sh@340 -- # ver1_l=2 00:09:50.807 19:00:08 sw_hotplug -- scripts/common.sh@341 -- # ver2_l=1 00:09:50.807 19:00:08 sw_hotplug -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:50.807 19:00:08 sw_hotplug -- scripts/common.sh@344 -- # case "$op" in 00:09:50.807 19:00:08 sw_hotplug -- scripts/common.sh@345 -- # : 1 00:09:50.807 19:00:08 sw_hotplug -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:50.807 19:00:08 sw_hotplug -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:50.807 19:00:08 sw_hotplug -- scripts/common.sh@365 -- # decimal 1 00:09:50.807 19:00:08 sw_hotplug -- scripts/common.sh@353 -- # local d=1 00:09:50.807 19:00:08 sw_hotplug -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:50.807 19:00:08 sw_hotplug -- scripts/common.sh@355 -- # echo 1 00:09:50.807 19:00:08 sw_hotplug -- scripts/common.sh@365 -- # ver1[v]=1 00:09:50.807 19:00:08 sw_hotplug -- scripts/common.sh@366 -- # decimal 2 00:09:50.807 19:00:08 sw_hotplug -- scripts/common.sh@353 -- # local d=2 00:09:50.807 19:00:08 sw_hotplug -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:50.807 19:00:08 sw_hotplug -- scripts/common.sh@355 -- # echo 2 00:09:50.807 19:00:08 sw_hotplug -- scripts/common.sh@366 -- # ver2[v]=2 00:09:50.807 19:00:08 sw_hotplug -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:50.807 19:00:08 sw_hotplug -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:50.807 19:00:08 sw_hotplug -- scripts/common.sh@368 -- # return 0 00:09:50.807 19:00:08 sw_hotplug -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:50.807 19:00:08 sw_hotplug -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:50.807 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:50.807 --rc genhtml_branch_coverage=1 00:09:50.807 --rc genhtml_function_coverage=1 00:09:50.807 --rc genhtml_legend=1 00:09:50.807 --rc geninfo_all_blocks=1 00:09:50.807 --rc geninfo_unexecuted_blocks=1 00:09:50.807 00:09:50.807 ' 00:09:50.807 19:00:08 sw_hotplug -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:50.807 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:50.807 --rc genhtml_branch_coverage=1 00:09:50.807 --rc genhtml_function_coverage=1 00:09:50.807 --rc genhtml_legend=1 00:09:50.807 --rc geninfo_all_blocks=1 00:09:50.807 --rc geninfo_unexecuted_blocks=1 00:09:50.807 00:09:50.807 ' 00:09:50.807 19:00:08 
sw_hotplug -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:50.807 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:50.807 --rc genhtml_branch_coverage=1 00:09:50.807 --rc genhtml_function_coverage=1 00:09:50.807 --rc genhtml_legend=1 00:09:50.807 --rc geninfo_all_blocks=1 00:09:50.807 --rc geninfo_unexecuted_blocks=1 00:09:50.807 00:09:50.807 ' 00:09:50.807 19:00:08 sw_hotplug -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:50.807 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:50.807 --rc genhtml_branch_coverage=1 00:09:50.807 --rc genhtml_function_coverage=1 00:09:50.807 --rc genhtml_legend=1 00:09:50.807 --rc geninfo_all_blocks=1 00:09:50.807 --rc geninfo_unexecuted_blocks=1 00:09:50.807 00:09:50.807 ' 00:09:50.807 19:00:08 sw_hotplug -- nvme/sw_hotplug.sh@129 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:09:51.068 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:51.328 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:09:51.328 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:09:51.328 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:09:51.328 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:09:51.328 19:00:08 sw_hotplug -- nvme/sw_hotplug.sh@131 -- # hotplug_wait=6 00:09:51.328 19:00:08 sw_hotplug -- nvme/sw_hotplug.sh@132 -- # hotplug_events=3 00:09:51.328 19:00:08 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvmes=($(nvme_in_userspace)) 00:09:51.328 19:00:08 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvme_in_userspace 00:09:51.328 19:00:08 sw_hotplug -- scripts/common.sh@312 -- # local bdf bdfs 00:09:51.328 19:00:08 sw_hotplug -- scripts/common.sh@313 -- # local nvmes 00:09:51.328 19:00:08 sw_hotplug -- scripts/common.sh@315 -- # [[ -n '' ]] 00:09:51.328 19:00:08 sw_hotplug -- scripts/common.sh@318 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:09:51.328 19:00:08 sw_hotplug -- scripts/common.sh@318 -- # iter_pci_class_code 01 08 02 00:09:51.328 19:00:08 sw_hotplug -- scripts/common.sh@298 -- # local bdf= 00:09:51.328 19:00:08 sw_hotplug -- scripts/common.sh@300 -- # iter_all_pci_class_code 01 08 02 00:09:51.328 19:00:08 sw_hotplug -- scripts/common.sh@233 -- # local class 00:09:51.328 19:00:08 sw_hotplug -- scripts/common.sh@234 -- # local subclass 00:09:51.328 19:00:08 sw_hotplug -- scripts/common.sh@235 -- # local progif 00:09:51.328 19:00:08 sw_hotplug -- scripts/common.sh@236 -- # printf %02x 1 00:09:51.328 19:00:08 sw_hotplug -- scripts/common.sh@236 -- # class=01 00:09:51.329 19:00:08 sw_hotplug -- scripts/common.sh@237 -- # printf %02x 8 00:09:51.329 19:00:08 sw_hotplug -- scripts/common.sh@237 -- # subclass=08 00:09:51.329 19:00:08 sw_hotplug -- scripts/common.sh@238 -- # printf %02x 2 00:09:51.329 19:00:08 sw_hotplug -- scripts/common.sh@238 -- # progif=02 00:09:51.329 19:00:08 sw_hotplug -- scripts/common.sh@240 -- # hash lspci 00:09:51.329 19:00:08 sw_hotplug -- scripts/common.sh@241 -- # '[' 02 '!=' 00 ']' 00:09:51.329 19:00:08 sw_hotplug -- scripts/common.sh@242 -- # lspci -mm -n -D 00:09:51.329 19:00:08 sw_hotplug -- scripts/common.sh@243 -- # grep -i -- -p02 00:09:51.329 19:00:08 sw_hotplug -- scripts/common.sh@245 -- # tr -d '"' 00:09:51.329 19:00:08 sw_hotplug -- scripts/common.sh@244 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:09:51.329 19:00:08 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:09:51.329 
19:00:08 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:10.0 00:09:51.329 19:00:08 sw_hotplug -- scripts/common.sh@18 -- # local i 00:09:51.329 19:00:08 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:09:51.329 19:00:08 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:51.329 19:00:08 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:09:51.329 19:00:08 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:10.0 00:09:51.329 19:00:08 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:09:51.329 19:00:08 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:11.0 00:09:51.329 19:00:08 sw_hotplug -- scripts/common.sh@18 -- # local i 00:09:51.329 19:00:08 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:09:51.329 19:00:08 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:51.329 19:00:08 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:09:51.329 19:00:08 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:11.0 00:09:51.329 19:00:08 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:09:51.329 19:00:08 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:12.0 00:09:51.329 19:00:08 sw_hotplug -- scripts/common.sh@18 -- # local i 00:09:51.329 19:00:08 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:09:51.329 19:00:08 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:51.329 19:00:08 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:09:51.329 19:00:08 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:12.0 00:09:51.329 19:00:08 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:09:51.329 19:00:08 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:13.0 00:09:51.329 19:00:08 sw_hotplug -- scripts/common.sh@18 -- # local i 00:09:51.329 19:00:08 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:09:51.329 19:00:08 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:51.329 19:00:08 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:09:51.329 19:00:08 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:13.0 00:09:51.329 19:00:08 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:09:51.329 19:00:08 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:09:51.329 19:00:08 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:09:51.329 19:00:08 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:09:51.329 19:00:08 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:09:51.329 19:00:08 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:09:51.329 19:00:08 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:09:51.329 19:00:08 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:09:51.329 19:00:08 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:09:51.329 19:00:08 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:09:51.329 19:00:08 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:09:51.329 19:00:08 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:12.0 ]] 00:09:51.329 19:00:08 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:09:51.329 19:00:08 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:09:51.329 19:00:08 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:09:51.329 19:00:08 sw_hotplug -- scripts/common.sh@321 -- # for bdf 
in "${nvmes[@]}" 00:09:51.329 19:00:08 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:13.0 ]] 00:09:51.329 19:00:08 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:09:51.329 19:00:08 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:09:51.329 19:00:08 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:09:51.329 19:00:08 sw_hotplug -- scripts/common.sh@328 -- # (( 4 )) 00:09:51.329 19:00:08 sw_hotplug -- scripts/common.sh@329 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:09:51.329 19:00:08 sw_hotplug -- nvme/sw_hotplug.sh@134 -- # nvme_count=2 00:09:51.329 19:00:08 sw_hotplug -- nvme/sw_hotplug.sh@135 -- # nvmes=("${nvmes[@]::nvme_count}") 00:09:51.329 19:00:08 sw_hotplug -- nvme/sw_hotplug.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:09:51.589 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:51.849 Waiting for block devices as requested 00:09:51.849 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:09:51.849 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:09:51.849 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:09:52.110 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:09:57.400 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:09:57.400 19:00:14 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # PCI_ALLOWED='0000:00:10.0 0000:00:11.0' 00:09:57.400 19:00:14 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:09:57.400 0000:00:03.0 (1af4 1001): Skipping denied controller at 0000:00:03.0 00:09:57.661 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:57.661 0000:00:12.0 (1b36 0010): Skipping denied controller at 0000:00:12.0 00:09:57.921 0000:00:13.0 (1b36 0010): Skipping denied controller at 0000:00:13.0 00:09:57.921 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:09:58.180 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:09:58.180 19:00:15 sw_hotplug -- nvme/sw_hotplug.sh@143 -- # xtrace_disable 00:09:58.180 19:00:15 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:09:58.180 19:00:15 sw_hotplug -- nvme/sw_hotplug.sh@148 -- # run_hotplug 00:09:58.180 19:00:15 sw_hotplug -- nvme/sw_hotplug.sh@77 -- # trap 'killprocess $hotplug_pid; exit 1' SIGINT SIGTERM EXIT 00:09:58.180 19:00:15 sw_hotplug -- nvme/sw_hotplug.sh@85 -- # hotplug_pid=78131 00:09:58.180 19:00:15 sw_hotplug -- nvme/sw_hotplug.sh@87 -- # debug_remove_attach_helper 3 6 false 00:09:58.180 19:00:15 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:09:58.180 19:00:15 sw_hotplug -- nvme/sw_hotplug.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/examples/hotplug -i 0 -t 0 -n 6 -r 6 -l warning 00:09:58.180 19:00:15 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 false 00:09:58.180 19:00:15 sw_hotplug -- common/autotest_common.sh@709 -- # local cmd_es=0 00:09:58.180 19:00:15 sw_hotplug -- common/autotest_common.sh@711 -- # [[ -t 0 ]] 00:09:58.180 19:00:15 sw_hotplug -- common/autotest_common.sh@711 -- # exec 00:09:58.180 19:00:15 sw_hotplug -- common/autotest_common.sh@713 -- # local time=0 TIMEFORMAT=%2R 00:09:58.180 19:00:15 sw_hotplug -- common/autotest_common.sh@719 -- # remove_attach_helper 3 6 false 00:09:58.180 19:00:15 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:09:58.180 19:00:15 sw_hotplug -- nvme/sw_hotplug.sh@28 
-- # local hotplug_wait=6 00:09:58.180 19:00:15 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=false 00:09:58.180 19:00:15 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:09:58.180 19:00:15 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:09:58.437 Initializing NVMe Controllers 00:09:58.437 Attaching to 0000:00:10.0 00:09:58.437 Attaching to 0000:00:11.0 00:09:58.437 Attached to 0000:00:10.0 00:09:58.437 Attached to 0000:00:11.0 00:09:58.437 Initialization complete. Starting I/O... 00:09:58.437 QEMU NVMe Ctrl (12340 ): 0 I/Os completed (+0) 00:09:58.437 QEMU NVMe Ctrl (12341 ): 0 I/Os completed (+0) 00:09:58.437 00:09:59.369 QEMU NVMe Ctrl (12340 ): 3262 I/Os completed (+3262) 00:09:59.369 QEMU NVMe Ctrl (12341 ): 3512 I/Os completed (+3512) 00:09:59.369 00:10:00.303 QEMU NVMe Ctrl (12340 ): 7009 I/Os completed (+3747) 00:10:00.303 QEMU NVMe Ctrl (12341 ): 8159 I/Os completed (+4647) 00:10:00.303 00:10:01.676 QEMU NVMe Ctrl (12340 ): 10866 I/Os completed (+3857) 00:10:01.676 QEMU NVMe Ctrl (12341 ): 12698 I/Os completed (+4539) 00:10:01.676 00:10:02.644 QEMU NVMe Ctrl (12340 ): 14892 I/Os completed (+4026) 00:10:02.644 QEMU NVMe Ctrl (12341 ): 17114 I/Os completed (+4416) 00:10:02.644 00:10:03.576 QEMU NVMe Ctrl (12340 ): 19020 I/Os completed (+4128) 00:10:03.576 QEMU NVMe Ctrl (12341 ): 21425 I/Os completed (+4311) 00:10:03.576 00:10:04.140 19:00:21 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:10:04.140 19:00:21 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:04.140 19:00:21 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:04.140 [2024-12-05 19:00:21.649607] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:10:04.140 Controller removed: QEMU NVMe Ctrl (12340 ) 00:10:04.140 [2024-12-05 19:00:21.650455] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:04.140 [2024-12-05 19:00:21.650493] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:04.140 [2024-12-05 19:00:21.650505] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:04.140 [2024-12-05 19:00:21.650517] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:04.140 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:10:04.140 [2024-12-05 19:00:21.651512] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:04.140 [2024-12-05 19:00:21.651549] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:04.140 [2024-12-05 19:00:21.651559] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:04.140 [2024-12-05 19:00:21.651570] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:04.140 19:00:21 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:04.140 19:00:21 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:04.140 [2024-12-05 19:00:21.669247] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
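The 'echo 1' at sw_hotplug.sh@40 is the removal half of each hotplug event; the failed-state and aborting-outstanding-command lines above are SPDK reacting to the device vanishing mid-I/O. Read against the standard Linux PCI sysfs knobs (an assumption: the trace shows the echoed value but not the redirection target):

  bdf=0000:00:10.0
  echo 1 > "/sys/bus/pci/devices/$bdf/remove"   # surprise-remove the function; SPDK marks the
                                                # controller failed and aborts queued trackers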
00:10:04.140 Controller removed: QEMU NVMe Ctrl (12341 ) 00:10:04.140 [2024-12-05 19:00:21.669964] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:04.140 [2024-12-05 19:00:21.669995] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:04.140 [2024-12-05 19:00:21.670008] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:04.140 [2024-12-05 19:00:21.670020] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:04.140 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:10:04.140 [2024-12-05 19:00:21.670854] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:04.141 [2024-12-05 19:00:21.670880] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:04.141 [2024-12-05 19:00:21.670893] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:04.141 [2024-12-05 19:00:21.670903] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:04.141 19:00:21 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:10:04.141 19:00:21 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:10:04.398 19:00:21 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:04.398 19:00:21 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:04.398 19:00:21 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:10:04.398 19:00:21 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:10:04.398 19:00:21 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:04.398 19:00:21 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:04.398 19:00:21 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:04.398 19:00:21 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:10:04.398 Attaching to 0000:00:10.0 00:10:04.398 Attached to 0000:00:10.0 00:10:04.398 QEMU NVMe Ctrl (12340 ): 4 I/Os completed (+4) 00:10:04.398 00:10:04.398 19:00:21 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:10:04.398 19:00:21 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:04.398 19:00:21 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:10:04.398 Attaching to 0000:00:11.0 00:10:04.398 Attached to 0000:00:11.0 00:10:05.331 QEMU NVMe Ctrl (12340 ): 4438 I/Os completed (+4434) 00:10:05.331 QEMU NVMe Ctrl (12341 ): 4026 I/Os completed (+4026) 00:10:05.331 00:10:06.265 QEMU NVMe Ctrl (12340 ): 8784 I/Os completed (+4346) 00:10:06.265 QEMU NVMe Ctrl (12341 ): 8352 I/Os completed (+4326) 00:10:06.265 00:10:07.646 QEMU NVMe Ctrl (12340 ): 12953 I/Os completed (+4169) 00:10:07.646 QEMU NVMe Ctrl (12341 ): 12560 I/Os completed (+4208) 00:10:07.646 00:10:08.585 QEMU NVMe Ctrl (12340 ): 16590 I/Os completed (+3637) 00:10:08.585 QEMU NVMe Ctrl (12341 ): 16223 I/Os completed (+3663) 00:10:08.585 00:10:09.522 QEMU NVMe Ctrl (12340 ): 20301 I/Os completed (+3711) 00:10:09.522 QEMU NVMe Ctrl (12341 ): 19943 I/Os completed (+3720) 00:10:09.522 00:10:10.461 QEMU NVMe Ctrl (12340 ): 24211 I/Os completed (+3910) 00:10:10.461 QEMU NVMe Ctrl (12341 ): 24068 I/Os completed (+4125) 00:10:10.461 00:10:11.440 QEMU NVMe Ctrl (12340 ): 27970 I/Os completed (+3759) 00:10:11.440 QEMU NVMe Ctrl (12341 ): 27836 I/Os completed (+3768) 00:10:11.440 00:10:12.419 QEMU NVMe Ctrl (12340 ): 31754 I/Os completed (+3784) 00:10:12.419 
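After the rescan (sw_hotplug.sh@56), each controller is steered back to uio_pci_generic through the driver_override machinery: lines 58-62 echo the driver name, the BDF twice, and finally an empty string. The exact sysfs destinations are not in the trace; a plausible reconstruction is the standard override/unbind/probe sequence:

    # Hedged reconstruction of sw_hotplug.sh@58-62; only the echoed values are traced.
    for dev in "${nvmes[@]}"; do
        # @59: force the next probe of this device to pick uio_pci_generic
        echo uio_pci_generic > "/sys/bus/pci/devices/$dev/driver_override"
        # @60-@61: detach from whatever driver claimed it on rescan, then re-probe
        # (both destinations are assumptions)
        echo "$dev" > "/sys/bus/pci/devices/$dev/driver/unbind" 2> /dev/null || true
        echo "$dev" > /sys/bus/pci/drivers_probe
        # @62: clear the override so later setup.sh runs see a clean slate
        echo '' > "/sys/bus/pci/devices/$dev/driver_override"
    done

The "Attaching to .../Attached to ..." lines that follow are the SPDK hotplug example noticing the re-probed functions and resuming I/O, which is why the per-controller counters restart from a small number.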
QEMU NVMe Ctrl (12341 ): 31631 I/Os completed (+3795) 00:10:12.419 00:10:13.364 QEMU NVMe Ctrl (12340 ): 35549 I/Os completed (+3795) 00:10:13.364 QEMU NVMe Ctrl (12341 ): 35436 I/Os completed (+3805) 00:10:13.364 00:10:14.306 QEMU NVMe Ctrl (12340 ): 39328 I/Os completed (+3779) 00:10:14.306 QEMU NVMe Ctrl (12341 ): 39242 I/Os completed (+3806) 00:10:14.306 00:10:15.690 QEMU NVMe Ctrl (12340 ): 43084 I/Os completed (+3756) 00:10:15.690 QEMU NVMe Ctrl (12341 ): 43088 I/Os completed (+3846) 00:10:15.690 00:10:16.263 QEMU NVMe Ctrl (12340 ): 47737 I/Os completed (+4653) 00:10:16.263 QEMU NVMe Ctrl (12341 ): 47612 I/Os completed (+4524) 00:10:16.263 00:10:16.520 19:00:33 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:10:16.520 19:00:33 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:10:16.520 19:00:33 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:16.520 19:00:33 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:16.520 [2024-12-05 19:00:33.891666] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:10:16.520 Controller removed: QEMU NVMe Ctrl (12340 ) 00:10:16.520 [2024-12-05 19:00:33.892709] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:16.520 [2024-12-05 19:00:33.892833] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:16.520 [2024-12-05 19:00:33.892867] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:16.520 [2024-12-05 19:00:33.892940] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:16.520 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:10:16.520 [2024-12-05 19:00:33.894230] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:16.520 [2024-12-05 19:00:33.894347] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:16.520 [2024-12-05 19:00:33.894380] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:16.520 [2024-12-05 19:00:33.894440] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:16.520 19:00:33 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:16.520 19:00:33 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:16.520 [2024-12-05 19:00:33.912222] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:10:16.520 Controller removed: QEMU NVMe Ctrl (12341 ) 00:10:16.520 [2024-12-05 19:00:33.913204] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:16.520 [2024-12-05 19:00:33.913277] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:16.520 [2024-12-05 19:00:33.913311] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:16.520 [2024-12-05 19:00:33.913340] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:16.520 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:10:16.520 [2024-12-05 19:00:33.914419] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:16.520 [2024-12-05 19:00:33.914512] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:16.520 [2024-12-05 19:00:33.914585] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:16.520 [2024-12-05 19:00:33.914615] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:16.520 19:00:33 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:10:16.520 19:00:33 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:10:16.520 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor 00:10:16.520 EAL: Scan for (pci) bus failed. 00:10:16.520 19:00:33 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:16.520 19:00:33 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:16.520 19:00:33 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:10:16.520 19:00:34 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:10:16.520 19:00:34 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:16.520 19:00:34 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:16.520 19:00:34 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:16.520 19:00:34 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:10:16.520 Attaching to 0000:00:10.0 00:10:16.520 Attached to 0000:00:10.0 00:10:16.777 19:00:34 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:10:16.777 19:00:34 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:16.777 19:00:34 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:10:16.777 Attaching to 0000:00:11.0 00:10:16.777 Attached to 0000:00:11.0 00:10:17.341 QEMU NVMe Ctrl (12340 ): 2928 I/Os completed (+2928) 00:10:17.341 QEMU NVMe Ctrl (12341 ): 3181 I/Os completed (+3181) 00:10:17.341 00:10:18.282 QEMU NVMe Ctrl (12340 ): 6645 I/Os completed (+3717) 00:10:18.282 QEMU NVMe Ctrl (12341 ): 6883 I/Os completed (+3702) 00:10:18.282 00:10:19.277 QEMU NVMe Ctrl (12340 ): 10361 I/Os completed (+3716) 00:10:19.277 QEMU NVMe Ctrl (12341 ): 10599 I/Os completed (+3716) 00:10:19.277 00:10:20.666 QEMU NVMe Ctrl (12340 ): 14137 I/Os completed (+3776) 00:10:20.666 QEMU NVMe Ctrl (12341 ): 14377 I/Os completed (+3778) 00:10:20.666 00:10:21.610 QEMU NVMe Ctrl (12340 ): 17945 I/Os completed (+3808) 00:10:21.611 QEMU NVMe Ctrl (12341 ): 18181 I/Os completed (+3804) 00:10:21.611 00:10:22.552 QEMU NVMe Ctrl (12340 ): 21801 I/Os completed (+3856) 00:10:22.552 QEMU NVMe Ctrl (12341 ): 22038 I/Os completed (+3857) 00:10:22.552 00:10:23.484 QEMU NVMe Ctrl (12340 ): 26152 I/Os completed (+4351) 00:10:23.484 QEMU NVMe Ctrl (12341 ): 26367 I/Os completed (+4329) 00:10:23.484 
00:10:24.416 QEMU NVMe Ctrl (12340 ): 30485 I/Os completed (+4333) 00:10:24.416 QEMU NVMe Ctrl (12341 ): 30696 I/Os completed (+4329) 00:10:24.416 00:10:25.348 QEMU NVMe Ctrl (12340 ): 34814 I/Os completed (+4329) 00:10:25.348 QEMU NVMe Ctrl (12341 ): 35027 I/Os completed (+4331) 00:10:25.348 00:10:26.316 QEMU NVMe Ctrl (12340 ): 39156 I/Os completed (+4342) 00:10:26.316 QEMU NVMe Ctrl (12341 ): 39369 I/Os completed (+4342) 00:10:26.316 00:10:27.689 QEMU NVMe Ctrl (12340 ): 43510 I/Os completed (+4354) 00:10:27.689 QEMU NVMe Ctrl (12341 ): 43712 I/Os completed (+4343) 00:10:27.689 00:10:28.624 QEMU NVMe Ctrl (12340 ): 47859 I/Os completed (+4349) 00:10:28.624 QEMU NVMe Ctrl (12341 ): 48059 I/Os completed (+4347) 00:10:28.624 00:10:28.624 19:00:46 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:10:28.624 19:00:46 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:10:28.624 19:00:46 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:28.624 19:00:46 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:28.624 [2024-12-05 19:00:46.137287] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:10:28.624 Controller removed: QEMU NVMe Ctrl (12340 ) 00:10:28.624 [2024-12-05 19:00:46.138156] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:28.624 [2024-12-05 19:00:46.138210] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:28.624 [2024-12-05 19:00:46.138236] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:28.624 [2024-12-05 19:00:46.138282] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:28.624 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:10:28.624 [2024-12-05 19:00:46.139302] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:28.624 [2024-12-05 19:00:46.139391] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:28.624 [2024-12-05 19:00:46.139417] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:28.624 [2024-12-05 19:00:46.139467] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:28.624 19:00:46 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:28.624 19:00:46 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:28.624 [2024-12-05 19:00:46.156596] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:10:28.624 Controller removed: QEMU NVMe Ctrl (12341 ) 00:10:28.624 [2024-12-05 19:00:46.157427] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:28.624 [2024-12-05 19:00:46.157518] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:28.624 [2024-12-05 19:00:46.157549] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:28.624 [2024-12-05 19:00:46.157610] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:28.624 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:10:28.624 [2024-12-05 19:00:46.158542] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:28.624 [2024-12-05 19:00:46.158569] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:28.624 [2024-12-05 19:00:46.158580] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:28.624 [2024-12-05 19:00:46.158590] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:28.624 19:00:46 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:10:28.624 19:00:46 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:10:28.624 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor 00:10:28.624 EAL: Scan for (pci) bus failed. 00:10:28.883 19:00:46 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:28.883 19:00:46 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:28.883 19:00:46 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:10:28.883 19:00:46 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:10:28.883 19:00:46 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:28.883 19:00:46 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:28.883 19:00:46 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:28.883 19:00:46 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:10:28.883 Attaching to 0000:00:10.0 00:10:28.883 Attached to 0000:00:10.0 00:10:28.883 19:00:46 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:10:28.883 19:00:46 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:28.883 19:00:46 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:10:28.883 Attaching to 0000:00:11.0 00:10:28.883 Attached to 0000:00:11.0 00:10:28.883 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:10:28.883 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:10:28.883 [2024-12-05 19:00:46.415966] rpc.c: 409:spdk_rpc_close: *WARNING*: spdk_rpc_close: deprecated feature spdk_rpc_close is deprecated to be removed in v24.09 00:10:41.105 19:00:58 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:10:41.105 19:00:58 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:10:41.105 19:00:58 sw_hotplug -- common/autotest_common.sh@719 -- # time=42.77 00:10:41.105 19:00:58 sw_hotplug -- common/autotest_common.sh@720 -- # echo 42.77 00:10:41.105 19:00:58 sw_hotplug -- common/autotest_common.sh@722 -- # return 0 00:10:41.105 19:00:58 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=42.77 00:10:41.105 19:00:58 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 42.77 2 00:10:41.105 remove_attach_helper took 42.77s to complete (handling 2 nvme drive(s)) 19:00:58 sw_hotplug -- 
nvme/sw_hotplug.sh@91 -- # sleep 6 00:10:47.712 19:01:04 sw_hotplug -- nvme/sw_hotplug.sh@93 -- # kill -0 78131 00:10:47.712 /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh: line 93: kill: (78131) - No such process 00:10:47.712 19:01:04 sw_hotplug -- nvme/sw_hotplug.sh@95 -- # wait 78131 00:10:47.712 19:01:04 sw_hotplug -- nvme/sw_hotplug.sh@102 -- # trap - SIGINT SIGTERM EXIT 00:10:47.712 19:01:04 sw_hotplug -- nvme/sw_hotplug.sh@151 -- # tgt_run_hotplug 00:10:47.712 19:01:04 sw_hotplug -- nvme/sw_hotplug.sh@107 -- # local dev 00:10:47.712 19:01:04 sw_hotplug -- nvme/sw_hotplug.sh@110 -- # spdk_tgt_pid=78685 00:10:47.712 19:01:04 sw_hotplug -- nvme/sw_hotplug.sh@112 -- # trap 'killprocess ${spdk_tgt_pid}; echo 1 > /sys/bus/pci/rescan; exit 1' SIGINT SIGTERM EXIT 00:10:47.712 19:01:04 sw_hotplug -- nvme/sw_hotplug.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:10:47.712 19:01:04 sw_hotplug -- nvme/sw_hotplug.sh@113 -- # waitforlisten 78685 00:10:47.712 19:01:04 sw_hotplug -- common/autotest_common.sh@835 -- # '[' -z 78685 ']' 00:10:47.712 19:01:04 sw_hotplug -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:47.712 19:01:04 sw_hotplug -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:47.712 19:01:04 sw_hotplug -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:47.712 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:47.712 19:01:04 sw_hotplug -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:47.712 19:01:04 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:47.712 [2024-12-05 19:01:04.495167] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 22.11.4 initialization... 
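The preceding "kill -0 78131" is only a liveness probe: signal 0 delivers nothing and just reports whether the PID still exists, hence the "No such process" once the hotplug example has exited, after which "wait" reaps its status. The 42.77s figure itself comes from the timing_cmd wrapper whose locals were traced earlier (autotest_common.sh@709-722): it runs remove_attach_helper under bash's time keyword with TIMEFORMAT=%2R so only the wall-clock seconds are emitted, and the caller captures that as helper_time. A sketch of the idea; the file-descriptor plumbing is an assumption, only the traced statements (cmd_es, the -t 0 test, TIMEFORMAT, the echo and return) are certain:

    exec 3>&1 4>&2    # save the real stdout/stderr once, outside any capture
    timing_cmd() {
        local cmd_es=0                    # @709
        [[ -t 0 ]] || exec < /dev/null    # @711: detach stdin when not a tty (assumed form)
        local time=0 TIMEFORMAT=%2R       # @713: report only real time, two decimals
        # Route the command's own output past the capture so just the timing is kept.
        time=$({ time "$@" 1>&3 2>&4; } 2>&1) || cmd_es=$?
        echo "$time"                      # @720: e.g. "42.77"
        return "$cmd_es"                  # @722
    }
    helper_time=$(timing_cmd remove_attach_helper 3 6 false)   # captured as in the log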
00:10:47.712 [2024-12-05 19:01:04.495477] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78685 ] 00:10:47.712 [2024-12-05 19:01:04.638673] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:47.712 [2024-12-05 19:01:04.657607] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:48.003 19:01:05 sw_hotplug -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:48.003 19:01:05 sw_hotplug -- common/autotest_common.sh@868 -- # return 0 00:10:48.003 19:01:05 sw_hotplug -- nvme/sw_hotplug.sh@115 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:10:48.003 19:01:05 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.003 19:01:05 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:48.003 19:01:05 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.003 19:01:05 sw_hotplug -- nvme/sw_hotplug.sh@117 -- # debug_remove_attach_helper 3 6 true 00:10:48.003 19:01:05 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:10:48.003 19:01:05 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true 00:10:48.003 19:01:05 sw_hotplug -- common/autotest_common.sh@709 -- # local cmd_es=0 00:10:48.003 19:01:05 sw_hotplug -- common/autotest_common.sh@711 -- # [[ -t 0 ]] 00:10:48.003 19:01:05 sw_hotplug -- common/autotest_common.sh@711 -- # exec 00:10:48.003 19:01:05 sw_hotplug -- common/autotest_common.sh@713 -- # local time=0 TIMEFORMAT=%2R 00:10:48.003 19:01:05 sw_hotplug -- common/autotest_common.sh@719 -- # remove_attach_helper 3 6 true 00:10:48.003 19:01:05 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:10:48.003 19:01:05 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:10:48.003 19:01:05 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true 00:10:48.003 19:01:05 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:10:48.003 19:01:05 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:10:54.578 19:01:11 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:10:54.578 19:01:11 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:54.578 19:01:11 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:54.578 19:01:11 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:54.578 19:01:11 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:54.578 19:01:11 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:10:54.578 19:01:11 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:10:54.579 19:01:11 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:10:54.579 19:01:11 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:10:54.579 19:01:11 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:10:54.579 19:01:11 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:54.579 19:01:11 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.579 19:01:11 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:54.579 19:01:11 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.579 19:01:11 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:10:54.579 19:01:11 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:10:54.579 [2024-12-05 19:01:11.435360] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: 
[0000:00:10.0, 0] in failed state. 00:10:54.579 [2024-12-05 19:01:11.436410] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:54.579 [2024-12-05 19:01:11.436445] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:10:54.579 [2024-12-05 19:01:11.436458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:54.579 [2024-12-05 19:01:11.436471] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:54.579 [2024-12-05 19:01:11.436480] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:10:54.579 [2024-12-05 19:01:11.436487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:54.579 [2024-12-05 19:01:11.436496] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:54.579 [2024-12-05 19:01:11.436503] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:10:54.579 [2024-12-05 19:01:11.436511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:54.579 [2024-12-05 19:01:11.436517] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:54.579 [2024-12-05 19:01:11.436524] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:10:54.579 [2024-12-05 19:01:11.436531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:54.579 19:01:11 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:10:54.579 19:01:11 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:10:54.579 19:01:11 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:10:54.579 [2024-12-05 19:01:11.935356] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
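With use_bdev=true the helper no longer just sleeps; it observes the devices through the running spdk_tgt over RPC. The bdev_bdfs helper traced at sw_hotplug.sh lines 12-13 is small enough to reconstruct directly from the trace: the /dev/fd/63 argument to jq is bash process substitution, and rpc_cmd is the autotest wrapper around scripts/rpc.py (default socket /var/tmp/spdk.sock):

    bdev_bdfs() {
        # List the PCI addresses still backing NVMe bdevs on the target (@12-13).
        jq -r '.[].driver_specific.nvme[].pci_address' \
            <(rpc_cmd bdev_get_bdevs) | sort -u
    }

The same rpc_cmd wrapper drives the bdev_nvme_set_hotplug -e call at line 115 that arms the target's hotplug monitor before this loop starts.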
00:10:54.579 [2024-12-05 19:01:11.936366] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:54.579 [2024-12-05 19:01:11.936394] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:10:54.579 [2024-12-05 19:01:11.936404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:54.579 [2024-12-05 19:01:11.936416] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:54.579 [2024-12-05 19:01:11.936423] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:10:54.579 [2024-12-05 19:01:11.936431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:54.579 [2024-12-05 19:01:11.936438] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:54.579 [2024-12-05 19:01:11.936445] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:10:54.579 [2024-12-05 19:01:11.936452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:54.579 [2024-12-05 19:01:11.936462] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:54.579 [2024-12-05 19:01:11.936469] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:10:54.579 [2024-12-05 19:01:11.936476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:54.579 19:01:11 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:10:54.579 19:01:11 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:54.579 19:01:11 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:10:54.579 19:01:11 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.579 19:01:11 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:54.579 19:01:11 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.579 19:01:11 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:10:54.579 19:01:11 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:10:54.579 19:01:12 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:54.579 19:01:12 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:54.579 19:01:12 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:10:54.579 19:01:12 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:10:54.579 19:01:12 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:54.579 19:01:12 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:54.579 19:01:12 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:54.579 19:01:12 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:10:54.838 19:01:12 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:10:54.838 19:01:12 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:54.838 19:01:12 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:11:07.038 19:01:24 sw_hotplug -- nvme/sw_hotplug.sh@68 
-- # true 00:11:07.038 19:01:24 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:11:07.038 19:01:24 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:11:07.038 19:01:24 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:07.038 19:01:24 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:07.038 19:01:24 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:07.038 19:01:24 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.038 19:01:24 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:07.038 19:01:24 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.038 19:01:24 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:07.038 19:01:24 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:07.038 19:01:24 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:07.038 19:01:24 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:07.038 19:01:24 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:07.038 19:01:24 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:07.038 19:01:24 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:11:07.038 19:01:24 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:07.038 19:01:24 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:07.038 19:01:24 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:07.038 19:01:24 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:07.038 19:01:24 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:07.038 19:01:24 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.038 19:01:24 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:07.038 19:01:24 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.038 19:01:24 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:11:07.038 19:01:24 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:07.038 [2024-12-05 19:01:24.335536] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
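On the re-attach side of a bdev-mode event the helper does not trust the sleep alone: after "sleep 12" at line 66 (twice the 6-second hotplug_wait, an inference from the traced values) it re-queries the target and compares the sorted BDF list against the expected controllers at lines 68-71, which is the [[ 0000:00:10.0 0000:00:11.0 == ... ]] test visible above. Sketched, with only the framing assumed:

    sleep $((hotplug_wait * 2))              # @66: 12s here, letting hotplug re-attach
    if "$use_bdev"; then                     # @68: evaluates "true" in bdev mode
        bdfs=($(bdev_bdfs))                  # @70
        [[ ${bdfs[*]} == "${nvmes[*]}" ]]    # @71: every controller is back as a bdev
    fi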
00:11:07.038 [2024-12-05 19:01:24.336574] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:07.038 [2024-12-05 19:01:24.336603] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:07.038 [2024-12-05 19:01:24.336616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:07.038 [2024-12-05 19:01:24.336628] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:07.038 [2024-12-05 19:01:24.336636] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:07.038 [2024-12-05 19:01:24.336643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:07.038 [2024-12-05 19:01:24.336650] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:07.038 [2024-12-05 19:01:24.336656] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:07.038 [2024-12-05 19:01:24.336664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:07.038 [2024-12-05 19:01:24.336670] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:07.038 [2024-12-05 19:01:24.336679] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:07.038 [2024-12-05 19:01:24.336685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:07.297 [2024-12-05 19:01:24.735539] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:11:07.297 [2024-12-05 19:01:24.736526] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:07.297 [2024-12-05 19:01:24.736555] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:07.297 [2024-12-05 19:01:24.736564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:07.297 [2024-12-05 19:01:24.736576] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:07.297 [2024-12-05 19:01:24.736583] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:07.297 [2024-12-05 19:01:24.736591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:07.297 [2024-12-05 19:01:24.736597] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:07.297 [2024-12-05 19:01:24.736606] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:07.297 [2024-12-05 19:01:24.736613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:07.297 [2024-12-05 19:01:24.736620] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:07.297 [2024-12-05 19:01:24.736626] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:07.297 [2024-12-05 19:01:24.736634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:07.297 19:01:24 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:11:07.297 19:01:24 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:07.297 19:01:24 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:07.297 19:01:24 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:07.297 19:01:24 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:07.297 19:01:24 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:07.297 19:01:24 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.297 19:01:24 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:07.297 19:01:24 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.555 19:01:24 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:11:07.555 19:01:24 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:11:07.555 19:01:24 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:07.555 19:01:24 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:07.555 19:01:24 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:11:07.555 19:01:25 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:11:07.555 19:01:25 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:07.555 19:01:25 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:07.555 19:01:25 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:07.555 19:01:25 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:11:07.555 19:01:25 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:11:07.813 19:01:25 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:07.813 19:01:25 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:11:20.003 19:01:37 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:11:20.003 19:01:37 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:11:20.003 19:01:37 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:11:20.003 19:01:37 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:20.003 19:01:37 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:20.003 19:01:37 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:20.003 19:01:37 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.003 19:01:37 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:20.003 19:01:37 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.003 19:01:37 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:20.003 19:01:37 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:20.003 19:01:37 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:20.003 19:01:37 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:20.003 19:01:37 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:20.003 19:01:37 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:20.003 19:01:37 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:11:20.003 19:01:37 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:20.003 19:01:37 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:20.003 19:01:37 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:20.003 19:01:37 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:20.003 19:01:37 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:20.003 19:01:37 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.003 19:01:37 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:20.003 19:01:37 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.003 19:01:37 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:11:20.003 19:01:37 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:20.004 [2024-12-05 19:01:37.235737] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
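Between removal and re-attach, bdev mode polls the target until the bdevs actually disappear: lines 50-51 re-run bdev_bdfs every half second and print "Still waiting for %s to be gone" for whatever remains. A later round in this log, where "(( 1 > 0 ))" appears, shows the loop genuinely iterating, with 0000:00:10.0 already gone and 0000:00:11.0 still pending. A sketch of the loop as reconstructed from the trace (statement order within the loop body is assumed):

    bdfs=($(bdev_bdfs))                                          # @50
    while ((${#bdfs[@]} > 0)); do                                # @50: bdevs still registered
        printf 'Still waiting for %s to be gone\n' "${bdfs[@]}"  # @51
        sleep 0.5                                                # @50
        bdfs=($(bdev_bdfs))
    done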
00:11:20.004 [2024-12-05 19:01:37.236783] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:20.004 [2024-12-05 19:01:37.236815] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:20.004 [2024-12-05 19:01:37.236829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:20.004 [2024-12-05 19:01:37.236840] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:20.004 [2024-12-05 19:01:37.236848] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:20.004 [2024-12-05 19:01:37.236856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:20.004 [2024-12-05 19:01:37.236863] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:20.004 [2024-12-05 19:01:37.236870] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:20.004 [2024-12-05 19:01:37.236878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:20.004 [2024-12-05 19:01:37.236885] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:20.004 [2024-12-05 19:01:37.236892] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:20.004 [2024-12-05 19:01:37.236899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:20.262 [2024-12-05 19:01:37.635743] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:11:20.262 [2024-12-05 19:01:37.636724] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:20.262 [2024-12-05 19:01:37.636755] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:20.262 [2024-12-05 19:01:37.636764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:20.262 [2024-12-05 19:01:37.636776] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:20.262 [2024-12-05 19:01:37.636782] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:20.262 [2024-12-05 19:01:37.636792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:20.262 [2024-12-05 19:01:37.636798] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:20.262 [2024-12-05 19:01:37.636807] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:20.262 [2024-12-05 19:01:37.636814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:20.262 [2024-12-05 19:01:37.636821] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:20.262 [2024-12-05 19:01:37.636827] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:20.262 [2024-12-05 19:01:37.636835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:20.262 19:01:37 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:11:20.262 19:01:37 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:20.262 19:01:37 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:20.262 19:01:37 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:20.262 19:01:37 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:20.262 19:01:37 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:20.262 19:01:37 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.262 19:01:37 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:20.262 19:01:37 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.262 19:01:37 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:11:20.262 19:01:37 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:11:20.520 19:01:37 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:20.520 19:01:37 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:20.520 19:01:37 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:11:20.520 19:01:37 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:11:20.520 19:01:37 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:20.520 19:01:37 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:20.520 19:01:37 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:20.520 19:01:37 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:11:20.520 19:01:37 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:11:20.520 19:01:38 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:20.520 19:01:38 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:11:32.727 19:01:50 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:11:32.727 19:01:50 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:11:32.727 19:01:50 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:11:32.727 19:01:50 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:32.727 19:01:50 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:32.727 19:01:50 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:32.727 19:01:50 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:32.727 19:01:50 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:32.727 19:01:50 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:32.727 19:01:50 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:32.727 19:01:50 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:32.727 19:01:50 sw_hotplug -- common/autotest_common.sh@719 -- # time=44.70 00:11:32.727 19:01:50 sw_hotplug -- common/autotest_common.sh@720 -- # echo 44.70 00:11:32.727 19:01:50 sw_hotplug -- common/autotest_common.sh@722 -- # return 0 00:11:32.727 19:01:50 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=44.70 00:11:32.727 19:01:50 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 44.70 2 00:11:32.727 remove_attach_helper took 44.70s to complete (handling 2 nvme drive(s)) 19:01:50 sw_hotplug -- nvme/sw_hotplug.sh@119 -- # rpc_cmd bdev_nvme_set_hotplug -d 00:11:32.727 19:01:50 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:32.727 19:01:50 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:32.727 19:01:50 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:32.727 19:01:50 sw_hotplug -- nvme/sw_hotplug.sh@120 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:11:32.727 19:01:50 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:32.727 19:01:50 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:32.727 19:01:50 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:32.727 19:01:50 sw_hotplug -- nvme/sw_hotplug.sh@122 -- # debug_remove_attach_helper 3 6 true 00:11:32.727 19:01:50 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:11:32.727 19:01:50 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true 00:11:32.727 19:01:50 sw_hotplug -- common/autotest_common.sh@709 -- # local cmd_es=0 00:11:32.727 19:01:50 sw_hotplug -- common/autotest_common.sh@711 -- # [[ -t 0 ]] 00:11:32.727 19:01:50 sw_hotplug -- common/autotest_common.sh@711 -- # exec 00:11:32.727 19:01:50 sw_hotplug -- common/autotest_common.sh@713 -- # local time=0 TIMEFORMAT=%2R 00:11:32.727 19:01:50 sw_hotplug -- common/autotest_common.sh@719 -- # remove_attach_helper 3 6 true 00:11:32.727 19:01:50 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:11:32.727 19:01:50 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:11:32.727 19:01:50 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true 00:11:32.727 19:01:50 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:11:32.727 19:01:50 sw_hotplug -- 
nvme/sw_hotplug.sh@36 -- # sleep 6 00:11:39.339 19:01:56 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:39.339 19:01:56 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:39.339 19:01:56 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:39.339 19:01:56 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:39.339 19:01:56 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:39.339 19:01:56 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:11:39.339 19:01:56 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:39.339 19:01:56 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:39.339 19:01:56 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:39.339 19:01:56 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:39.339 19:01:56 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:39.339 19:01:56 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.339 19:01:56 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:39.339 19:01:56 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.339 19:01:56 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:11:39.339 19:01:56 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:39.339 [2024-12-05 19:01:56.159509] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:11:39.339 [2024-12-05 19:01:56.160261] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:39.339 [2024-12-05 19:01:56.160285] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:39.339 [2024-12-05 19:01:56.160298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:39.339 [2024-12-05 19:01:56.160310] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:39.339 [2024-12-05 19:01:56.160318] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:39.339 [2024-12-05 19:01:56.160325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:39.339 [2024-12-05 19:01:56.160332] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:39.339 [2024-12-05 19:01:56.160339] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:39.339 [2024-12-05 19:01:56.160350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:39.339 [2024-12-05 19:01:56.160356] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:39.339 [2024-12-05 19:01:56.160364] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:39.339 [2024-12-05 19:01:56.160370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:39.339 19:01:56 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:11:39.339 19:01:56 
sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:39.339 19:01:56 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:39.339 19:01:56 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:39.339 19:01:56 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:39.339 19:01:56 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:39.339 19:01:56 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.339 19:01:56 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:39.339 19:01:56 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.339 19:01:56 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:11:39.339 19:01:56 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:39.339 [2024-12-05 19:01:56.759512] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 00:11:39.339 [2024-12-05 19:01:56.760225] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:39.339 [2024-12-05 19:01:56.760264] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:39.339 [2024-12-05 19:01:56.760274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:39.339 [2024-12-05 19:01:56.760286] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:39.339 [2024-12-05 19:01:56.760293] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:39.339 [2024-12-05 19:01:56.760301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:39.339 [2024-12-05 19:01:56.760308] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:39.339 [2024-12-05 19:01:56.760316] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:39.340 [2024-12-05 19:01:56.760323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:39.340 [2024-12-05 19:01:56.760330] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:39.340 [2024-12-05 19:01:56.760337] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:39.340 [2024-12-05 19:01:56.760347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:39.911 19:01:57 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:11:39.911 19:01:57 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:39.911 19:01:57 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:39.911 19:01:57 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:39.911 19:01:57 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:39.911 19:01:57 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:39.911 19:01:57 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.911 19:01:57 sw_hotplug -- 
common/autotest_common.sh@10 -- # set +x 00:11:39.911 19:01:57 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.911 19:01:57 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:11:39.911 19:01:57 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:11:39.911 19:01:57 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:39.911 19:01:57 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:39.911 19:01:57 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:11:39.911 19:01:57 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:11:39.911 19:01:57 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:39.911 19:01:57 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:39.911 19:01:57 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:39.911 19:01:57 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:11:39.911 19:01:57 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:11:39.911 19:01:57 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:39.911 19:01:57 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:11:52.113 19:02:09 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:11:52.113 19:02:09 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:11:52.113 19:02:09 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:11:52.113 19:02:09 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:52.113 19:02:09 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:52.113 19:02:09 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:52.113 19:02:09 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.113 19:02:09 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:52.113 19:02:09 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.113 19:02:09 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:52.113 19:02:09 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:52.113 19:02:09 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:52.113 19:02:09 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:52.113 19:02:09 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:52.113 19:02:09 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:52.113 19:02:09 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:11:52.113 19:02:09 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:52.113 19:02:09 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:52.113 19:02:09 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:52.113 19:02:09 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:52.113 19:02:09 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:52.113 19:02:09 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.113 19:02:09 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:52.113 19:02:09 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.113 19:02:09 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:11:52.113 19:02:09 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:52.113 [2024-12-05 19:02:09.559720] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
00:11:52.113 [2024-12-05 19:02:09.560476] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:52.113 [2024-12-05 19:02:09.560505] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:52.113 [2024-12-05 19:02:09.560519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:52.113 [2024-12-05 19:02:09.560531] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:52.113 [2024-12-05 19:02:09.560539] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:52.113 [2024-12-05 19:02:09.560546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:52.113 [2024-12-05 19:02:09.560555] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:52.113 [2024-12-05 19:02:09.560561] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:52.113 [2024-12-05 19:02:09.560569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:52.113 [2024-12-05 19:02:09.560575] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:52.113 [2024-12-05 19:02:09.560583] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:52.113 [2024-12-05 19:02:09.560589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:52.679 19:02:10 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:11:52.680 19:02:10 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:52.680 [2024-12-05 19:02:10.059730] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
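
The ERROR/NOTICE bursts here are the expected surprise-removal signature: nvme_ctrlr_fail marks the controller failed, then each outstanding admin command (the queued ASYNC EVENT REQUESTs, opcode 0c, cid 187-190) is aborted with status ABORTED - BY REQUEST (00/07). Two quick checks when reading a saved copy of this log (the file name is a placeholder):

# one failed-state record per controller per removal
grep -c 'in failed state' autotest.log
# four aborted AERs follow each of those records
grep -c 'ABORTED - BY REQUEST (00/07)' autotest.log
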
00:11:52.680 19:02:10 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:52.680 [2024-12-05 19:02:10.060478] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:52.680 [2024-12-05 19:02:10.060510] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:52.680 [2024-12-05 19:02:10.060520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:52.680 [2024-12-05 19:02:10.060532] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:52.680 [2024-12-05 19:02:10.060539] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:52.680 [2024-12-05 19:02:10.060547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:52.680 [2024-12-05 19:02:10.060554] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:52.680 [2024-12-05 19:02:10.060562] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:52.680 [2024-12-05 19:02:10.060569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:52.680 [2024-12-05 19:02:10.060576] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:52.680 [2024-12-05 19:02:10.060582] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:52.680 [2024-12-05 19:02:10.060590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:52.680 19:02:10 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:52.680 19:02:10 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:52.680 19:02:10 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:52.680 19:02:10 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.680 19:02:10 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:52.680 19:02:10 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.680 19:02:10 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:11:52.680 19:02:10 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:11:52.680 19:02:10 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:52.680 19:02:10 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:52.680 19:02:10 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:11:52.938 19:02:10 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:11:52.938 19:02:10 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:52.938 19:02:10 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:52.938 19:02:10 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:52.938 19:02:10 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:11:52.938 19:02:10 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:11:52.938 19:02:10 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:52.938 19:02:10 sw_hotplug -- nvme/sw_hotplug.sh@66 
-- # sleep 12 00:12:05.135 19:02:22 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:12:05.135 19:02:22 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:12:05.135 19:02:22 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:12:05.135 19:02:22 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:05.135 19:02:22 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:05.135 19:02:22 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:05.135 19:02:22 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.135 19:02:22 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:05.135 19:02:22 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.135 19:02:22 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:12:05.135 19:02:22 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:12:05.135 19:02:22 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:05.135 19:02:22 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:05.135 19:02:22 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:05.135 19:02:22 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:05.135 19:02:22 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:12:05.135 19:02:22 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:12:05.135 19:02:22 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:12:05.135 19:02:22 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:05.135 19:02:22 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:05.135 19:02:22 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.135 19:02:22 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:05.135 19:02:22 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:05.135 19:02:22 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.135 19:02:22 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:12:05.135 19:02:22 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:12:05.135 [2024-12-05 19:02:22.459909] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
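
The span at sw_hotplug.sh@38-66 just traced is one full remove/re-add cycle. xtrace shows the echoed values but not their redirection targets, so the sysfs paths in this sketch are assumptions based on the standard Linux PCI hotplug interface; only the values and @-line numbers come from the trace:

while ((hotplug_events--)); do                                  # @38
    for dev in "${nvmes[@]}"; do                                # @39
        echo 1 > "/sys/bus/pci/devices/$dev/remove"             # @40, assumed target
    done
    # @50-51: poll bdev_bdfs until the removed controllers are gone
    echo 1 > /sys/bus/pci/rescan                                # @56, assumed target
    for dev in "${nvmes[@]}"; do                                # @58
        echo uio_pci_generic > "/sys/bus/pci/devices/$dev/driver_override"  # @59, assumed target
        echo "$dev" > /sys/bus/pci/drivers_probe                # @60-61 echo the BDF twice; second target not recoverable from the trace
        echo '' > "/sys/bus/pci/devices/$dev/driver_override"   # @62, assumed target
    done
    sleep 12                                                    # @66: give attach time to settle
done
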
00:12:05.135 [2024-12-05 19:02:22.460664] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:05.135 [2024-12-05 19:02:22.460691] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:05.135 [2024-12-05 19:02:22.460703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:05.135 [2024-12-05 19:02:22.460715] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:05.135 [2024-12-05 19:02:22.460726] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:05.135 [2024-12-05 19:02:22.460733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:05.135 [2024-12-05 19:02:22.460741] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:05.135 [2024-12-05 19:02:22.460748] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:05.135 [2024-12-05 19:02:22.460756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:05.135 [2024-12-05 19:02:22.460762] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:05.135 [2024-12-05 19:02:22.460769] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:05.135 [2024-12-05 19:02:22.460775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:05.396 [2024-12-05 19:02:22.859916] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
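
rpc_cmd, which issues every bdev_get_bdevs call in this trace, is an autotest_common.sh helper around SPDK's JSON-RPC client. The real helper can multiplex calls over a persistent rpc.py session; a functionally equivalent one-shot sketch, assuming $rootdir points at the spdk checkout and using the DEFAULT_RPC_ADDR exported later in this log:

rpc_cmd() {
    # one client process per call instead of a persistent session; same observable output
    "$rootdir/scripts/rpc.py" -s "${DEFAULT_RPC_ADDR:-/var/tmp/spdk.sock}" "$@"
}
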
00:12:05.396 [2024-12-05 19:02:22.860641] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:05.396 [2024-12-05 19:02:22.860671] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:05.396 [2024-12-05 19:02:22.860681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:05.396 [2024-12-05 19:02:22.860692] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:05.396 [2024-12-05 19:02:22.860698] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:05.396 [2024-12-05 19:02:22.860707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:05.396 [2024-12-05 19:02:22.860714] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:05.396 [2024-12-05 19:02:22.860723] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:05.396 [2024-12-05 19:02:22.860731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:05.396 [2024-12-05 19:02:22.860738] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:05.396 [2024-12-05 19:02:22.860745] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:05.396 [2024-12-05 19:02:22.860752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:05.667 19:02:22 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:12:05.667 19:02:22 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:12:05.667 19:02:22 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:12:05.667 19:02:22 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:05.667 19:02:22 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:05.667 19:02:22 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:05.667 19:02:22 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.667 19:02:22 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:05.667 19:02:22 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.667 19:02:22 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:12:05.667 19:02:22 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:12:05.667 19:02:23 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:05.667 19:02:23 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:05.667 19:02:23 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:12:05.667 19:02:23 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:12:05.667 19:02:23 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:05.667 19:02:23 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:05.667 19:02:23 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:05.667 19:02:23 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:12:05.667 19:02:23 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:12:05.667 19:02:23 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:05.667 19:02:23 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:12:17.882 19:02:35 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:12:17.882 19:02:35 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:12:17.882 19:02:35 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:12:17.882 19:02:35 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:17.882 19:02:35 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:17.882 19:02:35 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:17.882 19:02:35 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.882 19:02:35 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:17.882 19:02:35 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.882 19:02:35 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:12:17.882 19:02:35 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:12:17.882 19:02:35 sw_hotplug -- common/autotest_common.sh@719 -- # time=45.18 00:12:17.882 19:02:35 sw_hotplug -- common/autotest_common.sh@720 -- # echo 45.18 00:12:17.882 19:02:35 sw_hotplug -- common/autotest_common.sh@722 -- # return 0 00:12:17.882 19:02:35 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=45.18 00:12:17.882 19:02:35 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 45.18 2 00:12:17.882 remove_attach_helper took 45.18s to complete (handling 2 nvme drive(s)) 19:02:35 sw_hotplug -- nvme/sw_hotplug.sh@124 -- # trap - SIGINT SIGTERM EXIT 00:12:17.882 19:02:35 sw_hotplug -- nvme/sw_hotplug.sh@125 -- # killprocess 78685 00:12:17.882 19:02:35 sw_hotplug -- common/autotest_common.sh@954 -- # '[' -z 78685 ']' 00:12:17.882 19:02:35 sw_hotplug -- common/autotest_common.sh@958 -- # kill -0 78685 00:12:17.882 19:02:35 sw_hotplug -- common/autotest_common.sh@959 -- # uname 00:12:17.882 19:02:35 sw_hotplug -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:17.882 19:02:35 sw_hotplug -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 78685 00:12:17.882 killing process with pid 78685 00:12:17.882 19:02:35 sw_hotplug -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:17.882 19:02:35 sw_hotplug -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:17.882 19:02:35 sw_hotplug -- common/autotest_common.sh@972 -- # echo 'killing process with pid 78685' 00:12:17.882 19:02:35 sw_hotplug -- common/autotest_common.sh@973 -- # kill 78685 00:12:17.882 19:02:35 sw_hotplug -- common/autotest_common.sh@978 -- # wait 78685 00:12:18.141 19:02:35 sw_hotplug -- nvme/sw_hotplug.sh@154 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:12:18.401 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:12:18.970 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:12:18.970 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:12:18.970 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:12:18.970 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:12:18.970 00:12:18.970 real 2m28.407s 00:12:18.970 user 1m48.087s 00:12:18.970 sys 0m18.759s 00:12:18.970 19:02:36 sw_hotplug -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:12:18.970 19:02:36 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:18.970 ************************************ 00:12:18.970 END TEST sw_hotplug 00:12:18.970 ************************************ 00:12:19.234 19:02:36 -- spdk/autotest.sh@243 -- # [[ 1 -eq 1 ]] 00:12:19.234 19:02:36 -- spdk/autotest.sh@244 -- # run_test nvme_xnvme /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:12:19.234 19:02:36 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:19.234 19:02:36 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:19.234 19:02:36 -- common/autotest_common.sh@10 -- # set +x 00:12:19.234 ************************************ 00:12:19.234 START TEST nvme_xnvme 00:12:19.234 ************************************ 00:12:19.234 19:02:36 nvme_xnvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:12:19.234 * Looking for test storage... 00:12:19.234 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:12:19.234 19:02:36 nvme_xnvme -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:12:19.234 19:02:36 nvme_xnvme -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:12:19.234 19:02:36 nvme_xnvme -- common/autotest_common.sh@1711 -- # lcov --version 00:12:19.234 19:02:36 nvme_xnvme -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:12:19.234 19:02:36 nvme_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:19.234 19:02:36 nvme_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:19.234 19:02:36 nvme_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:19.234 19:02:36 nvme_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:12:19.234 19:02:36 nvme_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:12:19.234 19:02:36 nvme_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:12:19.234 19:02:36 nvme_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:12:19.234 19:02:36 nvme_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:12:19.234 19:02:36 nvme_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:12:19.234 19:02:36 nvme_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:12:19.234 19:02:36 nvme_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:19.234 19:02:36 nvme_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:12:19.234 19:02:36 nvme_xnvme -- scripts/common.sh@345 -- # : 1 00:12:19.234 19:02:36 nvme_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:19.234 19:02:36 nvme_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:19.234 19:02:36 nvme_xnvme -- scripts/common.sh@365 -- # decimal 1 00:12:19.234 19:02:36 nvme_xnvme -- scripts/common.sh@353 -- # local d=1 00:12:19.234 19:02:36 nvme_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:19.234 19:02:36 nvme_xnvme -- scripts/common.sh@355 -- # echo 1 00:12:19.234 19:02:36 nvme_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:12:19.234 19:02:36 nvme_xnvme -- scripts/common.sh@366 -- # decimal 2 00:12:19.234 19:02:36 nvme_xnvme -- scripts/common.sh@353 -- # local d=2 00:12:19.234 19:02:36 nvme_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:19.234 19:02:36 nvme_xnvme -- scripts/common.sh@355 -- # echo 2 00:12:19.234 19:02:36 nvme_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:12:19.234 19:02:36 nvme_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:19.234 19:02:36 nvme_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:19.234 19:02:36 nvme_xnvme -- scripts/common.sh@368 -- # return 0 00:12:19.234 19:02:36 nvme_xnvme -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:19.234 19:02:36 nvme_xnvme -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:12:19.234 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:19.234 --rc genhtml_branch_coverage=1 00:12:19.234 --rc genhtml_function_coverage=1 00:12:19.234 --rc genhtml_legend=1 00:12:19.234 --rc geninfo_all_blocks=1 00:12:19.234 --rc geninfo_unexecuted_blocks=1 00:12:19.234 00:12:19.234 ' 00:12:19.234 19:02:36 nvme_xnvme -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:12:19.234 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:19.234 --rc genhtml_branch_coverage=1 00:12:19.234 --rc genhtml_function_coverage=1 00:12:19.234 --rc genhtml_legend=1 00:12:19.234 --rc geninfo_all_blocks=1 00:12:19.234 --rc geninfo_unexecuted_blocks=1 00:12:19.234 00:12:19.234 ' 00:12:19.234 19:02:36 nvme_xnvme -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:12:19.234 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:19.234 --rc genhtml_branch_coverage=1 00:12:19.234 --rc genhtml_function_coverage=1 00:12:19.234 --rc genhtml_legend=1 00:12:19.234 --rc geninfo_all_blocks=1 00:12:19.234 --rc geninfo_unexecuted_blocks=1 00:12:19.234 00:12:19.234 ' 00:12:19.234 19:02:36 nvme_xnvme -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:12:19.234 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:19.234 --rc genhtml_branch_coverage=1 00:12:19.234 --rc genhtml_function_coverage=1 00:12:19.234 --rc genhtml_legend=1 00:12:19.234 --rc geninfo_all_blocks=1 00:12:19.234 --rc geninfo_unexecuted_blocks=1 00:12:19.234 00:12:19.234 ' 00:12:19.234 19:02:36 nvme_xnvme -- xnvme/common.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/dd/common.sh 00:12:19.234 19:02:36 nvme_xnvme -- dd/common.sh@6 -- # source /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh 00:12:19.234 19:02:36 nvme_xnvme -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:12:19.234 19:02:36 nvme_xnvme -- common/autotest_common.sh@34 -- # set -e 00:12:19.234 19:02:36 nvme_xnvme -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:12:19.234 19:02:36 nvme_xnvme -- common/autotest_common.sh@36 -- # shopt -s extglob 00:12:19.234 19:02:36 nvme_xnvme -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:12:19.234 19:02:36 nvme_xnvme -- common/autotest_common.sh@39 -- # '[' -z /home/vagrant/spdk_repo/spdk/../output ']' 00:12:19.234 19:02:36 
nvme_xnvme -- common/autotest_common.sh@44 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:12:19.234 19:02:36 nvme_xnvme -- common/autotest_common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:12:19.234 19:02:36 nvme_xnvme -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:12:19.234 19:02:36 nvme_xnvme -- common/build_config.sh@2 -- # CONFIG_ASAN=y 00:12:19.234 19:02:36 nvme_xnvme -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:12:19.234 19:02:36 nvme_xnvme -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:12:19.234 19:02:36 nvme_xnvme -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:12:19.234 19:02:36 nvme_xnvme -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:12:19.234 19:02:36 nvme_xnvme -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:12:19.234 19:02:36 nvme_xnvme -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:12:19.234 19:02:36 nvme_xnvme -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:12:19.234 19:02:36 nvme_xnvme -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:12:19.234 19:02:36 nvme_xnvme -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:12:19.234 19:02:36 nvme_xnvme -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:12:19.234 19:02:36 nvme_xnvme -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:12:19.234 19:02:36 nvme_xnvme -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:12:19.234 19:02:36 nvme_xnvme -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:12:19.234 19:02:36 nvme_xnvme -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:12:19.235 19:02:36 nvme_xnvme -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1 00:12:19.235 19:02:36 nvme_xnvme -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n 00:12:19.235 19:02:36 nvme_xnvme -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:12:19.235 19:02:36 nvme_xnvme -- common/build_config.sh@20 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:12:19.235 19:02:36 nvme_xnvme -- common/build_config.sh@21 -- # CONFIG_LTO=n 00:12:19.235 19:02:36 nvme_xnvme -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y 00:12:19.235 19:02:36 nvme_xnvme -- common/build_config.sh@23 -- # CONFIG_CET=n 00:12:19.235 19:02:36 nvme_xnvme -- common/build_config.sh@24 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:12:19.235 19:02:36 nvme_xnvme -- common/build_config.sh@25 -- # CONFIG_OCF_PATH= 00:12:19.235 19:02:36 nvme_xnvme -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y 00:12:19.235 19:02:36 nvme_xnvme -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y 00:12:19.235 19:02:36 nvme_xnvme -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y 00:12:19.235 19:02:36 nvme_xnvme -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n 00:12:19.235 19:02:36 nvme_xnvme -- common/build_config.sh@30 -- # CONFIG_UBLK=y 00:12:19.235 19:02:36 nvme_xnvme -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y 00:12:19.235 19:02:36 nvme_xnvme -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH= 00:12:19.235 19:02:36 nvme_xnvme -- common/build_config.sh@33 -- # CONFIG_OCF=n 00:12:19.235 19:02:36 nvme_xnvme -- common/build_config.sh@34 -- # CONFIG_FUSE=n 00:12:19.235 19:02:36 nvme_xnvme -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR= 00:12:19.235 19:02:36 nvme_xnvme -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB= 00:12:19.235 19:02:36 nvme_xnvme -- common/build_config.sh@37 -- # CONFIG_FUZZER=n 00:12:19.235 19:02:36 nvme_xnvme -- common/build_config.sh@38 -- # CONFIG_FSDEV=y 
00:12:19.235 19:02:36 nvme_xnvme -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/dpdk/build 00:12:19.235 19:02:36 nvme_xnvme -- common/build_config.sh@40 -- # CONFIG_CRYPTO=n 00:12:19.235 19:02:36 nvme_xnvme -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n 00:12:19.235 19:02:36 nvme_xnvme -- common/build_config.sh@42 -- # CONFIG_VHOST=y 00:12:19.235 19:02:36 nvme_xnvme -- common/build_config.sh@43 -- # CONFIG_DAOS=n 00:12:19.235 19:02:36 nvme_xnvme -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR=//home/vagrant/spdk_repo/dpdk/build/include 00:12:19.235 19:02:36 nvme_xnvme -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR= 00:12:19.235 19:02:36 nvme_xnvme -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n 00:12:19.235 19:02:36 nvme_xnvme -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:12:19.235 19:02:36 nvme_xnvme -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y 00:12:19.235 19:02:36 nvme_xnvme -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n 00:12:19.235 19:02:36 nvme_xnvme -- common/build_config.sh@50 -- # CONFIG_COVERAGE=y 00:12:19.235 19:02:36 nvme_xnvme -- common/build_config.sh@51 -- # CONFIG_RDMA=y 00:12:19.235 19:02:36 nvme_xnvme -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:12:19.235 19:02:36 nvme_xnvme -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n 00:12:19.235 19:02:36 nvme_xnvme -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:12:19.235 19:02:36 nvme_xnvme -- common/build_config.sh@55 -- # CONFIG_URING_PATH= 00:12:19.235 19:02:36 nvme_xnvme -- common/build_config.sh@56 -- # CONFIG_XNVME=y 00:12:19.235 19:02:36 nvme_xnvme -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=n 00:12:19.235 19:02:36 nvme_xnvme -- common/build_config.sh@58 -- # CONFIG_ARCH=native 00:12:19.235 19:02:36 nvme_xnvme -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y 00:12:19.235 19:02:36 nvme_xnvme -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=n 00:12:19.235 19:02:36 nvme_xnvme -- common/build_config.sh@61 -- # CONFIG_WERROR=y 00:12:19.235 19:02:36 nvme_xnvme -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n 00:12:19.235 19:02:36 nvme_xnvme -- common/build_config.sh@63 -- # CONFIG_UBSAN=y 00:12:19.235 19:02:36 nvme_xnvme -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:12:19.235 19:02:36 nvme_xnvme -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR= 00:12:19.235 19:02:36 nvme_xnvme -- common/build_config.sh@66 -- # CONFIG_GOLANG=n 00:12:19.235 19:02:36 nvme_xnvme -- common/build_config.sh@67 -- # CONFIG_ISAL=y 00:12:19.235 19:02:36 nvme_xnvme -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y 00:12:19.235 19:02:36 nvme_xnvme -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR=/home/vagrant/spdk_repo/dpdk/build/lib 00:12:19.235 19:02:36 nvme_xnvme -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs 00:12:19.235 19:02:36 nvme_xnvme -- common/build_config.sh@71 -- # CONFIG_APPS=y 00:12:19.235 19:02:36 nvme_xnvme -- common/build_config.sh@72 -- # CONFIG_SHARED=y 00:12:19.235 19:02:36 nvme_xnvme -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y 00:12:19.235 19:02:36 nvme_xnvme -- common/build_config.sh@74 -- # CONFIG_FC_PATH= 00:12:19.235 19:02:36 nvme_xnvme -- common/build_config.sh@75 -- # CONFIG_DPDK_PKG_CONFIG=n 00:12:19.235 19:02:36 nvme_xnvme -- common/build_config.sh@76 -- # CONFIG_FC=n 00:12:19.235 19:02:36 nvme_xnvme -- common/build_config.sh@77 -- # CONFIG_AVAHI=n 00:12:19.235 19:02:36 nvme_xnvme -- common/build_config.sh@78 -- 
# CONFIG_FIO_PLUGIN=y 00:12:19.235 19:02:36 nvme_xnvme -- common/build_config.sh@79 -- # CONFIG_RAID5F=n 00:12:19.235 19:02:36 nvme_xnvme -- common/build_config.sh@80 -- # CONFIG_EXAMPLES=y 00:12:19.235 19:02:36 nvme_xnvme -- common/build_config.sh@81 -- # CONFIG_TESTS=y 00:12:19.235 19:02:36 nvme_xnvme -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n 00:12:19.235 19:02:36 nvme_xnvme -- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128 00:12:19.235 19:02:36 nvme_xnvme -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n 00:12:19.235 19:02:36 nvme_xnvme -- common/build_config.sh@85 -- # CONFIG_PGO_DIR= 00:12:19.235 19:02:36 nvme_xnvme -- common/build_config.sh@86 -- # CONFIG_DEBUG=y 00:12:19.235 19:02:36 nvme_xnvme -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n 00:12:19.235 19:02:36 nvme_xnvme -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX= 00:12:19.235 19:02:36 nvme_xnvme -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y 00:12:19.235 19:02:36 nvme_xnvme -- common/build_config.sh@90 -- # CONFIG_URING=n 00:12:19.235 19:02:36 nvme_xnvme -- common/autotest_common.sh@54 -- # source /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:12:19.235 19:02:36 nvme_xnvme -- common/applications.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:12:19.235 19:02:36 nvme_xnvme -- common/applications.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common 00:12:19.235 19:02:36 nvme_xnvme -- common/applications.sh@8 -- # _root=/home/vagrant/spdk_repo/spdk/test/common 00:12:19.235 19:02:36 nvme_xnvme -- common/applications.sh@9 -- # _root=/home/vagrant/spdk_repo/spdk 00:12:19.235 19:02:36 nvme_xnvme -- common/applications.sh@10 -- # _app_dir=/home/vagrant/spdk_repo/spdk/build/bin 00:12:19.235 19:02:36 nvme_xnvme -- common/applications.sh@11 -- # _test_app_dir=/home/vagrant/spdk_repo/spdk/test/app 00:12:19.235 19:02:36 nvme_xnvme -- common/applications.sh@12 -- # _examples_dir=/home/vagrant/spdk_repo/spdk/build/examples 00:12:19.235 19:02:36 nvme_xnvme -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:12:19.235 19:02:36 nvme_xnvme -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:12:19.235 19:02:36 nvme_xnvme -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:12:19.235 19:02:36 nvme_xnvme -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:12:19.235 19:02:36 nvme_xnvme -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:12:19.235 19:02:36 nvme_xnvme -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:12:19.235 19:02:36 nvme_xnvme -- common/applications.sh@22 -- # [[ -e /home/vagrant/spdk_repo/spdk/include/spdk/config.h ]] 00:12:19.235 19:02:36 nvme_xnvme -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:12:19.235 #define SPDK_CONFIG_H 00:12:19.235 #define SPDK_CONFIG_AIO_FSDEV 1 00:12:19.235 #define SPDK_CONFIG_APPS 1 00:12:19.235 #define SPDK_CONFIG_ARCH native 00:12:19.235 #define SPDK_CONFIG_ASAN 1 00:12:19.235 #undef SPDK_CONFIG_AVAHI 00:12:19.235 #undef SPDK_CONFIG_CET 00:12:19.235 #define SPDK_CONFIG_COPY_FILE_RANGE 1 00:12:19.235 #define SPDK_CONFIG_COVERAGE 1 00:12:19.235 #define SPDK_CONFIG_CROSS_PREFIX 00:12:19.235 #undef SPDK_CONFIG_CRYPTO 00:12:19.235 #undef SPDK_CONFIG_CRYPTO_MLX5 00:12:19.235 #undef SPDK_CONFIG_CUSTOMOCF 00:12:19.235 #undef SPDK_CONFIG_DAOS 00:12:19.235 #define SPDK_CONFIG_DAOS_DIR 00:12:19.235 #define SPDK_CONFIG_DEBUG 1 00:12:19.235 #undef 
SPDK_CONFIG_DPDK_COMPRESSDEV 00:12:19.235 #define SPDK_CONFIG_DPDK_DIR /home/vagrant/spdk_repo/dpdk/build 00:12:19.235 #define SPDK_CONFIG_DPDK_INC_DIR //home/vagrant/spdk_repo/dpdk/build/include 00:12:19.236 #define SPDK_CONFIG_DPDK_LIB_DIR /home/vagrant/spdk_repo/dpdk/build/lib 00:12:19.236 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:12:19.236 #undef SPDK_CONFIG_DPDK_UADK 00:12:19.236 #define SPDK_CONFIG_ENV /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:12:19.236 #define SPDK_CONFIG_EXAMPLES 1 00:12:19.236 #undef SPDK_CONFIG_FC 00:12:19.236 #define SPDK_CONFIG_FC_PATH 00:12:19.236 #define SPDK_CONFIG_FIO_PLUGIN 1 00:12:19.236 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:12:19.236 #define SPDK_CONFIG_FSDEV 1 00:12:19.236 #undef SPDK_CONFIG_FUSE 00:12:19.236 #undef SPDK_CONFIG_FUZZER 00:12:19.236 #define SPDK_CONFIG_FUZZER_LIB 00:12:19.236 #undef SPDK_CONFIG_GOLANG 00:12:19.236 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:12:19.236 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:12:19.236 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:12:19.236 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:12:19.236 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:12:19.236 #undef SPDK_CONFIG_HAVE_LIBBSD 00:12:19.236 #undef SPDK_CONFIG_HAVE_LZ4 00:12:19.236 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1 00:12:19.236 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC 00:12:19.236 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:12:19.236 #define SPDK_CONFIG_IDXD 1 00:12:19.236 #define SPDK_CONFIG_IDXD_KERNEL 1 00:12:19.236 #undef SPDK_CONFIG_IPSEC_MB 00:12:19.236 #define SPDK_CONFIG_IPSEC_MB_DIR 00:12:19.236 #define SPDK_CONFIG_ISAL 1 00:12:19.236 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:12:19.236 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:12:19.236 #define SPDK_CONFIG_LIBDIR 00:12:19.236 #undef SPDK_CONFIG_LTO 00:12:19.236 #define SPDK_CONFIG_MAX_LCORES 128 00:12:19.236 #define SPDK_CONFIG_MAX_NUMA_NODES 1 00:12:19.236 #define SPDK_CONFIG_NVME_CUSE 1 00:12:19.236 #undef SPDK_CONFIG_OCF 00:12:19.236 #define SPDK_CONFIG_OCF_PATH 00:12:19.236 #define SPDK_CONFIG_OPENSSL_PATH 00:12:19.236 #undef SPDK_CONFIG_PGO_CAPTURE 00:12:19.236 #define SPDK_CONFIG_PGO_DIR 00:12:19.236 #undef SPDK_CONFIG_PGO_USE 00:12:19.236 #define SPDK_CONFIG_PREFIX /usr/local 00:12:19.236 #undef SPDK_CONFIG_RAID5F 00:12:19.236 #undef SPDK_CONFIG_RBD 00:12:19.236 #define SPDK_CONFIG_RDMA 1 00:12:19.236 #define SPDK_CONFIG_RDMA_PROV verbs 00:12:19.236 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:12:19.236 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:12:19.236 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:12:19.236 #define SPDK_CONFIG_SHARED 1 00:12:19.236 #undef SPDK_CONFIG_SMA 00:12:19.236 #define SPDK_CONFIG_TESTS 1 00:12:19.236 #undef SPDK_CONFIG_TSAN 00:12:19.236 #define SPDK_CONFIG_UBLK 1 00:12:19.236 #define SPDK_CONFIG_UBSAN 1 00:12:19.236 #undef SPDK_CONFIG_UNIT_TESTS 00:12:19.236 #undef SPDK_CONFIG_URING 00:12:19.236 #define SPDK_CONFIG_URING_PATH 00:12:19.236 #undef SPDK_CONFIG_URING_ZNS 00:12:19.236 #undef SPDK_CONFIG_USDT 00:12:19.236 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:12:19.236 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:12:19.236 #undef SPDK_CONFIG_VFIO_USER 00:12:19.236 #define SPDK_CONFIG_VFIO_USER_DIR 00:12:19.236 #define SPDK_CONFIG_VHOST 1 00:12:19.236 #define SPDK_CONFIG_VIRTIO 1 00:12:19.236 #undef SPDK_CONFIG_VTUNE 00:12:19.236 #define SPDK_CONFIG_VTUNE_DIR 00:12:19.236 #define SPDK_CONFIG_WERROR 1 00:12:19.236 #define SPDK_CONFIG_WPDK_DIR 00:12:19.236 #define SPDK_CONFIG_XNVME 1 00:12:19.236 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ 
\S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:12:19.236 19:02:36 nvme_xnvme -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:12:19.236 19:02:36 nvme_xnvme -- common/autotest_common.sh@55 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:19.236 19:02:36 nvme_xnvme -- scripts/common.sh@15 -- # shopt -s extglob 00:12:19.236 19:02:36 nvme_xnvme -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:19.236 19:02:36 nvme_xnvme -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:19.236 19:02:36 nvme_xnvme -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:19.236 19:02:36 nvme_xnvme -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:19.236 19:02:36 nvme_xnvme -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:19.236 19:02:36 nvme_xnvme -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:19.236 19:02:36 nvme_xnvme -- paths/export.sh@5 -- # export PATH 00:12:19.236 19:02:36 nvme_xnvme -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:19.236 19:02:36 nvme_xnvme -- common/autotest_common.sh@56 -- # source /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:12:19.236 19:02:36 nvme_xnvme -- pm/common@6 -- # dirname /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:12:19.236 19:02:36 nvme_xnvme -- pm/common@6 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:12:19.236 19:02:36 nvme_xnvme -- pm/common@6 -- # _pmdir=/home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:12:19.236 19:02:36 nvme_xnvme -- pm/common@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm/../../../ 00:12:19.236 19:02:36 nvme_xnvme -- pm/common@7 -- # _pmrootdir=/home/vagrant/spdk_repo/spdk 00:12:19.236 19:02:36 nvme_xnvme -- pm/common@64 -- # TEST_TAG=N/A 00:12:19.236 19:02:36 nvme_xnvme -- pm/common@65 -- # TEST_TAG_FILE=/home/vagrant/spdk_repo/spdk/.run_test_name 00:12:19.236 19:02:36 nvme_xnvme -- pm/common@67 -- # PM_OUTPUTDIR=/home/vagrant/spdk_repo/spdk/../output/power 00:12:19.236 19:02:36 nvme_xnvme -- pm/common@68 -- 
# uname -s 00:12:19.236 19:02:36 nvme_xnvme -- pm/common@68 -- # PM_OS=Linux 00:12:19.236 19:02:36 nvme_xnvme -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:12:19.236 19:02:36 nvme_xnvme -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:12:19.236 19:02:36 nvme_xnvme -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:12:19.236 19:02:36 nvme_xnvme -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:12:19.236 19:02:36 nvme_xnvme -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:12:19.236 19:02:36 nvme_xnvme -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:12:19.236 19:02:36 nvme_xnvme -- pm/common@76 -- # SUDO[0]= 00:12:19.236 19:02:36 nvme_xnvme -- pm/common@76 -- # SUDO[1]='sudo -E' 00:12:19.236 19:02:36 nvme_xnvme -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:12:19.236 19:02:36 nvme_xnvme -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:12:19.236 19:02:36 nvme_xnvme -- pm/common@81 -- # [[ Linux == Linux ]] 00:12:19.236 19:02:36 nvme_xnvme -- pm/common@81 -- # [[ QEMU != QEMU ]] 00:12:19.236 19:02:36 nvme_xnvme -- pm/common@88 -- # [[ ! -d /home/vagrant/spdk_repo/spdk/../output/power ]] 00:12:19.236 19:02:36 nvme_xnvme -- common/autotest_common.sh@58 -- # : 1 00:12:19.236 19:02:36 nvme_xnvme -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:12:19.236 19:02:36 nvme_xnvme -- common/autotest_common.sh@62 -- # : 0 00:12:19.236 19:02:36 nvme_xnvme -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:12:19.236 19:02:36 nvme_xnvme -- common/autotest_common.sh@64 -- # : 0 00:12:19.236 19:02:36 nvme_xnvme -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:12:19.236 19:02:36 nvme_xnvme -- common/autotest_common.sh@66 -- # : 1 00:12:19.236 19:02:36 nvme_xnvme -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:12:19.236 19:02:36 nvme_xnvme -- common/autotest_common.sh@68 -- # : 0 00:12:19.237 19:02:36 nvme_xnvme -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:12:19.237 19:02:36 nvme_xnvme -- common/autotest_common.sh@70 -- # : 00:12:19.237 19:02:36 nvme_xnvme -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:12:19.237 19:02:36 nvme_xnvme -- common/autotest_common.sh@72 -- # : 0 00:12:19.237 19:02:36 nvme_xnvme -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:12:19.237 19:02:36 nvme_xnvme -- common/autotest_common.sh@74 -- # : 1 00:12:19.237 19:02:36 nvme_xnvme -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:12:19.237 19:02:36 nvme_xnvme -- common/autotest_common.sh@76 -- # : 0 00:12:19.237 19:02:36 nvme_xnvme -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:12:19.237 19:02:36 nvme_xnvme -- common/autotest_common.sh@78 -- # : 0 00:12:19.237 19:02:36 nvme_xnvme -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:12:19.237 19:02:36 nvme_xnvme -- common/autotest_common.sh@80 -- # : 1 00:12:19.237 19:02:36 nvme_xnvme -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:12:19.237 19:02:36 nvme_xnvme -- common/autotest_common.sh@82 -- # : 0 00:12:19.237 19:02:36 nvme_xnvme -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:12:19.237 19:02:36 nvme_xnvme -- common/autotest_common.sh@84 -- # : 0 00:12:19.237 19:02:36 nvme_xnvme -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:12:19.237 19:02:36 nvme_xnvme -- common/autotest_common.sh@86 -- # : 0 00:12:19.237 19:02:36 nvme_xnvme -- 
common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:12:19.237 19:02:36 nvme_xnvme -- common/autotest_common.sh@88 -- # : 0 00:12:19.237 19:02:36 nvme_xnvme -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:12:19.237 19:02:36 nvme_xnvme -- common/autotest_common.sh@90 -- # : 1 00:12:19.237 19:02:36 nvme_xnvme -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:12:19.237 19:02:36 nvme_xnvme -- common/autotest_common.sh@92 -- # : 0 00:12:19.237 19:02:36 nvme_xnvme -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:12:19.237 19:02:36 nvme_xnvme -- common/autotest_common.sh@94 -- # : 0 00:12:19.237 19:02:36 nvme_xnvme -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:12:19.237 19:02:36 nvme_xnvme -- common/autotest_common.sh@96 -- # : 0 00:12:19.237 19:02:36 nvme_xnvme -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:12:19.237 19:02:36 nvme_xnvme -- common/autotest_common.sh@98 -- # : 0 00:12:19.237 19:02:36 nvme_xnvme -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:12:19.237 19:02:36 nvme_xnvme -- common/autotest_common.sh@100 -- # : 0 00:12:19.237 19:02:36 nvme_xnvme -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:12:19.237 19:02:36 nvme_xnvme -- common/autotest_common.sh@102 -- # : rdma 00:12:19.237 19:02:36 nvme_xnvme -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:12:19.237 19:02:36 nvme_xnvme -- common/autotest_common.sh@104 -- # : 0 00:12:19.237 19:02:36 nvme_xnvme -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:12:19.237 19:02:36 nvme_xnvme -- common/autotest_common.sh@106 -- # : 0 00:12:19.237 19:02:36 nvme_xnvme -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:12:19.237 19:02:36 nvme_xnvme -- common/autotest_common.sh@108 -- # : 0 00:12:19.237 19:02:36 nvme_xnvme -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:12:19.237 19:02:36 nvme_xnvme -- common/autotest_common.sh@110 -- # : 0 00:12:19.237 19:02:36 nvme_xnvme -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 00:12:19.237 19:02:36 nvme_xnvme -- common/autotest_common.sh@112 -- # : 0 00:12:19.237 19:02:36 nvme_xnvme -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT 00:12:19.237 19:02:36 nvme_xnvme -- common/autotest_common.sh@114 -- # : 0 00:12:19.237 19:02:36 nvme_xnvme -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 00:12:19.237 19:02:36 nvme_xnvme -- common/autotest_common.sh@116 -- # : 0 00:12:19.237 19:02:36 nvme_xnvme -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 00:12:19.237 19:02:36 nvme_xnvme -- common/autotest_common.sh@118 -- # : 0 00:12:19.237 19:02:36 nvme_xnvme -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL 00:12:19.237 19:02:36 nvme_xnvme -- common/autotest_common.sh@120 -- # : 0 00:12:19.237 19:02:36 nvme_xnvme -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 00:12:19.237 19:02:36 nvme_xnvme -- common/autotest_common.sh@122 -- # : 1 00:12:19.237 19:02:36 nvme_xnvme -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 00:12:19.237 19:02:36 nvme_xnvme -- common/autotest_common.sh@124 -- # : 1 00:12:19.237 19:02:36 nvme_xnvme -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 00:12:19.237 19:02:36 nvme_xnvme -- common/autotest_common.sh@126 -- # : /home/vagrant/spdk_repo/dpdk/build 00:12:19.237 19:02:36 nvme_xnvme -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 00:12:19.237 19:02:36 nvme_xnvme -- 
common/autotest_common.sh@128 -- # : 0 00:12:19.237 19:02:36 nvme_xnvme -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 00:12:19.237 19:02:36 nvme_xnvme -- common/autotest_common.sh@130 -- # : 0 00:12:19.237 19:02:36 nvme_xnvme -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 00:12:19.237 19:02:36 nvme_xnvme -- common/autotest_common.sh@132 -- # : 1 00:12:19.237 19:02:36 nvme_xnvme -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 00:12:19.237 19:02:36 nvme_xnvme -- common/autotest_common.sh@134 -- # : 0 00:12:19.237 19:02:36 nvme_xnvme -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 00:12:19.237 19:02:36 nvme_xnvme -- common/autotest_common.sh@136 -- # : 0 00:12:19.237 19:02:36 nvme_xnvme -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD 00:12:19.237 19:02:36 nvme_xnvme -- common/autotest_common.sh@138 -- # : 0 00:12:19.237 19:02:36 nvme_xnvme -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 00:12:19.237 19:02:36 nvme_xnvme -- common/autotest_common.sh@140 -- # : v22.11.4 00:12:19.237 19:02:36 nvme_xnvme -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 00:12:19.237 19:02:36 nvme_xnvme -- common/autotest_common.sh@142 -- # : true 00:12:19.237 19:02:36 nvme_xnvme -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 00:12:19.237 19:02:36 nvme_xnvme -- common/autotest_common.sh@144 -- # : 0 00:12:19.237 19:02:36 nvme_xnvme -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:12:19.237 19:02:36 nvme_xnvme -- common/autotest_common.sh@146 -- # : 0 00:12:19.237 19:02:36 nvme_xnvme -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:12:19.237 19:02:36 nvme_xnvme -- common/autotest_common.sh@148 -- # : 0 00:12:19.237 19:02:36 nvme_xnvme -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:12:19.237 19:02:36 nvme_xnvme -- common/autotest_common.sh@150 -- # : 0 00:12:19.237 19:02:36 nvme_xnvme -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:12:19.237 19:02:36 nvme_xnvme -- common/autotest_common.sh@152 -- # : 0 00:12:19.237 19:02:36 nvme_xnvme -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:12:19.237 19:02:36 nvme_xnvme -- common/autotest_common.sh@154 -- # : 00:12:19.237 19:02:36 nvme_xnvme -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:12:19.237 19:02:36 nvme_xnvme -- common/autotest_common.sh@156 -- # : 0 00:12:19.237 19:02:36 nvme_xnvme -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:12:19.237 19:02:36 nvme_xnvme -- common/autotest_common.sh@158 -- # : 0 00:12:19.237 19:02:36 nvme_xnvme -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:12:19.237 19:02:36 nvme_xnvme -- common/autotest_common.sh@160 -- # : 1 00:12:19.237 19:02:36 nvme_xnvme -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:12:19.237 19:02:36 nvme_xnvme -- common/autotest_common.sh@162 -- # : 0 00:12:19.237 19:02:36 nvme_xnvme -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:12:19.237 19:02:36 nvme_xnvme -- common/autotest_common.sh@164 -- # : 0 00:12:19.237 19:02:36 nvme_xnvme -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:12:19.237 19:02:36 nvme_xnvme -- common/autotest_common.sh@166 -- # : 0 00:12:19.237 19:02:36 nvme_xnvme -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:12:19.237 19:02:36 nvme_xnvme -- common/autotest_common.sh@169 -- # : 00:12:19.237 19:02:36 nvme_xnvme -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 
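
Every paired ': 0' (or ': 1') and 'export SPDK_TEST_*' line in this stretch is the same idiom: keep the flag if the caller already exported it, otherwise assign the default, then export. A minimal sketch of the pattern, with values mirroring this run (NVME and XNVME on, CRYPTO off):

: "${SPDK_TEST_NVME:=1}"     # ':' only expands its arguments, so := assigns the default
export SPDK_TEST_NVME
: "${SPDK_TEST_XNVME:=1}"
export SPDK_TEST_XNVME
: "${SPDK_TEST_CRYPTO:=0}"
export SPDK_TEST_CRYPTO
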
00:12:19.237 19:02:36 nvme_xnvme -- common/autotest_common.sh@171 -- # : 0 00:12:19.237 19:02:36 nvme_xnvme -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:12:19.237 19:02:36 nvme_xnvme -- common/autotest_common.sh@173 -- # : 0 00:12:19.238 19:02:36 nvme_xnvme -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:12:19.238 19:02:36 nvme_xnvme -- common/autotest_common.sh@175 -- # : 0 00:12:19.238 19:02:36 nvme_xnvme -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP 00:12:19.238 19:02:36 nvme_xnvme -- common/autotest_common.sh@177 -- # : 0 00:12:19.238 19:02:36 nvme_xnvme -- common/autotest_common.sh@178 -- # export SPDK_TEST_NVME_INTERRUPT 00:12:19.238 19:02:36 nvme_xnvme -- common/autotest_common.sh@181 -- # export SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:12:19.238 19:02:36 nvme_xnvme -- common/autotest_common.sh@181 -- # SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:12:19.238 19:02:36 nvme_xnvme -- common/autotest_common.sh@182 -- # export DPDK_LIB_DIR=/home/vagrant/spdk_repo/dpdk/build/lib 00:12:19.238 19:02:36 nvme_xnvme -- common/autotest_common.sh@182 -- # DPDK_LIB_DIR=/home/vagrant/spdk_repo/dpdk/build/lib 00:12:19.238 19:02:36 nvme_xnvme -- common/autotest_common.sh@183 -- # export VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:12:19.238 19:02:36 nvme_xnvme -- common/autotest_common.sh@183 -- # VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:12:19.238 19:02:36 nvme_xnvme -- common/autotest_common.sh@184 -- # export LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:12:19.238 19:02:36 nvme_xnvme -- common/autotest_common.sh@184 -- # LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:12:19.238 19:02:36 nvme_xnvme -- common/autotest_common.sh@187 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:12:19.238 19:02:36 nvme_xnvme -- common/autotest_common.sh@187 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:12:19.238 19:02:36 nvme_xnvme -- common/autotest_common.sh@191 -- # export 
PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:12:19.238 19:02:36 nvme_xnvme -- common/autotest_common.sh@191 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:12:19.238 19:02:36 nvme_xnvme -- common/autotest_common.sh@195 -- # export PYTHONDONTWRITEBYTECODE=1 00:12:19.238 19:02:36 nvme_xnvme -- common/autotest_common.sh@195 -- # PYTHONDONTWRITEBYTECODE=1 00:12:19.238 19:02:36 nvme_xnvme -- common/autotest_common.sh@199 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:12:19.238 19:02:36 nvme_xnvme -- common/autotest_common.sh@199 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:12:19.238 19:02:36 nvme_xnvme -- common/autotest_common.sh@200 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:12:19.238 19:02:36 nvme_xnvme -- common/autotest_common.sh@200 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:12:19.238 19:02:36 nvme_xnvme -- common/autotest_common.sh@204 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:12:19.238 19:02:36 nvme_xnvme -- common/autotest_common.sh@205 -- # rm -rf /var/tmp/asan_suppression_file 00:12:19.238 19:02:36 nvme_xnvme -- common/autotest_common.sh@206 -- # cat 00:12:19.238 19:02:36 nvme_xnvme -- common/autotest_common.sh@242 -- # echo leak:libfuse3.so 00:12:19.238 19:02:36 nvme_xnvme -- common/autotest_common.sh@244 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:12:19.238 19:02:36 nvme_xnvme -- common/autotest_common.sh@244 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:12:19.238 19:02:36 nvme_xnvme -- common/autotest_common.sh@246 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:12:19.238 19:02:36 nvme_xnvme -- common/autotest_common.sh@246 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:12:19.238 19:02:36 nvme_xnvme -- common/autotest_common.sh@248 -- # '[' -z /var/spdk/dependencies ']' 00:12:19.238 19:02:36 nvme_xnvme -- common/autotest_common.sh@251 -- # export DEPENDENCY_DIR 00:12:19.238 19:02:36 nvme_xnvme -- common/autotest_common.sh@255 -- # export SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:12:19.238 19:02:36 nvme_xnvme -- common/autotest_common.sh@255 -- # SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:12:19.238 19:02:36 nvme_xnvme -- common/autotest_common.sh@256 -- # export SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:12:19.238 19:02:36 nvme_xnvme -- common/autotest_common.sh@256 -- # SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:12:19.238 19:02:36 nvme_xnvme -- common/autotest_common.sh@259 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:12:19.238 19:02:36 nvme_xnvme -- common/autotest_common.sh@259 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:12:19.238 19:02:36 nvme_xnvme -- common/autotest_common.sh@260 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:12:19.238 19:02:36 nvme_xnvme -- common/autotest_common.sh@260 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 
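
The sanitizer setup above collapses into a short runnable block; the option strings are verbatim from the trace, and the suppression file is rebuilt on every run with the single libfuse3 leak entry the trace shows:

export PYTHONDONTWRITEBYTECODE=1   # @195: keep the repo free of .pyc files
export ASAN_OPTIONS='new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0'      # @199
export UBSAN_OPTIONS='halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134'  # @200
asan_suppression_file=/var/tmp/asan_suppression_file        # @204
rm -rf "$asan_suppression_file"                             # @205
echo 'leak:libfuse3.so' > "$asan_suppression_file"          # @242
export LSAN_OPTIONS="suppressions=$asan_suppression_file"   # @244
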
00:12:19.238 19:02:36 nvme_xnvme -- common/autotest_common.sh@262 -- # export AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:12:19.238 19:02:36 nvme_xnvme -- common/autotest_common.sh@262 -- # AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:12:19.238 19:02:36 nvme_xnvme -- common/autotest_common.sh@265 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:12:19.238 19:02:36 nvme_xnvme -- common/autotest_common.sh@265 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:12:19.238 19:02:36 nvme_xnvme -- common/autotest_common.sh@267 -- # _LCOV_MAIN=0 00:12:19.238 19:02:36 nvme_xnvme -- common/autotest_common.sh@268 -- # _LCOV_LLVM=1 00:12:19.238 19:02:36 nvme_xnvme -- common/autotest_common.sh@269 -- # _LCOV= 00:12:19.238 19:02:36 nvme_xnvme -- common/autotest_common.sh@270 -- # [[ '' == *clang* ]] 00:12:19.238 19:02:36 nvme_xnvme -- common/autotest_common.sh@270 -- # [[ 0 -eq 1 ]] 00:12:19.238 19:02:36 nvme_xnvme -- common/autotest_common.sh@272 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /home/vagrant/spdk_repo/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:12:19.238 19:02:36 nvme_xnvme -- common/autotest_common.sh@273 -- # _lcov_opt[_LCOV_MAIN]= 00:12:19.238 19:02:36 nvme_xnvme -- common/autotest_common.sh@275 -- # lcov_opt= 00:12:19.238 19:02:36 nvme_xnvme -- common/autotest_common.sh@278 -- # '[' 0 -eq 0 ']' 00:12:19.238 19:02:36 nvme_xnvme -- common/autotest_common.sh@279 -- # export valgrind= 00:12:19.238 19:02:36 nvme_xnvme -- common/autotest_common.sh@279 -- # valgrind= 00:12:19.238 19:02:36 nvme_xnvme -- common/autotest_common.sh@285 -- # uname -s 00:12:19.238 19:02:36 nvme_xnvme -- common/autotest_common.sh@285 -- # '[' Linux = Linux ']' 00:12:19.238 19:02:36 nvme_xnvme -- common/autotest_common.sh@286 -- # HUGEMEM=4096 00:12:19.238 19:02:36 nvme_xnvme -- common/autotest_common.sh@287 -- # export CLEAR_HUGE=yes 00:12:19.238 19:02:36 nvme_xnvme -- common/autotest_common.sh@287 -- # CLEAR_HUGE=yes 00:12:19.238 19:02:36 nvme_xnvme -- common/autotest_common.sh@289 -- # MAKE=make 00:12:19.238 19:02:36 nvme_xnvme -- common/autotest_common.sh@290 -- # MAKEFLAGS=-j10 00:12:19.238 19:02:36 nvme_xnvme -- common/autotest_common.sh@306 -- # export HUGEMEM=4096 00:12:19.238 19:02:36 nvme_xnvme -- common/autotest_common.sh@306 -- # HUGEMEM=4096 00:12:19.238 19:02:36 nvme_xnvme -- common/autotest_common.sh@308 -- # NO_HUGE=() 00:12:19.238 19:02:36 nvme_xnvme -- common/autotest_common.sh@309 -- # TEST_MODE= 00:12:19.238 19:02:36 nvme_xnvme -- common/autotest_common.sh@331 -- # [[ -z 80019 ]] 00:12:19.238 19:02:36 nvme_xnvme -- common/autotest_common.sh@331 -- # kill -0 80019 00:12:19.238 19:02:36 nvme_xnvme -- common/autotest_common.sh@1696 -- # set_test_storage 2147483648 00:12:19.238 19:02:36 nvme_xnvme -- common/autotest_common.sh@341 -- # [[ -v testdir ]] 00:12:19.238 19:02:36 nvme_xnvme -- common/autotest_common.sh@343 -- # local requested_size=2147483648 00:12:19.238 19:02:36 nvme_xnvme -- common/autotest_common.sh@344 -- # local mount target_dir 00:12:19.238 19:02:36 nvme_xnvme -- common/autotest_common.sh@346 -- # local -A mounts fss sizes avails uses 00:12:19.238 19:02:36 nvme_xnvme -- common/autotest_common.sh@347 -- # local source fs size avail mount use 00:12:19.238 19:02:36 nvme_xnvme -- common/autotest_common.sh@349 -- # local storage_fallback storage_candidates 00:12:19.239 19:02:36 nvme_xnvme -- common/autotest_common.sh@351 -- # mktemp -udt spdk.XXXXXX 00:12:19.239 19:02:36 nvme_xnvme -- common/autotest_common.sh@351 -- # storage_fallback=/tmp/spdk.u3W37F 00:12:19.239 19:02:36 
nvme_xnvme -- common/autotest_common.sh@356 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:12:19.239 19:02:36 nvme_xnvme -- common/autotest_common.sh@358 -- # [[ -n '' ]] 00:12:19.239 19:02:36 nvme_xnvme -- common/autotest_common.sh@363 -- # [[ -n '' ]] 00:12:19.239 19:02:36 nvme_xnvme -- common/autotest_common.sh@368 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/nvme/xnvme /tmp/spdk.u3W37F/tests/xnvme /tmp/spdk.u3W37F 00:12:19.501 19:02:36 nvme_xnvme -- common/autotest_common.sh@371 -- # requested_size=2214592512 00:12:19.501 19:02:36 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:19.501 19:02:36 nvme_xnvme -- common/autotest_common.sh@340 -- # df -T 00:12:19.501 19:02:36 nvme_xnvme -- common/autotest_common.sh@340 -- # grep -v Filesystem 00:12:19.501 19:02:36 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda5 00:12:19.501 19:02:36 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=btrfs 00:12:19.501 19:02:36 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=13354487808 00:12:19.501 19:02:36 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=20314062848 00:12:19.501 19:02:36 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=6227881984 00:12:19.501 19:02:36 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:19.501 19:02:36 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=devtmpfs 00:12:19.501 19:02:36 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=devtmpfs 00:12:19.501 19:02:36 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=4194304 00:12:19.501 19:02:36 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=4194304 00:12:19.501 19:02:36 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=0 00:12:19.501 19:02:36 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:19.501 19:02:36 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:12:19.501 19:02:36 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:12:19.501 19:02:36 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=6261964800 00:12:19.501 19:02:36 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=6265393152 00:12:19.501 19:02:36 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=3428352 00:12:19.501 19:02:36 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:19.501 19:02:36 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:12:19.501 19:02:36 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:12:19.501 19:02:36 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=2493362176 00:12:19.501 19:02:36 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=2506158080 00:12:19.501 19:02:36 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=12795904 00:12:19.501 19:02:36 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:19.501 19:02:36 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda5 00:12:19.501 19:02:36 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=btrfs 00:12:19.501 19:02:36 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=13354487808 00:12:19.501 19:02:36 nvme_xnvme -- 
common/autotest_common.sh@375 -- # sizes["$mount"]=20314062848 00:12:19.501 19:02:36 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=6227881984 00:12:19.501 19:02:36 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:19.501 19:02:36 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:12:19.501 19:02:36 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:12:19.501 19:02:36 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=6265241600 00:12:19.501 19:02:36 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=6265393152 00:12:19.501 19:02:36 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=151552 00:12:19.501 19:02:36 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:19.501 19:02:36 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda2 00:12:19.501 19:02:36 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=ext4 00:12:19.501 19:02:36 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=840085504 00:12:19.501 19:02:36 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=1012768768 00:12:19.501 19:02:36 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=103477248 00:12:19.501 19:02:36 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:19.501 19:02:36 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda3 00:12:19.501 19:02:36 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=vfat 00:12:19.501 19:02:36 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=91617280 00:12:19.501 19:02:36 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=104607744 00:12:19.501 19:02:36 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=12990464 00:12:19.501 19:02:36 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:19.501 19:02:36 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:12:19.501 19:02:36 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:12:19.501 19:02:36 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=1253064704 00:12:19.501 19:02:36 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=1253076992 00:12:19.501 19:02:36 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=12288 00:12:19.501 19:02:36 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:19.501 19:02:36 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=:/mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt/output 00:12:19.501 19:02:36 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=fuse.sshfs 00:12:19.501 19:02:36 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=97224704000 00:12:19.501 19:02:36 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=105088212992 00:12:19.501 19:02:36 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=2478075904 00:12:19.501 19:02:36 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:19.501 19:02:36 nvme_xnvme -- common/autotest_common.sh@379 -- # printf '* Looking for test storage...\n' 00:12:19.501 * Looking for test storage... 
00:12:19.501 19:02:36 nvme_xnvme -- common/autotest_common.sh@381 -- # local target_space new_size 00:12:19.501 19:02:36 nvme_xnvme -- common/autotest_common.sh@382 -- # for target_dir in "${storage_candidates[@]}" 00:12:19.501 19:02:36 nvme_xnvme -- common/autotest_common.sh@385 -- # df /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:12:19.501 19:02:36 nvme_xnvme -- common/autotest_common.sh@385 -- # awk '$1 !~ /Filesystem/{print $6}' 00:12:19.501 19:02:36 nvme_xnvme -- common/autotest_common.sh@385 -- # mount=/home 00:12:19.501 19:02:36 nvme_xnvme -- common/autotest_common.sh@387 -- # target_space=13354487808 00:12:19.501 19:02:36 nvme_xnvme -- common/autotest_common.sh@388 -- # (( target_space == 0 || target_space < requested_size )) 00:12:19.501 19:02:36 nvme_xnvme -- common/autotest_common.sh@391 -- # (( target_space >= requested_size )) 00:12:19.501 19:02:36 nvme_xnvme -- common/autotest_common.sh@393 -- # [[ btrfs == tmpfs ]] 00:12:19.501 19:02:36 nvme_xnvme -- common/autotest_common.sh@393 -- # [[ btrfs == ramfs ]] 00:12:19.501 19:02:36 nvme_xnvme -- common/autotest_common.sh@393 -- # [[ /home == / ]] 00:12:19.501 19:02:36 nvme_xnvme -- common/autotest_common.sh@400 -- # export SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:12:19.501 19:02:36 nvme_xnvme -- common/autotest_common.sh@400 -- # SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:12:19.501 19:02:36 nvme_xnvme -- common/autotest_common.sh@401 -- # printf '* Found test storage at %s\n' /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:12:19.501 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:12:19.501 19:02:36 nvme_xnvme -- common/autotest_common.sh@402 -- # return 0 00:12:19.501 19:02:36 nvme_xnvme -- common/autotest_common.sh@1698 -- # set -o errtrace 00:12:19.501 19:02:36 nvme_xnvme -- common/autotest_common.sh@1699 -- # shopt -s extdebug 00:12:19.501 19:02:36 nvme_xnvme -- common/autotest_common.sh@1700 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:12:19.501 19:02:36 nvme_xnvme -- common/autotest_common.sh@1702 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:12:19.501 19:02:36 nvme_xnvme -- common/autotest_common.sh@1703 -- # true 00:12:19.501 19:02:36 nvme_xnvme -- common/autotest_common.sh@1705 -- # xtrace_fd 00:12:19.501 19:02:36 nvme_xnvme -- common/autotest_common.sh@25 -- # [[ -n 13 ]] 00:12:19.501 19:02:36 nvme_xnvme -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/13 ]] 00:12:19.502 19:02:36 nvme_xnvme -- common/autotest_common.sh@27 -- # exec 00:12:19.502 19:02:36 nvme_xnvme -- common/autotest_common.sh@29 -- # exec 00:12:19.502 19:02:36 nvme_xnvme -- common/autotest_common.sh@31 -- # xtrace_restore 00:12:19.502 19:02:36 nvme_xnvme -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:12:19.502 19:02:36 nvme_xnvme -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:12:19.502 19:02:36 nvme_xnvme -- common/autotest_common.sh@18 -- # set -x 00:12:19.502 19:02:36 nvme_xnvme -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:12:19.502 19:02:36 nvme_xnvme -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:12:19.502 19:02:36 nvme_xnvme -- common/autotest_common.sh@1711 -- # lcov --version 00:12:19.502 19:02:36 nvme_xnvme -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:12:19.502 19:02:36 nvme_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:19.502 19:02:36 nvme_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:19.502 19:02:36 nvme_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:19.502 19:02:36 nvme_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:12:19.502 19:02:36 nvme_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:12:19.502 19:02:36 nvme_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:12:19.502 19:02:36 nvme_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:12:19.502 19:02:36 nvme_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:12:19.502 19:02:36 nvme_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:12:19.502 19:02:36 nvme_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:12:19.502 19:02:36 nvme_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:19.502 19:02:36 nvme_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:12:19.502 19:02:36 nvme_xnvme -- scripts/common.sh@345 -- # : 1 00:12:19.502 19:02:36 nvme_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:19.502 19:02:36 nvme_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:19.502 19:02:36 nvme_xnvme -- scripts/common.sh@365 -- # decimal 1 00:12:19.502 19:02:36 nvme_xnvme -- scripts/common.sh@353 -- # local d=1 00:12:19.502 19:02:36 nvme_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:19.502 19:02:36 nvme_xnvme -- scripts/common.sh@355 -- # echo 1 00:12:19.502 19:02:36 nvme_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:12:19.502 19:02:36 nvme_xnvme -- scripts/common.sh@366 -- # decimal 2 00:12:19.502 19:02:36 nvme_xnvme -- scripts/common.sh@353 -- # local d=2 00:12:19.502 19:02:36 nvme_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:19.502 19:02:36 nvme_xnvme -- scripts/common.sh@355 -- # echo 2 00:12:19.502 19:02:36 nvme_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:12:19.502 19:02:36 nvme_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:19.502 19:02:36 nvme_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:19.502 19:02:36 nvme_xnvme -- scripts/common.sh@368 -- # return 0 00:12:19.502 19:02:36 nvme_xnvme -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:19.502 19:02:36 nvme_xnvme -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:12:19.502 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:19.502 --rc genhtml_branch_coverage=1 00:12:19.502 --rc genhtml_function_coverage=1 00:12:19.502 --rc genhtml_legend=1 00:12:19.502 --rc geninfo_all_blocks=1 00:12:19.502 --rc geninfo_unexecuted_blocks=1 00:12:19.502 00:12:19.502 ' 00:12:19.502 19:02:36 nvme_xnvme -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:12:19.502 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:19.502 --rc genhtml_branch_coverage=1 00:12:19.502 --rc genhtml_function_coverage=1 00:12:19.502 --rc genhtml_legend=1 00:12:19.502 --rc geninfo_all_blocks=1 
00:12:19.502 --rc geninfo_unexecuted_blocks=1 00:12:19.502 00:12:19.502 ' 00:12:19.502 19:02:36 nvme_xnvme -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:12:19.502 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:19.502 --rc genhtml_branch_coverage=1 00:12:19.502 --rc genhtml_function_coverage=1 00:12:19.502 --rc genhtml_legend=1 00:12:19.502 --rc geninfo_all_blocks=1 00:12:19.502 --rc geninfo_unexecuted_blocks=1 00:12:19.502 00:12:19.502 ' 00:12:19.502 19:02:36 nvme_xnvme -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:12:19.502 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:19.502 --rc genhtml_branch_coverage=1 00:12:19.502 --rc genhtml_function_coverage=1 00:12:19.502 --rc genhtml_legend=1 00:12:19.502 --rc geninfo_all_blocks=1 00:12:19.502 --rc geninfo_unexecuted_blocks=1 00:12:19.502 00:12:19.502 ' 00:12:19.502 19:02:36 nvme_xnvme -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:19.502 19:02:36 nvme_xnvme -- scripts/common.sh@15 -- # shopt -s extglob 00:12:19.502 19:02:36 nvme_xnvme -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:19.502 19:02:36 nvme_xnvme -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:19.502 19:02:36 nvme_xnvme -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:19.502 19:02:36 nvme_xnvme -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:19.502 19:02:36 nvme_xnvme -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:19.502 19:02:36 nvme_xnvme -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:19.502 19:02:36 nvme_xnvme -- paths/export.sh@5 -- # export PATH 00:12:19.502 19:02:36 nvme_xnvme -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:19.502 19:02:36 nvme_xnvme -- 
xnvme/common.sh@12 -- # xnvme_io=('libaio' 'io_uring' 'io_uring_cmd') 00:12:19.502 19:02:36 nvme_xnvme -- xnvme/common.sh@12 -- # declare -a xnvme_io 00:12:19.502 19:02:36 nvme_xnvme -- xnvme/common.sh@18 -- # libaio=('randread' 'randwrite') 00:12:19.502 19:02:36 nvme_xnvme -- xnvme/common.sh@18 -- # declare -a libaio 00:12:19.502 19:02:36 nvme_xnvme -- xnvme/common.sh@23 -- # io_uring=('randread' 'randwrite') 00:12:19.502 19:02:36 nvme_xnvme -- xnvme/common.sh@23 -- # declare -a io_uring 00:12:19.502 19:02:36 nvme_xnvme -- xnvme/common.sh@27 -- # io_uring_cmd=('randread' 'randwrite' 'unmap' 'write_zeroes') 00:12:19.502 19:02:36 nvme_xnvme -- xnvme/common.sh@27 -- # declare -a io_uring_cmd 00:12:19.502 19:02:36 nvme_xnvme -- xnvme/common.sh@33 -- # libaio_fio=('randread' 'randwrite') 00:12:19.502 19:02:36 nvme_xnvme -- xnvme/common.sh@33 -- # declare -a libaio_fio 00:12:19.502 19:02:36 nvme_xnvme -- xnvme/common.sh@37 -- # io_uring_fio=('randread' 'randwrite') 00:12:19.502 19:02:36 nvme_xnvme -- xnvme/common.sh@37 -- # declare -a io_uring_fio 00:12:19.502 19:02:36 nvme_xnvme -- xnvme/common.sh@41 -- # io_uring_cmd_fio=('randread' 'randwrite') 00:12:19.502 19:02:36 nvme_xnvme -- xnvme/common.sh@41 -- # declare -a io_uring_cmd_fio 00:12:19.502 19:02:36 nvme_xnvme -- xnvme/common.sh@45 -- # xnvme_filename=(['libaio']='/dev/nvme0n1' ['io_uring']='/dev/nvme0n1' ['io_uring_cmd']='/dev/ng0n1') 00:12:19.502 19:02:36 nvme_xnvme -- xnvme/common.sh@45 -- # declare -A xnvme_filename 00:12:19.502 19:02:36 nvme_xnvme -- xnvme/common.sh@51 -- # xnvme_conserve_cpu=('false' 'true') 00:12:19.502 19:02:36 nvme_xnvme -- xnvme/common.sh@51 -- # declare -a xnvme_conserve_cpu 00:12:19.502 19:02:36 nvme_xnvme -- xnvme/common.sh@57 -- # method_bdev_xnvme_create_0=(['name']='xnvme_bdev' ['filename']='/dev/nvme0n1' ['io_mechanism']='libaio' ['conserve_cpu']='false') 00:12:19.502 19:02:36 nvme_xnvme -- xnvme/common.sh@57 -- # declare -A method_bdev_xnvme_create_0 00:12:19.502 19:02:36 nvme_xnvme -- xnvme/common.sh@89 -- # prep_nvme 00:12:19.502 19:02:36 nvme_xnvme -- xnvme/common.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:12:19.762 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:12:20.022 Waiting for block devices as requested 00:12:20.022 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:12:20.022 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:12:20.022 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:12:20.282 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:12:25.688 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:12:25.688 19:02:42 nvme_xnvme -- xnvme/common.sh@73 -- # modprobe -r nvme 00:12:25.688 19:02:43 nvme_xnvme -- xnvme/common.sh@74 -- # nproc 00:12:25.688 19:02:43 nvme_xnvme -- xnvme/common.sh@74 -- # modprobe nvme poll_queues=10 00:12:25.950 19:02:43 nvme_xnvme -- xnvme/common.sh@77 -- # local nvme 00:12:25.950 19:02:43 nvme_xnvme -- xnvme/common.sh@78 -- # for nvme in /dev/nvme*n!(*p*) 00:12:25.950 19:02:43 nvme_xnvme -- xnvme/common.sh@79 -- # block_in_use /dev/nvme0n1 00:12:25.950 19:02:43 nvme_xnvme -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:12:25.950 19:02:43 nvme_xnvme -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:12:25.950 No valid GPT data, bailing 00:12:25.950 19:02:43 nvme_xnvme -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:12:25.950 19:02:43 nvme_xnvme -- 
scripts/common.sh@394 -- # pt= 00:12:25.950 19:02:43 nvme_xnvme -- scripts/common.sh@395 -- # return 1 00:12:25.950 19:02:43 nvme_xnvme -- xnvme/common.sh@80 -- # xnvme_filename["libaio"]=/dev/nvme0n1 00:12:25.950 19:02:43 nvme_xnvme -- xnvme/common.sh@81 -- # xnvme_filename["io_uring"]=/dev/nvme0n1 00:12:25.950 19:02:43 nvme_xnvme -- xnvme/common.sh@82 -- # xnvme_filename["io_uring_cmd"]=/dev/ng0n1 00:12:25.950 19:02:43 nvme_xnvme -- xnvme/common.sh@83 -- # return 0 00:12:25.950 19:02:43 nvme_xnvme -- xnvme/xnvme.sh@73 -- # trap 'killprocess "$spdk_tgt"' EXIT 00:12:25.950 19:02:43 nvme_xnvme -- xnvme/xnvme.sh@75 -- # for io in "${xnvme_io[@]}" 00:12:25.950 19:02:43 nvme_xnvme -- xnvme/xnvme.sh@76 -- # method_bdev_xnvme_create_0["io_mechanism"]=libaio 00:12:25.950 19:02:43 nvme_xnvme -- xnvme/xnvme.sh@77 -- # method_bdev_xnvme_create_0["filename"]=/dev/nvme0n1 00:12:25.950 19:02:43 nvme_xnvme -- xnvme/xnvme.sh@79 -- # filename=/dev/nvme0n1 00:12:25.950 19:02:43 nvme_xnvme -- xnvme/xnvme.sh@80 -- # name=xnvme_bdev 00:12:25.950 19:02:43 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:12:25.950 19:02:43 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=false 00:12:25.950 19:02:43 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=false 00:12:25.950 19:02:43 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:12:25.950 19:02:43 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:25.950 19:02:43 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:25.950 19:02:43 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:25.950 ************************************ 00:12:25.950 START TEST xnvme_rpc 00:12:25.950 ************************************ 00:12:25.950 19:02:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:12:25.950 19:02:43 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:12:25.950 19:02:43 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:12:25.950 19:02:43 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:12:25.950 19:02:43 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:12:25.950 19:02:43 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=80415 00:12:25.950 19:02:43 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 80415 00:12:25.950 19:02:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 80415 ']' 00:12:25.950 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:25.950 19:02:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:25.950 19:02:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:25.950 19:02:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:25.950 19:02:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:25.950 19:02:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:25.950 19:02:43 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:12:26.211 [2024-12-05 19:02:43.559065] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 22.11.4 initialization... 
00:12:26.211 [2024-12-05 19:02:43.559211] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80415 ] 00:12:26.211 [2024-12-05 19:02:43.706651] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:26.211 [2024-12-05 19:02:43.735460] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:27.152 19:02:44 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:27.152 19:02:44 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:12:27.152 19:02:44 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev libaio '' 00:12:27.152 19:02:44 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.152 19:02:44 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:27.152 xnvme_bdev 00:12:27.152 19:02:44 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.152 19:02:44 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:12:27.152 19:02:44 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:12:27.152 19:02:44 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.152 19:02:44 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:12:27.152 19:02:44 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:27.152 19:02:44 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.152 19:02:44 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:12:27.152 19:02:44 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:12:27.152 19:02:44 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:12:27.152 19:02:44 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:12:27.152 19:02:44 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.152 19:02:44 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:27.152 19:02:44 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.152 19:02:44 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/nvme0n1 == \/\d\e\v\/\n\v\m\e\0\n\1 ]] 00:12:27.152 19:02:44 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:12:27.152 19:02:44 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:12:27.152 19:02:44 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.152 19:02:44 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:27.152 19:02:44 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:12:27.152 19:02:44 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.152 19:02:44 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ libaio == \l\i\b\a\i\o ]] 00:12:27.152 19:02:44 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:12:27.152 19:02:44 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:12:27.152 19:02:44 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq 
-r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:12:27.152 19:02:44 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.152 19:02:44 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:27.152 19:02:44 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.152 19:02:44 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ false == \f\a\l\s\e ]] 00:12:27.152 19:02:44 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:12:27.152 19:02:44 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.152 19:02:44 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:27.152 19:02:44 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.152 19:02:44 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 80415 00:12:27.152 19:02:44 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 80415 ']' 00:12:27.152 19:02:44 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 80415 00:12:27.152 19:02:44 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:12:27.153 19:02:44 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:27.153 19:02:44 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80415 00:12:27.153 19:02:44 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:27.153 killing process with pid 80415 00:12:27.153 19:02:44 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:27.153 19:02:44 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80415' 00:12:27.153 19:02:44 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 80415 00:12:27.153 19:02:44 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 80415 00:12:27.412 00:12:27.412 real 0m1.400s 00:12:27.412 user 0m1.459s 00:12:27.412 sys 0m0.404s 00:12:27.412 19:02:44 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:27.412 ************************************ 00:12:27.412 END TEST xnvme_rpc 00:12:27.412 19:02:44 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:27.412 ************************************ 00:12:27.412 19:02:44 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:12:27.412 19:02:44 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:27.412 19:02:44 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:27.412 19:02:44 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:27.412 ************************************ 00:12:27.412 START TEST xnvme_bdevperf 00:12:27.412 ************************************ 00:12:27.412 19:02:44 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:12:27.412 19:02:44 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:12:27.412 19:02:44 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=libaio 00:12:27.412 19:02:44 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:12:27.412 19:02:44 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:12:27.413 19:02:44 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 
00:12:27.413 19:02:44 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:12:27.413 19:02:44 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:12:27.413 { 00:12:27.413 "subsystems": [ 00:12:27.413 { 00:12:27.413 "subsystem": "bdev", 00:12:27.413 "config": [ 00:12:27.413 { 00:12:27.413 "params": { 00:12:27.413 "io_mechanism": "libaio", 00:12:27.413 "conserve_cpu": false, 00:12:27.413 "filename": "/dev/nvme0n1", 00:12:27.413 "name": "xnvme_bdev" 00:12:27.413 }, 00:12:27.413 "method": "bdev_xnvme_create" 00:12:27.413 }, 00:12:27.413 { 00:12:27.413 "method": "bdev_wait_for_examine" 00:12:27.413 } 00:12:27.413 ] 00:12:27.413 } 00:12:27.413 ] 00:12:27.413 } 00:12:27.672 [2024-12-05 19:02:45.003469] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 22.11.4 initialization... 00:12:27.672 [2024-12-05 19:02:45.003600] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80467 ] 00:12:27.672 [2024-12-05 19:02:45.142683] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:27.672 [2024-12-05 19:02:45.171244] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:27.931 Running I/O for 5 seconds... 00:12:29.814 23525.00 IOPS, 91.89 MiB/s [2024-12-05T19:02:48.316Z] 23311.00 IOPS, 91.06 MiB/s [2024-12-05T19:02:49.706Z] 23618.00 IOPS, 92.26 MiB/s [2024-12-05T19:02:50.645Z] 23717.25 IOPS, 92.65 MiB/s 00:12:33.086 Latency(us) 00:12:33.086 [2024-12-05T19:02:50.645Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:33.086 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:12:33.086 xnvme_bdev : 5.01 23903.70 93.37 0.00 0.00 2671.70 494.67 9830.40 00:12:33.086 [2024-12-05T19:02:50.645Z] =================================================================================================================== 00:12:33.086 [2024-12-05T19:02:50.645Z] Total : 23903.70 93.37 0.00 0.00 2671.70 494.67 9830.40 00:12:33.086 19:02:50 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:12:33.086 19:02:50 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:12:33.086 19:02:50 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:12:33.086 19:02:50 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:12:33.086 19:02:50 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:12:33.086 { 00:12:33.086 "subsystems": [ 00:12:33.086 { 00:12:33.086 "subsystem": "bdev", 00:12:33.086 "config": [ 00:12:33.086 { 00:12:33.086 "params": { 00:12:33.086 "io_mechanism": "libaio", 00:12:33.086 "conserve_cpu": false, 00:12:33.086 "filename": "/dev/nvme0n1", 00:12:33.086 "name": "xnvme_bdev" 00:12:33.086 }, 00:12:33.086 "method": "bdev_xnvme_create" 00:12:33.086 }, 00:12:33.086 { 00:12:33.086 "method": "bdev_wait_for_examine" 00:12:33.086 } 00:12:33.086 ] 00:12:33.086 } 00:12:33.086 ] 00:12:33.086 } 00:12:33.086 [2024-12-05 19:02:50.561234] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 22.11.4 initialization... 
00:12:33.086 [2024-12-05 19:02:50.561392] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80540 ] 00:12:33.347 [2024-12-05 19:02:50.708344] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:33.347 [2024-12-05 19:02:50.737823] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:33.347 Running I/O for 5 seconds... 00:12:35.676 28815.00 IOPS, 112.56 MiB/s [2024-12-05T19:02:54.179Z] 29881.00 IOPS, 116.72 MiB/s [2024-12-05T19:02:55.124Z] 30698.67 IOPS, 119.92 MiB/s [2024-12-05T19:02:56.086Z] 31284.50 IOPS, 122.21 MiB/s [2024-12-05T19:02:56.086Z] 31013.60 IOPS, 121.15 MiB/s 00:12:38.527 Latency(us) 00:12:38.527 [2024-12-05T19:02:56.086Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:38.527 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:12:38.527 xnvme_bdev : 5.00 30997.98 121.09 0.00 0.00 2059.67 419.05 6452.78 00:12:38.527 [2024-12-05T19:02:56.086Z] =================================================================================================================== 00:12:38.527 [2024-12-05T19:02:56.086Z] Total : 30997.98 121.09 0.00 0.00 2059.67 419.05 6452.78 00:12:38.527 00:12:38.527 real 0m11.104s 00:12:38.527 user 0m2.989s 00:12:38.527 sys 0m6.721s 00:12:38.527 ************************************ 00:12:38.527 END TEST xnvme_bdevperf 00:12:38.527 ************************************ 00:12:38.527 19:02:56 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:38.527 19:02:56 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:12:38.527 19:02:56 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:12:38.527 19:02:56 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:38.527 19:02:56 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:38.527 19:02:56 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:38.790 ************************************ 00:12:38.790 START TEST xnvme_fio_plugin 00:12:38.790 ************************************ 00:12:38.790 19:02:56 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:12:38.790 19:02:56 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:12:38.790 19:02:56 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=libaio_fio 00:12:38.790 19:02:56 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:12:38.790 19:02:56 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:12:38.790 19:02:56 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:12:38.790 19:02:56 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:12:38.790 19:02:56 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # 
sanitizers=('libasan' 'libclang_rt.asan') 00:12:38.790 19:02:56 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:12:38.790 19:02:56 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:12:38.790 19:02:56 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:12:38.790 19:02:56 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:12:38.790 19:02:56 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:12:38.790 19:02:56 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:12:38.790 19:02:56 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:12:38.790 19:02:56 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:12:38.790 19:02:56 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:12:38.790 19:02:56 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:12:38.790 19:02:56 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:12:38.790 19:02:56 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:12:38.790 19:02:56 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:12:38.790 19:02:56 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:12:38.790 19:02:56 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:12:38.790 19:02:56 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:12:38.790 { 00:12:38.790 "subsystems": [ 00:12:38.790 { 00:12:38.790 "subsystem": "bdev", 00:12:38.790 "config": [ 00:12:38.790 { 00:12:38.790 "params": { 00:12:38.790 "io_mechanism": "libaio", 00:12:38.790 "conserve_cpu": false, 00:12:38.790 "filename": "/dev/nvme0n1", 00:12:38.790 "name": "xnvme_bdev" 00:12:38.790 }, 00:12:38.790 "method": "bdev_xnvme_create" 00:12:38.790 }, 00:12:38.790 { 00:12:38.790 "method": "bdev_wait_for_examine" 00:12:38.790 } 00:12:38.790 ] 00:12:38.790 } 00:12:38.790 ] 00:12:38.790 } 00:12:38.790 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:12:38.790 fio-3.35 00:12:38.790 Starting 1 thread 00:12:45.378 00:12:45.378 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=80649: Thu Dec 5 19:03:01 2024 00:12:45.378 read: IOPS=31.3k, BW=122MiB/s (128MB/s)(612MiB/5001msec) 00:12:45.378 slat (usec): min=4, max=2509, avg=22.97, stdev=99.94 00:12:45.378 clat (usec): min=111, max=9615, avg=1417.89, stdev=530.83 00:12:45.378 lat (usec): min=177, max=9621, avg=1440.86, stdev=519.58 00:12:45.378 clat percentiles (usec): 00:12:45.378 | 1.00th=[ 273], 5.00th=[ 603], 10.00th=[ 775], 20.00th=[ 963], 00:12:45.378 | 30.00th=[ 1123], 40.00th=[ 1270], 50.00th=[ 1401], 60.00th=[ 1532], 00:12:45.378 | 70.00th=[ 1663], 80.00th=[ 1827], 90.00th=[ 2057], 95.00th=[ 2311], 00:12:45.378 | 99.00th=[ 2900], 99.50th=[ 3130], 99.90th=[ 3785], 99.95th=[ 3949], 00:12:45.378 | 99.99th=[ 4490] 00:12:45.378 bw ( KiB/s): 
min=120576, max=138720, per=100.00%, avg=125829.33, stdev=5293.12, samples=9 00:12:45.378 iops : min=30144, max=34680, avg=31457.33, stdev=1323.28, samples=9 00:12:45.379 lat (usec) : 250=0.77%, 500=2.24%, 750=6.17%, 1000=13.25% 00:12:45.379 lat (msec) : 2=65.78%, 4=11.76%, 10=0.04% 00:12:45.379 cpu : usr=41.24%, sys=51.70%, ctx=14, majf=0, minf=1065 00:12:45.379 IO depths : 1=0.5%, 2=1.2%, 4=3.1%, 8=8.4%, 16=23.5%, 32=61.3%, >=64=2.0% 00:12:45.379 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:45.379 complete : 0=0.0%, 4=98.0%, 8=0.1%, 16=0.1%, 32=0.2%, 64=1.7%, >=64=0.0% 00:12:45.379 issued rwts: total=156545,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:45.379 latency : target=0, window=0, percentile=100.00%, depth=64 00:12:45.379 00:12:45.379 Run status group 0 (all jobs): 00:12:45.379 READ: bw=122MiB/s (128MB/s), 122MiB/s-122MiB/s (128MB/s-128MB/s), io=612MiB (641MB), run=5001-5001msec 00:12:45.379 ----------------------------------------------------- 00:12:45.379 Suppressions used: 00:12:45.379 count bytes template 00:12:45.379 1 11 /usr/src/fio/parse.c 00:12:45.379 1 8 libtcmalloc_minimal.so 00:12:45.379 1 904 libcrypto.so 00:12:45.379 ----------------------------------------------------- 00:12:45.379 00:12:45.379 19:03:02 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:12:45.379 19:03:02 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:12:45.379 19:03:02 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:12:45.379 19:03:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:12:45.379 19:03:02 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:12:45.379 19:03:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:12:45.379 19:03:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:12:45.379 19:03:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:12:45.379 19:03:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:12:45.379 19:03:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:12:45.379 19:03:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:12:45.379 19:03:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:12:45.379 19:03:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:12:45.379 19:03:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:12:45.379 19:03:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:12:45.379 19:03:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:12:45.379 19:03:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 
00:12:45.379 19:03:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:12:45.379 19:03:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:12:45.379 19:03:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:12:45.379 19:03:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:12:45.379 { 00:12:45.379 "subsystems": [ 00:12:45.379 { 00:12:45.379 "subsystem": "bdev", 00:12:45.379 "config": [ 00:12:45.379 { 00:12:45.379 "params": { 00:12:45.379 "io_mechanism": "libaio", 00:12:45.379 "conserve_cpu": false, 00:12:45.379 "filename": "/dev/nvme0n1", 00:12:45.379 "name": "xnvme_bdev" 00:12:45.379 }, 00:12:45.379 "method": "bdev_xnvme_create" 00:12:45.379 }, 00:12:45.379 { 00:12:45.379 "method": "bdev_wait_for_examine" 00:12:45.379 } 00:12:45.379 ] 00:12:45.379 } 00:12:45.379 ] 00:12:45.379 } 00:12:45.379 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:12:45.379 fio-3.35 00:12:45.379 Starting 1 thread 00:12:50.671 00:12:50.671 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=80730: Thu Dec 5 19:03:07 2024 00:12:50.671 write: IOPS=38.7k, BW=151MiB/s (158MB/s)(756MiB/5001msec); 0 zone resets 00:12:50.671 slat (usec): min=4, max=1632, avg=21.20, stdev=71.96 00:12:50.671 clat (usec): min=71, max=11470, avg=1073.04, stdev=560.44 00:12:50.671 lat (usec): min=163, max=11475, avg=1094.24, stdev=558.06 00:12:50.671 clat percentiles (usec): 00:12:50.671 | 1.00th=[ 223], 5.00th=[ 343], 10.00th=[ 461], 20.00th=[ 627], 00:12:50.671 | 30.00th=[ 758], 40.00th=[ 881], 50.00th=[ 996], 60.00th=[ 1106], 00:12:50.671 | 70.00th=[ 1254], 80.00th=[ 1450], 90.00th=[ 1762], 95.00th=[ 2057], 00:12:50.671 | 99.00th=[ 2737], 99.50th=[ 3032], 99.90th=[ 3752], 99.95th=[ 5997], 00:12:50.671 | 99.99th=[11338] 00:12:50.671 bw ( KiB/s): min=115880, max=170664, per=98.75%, avg=152824.67, stdev=19730.65, samples=9 00:12:50.671 iops : min=28970, max=42666, avg=38206.11, stdev=4932.65, samples=9 00:12:50.671 lat (usec) : 100=0.01%, 250=1.69%, 500=10.33%, 750=17.24%, 1000=21.29% 00:12:50.671 lat (msec) : 2=43.75%, 4=5.63%, 10=0.06%, 20=0.01% 00:12:50.671 cpu : usr=32.02%, sys=56.52%, ctx=20, majf=0, minf=1066 00:12:50.671 IO depths : 1=0.2%, 2=0.8%, 4=3.0%, 8=9.0%, 16=24.2%, 32=60.8%, >=64=2.0% 00:12:50.671 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:50.671 complete : 0=0.0%, 4=98.1%, 8=0.1%, 16=0.1%, 32=0.2%, 64=1.7%, >=64=0.0% 00:12:50.671 issued rwts: total=0,193493,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:50.671 latency : target=0, window=0, percentile=100.00%, depth=64 00:12:50.671 00:12:50.671 Run status group 0 (all jobs): 00:12:50.671 WRITE: bw=151MiB/s (158MB/s), 151MiB/s-151MiB/s (158MB/s-158MB/s), io=756MiB (793MB), run=5001-5001msec 00:12:50.671 ----------------------------------------------------- 00:12:50.671 Suppressions used: 00:12:50.671 count bytes template 00:12:50.671 1 11 /usr/src/fio/parse.c 00:12:50.671 1 8 libtcmalloc_minimal.so 00:12:50.671 1 904 libcrypto.so 00:12:50.671 ----------------------------------------------------- 00:12:50.671 00:12:50.671 ************************************ 00:12:50.671 END TEST 
xnvme_fio_plugin 00:12:50.671 ************************************ 00:12:50.671 00:12:50.671 real 0m11.927s 00:12:50.671 user 0m4.694s 00:12:50.671 sys 0m5.915s 00:12:50.671 19:03:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:50.671 19:03:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:12:50.671 19:03:08 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:12:50.671 19:03:08 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=true 00:12:50.671 19:03:08 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=true 00:12:50.671 19:03:08 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:12:50.671 19:03:08 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:50.671 19:03:08 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:50.671 19:03:08 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:50.671 ************************************ 00:12:50.671 START TEST xnvme_rpc 00:12:50.671 ************************************ 00:12:50.671 19:03:08 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:12:50.671 19:03:08 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:12:50.672 19:03:08 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:12:50.672 19:03:08 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:12:50.672 19:03:08 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:12:50.672 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:50.672 19:03:08 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=80805 00:12:50.672 19:03:08 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 80805 00:12:50.672 19:03:08 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 80805 ']' 00:12:50.672 19:03:08 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:50.672 19:03:08 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:50.672 19:03:08 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:50.672 19:03:08 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:12:50.672 19:03:08 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:50.672 19:03:08 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:50.672 [2024-12-05 19:03:08.175568] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 22.11.4 initialization... 
00:12:50.672 [2024-12-05 19:03:08.175914] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80805 ] 00:12:50.934 [2024-12-05 19:03:08.322720] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:50.934 [2024-12-05 19:03:08.351583] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:51.507 19:03:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:51.507 19:03:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:12:51.507 19:03:09 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev libaio -c 00:12:51.507 19:03:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.507 19:03:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:51.507 xnvme_bdev 00:12:51.507 19:03:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.507 19:03:09 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:12:51.507 19:03:09 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:12:51.507 19:03:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.507 19:03:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:51.507 19:03:09 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:12:51.769 19:03:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.769 19:03:09 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:12:51.769 19:03:09 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:12:51.769 19:03:09 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:12:51.769 19:03:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.769 19:03:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:51.769 19:03:09 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:12:51.769 19:03:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.769 19:03:09 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/nvme0n1 == \/\d\e\v\/\n\v\m\e\0\n\1 ]] 00:12:51.769 19:03:09 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:12:51.769 19:03:09 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:12:51.769 19:03:09 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:12:51.769 19:03:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.769 19:03:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:51.769 19:03:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.769 19:03:09 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ libaio == \l\i\b\a\i\o ]] 00:12:51.769 19:03:09 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:12:51.769 19:03:09 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:12:51.769 19:03:09 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq 
-r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:12:51.769 19:03:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.770 19:03:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:51.770 19:03:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.770 19:03:09 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ true == \t\r\u\e ]] 00:12:51.770 19:03:09 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:12:51.770 19:03:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.770 19:03:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:51.770 19:03:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.770 19:03:09 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 80805 00:12:51.770 19:03:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 80805 ']' 00:12:51.770 19:03:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 80805 00:12:51.770 19:03:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:12:51.770 19:03:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:51.770 19:03:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80805 00:12:51.770 killing process with pid 80805 00:12:51.770 19:03:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:51.770 19:03:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:51.770 19:03:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80805' 00:12:51.770 19:03:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 80805 00:12:51.770 19:03:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 80805 00:12:52.031 ************************************ 00:12:52.031 END TEST xnvme_rpc 00:12:52.031 ************************************ 00:12:52.031 00:12:52.031 real 0m1.416s 00:12:52.031 user 0m1.496s 00:12:52.031 sys 0m0.389s 00:12:52.031 19:03:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:52.031 19:03:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:52.031 19:03:09 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:12:52.031 19:03:09 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:52.031 19:03:09 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:52.031 19:03:09 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:52.031 ************************************ 00:12:52.031 START TEST xnvme_bdevperf 00:12:52.031 ************************************ 00:12:52.031 19:03:09 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:12:52.031 19:03:09 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:12:52.031 19:03:09 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=libaio 00:12:52.031 19:03:09 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:12:52.031 19:03:09 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:12:52.031 19:03:09 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 
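[editor's note] Condensed, the xnvme_rpc test that just finished is a create/verify/delete round-trip over RPC. A sketch using only the calls visible in the trace (rpc_cmd and killprocess are the harness helpers seen above, speaking to /var/tmp/spdk.sock):

# Create the bdev: filename, bdev name, io_mechanism, -c for conserve_cpu.
rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev libaio -c

# Read each parameter back out of the target's live config and compare.
rpc_cmd framework_get_config bdev \
  | jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism'
# -> libaio, matching the create call

# Tear down: drop the bdev, then kill the target.
rpc_cmd bdev_xnvme_delete xnvme_bdev
killprocess "$spdk_tgt"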
00:12:52.031 19:03:09 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:12:52.031 19:03:09 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:12:52.292 { 00:12:52.292 "subsystems": [ 00:12:52.292 { 00:12:52.292 "subsystem": "bdev", 00:12:52.292 "config": [ 00:12:52.292 { 00:12:52.292 "params": { 00:12:52.292 "io_mechanism": "libaio", 00:12:52.292 "conserve_cpu": true, 00:12:52.292 "filename": "/dev/nvme0n1", 00:12:52.292 "name": "xnvme_bdev" 00:12:52.292 }, 00:12:52.292 "method": "bdev_xnvme_create" 00:12:52.292 }, 00:12:52.292 { 00:12:52.292 "method": "bdev_wait_for_examine" 00:12:52.292 } 00:12:52.292 ] 00:12:52.292 } 00:12:52.292 ] 00:12:52.292 } 00:12:52.292 [2024-12-05 19:03:09.636119] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 22.11.4 initialization... 00:12:52.292 [2024-12-05 19:03:09.636290] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80862 ] 00:12:52.292 [2024-12-05 19:03:09.784056] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:52.292 [2024-12-05 19:03:09.813676] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:52.554 Running I/O for 5 seconds... 00:12:54.441 33735.00 IOPS, 131.78 MiB/s [2024-12-05T19:03:12.945Z] 33700.50 IOPS, 131.64 MiB/s [2024-12-05T19:03:14.334Z] 34058.67 IOPS, 133.04 MiB/s [2024-12-05T19:03:15.326Z] 33627.50 IOPS, 131.36 MiB/s 00:12:57.767 Latency(us) 00:12:57.767 [2024-12-05T19:03:15.326Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:57.767 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:12:57.767 xnvme_bdev : 5.00 34177.50 133.51 0.00 0.00 1868.22 192.98 9880.81 00:12:57.767 [2024-12-05T19:03:15.326Z] =================================================================================================================== 00:12:57.767 [2024-12-05T19:03:15.326Z] Total : 34177.50 133.51 0.00 0.00 1868.22 192.98 9880.81 00:12:57.767 19:03:15 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:12:57.767 19:03:15 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:12:57.767 19:03:15 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:12:57.767 19:03:15 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:12:57.767 19:03:15 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:12:57.767 { 00:12:57.767 "subsystems": [ 00:12:57.767 { 00:12:57.767 "subsystem": "bdev", 00:12:57.767 "config": [ 00:12:57.767 { 00:12:57.767 "params": { 00:12:57.767 "io_mechanism": "libaio", 00:12:57.767 "conserve_cpu": true, 00:12:57.767 "filename": "/dev/nvme0n1", 00:12:57.767 "name": "xnvme_bdev" 00:12:57.767 }, 00:12:57.767 "method": "bdev_xnvme_create" 00:12:57.767 }, 00:12:57.767 { 00:12:57.767 "method": "bdev_wait_for_examine" 00:12:57.767 } 00:12:57.767 ] 00:12:57.767 } 00:12:57.767 ] 00:12:57.767 } 00:12:57.767 [2024-12-05 19:03:15.208574] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 22.11.4 initialization... 
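[editor's note] Each bdevperf run above takes its bdev definition as JSON on an inherited file descriptor rather than from a config file, which is why the command line reads --json /dev/fd/62; gen_conf prints the subsystem document echoed in the log. The same pattern as a standalone sketch, with the logged config inlined via process substitution:

/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
  --json <(cat <<'EOF'
{ "subsystems": [ { "subsystem": "bdev", "config": [
  { "params": { "io_mechanism": "libaio", "conserve_cpu": true,
                "filename": "/dev/nvme0n1", "name": "xnvme_bdev" },
    "method": "bdev_xnvme_create" },
  { "method": "bdev_wait_for_examine" } ] } ] }
EOF
) -q 64 -w randread -t 5 -T xnvme_bdev -o 4096

The trailing bdev_wait_for_examine entry defers IO until bdev examination completes, so the freshly created xnvme_bdev is registered before the workload starts.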
00:12:57.767 [2024-12-05 19:03:15.208710] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80934 ] 00:12:58.042 [2024-12-05 19:03:15.355918] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:58.042 [2024-12-05 19:03:15.384365] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:58.042 Running I/O for 5 seconds... 00:13:00.402 35933.00 IOPS, 140.36 MiB/s [2024-12-05T19:03:18.533Z] 36079.50 IOPS, 140.94 MiB/s [2024-12-05T19:03:19.918Z] 35951.33 IOPS, 140.43 MiB/s [2024-12-05T19:03:20.858Z] 35404.75 IOPS, 138.30 MiB/s [2024-12-05T19:03:20.858Z] 35105.00 IOPS, 137.13 MiB/s 00:13:03.299 Latency(us) 00:13:03.299 [2024-12-05T19:03:20.858Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:03.299 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:13:03.299 xnvme_bdev : 5.01 35081.08 137.04 0.00 0.00 1819.91 285.14 9527.93 00:13:03.299 [2024-12-05T19:03:20.858Z] =================================================================================================================== 00:13:03.299 [2024-12-05T19:03:20.858Z] Total : 35081.08 137.04 0.00 0.00 1819.91 285.14 9527.93 00:13:03.299 ************************************ 00:13:03.299 END TEST xnvme_bdevperf 00:13:03.299 ************************************ 00:13:03.299 00:13:03.299 real 0m11.135s 00:13:03.299 user 0m3.338s 00:13:03.299 sys 0m6.112s 00:13:03.299 19:03:20 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:03.299 19:03:20 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:13:03.299 19:03:20 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:13:03.299 19:03:20 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:03.299 19:03:20 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:03.299 19:03:20 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:03.299 ************************************ 00:13:03.299 START TEST xnvme_fio_plugin 00:13:03.299 ************************************ 00:13:03.299 19:03:20 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:13:03.299 19:03:20 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:13:03.299 19:03:20 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=libaio_fio 00:13:03.299 19:03:20 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:13:03.299 19:03:20 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:03.299 19:03:20 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:03.299 19:03:20 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:13:03.299 19:03:20 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:13:03.299 
19:03:20 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:13:03.299 19:03:20 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:13:03.299 19:03:20 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:13:03.299 19:03:20 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:13:03.299 19:03:20 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:03.299 19:03:20 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:13:03.299 19:03:20 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:13:03.299 19:03:20 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:13:03.299 19:03:20 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:03.299 19:03:20 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:13:03.299 19:03:20 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:13:03.299 19:03:20 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:13:03.299 19:03:20 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:13:03.299 19:03:20 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:13:03.299 19:03:20 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:13:03.299 19:03:20 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:03.299 { 00:13:03.299 "subsystems": [ 00:13:03.299 { 00:13:03.299 "subsystem": "bdev", 00:13:03.299 "config": [ 00:13:03.299 { 00:13:03.299 "params": { 00:13:03.299 "io_mechanism": "libaio", 00:13:03.299 "conserve_cpu": true, 00:13:03.299 "filename": "/dev/nvme0n1", 00:13:03.299 "name": "xnvme_bdev" 00:13:03.299 }, 00:13:03.299 "method": "bdev_xnvme_create" 00:13:03.299 }, 00:13:03.299 { 00:13:03.299 "method": "bdev_wait_for_examine" 00:13:03.299 } 00:13:03.299 ] 00:13:03.299 } 00:13:03.299 ] 00:13:03.299 } 00:13:03.561 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:13:03.561 fio-3.35 00:13:03.561 Starting 1 thread 00:13:08.855 00:13:08.855 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=81041: Thu Dec 5 19:03:26 2024 00:13:08.855 read: IOPS=33.5k, BW=131MiB/s (137MB/s)(654MiB/5001msec) 00:13:08.855 slat (usec): min=4, max=1968, avg=21.67, stdev=93.58 00:13:08.855 clat (usec): min=108, max=4937, avg=1323.89, stdev=523.21 00:13:08.855 lat (usec): min=197, max=5027, avg=1345.56, stdev=515.26 00:13:08.855 clat percentiles (usec): 00:13:08.855 | 1.00th=[ 285], 5.00th=[ 529], 10.00th=[ 693], 20.00th=[ 898], 00:13:08.855 | 30.00th=[ 1057], 40.00th=[ 1188], 50.00th=[ 1303], 60.00th=[ 1418], 00:13:08.855 | 70.00th=[ 1532], 80.00th=[ 1696], 90.00th=[ 1958], 95.00th=[ 2212], 00:13:08.855 | 99.00th=[ 2900], 99.50th=[ 3228], 99.90th=[ 3818], 99.95th=[ 4047], 00:13:08.855 | 99.99th=[ 4686] 00:13:08.855 bw ( KiB/s): min=127512, 
max=141936, per=99.96%, avg=133928.89, stdev=5764.54, samples=9 00:13:08.855 iops : min=31878, max=35484, avg=33482.22, stdev=1441.14, samples=9 00:13:08.855 lat (usec) : 250=0.64%, 500=3.77%, 750=7.92%, 1000=13.79% 00:13:08.855 lat (msec) : 2=65.11%, 4=8.71%, 10=0.06% 00:13:08.855 cpu : usr=40.56%, sys=50.94%, ctx=13, majf=0, minf=1065 00:13:08.855 IO depths : 1=0.5%, 2=1.2%, 4=3.1%, 8=8.5%, 16=23.2%, 32=61.3%, >=64=2.1% 00:13:08.855 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:08.855 complete : 0=0.0%, 4=98.0%, 8=0.1%, 16=0.1%, 32=0.3%, 64=1.6%, >=64=0.0% 00:13:08.855 issued rwts: total=167505,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:08.855 latency : target=0, window=0, percentile=100.00%, depth=64 00:13:08.855 00:13:08.855 Run status group 0 (all jobs): 00:13:08.855 READ: bw=131MiB/s (137MB/s), 131MiB/s-131MiB/s (137MB/s-137MB/s), io=654MiB (686MB), run=5001-5001msec 00:13:09.427 ----------------------------------------------------- 00:13:09.427 Suppressions used: 00:13:09.427 count bytes template 00:13:09.427 1 11 /usr/src/fio/parse.c 00:13:09.427 1 8 libtcmalloc_minimal.so 00:13:09.427 1 904 libcrypto.so 00:13:09.427 ----------------------------------------------------- 00:13:09.427 00:13:09.427 19:03:26 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:13:09.427 19:03:26 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:09.427 19:03:26 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:09.427 19:03:26 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:13:09.427 19:03:26 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:13:09.427 19:03:26 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:13:09.427 19:03:26 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:13:09.427 19:03:26 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:13:09.427 19:03:26 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:13:09.427 19:03:26 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:09.427 19:03:26 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:13:09.427 19:03:26 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:13:09.427 19:03:26 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:13:09.427 19:03:26 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:13:09.427 19:03:26 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:09.427 19:03:26 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:13:09.427 19:03:26 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:13:09.427 19:03:26 
nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:13:09.427 19:03:26 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:13:09.427 19:03:26 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:13:09.427 19:03:26 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:09.427 { 00:13:09.427 "subsystems": [ 00:13:09.427 { 00:13:09.427 "subsystem": "bdev", 00:13:09.427 "config": [ 00:13:09.427 { 00:13:09.427 "params": { 00:13:09.427 "io_mechanism": "libaio", 00:13:09.427 "conserve_cpu": true, 00:13:09.427 "filename": "/dev/nvme0n1", 00:13:09.427 "name": "xnvme_bdev" 00:13:09.427 }, 00:13:09.427 "method": "bdev_xnvme_create" 00:13:09.427 }, 00:13:09.427 { 00:13:09.427 "method": "bdev_wait_for_examine" 00:13:09.427 } 00:13:09.427 ] 00:13:09.427 } 00:13:09.427 ] 00:13:09.427 } 00:13:09.688 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:13:09.688 fio-3.35 00:13:09.688 Starting 1 thread 00:13:14.977 00:13:14.977 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=81123: Thu Dec 5 19:03:32 2024 00:13:14.977 write: IOPS=34.5k, BW=135MiB/s (141MB/s)(673MiB/5001msec); 0 zone resets 00:13:14.977 slat (usec): min=4, max=1970, avg=22.03, stdev=86.30 00:13:14.977 clat (usec): min=107, max=5481, avg=1263.19, stdev=548.40 00:13:14.977 lat (usec): min=172, max=5486, avg=1285.22, stdev=542.84 00:13:14.977 clat percentiles (usec): 00:13:14.977 | 1.00th=[ 269], 5.00th=[ 449], 10.00th=[ 603], 20.00th=[ 816], 00:13:14.977 | 30.00th=[ 963], 40.00th=[ 1090], 50.00th=[ 1221], 60.00th=[ 1352], 00:13:14.977 | 70.00th=[ 1483], 80.00th=[ 1663], 90.00th=[ 1926], 95.00th=[ 2212], 00:13:14.977 | 99.00th=[ 2966], 99.50th=[ 3228], 99.90th=[ 3818], 99.95th=[ 4047], 00:13:14.977 | 99.99th=[ 4555] 00:13:14.977 bw ( KiB/s): min=115760, max=150832, per=99.18%, avg=136724.44, stdev=12720.38, samples=9 00:13:14.977 iops : min=28940, max=37708, avg=34181.11, stdev=3180.09, samples=9 00:13:14.977 lat (usec) : 250=0.77%, 500=5.64%, 750=9.84%, 1000=16.63% 00:13:14.977 lat (msec) : 2=58.62%, 4=8.45%, 10=0.07% 00:13:14.977 cpu : usr=37.66%, sys=52.60%, ctx=12, majf=0, minf=1066 00:13:14.977 IO depths : 1=0.4%, 2=1.1%, 4=3.1%, 8=9.0%, 16=24.2%, 32=60.2%, >=64=2.0% 00:13:14.977 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:14.977 complete : 0=0.0%, 4=98.0%, 8=0.1%, 16=0.1%, 32=0.2%, 64=1.6%, >=64=0.0% 00:13:14.977 issued rwts: total=0,172350,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:14.977 latency : target=0, window=0, percentile=100.00%, depth=64 00:13:14.977 00:13:14.977 Run status group 0 (all jobs): 00:13:14.977 WRITE: bw=135MiB/s (141MB/s), 135MiB/s-135MiB/s (141MB/s-141MB/s), io=673MiB (706MB), run=5001-5001msec 00:13:15.238 ----------------------------------------------------- 00:13:15.238 Suppressions used: 00:13:15.238 count bytes template 00:13:15.238 1 11 /usr/src/fio/parse.c 00:13:15.238 1 8 libtcmalloc_minimal.so 00:13:15.238 1 904 libcrypto.so 00:13:15.238 ----------------------------------------------------- 00:13:15.238 00:13:15.238 00:13:15.238 real 0m12.020s 00:13:15.238 user 0m4.990s 00:13:15.238 sys 0m5.729s 00:13:15.238 
************************************ 00:13:15.238 END TEST xnvme_fio_plugin 00:13:15.238 ************************************ 00:13:15.238 19:03:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:15.238 19:03:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:13:15.500 19:03:32 nvme_xnvme -- xnvme/xnvme.sh@75 -- # for io in "${xnvme_io[@]}" 00:13:15.500 19:03:32 nvme_xnvme -- xnvme/xnvme.sh@76 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring 00:13:15.500 19:03:32 nvme_xnvme -- xnvme/xnvme.sh@77 -- # method_bdev_xnvme_create_0["filename"]=/dev/nvme0n1 00:13:15.500 19:03:32 nvme_xnvme -- xnvme/xnvme.sh@79 -- # filename=/dev/nvme0n1 00:13:15.500 19:03:32 nvme_xnvme -- xnvme/xnvme.sh@80 -- # name=xnvme_bdev 00:13:15.500 19:03:32 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:13:15.500 19:03:32 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=false 00:13:15.500 19:03:32 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=false 00:13:15.500 19:03:32 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:13:15.500 19:03:32 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:15.500 19:03:32 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:15.500 19:03:32 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:15.500 ************************************ 00:13:15.500 START TEST xnvme_rpc 00:13:15.500 ************************************ 00:13:15.500 19:03:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:13:15.500 19:03:32 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:13:15.500 19:03:32 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:13:15.500 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:15.500 19:03:32 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:13:15.500 19:03:32 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:13:15.500 19:03:32 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=81198 00:13:15.500 19:03:32 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 81198 00:13:15.500 19:03:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 81198 ']' 00:13:15.500 19:03:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:15.500 19:03:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:15.500 19:03:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:15.500 19:03:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:15.500 19:03:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:15.500 19:03:32 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:13:15.500 [2024-12-05 19:03:32.929542] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 22.11.4 initialization... 
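[editor's note] The xnvme_fio_plugin test that just ended exercises the same bdev through stock fio via SPDK's external spdk_bdev ioengine instead of bdevperf. Stripped of the harness plumbing, the invocation logged above is:

LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' \
/usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 \
  --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 \
  --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev

Note that --filename names the bdev created by the JSON config, not a device node, and the ASan runtime rides along in LD_PRELOAD because this is a sanitized build.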
00:13:15.500 [2024-12-05 19:03:32.929667] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81198 ] 00:13:15.761 [2024-12-05 19:03:33.075855] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:15.761 [2024-12-05 19:03:33.094762] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:16.339 19:03:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:16.339 19:03:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:13:16.339 19:03:33 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev io_uring '' 00:13:16.339 19:03:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:16.339 19:03:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:16.339 xnvme_bdev 00:13:16.339 19:03:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:16.339 19:03:33 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:13:16.339 19:03:33 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:16.339 19:03:33 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:13:16.339 19:03:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:16.339 19:03:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:16.339 19:03:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:16.339 19:03:33 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:13:16.339 19:03:33 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:13:16.339 19:03:33 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:16.339 19:03:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:16.339 19:03:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:16.339 19:03:33 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:13:16.339 19:03:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:16.339 19:03:33 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/nvme0n1 == \/\d\e\v\/\n\v\m\e\0\n\1 ]] 00:13:16.339 19:03:33 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:13:16.339 19:03:33 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:13:16.339 19:03:33 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:16.339 19:03:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:16.339 19:03:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:16.339 19:03:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:16.339 19:03:33 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ io_uring == \i\o\_\u\r\i\n\g ]] 00:13:16.339 19:03:33 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:13:16.339 19:03:33 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:16.339 19:03:33 nvme_xnvme.xnvme_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:16.339 19:03:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:16.339 19:03:33 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:13:16.659 19:03:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:16.659 19:03:33 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ false == \f\a\l\s\e ]] 00:13:16.659 19:03:33 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:13:16.659 19:03:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:16.659 19:03:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:16.659 19:03:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:16.659 19:03:33 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 81198 00:13:16.659 19:03:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 81198 ']' 00:13:16.659 19:03:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 81198 00:13:16.659 19:03:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:13:16.659 19:03:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:16.659 19:03:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 81198 00:13:16.659 killing process with pid 81198 00:13:16.659 19:03:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:16.659 19:03:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:16.659 19:03:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 81198' 00:13:16.659 19:03:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 81198 00:13:16.659 19:03:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 81198 00:13:16.659 ************************************ 00:13:16.659 END TEST xnvme_rpc 00:13:16.659 ************************************ 00:13:16.659 00:13:16.659 real 0m1.333s 00:13:16.659 user 0m1.486s 00:13:16.659 sys 0m0.302s 00:13:16.659 19:03:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:16.659 19:03:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:16.920 19:03:34 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:13:16.920 19:03:34 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:16.920 19:03:34 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:16.920 19:03:34 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:16.920 ************************************ 00:13:16.920 START TEST xnvme_bdevperf 00:13:16.920 ************************************ 00:13:16.920 19:03:34 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:13:16.920 19:03:34 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:13:16.920 19:03:34 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=io_uring 00:13:16.920 19:03:34 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:13:16.920 19:03:34 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:13:16.920 19:03:34 nvme_xnvme.xnvme_bdevperf -- 
xnvme/xnvme.sh@17 -- # gen_conf 00:13:16.920 19:03:34 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:13:16.920 19:03:34 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:13:16.920 { 00:13:16.920 "subsystems": [ 00:13:16.920 { 00:13:16.920 "subsystem": "bdev", 00:13:16.920 "config": [ 00:13:16.920 { 00:13:16.920 "params": { 00:13:16.920 "io_mechanism": "io_uring", 00:13:16.920 "conserve_cpu": false, 00:13:16.920 "filename": "/dev/nvme0n1", 00:13:16.920 "name": "xnvme_bdev" 00:13:16.920 }, 00:13:16.920 "method": "bdev_xnvme_create" 00:13:16.920 }, 00:13:16.920 { 00:13:16.920 "method": "bdev_wait_for_examine" 00:13:16.920 } 00:13:16.920 ] 00:13:16.920 } 00:13:16.920 ] 00:13:16.920 } 00:13:16.920 [2024-12-05 19:03:34.310671] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 22.11.4 initialization... 00:13:16.920 [2024-12-05 19:03:34.310887] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81250 ] 00:13:16.920 [2024-12-05 19:03:34.456011] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:16.920 [2024-12-05 19:03:34.475120] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:17.182 Running I/O for 5 seconds... 00:13:19.076 38823.00 IOPS, 151.65 MiB/s [2024-12-05T19:03:37.580Z] 38041.50 IOPS, 148.60 MiB/s [2024-12-05T19:03:38.968Z] 38767.33 IOPS, 151.43 MiB/s [2024-12-05T19:03:39.914Z] 38951.50 IOPS, 152.15 MiB/s 00:13:22.355 Latency(us) 00:13:22.355 [2024-12-05T19:03:39.914Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:22.355 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:13:22.355 xnvme_bdev : 5.00 38185.18 149.16 0.00 0.00 1672.09 441.11 15728.64 00:13:22.355 [2024-12-05T19:03:39.914Z] =================================================================================================================== 00:13:22.355 [2024-12-05T19:03:39.914Z] Total : 38185.18 149.16 0.00 0.00 1672.09 441.11 15728.64 00:13:22.355 19:03:39 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:13:22.355 19:03:39 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:13:22.355 19:03:39 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:13:22.355 19:03:39 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:13:22.355 19:03:39 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:13:22.355 { 00:13:22.355 "subsystems": [ 00:13:22.355 { 00:13:22.355 "subsystem": "bdev", 00:13:22.355 "config": [ 00:13:22.355 { 00:13:22.355 "params": { 00:13:22.355 "io_mechanism": "io_uring", 00:13:22.355 "conserve_cpu": false, 00:13:22.355 "filename": "/dev/nvme0n1", 00:13:22.355 "name": "xnvme_bdev" 00:13:22.355 }, 00:13:22.355 "method": "bdev_xnvme_create" 00:13:22.355 }, 00:13:22.355 { 00:13:22.355 "method": "bdev_wait_for_examine" 00:13:22.355 } 00:13:22.355 ] 00:13:22.355 } 00:13:22.355 ] 00:13:22.355 } 00:13:22.355 [2024-12-05 19:03:39.797374] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 22.11.4 initialization... 
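[editor's note] The MiB/s column in these result tables follows directly from IOPS at the fixed 4096-byte IO size (-o 4096); checking the randread row above:

# 38185.18 IOPS * 4096 B per IO / 2^20 B per MiB
awk 'BEGIN { printf "%.2f MiB/s\n", 38185.18 * 4096 / 1048576 }'
# -> 149.16 MiB/s, matching the table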
00:13:22.355 [2024-12-05 19:03:39.797509] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81324 ] 00:13:22.616 [2024-12-05 19:03:39.944763] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:22.616 [2024-12-05 19:03:39.973599] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:22.616 Running I/O for 5 seconds... 00:13:24.944 35540.00 IOPS, 138.83 MiB/s [2024-12-05T19:03:43.444Z] 35071.00 IOPS, 137.00 MiB/s [2024-12-05T19:03:44.385Z] 35008.67 IOPS, 136.75 MiB/s [2024-12-05T19:03:45.327Z] 35023.50 IOPS, 136.81 MiB/s 00:13:27.768 Latency(us) 00:13:27.768 [2024-12-05T19:03:45.327Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:27.768 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:13:27.768 xnvme_bdev : 5.00 34729.32 135.66 0.00 0.00 1838.10 288.30 9729.58 00:13:27.768 [2024-12-05T19:03:45.327Z] =================================================================================================================== 00:13:27.768 [2024-12-05T19:03:45.327Z] Total : 34729.32 135.66 0.00 0.00 1838.10 288.30 9729.58 00:13:27.768 00:13:27.768 real 0m11.017s 00:13:27.768 user 0m4.229s 00:13:27.768 sys 0m6.519s 00:13:27.768 ************************************ 00:13:27.768 END TEST xnvme_bdevperf 00:13:27.768 ************************************ 00:13:27.768 19:03:45 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:27.768 19:03:45 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:13:27.768 19:03:45 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:13:27.768 19:03:45 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:27.768 19:03:45 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:27.768 19:03:45 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:28.029 ************************************ 00:13:28.029 START TEST xnvme_fio_plugin 00:13:28.029 ************************************ 00:13:28.029 19:03:45 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:13:28.029 19:03:45 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:13:28.029 19:03:45 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=io_uring_fio 00:13:28.029 19:03:45 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:13:28.029 19:03:45 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:28.029 19:03:45 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:28.029 19:03:45 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:13:28.029 19:03:45 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:13:28.029 19:03:45 
nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:13:28.029 19:03:45 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:13:28.029 19:03:45 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:28.029 19:03:45 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:13:28.029 19:03:45 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:13:28.029 19:03:45 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:13:28.029 19:03:45 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:13:28.029 19:03:45 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:13:28.029 19:03:45 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:28.029 19:03:45 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:13:28.029 19:03:45 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:13:28.029 19:03:45 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:13:28.029 19:03:45 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:13:28.029 19:03:45 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:13:28.029 19:03:45 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:13:28.029 19:03:45 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:28.029 { 00:13:28.029 "subsystems": [ 00:13:28.029 { 00:13:28.029 "subsystem": "bdev", 00:13:28.029 "config": [ 00:13:28.029 { 00:13:28.029 "params": { 00:13:28.029 "io_mechanism": "io_uring", 00:13:28.029 "conserve_cpu": false, 00:13:28.029 "filename": "/dev/nvme0n1", 00:13:28.029 "name": "xnvme_bdev" 00:13:28.029 }, 00:13:28.029 "method": "bdev_xnvme_create" 00:13:28.029 }, 00:13:28.029 { 00:13:28.029 "method": "bdev_wait_for_examine" 00:13:28.029 } 00:13:28.029 ] 00:13:28.029 } 00:13:28.030 ] 00:13:28.030 } 00:13:28.030 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:13:28.030 fio-3.35 00:13:28.030 Starting 1 thread 00:13:34.623 00:13:34.623 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=81428: Thu Dec 5 19:03:50 2024 00:13:34.623 read: IOPS=34.3k, BW=134MiB/s (140MB/s)(670MiB/5001msec) 00:13:34.623 slat (nsec): min=2910, max=72418, avg=3890.31, stdev=2006.34 00:13:34.623 clat (usec): min=968, max=3536, avg=1709.35, stdev=302.29 00:13:34.623 lat (usec): min=972, max=3564, avg=1713.24, stdev=302.52 00:13:34.623 clat percentiles (usec): 00:13:34.623 | 1.00th=[ 1188], 5.00th=[ 1303], 10.00th=[ 1369], 20.00th=[ 1450], 00:13:34.623 | 30.00th=[ 1516], 40.00th=[ 1582], 50.00th=[ 1663], 60.00th=[ 1762], 00:13:34.623 | 70.00th=[ 1844], 80.00th=[ 1958], 90.00th=[ 2114], 95.00th=[ 2245], 00:13:34.623 | 99.00th=[ 2540], 99.50th=[ 2671], 99.90th=[ 2868], 99.95th=[ 3064], 00:13:34.623 | 99.99th=[ 3392] 00:13:34.623 bw ( KiB/s): min=128512, max=148992, per=100.00%, avg=137841.78, 
stdev=6290.40, samples=9 00:13:34.623 iops : min=32128, max=37248, avg=34460.44, stdev=1572.60, samples=9 00:13:34.623 lat (usec) : 1000=0.01% 00:13:34.623 lat (msec) : 2=82.74%, 4=17.25% 00:13:34.623 cpu : usr=31.22%, sys=67.40%, ctx=12, majf=0, minf=1063 00:13:34.623 IO depths : 1=1.6%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:13:34.623 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:34.623 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=1.5%, >=64=0.0% 00:13:34.623 issued rwts: total=171456,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:34.623 latency : target=0, window=0, percentile=100.00%, depth=64 00:13:34.623 00:13:34.623 Run status group 0 (all jobs): 00:13:34.623 READ: bw=134MiB/s (140MB/s), 134MiB/s-134MiB/s (140MB/s-140MB/s), io=670MiB (702MB), run=5001-5001msec 00:13:34.623 ----------------------------------------------------- 00:13:34.623 Suppressions used: 00:13:34.623 count bytes template 00:13:34.623 1 11 /usr/src/fio/parse.c 00:13:34.623 1 8 libtcmalloc_minimal.so 00:13:34.623 1 904 libcrypto.so 00:13:34.623 ----------------------------------------------------- 00:13:34.623 00:13:34.623 19:03:51 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:13:34.623 19:03:51 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:34.623 19:03:51 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:13:34.623 19:03:51 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:13:34.623 19:03:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:13:34.623 19:03:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:34.623 19:03:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:13:34.623 19:03:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:13:34.623 19:03:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:13:34.623 19:03:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:34.623 19:03:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:13:34.623 19:03:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:13:34.623 19:03:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:13:34.623 19:03:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:34.623 19:03:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:13:34.623 19:03:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:13:34.623 19:03:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:13:34.623 19:03:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n 
/usr/lib64/libasan.so.8 ]] 00:13:34.623 19:03:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:13:34.623 19:03:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:13:34.623 19:03:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:34.623 { 00:13:34.623 "subsystems": [ 00:13:34.623 { 00:13:34.623 "subsystem": "bdev", 00:13:34.623 "config": [ 00:13:34.623 { 00:13:34.623 "params": { 00:13:34.623 "io_mechanism": "io_uring", 00:13:34.623 "conserve_cpu": false, 00:13:34.623 "filename": "/dev/nvme0n1", 00:13:34.623 "name": "xnvme_bdev" 00:13:34.623 }, 00:13:34.623 "method": "bdev_xnvme_create" 00:13:34.623 }, 00:13:34.623 { 00:13:34.623 "method": "bdev_wait_for_examine" 00:13:34.623 } 00:13:34.623 ] 00:13:34.623 } 00:13:34.623 ] 00:13:34.623 } 00:13:34.623 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:13:34.623 fio-3.35 00:13:34.623 Starting 1 thread 00:13:39.921 00:13:39.921 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=81514: Thu Dec 5 19:03:56 2024 00:13:39.921 write: IOPS=35.2k, BW=138MiB/s (144MB/s)(688MiB/5002msec); 0 zone resets 00:13:39.921 slat (usec): min=2, max=151, avg= 4.09, stdev= 2.26 00:13:39.921 clat (usec): min=369, max=4055, avg=1651.64, stdev=282.20 00:13:39.921 lat (usec): min=379, max=4063, avg=1655.74, stdev=282.59 00:13:39.921 clat percentiles (usec): 00:13:39.921 | 1.00th=[ 1156], 5.00th=[ 1270], 10.00th=[ 1336], 20.00th=[ 1418], 00:13:39.921 | 30.00th=[ 1483], 40.00th=[ 1549], 50.00th=[ 1614], 60.00th=[ 1680], 00:13:39.921 | 70.00th=[ 1762], 80.00th=[ 1860], 90.00th=[ 2008], 95.00th=[ 2147], 00:13:39.921 | 99.00th=[ 2540], 99.50th=[ 2638], 99.90th=[ 3163], 99.95th=[ 3392], 00:13:39.921 | 99.99th=[ 3687] 00:13:39.921 bw ( KiB/s): min=136568, max=156160, per=100.00%, avg=141312.00, stdev=6520.88, samples=9 00:13:39.921 iops : min=34142, max=39040, avg=35328.00, stdev=1630.22, samples=9 00:13:39.921 lat (usec) : 500=0.01%, 750=0.02%, 1000=0.02% 00:13:39.921 lat (msec) : 2=89.66%, 4=10.29%, 10=0.01% 00:13:39.921 cpu : usr=33.89%, sys=64.67%, ctx=12, majf=0, minf=1064 00:13:39.921 IO depths : 1=1.5%, 2=3.1%, 4=6.2%, 8=12.4%, 16=25.0%, 32=50.2%, >=64=1.6% 00:13:39.921 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:39.921 complete : 0=0.0%, 4=98.5%, 8=0.1%, 16=0.1%, 32=0.1%, 64=1.5%, >=64=0.0% 00:13:39.921 issued rwts: total=0,176185,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:39.921 latency : target=0, window=0, percentile=100.00%, depth=64 00:13:39.921 00:13:39.921 Run status group 0 (all jobs): 00:13:39.921 WRITE: bw=138MiB/s (144MB/s), 138MiB/s-138MiB/s (144MB/s-144MB/s), io=688MiB (722MB), run=5002-5002msec 00:13:39.921 ----------------------------------------------------- 00:13:39.921 Suppressions used: 00:13:39.921 count bytes template 00:13:39.921 1 11 /usr/src/fio/parse.c 00:13:39.921 1 8 libtcmalloc_minimal.so 00:13:39.921 1 904 libcrypto.so 00:13:39.921 ----------------------------------------------------- 00:13:39.921 00:13:39.921 00:13:39.921 real 0m12.005s 00:13:39.921 user 0m4.398s 00:13:39.921 sys 0m7.158s 00:13:39.921 19:03:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 
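[editor's note] The grep libasan / awk '{print $3}' lines threaded through the fio sections are the harness probing the plugin's dynamic dependencies to build the LD_PRELOAD string. Condensed from the trace:

plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
# Column 3 of ldd output is the resolved library path,
# e.g. /usr/lib64/libasan.so.8 on this runner.
asan_lib=$(ldd "$plugin" | grep libasan | awk '{print $3}')
[[ -n "$asan_lib" ]] && LD_PRELOAD="$asan_lib $plugin"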
00:13:39.921 ************************************ 00:13:39.921 END TEST xnvme_fio_plugin 00:13:39.921 ************************************ 00:13:39.921 19:03:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:13:39.921 19:03:57 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:13:39.921 19:03:57 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=true 00:13:39.921 19:03:57 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=true 00:13:39.921 19:03:57 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:13:39.921 19:03:57 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:39.921 19:03:57 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:39.921 19:03:57 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:39.921 ************************************ 00:13:39.921 START TEST xnvme_rpc 00:13:39.921 ************************************ 00:13:39.921 19:03:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:13:39.921 19:03:57 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:13:39.922 19:03:57 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:13:39.922 19:03:57 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:13:39.922 19:03:57 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:13:39.922 19:03:57 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=81589 00:13:39.922 19:03:57 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 81589 00:13:39.922 19:03:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 81589 ']' 00:13:39.922 19:03:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:39.922 19:03:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:39.922 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:39.922 19:03:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:39.922 19:03:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:39.922 19:03:57 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:13:39.922 19:03:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:40.183 [2024-12-05 19:03:57.498724] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 22.11.4 initialization... 
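[editor's note] Each xnvme_rpc pass maps its conserve_cpu boolean onto the optional -c create flag through the small associative array traced just above (cc["false"]= , cc["true"]=-c). A minimal reconstruction; the quoted expansion matches the earlier false pass, where the trace shows the empty mapping arriving as '':

declare -A cc=( ["false"]="" ["true"]="-c" )
conserve_cpu=true
# Expands to '-c' on conserve-cpu passes and to '' otherwise.
rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev io_uring "${cc[$conserve_cpu]}"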
00:13:40.183 [2024-12-05 19:03:57.499092] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81589 ] 00:13:40.183 [2024-12-05 19:03:57.640713] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:40.183 [2024-12-05 19:03:57.669280] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:41.127 19:03:58 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:41.127 19:03:58 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:13:41.127 19:03:58 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev io_uring -c 00:13:41.127 19:03:58 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.127 19:03:58 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:41.127 xnvme_bdev 00:13:41.127 19:03:58 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.127 19:03:58 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:13:41.127 19:03:58 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:41.127 19:03:58 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.127 19:03:58 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:41.127 19:03:58 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:13:41.127 19:03:58 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.127 19:03:58 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:13:41.127 19:03:58 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:13:41.127 19:03:58 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:41.127 19:03:58 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.127 19:03:58 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:41.127 19:03:58 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:13:41.127 19:03:58 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.127 19:03:58 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/nvme0n1 == \/\d\e\v\/\n\v\m\e\0\n\1 ]] 00:13:41.127 19:03:58 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:13:41.128 19:03:58 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:41.128 19:03:58 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.128 19:03:58 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:41.128 19:03:58 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:13:41.128 19:03:58 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.128 19:03:58 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ io_uring == \i\o\_\u\r\i\n\g ]] 00:13:41.128 19:03:58 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:13:41.128 19:03:58 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:41.128 19:03:58 nvme_xnvme.xnvme_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.128 19:03:58 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:41.128 19:03:58 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:13:41.128 19:03:58 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.128 19:03:58 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ true == \t\r\u\e ]] 00:13:41.128 19:03:58 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:13:41.128 19:03:58 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.128 19:03:58 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:41.128 19:03:58 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.128 19:03:58 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 81589 00:13:41.128 19:03:58 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 81589 ']' 00:13:41.128 19:03:58 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 81589 00:13:41.128 19:03:58 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:13:41.128 19:03:58 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:41.128 19:03:58 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 81589 00:13:41.128 killing process with pid 81589 00:13:41.128 19:03:58 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:41.128 19:03:58 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:41.128 19:03:58 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 81589' 00:13:41.128 19:03:58 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 81589 00:13:41.128 19:03:58 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 81589 00:13:41.702 ************************************ 00:13:41.702 END TEST xnvme_rpc 00:13:41.702 ************************************ 00:13:41.702 00:13:41.702 real 0m1.672s 00:13:41.702 user 0m1.803s 00:13:41.702 sys 0m0.389s 00:13:41.702 19:03:59 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:41.702 19:03:59 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:41.702 19:03:59 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:13:41.702 19:03:59 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:41.702 19:03:59 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:41.702 19:03:59 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:41.702 ************************************ 00:13:41.702 START TEST xnvme_bdevperf 00:13:41.702 ************************************ 00:13:41.702 19:03:59 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:13:41.702 19:03:59 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:13:41.702 19:03:59 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=io_uring 00:13:41.702 19:03:59 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:13:41.702 19:03:59 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:13:41.702 19:03:59 nvme_xnvme.xnvme_bdevperf -- 
xnvme/xnvme.sh@17 -- # gen_conf 00:13:41.702 19:03:59 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:13:41.702 19:03:59 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:13:41.702 { 00:13:41.702 "subsystems": [ 00:13:41.702 { 00:13:41.702 "subsystem": "bdev", 00:13:41.702 "config": [ 00:13:41.702 { 00:13:41.702 "params": { 00:13:41.702 "io_mechanism": "io_uring", 00:13:41.702 "conserve_cpu": true, 00:13:41.702 "filename": "/dev/nvme0n1", 00:13:41.702 "name": "xnvme_bdev" 00:13:41.702 }, 00:13:41.702 "method": "bdev_xnvme_create" 00:13:41.702 }, 00:13:41.702 { 00:13:41.702 "method": "bdev_wait_for_examine" 00:13:41.702 } 00:13:41.702 ] 00:13:41.702 } 00:13:41.702 ] 00:13:41.702 } 00:13:41.702 [2024-12-05 19:03:59.223907] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 22.11.4 initialization... 00:13:41.702 [2024-12-05 19:03:59.224233] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81647 ] 00:13:41.965 [2024-12-05 19:03:59.372694] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:41.965 [2024-12-05 19:03:59.412145] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:42.227 Running I/O for 5 seconds... 00:13:44.116 35031.00 IOPS, 136.84 MiB/s [2024-12-05T19:04:02.617Z] 34955.50 IOPS, 136.54 MiB/s [2024-12-05T19:04:03.561Z] 37394.33 IOPS, 146.07 MiB/s [2024-12-05T19:04:04.948Z] 38749.75 IOPS, 151.37 MiB/s 00:13:47.389 Latency(us) 00:13:47.389 [2024-12-05T19:04:04.948Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:47.389 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:13:47.389 xnvme_bdev : 5.00 38922.54 152.04 0.00 0.00 1640.47 850.71 6427.57 00:13:47.389 [2024-12-05T19:04:04.948Z] =================================================================================================================== 00:13:47.389 [2024-12-05T19:04:04.948Z] Total : 38922.54 152.04 0.00 0.00 1640.47 850.71 6427.57 00:13:47.389 19:04:04 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:13:47.389 19:04:04 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:13:47.389 19:04:04 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:13:47.389 19:04:04 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:13:47.389 19:04:04 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:13:47.389 { 00:13:47.389 "subsystems": [ 00:13:47.389 { 00:13:47.389 "subsystem": "bdev", 00:13:47.389 "config": [ 00:13:47.389 { 00:13:47.389 "params": { 00:13:47.389 "io_mechanism": "io_uring", 00:13:47.389 "conserve_cpu": true, 00:13:47.389 "filename": "/dev/nvme0n1", 00:13:47.389 "name": "xnvme_bdev" 00:13:47.389 }, 00:13:47.389 "method": "bdev_xnvme_create" 00:13:47.389 }, 00:13:47.389 { 00:13:47.389 "method": "bdev_wait_for_examine" 00:13:47.389 } 00:13:47.389 ] 00:13:47.389 } 00:13:47.389 ] 00:13:47.389 } 00:13:47.389 [2024-12-05 19:04:04.815168] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 22.11.4 initialization... 
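The JSON block traced above is the complete bdev configuration these bdevperf runs consume. A minimal sketch of reproducing one run by hand, assuming the same checkout layout and test device as this job (/home/vagrant/spdk_repo/spdk, /dev/nvme0n1); the /tmp path is illustrative:

cat > /tmp/xnvme_bdev.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_xnvme_create",
          "params": {
            "io_mechanism": "io_uring",
            "conserve_cpu": true,
            "filename": "/dev/nvme0n1",
            "name": "xnvme_bdev"
          }
        },
        { "method": "bdev_wait_for_examine" }
      ]
    }
  ]
}
EOF
# same flags as the logged runs: 4 KiB I/O at queue depth 64 for 5 s;
# swap -w randread for -w randwrite to match the second pass
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    --json /tmp/xnvme_bdev.json -q 64 -w randread -t 5 -T xnvme_bdev -o 4096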
00:13:47.389 [2024-12-05 19:04:04.815346] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81711 ] 00:13:47.649 [2024-12-05 19:04:04.965085] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:47.649 [2024-12-05 19:04:04.993764] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:47.649 Running I/O for 5 seconds... 00:13:49.972 34578.00 IOPS, 135.07 MiB/s [2024-12-05T19:04:08.101Z] 35640.00 IOPS, 139.22 MiB/s [2024-12-05T19:04:09.487Z] 37463.67 IOPS, 146.34 MiB/s [2024-12-05T19:04:10.439Z] 37326.25 IOPS, 145.81 MiB/s [2024-12-05T19:04:10.439Z] 37238.00 IOPS, 145.46 MiB/s 00:13:52.880 Latency(us) 00:13:52.880 [2024-12-05T19:04:10.439Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:52.880 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:13:52.880 xnvme_bdev : 5.01 37181.65 145.24 0.00 0.00 1715.39 343.43 12451.84 00:13:52.880 [2024-12-05T19:04:10.439Z] =================================================================================================================== 00:13:52.880 [2024-12-05T19:04:10.439Z] Total : 37181.65 145.24 0.00 0.00 1715.39 343.43 12451.84 00:13:52.880 00:13:52.880 real 0m11.123s 00:13:52.880 user 0m6.575s 00:13:52.880 sys 0m3.965s 00:13:52.880 ************************************ 00:13:52.880 END TEST xnvme_bdevperf 00:13:52.880 ************************************ 00:13:52.880 19:04:10 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:52.880 19:04:10 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:13:52.880 19:04:10 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:13:52.880 19:04:10 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:52.880 19:04:10 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:52.880 19:04:10 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:52.880 ************************************ 00:13:52.880 START TEST xnvme_fio_plugin 00:13:52.880 ************************************ 00:13:52.880 19:04:10 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:13:52.880 19:04:10 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:13:52.880 19:04:10 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=io_uring_fio 00:13:52.880 19:04:10 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:13:52.880 19:04:10 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:52.880 19:04:10 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:52.880 19:04:10 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:13:52.880 19:04:10 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # 
sanitizers=('libasan' 'libclang_rt.asan') 00:13:52.880 19:04:10 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:13:52.880 19:04:10 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:13:52.880 19:04:10 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:52.880 19:04:10 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:13:52.880 19:04:10 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:13:52.880 19:04:10 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:13:52.880 19:04:10 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:13:52.880 19:04:10 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:13:52.880 19:04:10 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:13:52.880 19:04:10 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:13:52.880 19:04:10 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:52.880 { 00:13:52.880 "subsystems": [ 00:13:52.880 { 00:13:52.880 "subsystem": "bdev", 00:13:52.880 "config": [ 00:13:52.880 { 00:13:52.880 "params": { 00:13:52.880 "io_mechanism": "io_uring", 00:13:52.880 "conserve_cpu": true, 00:13:52.880 "filename": "/dev/nvme0n1", 00:13:52.880 "name": "xnvme_bdev" 00:13:52.880 }, 00:13:52.880 "method": "bdev_xnvme_create" 00:13:52.880 }, 00:13:52.880 { 00:13:52.880 "method": "bdev_wait_for_examine" 00:13:52.880 } 00:13:52.880 ] 00:13:52.880 } 00:13:52.880 ] 00:13:52.880 } 00:13:52.880 19:04:10 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:13:52.880 19:04:10 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:13:52.880 19:04:10 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:13:52.880 19:04:10 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:13:52.880 19:04:10 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:53.141 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:13:53.141 fio-3.35 00:13:53.141 Starting 1 thread 00:13:58.432 00:13:58.432 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=81827: Thu Dec 5 19:04:15 2024 00:13:58.432 read: IOPS=37.4k, BW=146MiB/s (153MB/s)(730MiB/5002msec) 00:13:58.432 slat (nsec): min=2891, max=98810, avg=3563.95, stdev=1653.92 00:13:58.433 clat (usec): min=906, max=4418, avg=1569.11, stdev=286.30 00:13:58.433 lat (usec): min=909, max=4462, avg=1572.67, stdev=286.41 00:13:58.433 clat percentiles (usec): 00:13:58.433 | 1.00th=[ 1106], 5.00th=[ 1205], 10.00th=[ 1270], 20.00th=[ 1336], 00:13:58.433 | 30.00th=[ 1401], 40.00th=[ 1450], 50.00th=[ 1516], 60.00th=[ 1582], 00:13:58.433 | 70.00th=[ 1680], 80.00th=[ 1795], 90.00th=[ 1958], 95.00th=[ 2114], 00:13:58.433 | 99.00th=[ 2409], 99.50th=[ 2507], 99.90th=[ 3032], 99.95th=[ 3195], 00:13:58.433 | 99.99th=[ 4228] 00:13:58.433 bw ( 
KiB/s): min=143872, max=155136, per=100.00%, avg=149504.00, stdev=3248.28, samples=9 00:13:58.433 iops : min=35968, max=38784, avg=37376.00, stdev=812.07, samples=9 00:13:58.433 lat (usec) : 1000=0.14% 00:13:58.433 lat (msec) : 2=91.74%, 4=8.10%, 10=0.02% 00:13:58.433 cpu : usr=50.85%, sys=45.31%, ctx=14, majf=0, minf=1063 00:13:58.433 IO depths : 1=1.6%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:13:58.433 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:58.433 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=1.5%, >=64=0.0% 00:13:58.433 issued rwts: total=186944,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:58.433 latency : target=0, window=0, percentile=100.00%, depth=64 00:13:58.433 00:13:58.433 Run status group 0 (all jobs): 00:13:58.433 READ: bw=146MiB/s (153MB/s), 146MiB/s-146MiB/s (153MB/s-153MB/s), io=730MiB (766MB), run=5002-5002msec 00:13:59.005 ----------------------------------------------------- 00:13:59.005 Suppressions used: 00:13:59.005 count bytes template 00:13:59.005 1 11 /usr/src/fio/parse.c 00:13:59.005 1 8 libtcmalloc_minimal.so 00:13:59.005 1 904 libcrypto.so 00:13:59.005 ----------------------------------------------------- 00:13:59.005 00:13:59.005 19:04:16 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:13:59.005 19:04:16 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:59.005 19:04:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:59.005 19:04:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:13:59.005 19:04:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:13:59.005 19:04:16 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:13:59.005 19:04:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:13:59.005 19:04:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:59.005 19:04:16 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:13:59.005 19:04:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:13:59.005 19:04:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:13:59.005 19:04:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:13:59.005 19:04:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:13:59.005 19:04:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:59.005 19:04:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:13:59.005 19:04:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:13:59.005 19:04:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:13:59.005 19:04:16 
nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:13:59.005 19:04:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:13:59.005 19:04:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:13:59.005 19:04:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:59.005 { 00:13:59.005 "subsystems": [ 00:13:59.005 { 00:13:59.005 "subsystem": "bdev", 00:13:59.005 "config": [ 00:13:59.005 { 00:13:59.005 "params": { 00:13:59.005 "io_mechanism": "io_uring", 00:13:59.005 "conserve_cpu": true, 00:13:59.005 "filename": "/dev/nvme0n1", 00:13:59.005 "name": "xnvme_bdev" 00:13:59.005 }, 00:13:59.005 "method": "bdev_xnvme_create" 00:13:59.005 }, 00:13:59.005 { 00:13:59.005 "method": "bdev_wait_for_examine" 00:13:59.005 } 00:13:59.005 ] 00:13:59.005 } 00:13:59.005 ] 00:13:59.005 } 00:13:59.005 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:13:59.005 fio-3.35 00:13:59.005 Starting 1 thread 00:14:05.597 00:14:05.597 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=81908: Thu Dec 5 19:04:21 2024 00:14:05.597 write: IOPS=36.8k, BW=144MiB/s (151MB/s)(719MiB/5001msec); 0 zone resets 00:14:05.597 slat (nsec): min=2935, max=79082, avg=4018.09, stdev=2175.95 00:14:05.597 clat (usec): min=639, max=5832, avg=1577.36, stdev=267.81 00:14:05.597 lat (usec): min=642, max=5836, avg=1581.38, stdev=268.30 00:14:05.597 clat percentiles (usec): 00:14:05.597 | 1.00th=[ 1090], 5.00th=[ 1188], 10.00th=[ 1270], 20.00th=[ 1352], 00:14:05.597 | 30.00th=[ 1434], 40.00th=[ 1483], 50.00th=[ 1549], 60.00th=[ 1614], 00:14:05.597 | 70.00th=[ 1680], 80.00th=[ 1778], 90.00th=[ 1926], 95.00th=[ 2040], 00:14:05.597 | 99.00th=[ 2343], 99.50th=[ 2507], 99.90th=[ 2900], 99.95th=[ 3097], 00:14:05.597 | 99.99th=[ 4228] 00:14:05.597 bw ( KiB/s): min=131504, max=167032, per=100.00%, avg=147403.56, stdev=11576.73, samples=9 00:14:05.597 iops : min=32876, max=41758, avg=36850.89, stdev=2894.18, samples=9 00:14:05.597 lat (usec) : 750=0.01%, 1000=0.05% 00:14:05.597 lat (msec) : 2=93.49%, 4=6.43%, 10=0.01% 00:14:05.597 cpu : usr=51.80%, sys=43.80%, ctx=18, majf=0, minf=1064 00:14:05.597 IO depths : 1=1.5%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.1%, >=64=1.6% 00:14:05.597 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:05.597 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.1%, 32=0.1%, 64=1.5%, >=64=0.0% 00:14:05.597 issued rwts: total=0,183991,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:05.597 latency : target=0, window=0, percentile=100.00%, depth=64 00:14:05.597 00:14:05.597 Run status group 0 (all jobs): 00:14:05.597 WRITE: bw=144MiB/s (151MB/s), 144MiB/s-144MiB/s (151MB/s-151MB/s), io=719MiB (754MB), run=5001-5001msec 00:14:05.597 ----------------------------------------------------- 00:14:05.597 Suppressions used: 00:14:05.597 count bytes template 00:14:05.597 1 11 /usr/src/fio/parse.c 00:14:05.597 1 8 libtcmalloc_minimal.so 00:14:05.597 1 904 libcrypto.so 00:14:05.597 ----------------------------------------------------- 00:14:05.597 00:14:05.597 00:14:05.597 real 0m12.058s 00:14:05.597 user 0m6.321s 00:14:05.597 sys 0m5.005s 00:14:05.597 19:04:22 
nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:05.597 ************************************ 00:14:05.597 END TEST xnvme_fio_plugin 00:14:05.597 ************************************ 00:14:05.597 19:04:22 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:14:05.597 19:04:22 nvme_xnvme -- xnvme/xnvme.sh@75 -- # for io in "${xnvme_io[@]}" 00:14:05.597 19:04:22 nvme_xnvme -- xnvme/xnvme.sh@76 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring_cmd 00:14:05.597 19:04:22 nvme_xnvme -- xnvme/xnvme.sh@77 -- # method_bdev_xnvme_create_0["filename"]=/dev/ng0n1 00:14:05.597 19:04:22 nvme_xnvme -- xnvme/xnvme.sh@79 -- # filename=/dev/ng0n1 00:14:05.597 19:04:22 nvme_xnvme -- xnvme/xnvme.sh@80 -- # name=xnvme_bdev 00:14:05.597 19:04:22 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:14:05.597 19:04:22 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=false 00:14:05.597 19:04:22 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=false 00:14:05.597 19:04:22 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:14:05.597 19:04:22 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:14:05.597 19:04:22 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:05.597 19:04:22 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:05.597 ************************************ 00:14:05.597 START TEST xnvme_rpc 00:14:05.597 ************************************ 00:14:05.597 19:04:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:14:05.597 19:04:22 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:14:05.597 19:04:22 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:14:05.597 19:04:22 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:14:05.597 19:04:22 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:14:05.597 19:04:22 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=81988 00:14:05.597 19:04:22 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 81988 00:14:05.597 19:04:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 81988 ']' 00:14:05.597 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:05.597 19:04:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:05.597 19:04:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:05.597 19:04:22 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:14:05.597 19:04:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:05.597 19:04:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:05.597 19:04:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:05.597 [2024-12-05 19:04:22.562238] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 22.11.4 initialization... 
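The xnvme_fio_plugin test that finished just above drives stock fio through SPDK's external spdk_bdev ioengine. Reproduced standalone it has roughly this shape; the flags are taken verbatim from the trace, the ASan preload is only needed on this sanitizer build, and the JSON file stands in for the /dev/fd/62 descriptor the harness uses:

# the preload list comes straight from the traced LD_PRELOAD assignment
LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' \
/usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/tmp/xnvme_bdev.json \
    --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 \
    --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev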
00:14:05.597 [2024-12-05 19:04:22.562433] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81988 ] 00:14:05.597 [2024-12-05 19:04:22.708675] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:05.597 [2024-12-05 19:04:22.748780] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:05.859 19:04:23 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:05.859 19:04:23 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:14:05.859 19:04:23 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/ng0n1 xnvme_bdev io_uring_cmd '' 00:14:05.859 19:04:23 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:05.859 19:04:23 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:06.121 xnvme_bdev 00:14:06.121 19:04:23 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.121 19:04:23 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:14:06.121 19:04:23 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:14:06.121 19:04:23 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:14:06.121 19:04:23 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.121 19:04:23 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:06.121 19:04:23 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.121 19:04:23 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:14:06.121 19:04:23 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:14:06.121 19:04:23 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:14:06.121 19:04:23 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.121 19:04:23 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:06.121 19:04:23 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:14:06.121 19:04:23 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.121 19:04:23 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/ng0n1 == \/\d\e\v\/\n\g\0\n\1 ]] 00:14:06.121 19:04:23 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:14:06.121 19:04:23 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:14:06.121 19:04:23 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.121 19:04:23 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:06.121 19:04:23 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:14:06.121 19:04:23 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.121 19:04:23 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ io_uring_cmd == \i\o\_\u\r\i\n\g\_\c\m\d ]] 00:14:06.121 19:04:23 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:14:06.121 19:04:23 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:14:06.121 19:04:23 nvme_xnvme.xnvme_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.121 19:04:23 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:14:06.121 19:04:23 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:06.121 19:04:23 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.121 19:04:23 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ false == \f\a\l\s\e ]] 00:14:06.121 19:04:23 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:14:06.121 19:04:23 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.121 19:04:23 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:06.121 19:04:23 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.121 19:04:23 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 81988 00:14:06.121 19:04:23 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 81988 ']' 00:14:06.121 19:04:23 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 81988 00:14:06.121 19:04:23 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:14:06.121 19:04:23 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:06.121 19:04:23 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 81988 00:14:06.121 19:04:23 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:06.121 killing process with pid 81988 00:14:06.121 19:04:23 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:06.121 19:04:23 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 81988' 00:14:06.121 19:04:23 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 81988 00:14:06.121 19:04:23 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 81988 00:14:06.695 00:14:06.695 real 0m1.618s 00:14:06.695 user 0m1.601s 00:14:06.695 sys 0m0.492s 00:14:06.695 19:04:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:06.695 ************************************ 00:14:06.695 END TEST xnvme_rpc 00:14:06.695 ************************************ 00:14:06.695 19:04:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:06.695 19:04:24 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:14:06.695 19:04:24 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:14:06.695 19:04:24 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:06.695 19:04:24 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:06.695 ************************************ 00:14:06.695 START TEST xnvme_bdevperf 00:14:06.695 ************************************ 00:14:06.695 19:04:24 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:14:06.695 19:04:24 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:14:06.695 19:04:24 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=io_uring_cmd 00:14:06.695 19:04:24 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:06.695 19:04:24 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:14:06.695 19:04:24 nvme_xnvme.xnvme_bdevperf -- 
xnvme/xnvme.sh@17 -- # gen_conf 00:14:06.695 19:04:24 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:14:06.695 19:04:24 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:06.695 { 00:14:06.695 "subsystems": [ 00:14:06.695 { 00:14:06.695 "subsystem": "bdev", 00:14:06.695 "config": [ 00:14:06.695 { 00:14:06.695 "params": { 00:14:06.695 "io_mechanism": "io_uring_cmd", 00:14:06.695 "conserve_cpu": false, 00:14:06.695 "filename": "/dev/ng0n1", 00:14:06.695 "name": "xnvme_bdev" 00:14:06.695 }, 00:14:06.695 "method": "bdev_xnvme_create" 00:14:06.695 }, 00:14:06.695 { 00:14:06.695 "method": "bdev_wait_for_examine" 00:14:06.695 } 00:14:06.695 ] 00:14:06.695 } 00:14:06.695 ] 00:14:06.695 } 00:14:06.695 [2024-12-05 19:04:24.237957] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 22.11.4 initialization... 00:14:06.695 [2024-12-05 19:04:24.238090] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82046 ] 00:14:06.957 [2024-12-05 19:04:24.385147] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:06.957 [2024-12-05 19:04:24.422152] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:07.218 Running I/O for 5 seconds... 00:14:09.106 37499.00 IOPS, 146.48 MiB/s [2024-12-05T19:04:27.605Z] 35766.50 IOPS, 139.71 MiB/s [2024-12-05T19:04:28.991Z] 35693.67 IOPS, 139.43 MiB/s [2024-12-05T19:04:29.932Z] 35883.50 IOPS, 140.17 MiB/s [2024-12-05T19:04:29.932Z] 35729.40 IOPS, 139.57 MiB/s 00:14:12.373 Latency(us) 00:14:12.373 [2024-12-05T19:04:29.932Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:12.373 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:14:12.373 xnvme_bdev : 5.00 35714.16 139.51 0.00 0.00 1787.84 627.00 6503.19 00:14:12.373 [2024-12-05T19:04:29.932Z] =================================================================================================================== 00:14:12.373 [2024-12-05T19:04:29.932Z] Total : 35714.16 139.51 0.00 0.00 1787.84 627.00 6503.19 00:14:12.373 19:04:29 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:12.373 19:04:29 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:14:12.373 19:04:29 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:14:12.373 19:04:29 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:14:12.373 19:04:29 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:12.373 { 00:14:12.373 "subsystems": [ 00:14:12.373 { 00:14:12.373 "subsystem": "bdev", 00:14:12.373 "config": [ 00:14:12.373 { 00:14:12.374 "params": { 00:14:12.374 "io_mechanism": "io_uring_cmd", 00:14:12.374 "conserve_cpu": false, 00:14:12.374 "filename": "/dev/ng0n1", 00:14:12.374 "name": "xnvme_bdev" 00:14:12.374 }, 00:14:12.374 "method": "bdev_xnvme_create" 00:14:12.374 }, 00:14:12.374 { 00:14:12.374 "method": "bdev_wait_for_examine" 00:14:12.374 } 00:14:12.374 ] 00:14:12.374 } 00:14:12.374 ] 00:14:12.374 } 00:14:12.374 [2024-12-05 19:04:29.809606] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 22.11.4 initialization... 
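The --json /dev/fd/62 in the bdevperf command above is how a bash process substitution shows up in the trace: gen_conf (from dd/common.sh) prints the JSON block that follows, and bdevperf reads it from the inherited descriptor. Schematically, as an illustration rather than the verbatim harness call:

# <(gen_conf) becomes /dev/fd/NN when the command is spawned
./build/examples/bdevperf --json <(gen_conf) -q 64 -w randread -t 5 -T xnvme_bdev -o 4096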
00:14:12.374 [2024-12-05 19:04:29.809746] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82115 ] 00:14:12.686 [2024-12-05 19:04:29.957055] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:12.686 [2024-12-05 19:04:29.985652] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:12.686 Running I/O for 5 seconds... 00:14:14.570 35583.00 IOPS, 139.00 MiB/s [2024-12-05T19:04:33.516Z] 35167.50 IOPS, 137.37 MiB/s [2024-12-05T19:04:34.459Z] 35573.67 IOPS, 138.96 MiB/s [2024-12-05T19:04:35.403Z] 35625.00 IOPS, 139.16 MiB/s [2024-12-05T19:04:35.403Z] 35728.40 IOPS, 139.56 MiB/s 00:14:17.844 Latency(us) 00:14:17.844 [2024-12-05T19:04:35.403Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:17.844 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:14:17.844 xnvme_bdev : 5.00 35711.30 139.50 0.00 0.00 1787.64 340.28 8973.39 00:14:17.844 [2024-12-05T19:04:35.403Z] =================================================================================================================== 00:14:17.844 [2024-12-05T19:04:35.403Z] Total : 35711.30 139.50 0.00 0.00 1787.64 340.28 8973.39 00:14:17.844 19:04:35 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:17.844 19:04:35 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w unmap -t 5 -T xnvme_bdev -o 4096 00:14:17.844 19:04:35 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:14:17.844 19:04:35 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:14:17.844 19:04:35 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:17.844 { 00:14:17.844 "subsystems": [ 00:14:17.844 { 00:14:17.844 "subsystem": "bdev", 00:14:17.844 "config": [ 00:14:17.844 { 00:14:17.844 "params": { 00:14:17.844 "io_mechanism": "io_uring_cmd", 00:14:17.844 "conserve_cpu": false, 00:14:17.844 "filename": "/dev/ng0n1", 00:14:17.844 "name": "xnvme_bdev" 00:14:17.844 }, 00:14:17.844 "method": "bdev_xnvme_create" 00:14:17.844 }, 00:14:17.844 { 00:14:17.844 "method": "bdev_wait_for_examine" 00:14:17.844 } 00:14:17.844 ] 00:14:17.844 } 00:14:17.844 ] 00:14:17.844 } 00:14:17.844 [2024-12-05 19:04:35.335578] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 22.11.4 initialization... 00:14:17.844 [2024-12-05 19:04:35.335716] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82180 ] 00:14:18.104 [2024-12-05 19:04:35.483566] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:18.104 [2024-12-05 19:04:35.512508] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:18.104 Running I/O for 5 seconds... 
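A quick unit check on the randwrite totals above: 35711.30 IOPS * 4096 B is about 146.3 MB/s, i.e. 35711.30 * 4 / 1024 ≈ 139.5 MiB/s, matching the reported 139.50 MiB/s column (the MiB/s figure is just IOPS times the 4 KiB I/O size).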
00:14:20.466 78784.00 IOPS, 307.75 MiB/s [2024-12-05T19:04:38.970Z] 79136.00 IOPS, 309.12 MiB/s [2024-12-05T19:04:39.915Z] 79296.00 IOPS, 309.75 MiB/s [2024-12-05T19:04:40.859Z] 79312.00 IOPS, 309.81 MiB/s [2024-12-05T19:04:40.860Z] 79308.80 IOPS, 309.80 MiB/s 00:14:23.301 Latency(us) 00:14:23.301 [2024-12-05T19:04:40.860Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:23.301 Job: xnvme_bdev (Core Mask 0x1, workload: unmap, depth: 64, IO size: 4096) 00:14:23.301 xnvme_bdev : 5.00 79277.05 309.68 0.00 0.00 803.94 504.12 3806.13 00:14:23.301 [2024-12-05T19:04:40.860Z] =================================================================================================================== 00:14:23.301 [2024-12-05T19:04:40.860Z] Total : 79277.05 309.68 0.00 0.00 803.94 504.12 3806.13 00:14:23.301 19:04:40 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:23.301 19:04:40 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w write_zeroes -t 5 -T xnvme_bdev -o 4096 00:14:23.301 19:04:40 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:14:23.301 19:04:40 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:14:23.301 19:04:40 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:23.301 { 00:14:23.301 "subsystems": [ 00:14:23.301 { 00:14:23.301 "subsystem": "bdev", 00:14:23.301 "config": [ 00:14:23.301 { 00:14:23.301 "params": { 00:14:23.301 "io_mechanism": "io_uring_cmd", 00:14:23.301 "conserve_cpu": false, 00:14:23.301 "filename": "/dev/ng0n1", 00:14:23.301 "name": "xnvme_bdev" 00:14:23.301 }, 00:14:23.301 "method": "bdev_xnvme_create" 00:14:23.301 }, 00:14:23.301 { 00:14:23.301 "method": "bdev_wait_for_examine" 00:14:23.301 } 00:14:23.301 ] 00:14:23.301 } 00:14:23.301 ] 00:14:23.301 } 00:14:23.301 [2024-12-05 19:04:40.809990] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 22.11.4 initialization... 00:14:23.301 [2024-12-05 19:04:40.810111] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82243 ] 00:14:23.560 [2024-12-05 19:04:40.951658] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:23.560 [2024-12-05 19:04:40.972040] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:23.560 Running I/O for 5 seconds... 
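The 803.94 us average latency reported for the unmap run is consistent with Little's law for a saturated queue: at depth 64, expected latency ≈ 64 / 79277.05 IOPS ≈ 807 us, within half a percent of the measured average.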
00:14:25.562 42174.00 IOPS, 164.74 MiB/s [2024-12-05T19:04:44.060Z] 36776.50 IOPS, 143.66 MiB/s [2024-12-05T19:04:45.444Z] 25328.00 IOPS, 98.94 MiB/s [2024-12-05T19:04:46.384Z] 19259.75 IOPS, 75.23 MiB/s [2024-12-05T19:04:46.385Z] 22508.00 IOPS, 87.92 MiB/s 00:14:28.826 Latency(us) 00:14:28.826 [2024-12-05T19:04:46.385Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:28.826 Job: xnvme_bdev (Core Mask 0x1, workload: write_zeroes, depth: 64, IO size: 4096) 00:14:28.826 xnvme_bdev : 5.00 22507.61 87.92 0.00 0.00 2839.51 66.17 214554.78 00:14:28.826 [2024-12-05T19:04:46.385Z] =================================================================================================================== 00:14:28.826 [2024-12-05T19:04:46.385Z] Total : 22507.61 87.92 0.00 0.00 2839.51 66.17 214554.78 00:14:28.826 00:14:28.826 real 0m22.065s 00:14:28.826 user 0m10.774s 00:14:28.826 sys 0m10.796s 00:14:28.826 19:04:46 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:28.826 ************************************ 00:14:28.826 END TEST xnvme_bdevperf 00:14:28.826 ************************************ 00:14:28.826 19:04:46 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:28.826 19:04:46 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:14:28.826 19:04:46 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:14:28.826 19:04:46 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:28.826 19:04:46 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:28.826 ************************************ 00:14:28.826 START TEST xnvme_fio_plugin 00:14:28.826 ************************************ 00:14:28.826 19:04:46 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:14:28.826 19:04:46 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:14:28.826 19:04:46 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=io_uring_cmd_fio 00:14:28.826 19:04:46 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:28.826 19:04:46 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:28.826 19:04:46 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:28.826 19:04:46 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:14:28.826 19:04:46 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:14:28.826 19:04:46 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:14:28.826 19:04:46 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:14:28.826 19:04:46 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:28.826 19:04:46 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:14:28.826 19:04:46 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # 
local asan_lib= 00:14:28.826 19:04:46 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:14:28.826 19:04:46 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:14:28.826 19:04:46 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:14:28.826 19:04:46 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:28.826 19:04:46 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:14:28.826 19:04:46 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:14:28.826 19:04:46 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:14:28.826 19:04:46 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:14:28.826 19:04:46 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:14:28.826 19:04:46 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:14:28.826 19:04:46 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:28.826 { 00:14:28.826 "subsystems": [ 00:14:28.826 { 00:14:28.826 "subsystem": "bdev", 00:14:28.826 "config": [ 00:14:28.826 { 00:14:28.826 "params": { 00:14:28.826 "io_mechanism": "io_uring_cmd", 00:14:28.826 "conserve_cpu": false, 00:14:28.826 "filename": "/dev/ng0n1", 00:14:28.826 "name": "xnvme_bdev" 00:14:28.826 }, 00:14:28.826 "method": "bdev_xnvme_create" 00:14:28.826 }, 00:14:28.826 { 00:14:28.826 "method": "bdev_wait_for_examine" 00:14:28.826 } 00:14:28.826 ] 00:14:28.826 } 00:14:28.826 ] 00:14:28.826 } 00:14:29.088 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:14:29.088 fio-3.35 00:14:29.088 Starting 1 thread 00:14:34.387 00:14:34.387 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=82350: Thu Dec 5 19:04:51 2024 00:14:34.387 read: IOPS=35.6k, BW=139MiB/s (146MB/s)(695MiB/5001msec) 00:14:34.387 slat (usec): min=2, max=150, avg= 3.96, stdev= 2.20 00:14:34.387 clat (usec): min=896, max=3816, avg=1638.69, stdev=269.86 00:14:34.387 lat (usec): min=900, max=3879, avg=1642.65, stdev=270.30 00:14:34.387 clat percentiles (usec): 00:14:34.387 | 1.00th=[ 1172], 5.00th=[ 1287], 10.00th=[ 1336], 20.00th=[ 1418], 00:14:34.387 | 30.00th=[ 1467], 40.00th=[ 1532], 50.00th=[ 1598], 60.00th=[ 1663], 00:14:34.387 | 70.00th=[ 1745], 80.00th=[ 1844], 90.00th=[ 1991], 95.00th=[ 2147], 00:14:34.387 | 99.00th=[ 2442], 99.50th=[ 2540], 99.90th=[ 2999], 99.95th=[ 3425], 00:14:34.387 | 99.99th=[ 3621] 00:14:34.387 bw ( KiB/s): min=136192, max=148992, per=100.00%, avg=143075.56, stdev=4168.25, samples=9 00:14:34.387 iops : min=34048, max=37248, avg=35768.89, stdev=1042.06, samples=9 00:14:34.387 lat (usec) : 1000=0.05% 00:14:34.387 lat (msec) : 2=90.04%, 4=9.90% 00:14:34.387 cpu : usr=35.40%, sys=63.18%, ctx=15, majf=0, minf=1063 00:14:34.387 IO depths : 1=1.6%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:14:34.387 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:34.387 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=1.5%, >=64=0.0% 
00:14:34.387 issued rwts: total=177792,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:34.387 latency : target=0, window=0, percentile=100.00%, depth=64 00:14:34.387 00:14:34.387 Run status group 0 (all jobs): 00:14:34.387 READ: bw=139MiB/s (146MB/s), 139MiB/s-139MiB/s (146MB/s-146MB/s), io=695MiB (728MB), run=5001-5001msec 00:14:34.960 ----------------------------------------------------- 00:14:34.961 Suppressions used: 00:14:34.961 count bytes template 00:14:34.961 1 11 /usr/src/fio/parse.c 00:14:34.961 1 8 libtcmalloc_minimal.so 00:14:34.961 1 904 libcrypto.so 00:14:34.961 ----------------------------------------------------- 00:14:34.961 00:14:34.961 19:04:52 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:34.961 19:04:52 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:34.961 19:04:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:34.961 19:04:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:14:34.961 19:04:52 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:14:34.961 19:04:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:14:34.961 19:04:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:14:34.961 19:04:52 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:14:34.961 19:04:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:34.961 19:04:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:14:34.961 19:04:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:14:34.961 19:04:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:14:34.961 19:04:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:14:34.961 19:04:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:14:34.961 19:04:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:34.961 19:04:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:14:34.961 19:04:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:14:34.961 19:04:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:14:34.961 19:04:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:14:34.961 19:04:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:14:34.961 19:04:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 
--rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:34.961 { 00:14:34.961 "subsystems": [ 00:14:34.961 { 00:14:34.961 "subsystem": "bdev", 00:14:34.961 "config": [ 00:14:34.961 { 00:14:34.961 "params": { 00:14:34.961 "io_mechanism": "io_uring_cmd", 00:14:34.961 "conserve_cpu": false, 00:14:34.961 "filename": "/dev/ng0n1", 00:14:34.961 "name": "xnvme_bdev" 00:14:34.961 }, 00:14:34.961 "method": "bdev_xnvme_create" 00:14:34.961 }, 00:14:34.961 { 00:14:34.961 "method": "bdev_wait_for_examine" 00:14:34.961 } 00:14:34.961 ] 00:14:34.961 } 00:14:34.961 ] 00:14:34.961 } 00:14:34.961 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:14:34.961 fio-3.35 00:14:34.961 Starting 1 thread 00:14:41.547 00:14:41.547 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=82430: Thu Dec 5 19:04:57 2024 00:14:41.547 write: IOPS=38.7k, BW=151MiB/s (158MB/s)(755MiB/5001msec); 0 zone resets 00:14:41.547 slat (usec): min=2, max=535, avg= 3.92, stdev= 2.38 00:14:41.547 clat (usec): min=133, max=4871, avg=1500.35, stdev=308.04 00:14:41.547 lat (usec): min=136, max=4875, avg=1504.27, stdev=308.53 00:14:41.547 clat percentiles (usec): 00:14:41.547 | 1.00th=[ 971], 5.00th=[ 1074], 10.00th=[ 1139], 20.00th=[ 1237], 00:14:41.547 | 30.00th=[ 1319], 40.00th=[ 1401], 50.00th=[ 1483], 60.00th=[ 1549], 00:14:41.547 | 70.00th=[ 1631], 80.00th=[ 1745], 90.00th=[ 1893], 95.00th=[ 2040], 00:14:41.547 | 99.00th=[ 2343], 99.50th=[ 2507], 99.90th=[ 3097], 99.95th=[ 3425], 00:14:41.547 | 99.99th=[ 3949] 00:14:41.547 bw ( KiB/s): min=140968, max=186016, per=100.00%, avg=155619.56, stdev=15396.68, samples=9 00:14:41.547 iops : min=35242, max=46504, avg=38904.89, stdev=3849.17, samples=9 00:14:41.547 lat (usec) : 250=0.01%, 500=0.06%, 750=0.12%, 1000=1.37% 00:14:41.547 lat (msec) : 2=92.55%, 4=5.89%, 10=0.01% 00:14:41.547 cpu : usr=38.26%, sys=60.52%, ctx=12, majf=0, minf=1064 00:14:41.547 IO depths : 1=1.5%, 2=3.0%, 4=6.0%, 8=12.1%, 16=24.4%, 32=51.3%, >=64=1.6% 00:14:41.547 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:41.547 complete : 0=0.0%, 4=98.4%, 8=0.1%, 16=0.1%, 32=0.1%, 64=1.5%, >=64=0.0% 00:14:41.547 issued rwts: total=0,193393,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:41.547 latency : target=0, window=0, percentile=100.00%, depth=64 00:14:41.547 00:14:41.547 Run status group 0 (all jobs): 00:14:41.547 WRITE: bw=151MiB/s (158MB/s), 151MiB/s-151MiB/s (158MB/s-158MB/s), io=755MiB (792MB), run=5001-5001msec 00:14:41.547 ----------------------------------------------------- 00:14:41.547 Suppressions used: 00:14:41.547 count bytes template 00:14:41.547 1 11 /usr/src/fio/parse.c 00:14:41.547 1 8 libtcmalloc_minimal.so 00:14:41.547 1 904 libcrypto.so 00:14:41.547 ----------------------------------------------------- 00:14:41.547 00:14:41.547 00:14:41.547 real 0m12.036s 00:14:41.547 user 0m4.853s 00:14:41.547 sys 0m6.729s 00:14:41.547 19:04:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:41.547 ************************************ 00:14:41.547 END TEST xnvme_fio_plugin 00:14:41.547 ************************************ 00:14:41.547 19:04:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:14:41.547 19:04:58 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:14:41.547 19:04:58 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=true 00:14:41.547 19:04:58 nvme_xnvme -- xnvme/xnvme.sh@84 -- # 
conserve_cpu=true 00:14:41.547 19:04:58 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:14:41.547 19:04:58 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:14:41.547 19:04:58 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:41.547 19:04:58 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:41.547 ************************************ 00:14:41.547 START TEST xnvme_rpc 00:14:41.547 ************************************ 00:14:41.547 19:04:58 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:14:41.547 19:04:58 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:14:41.547 19:04:58 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:14:41.547 19:04:58 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:14:41.547 19:04:58 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:14:41.547 19:04:58 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=82506 00:14:41.547 19:04:58 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 82506 00:14:41.547 19:04:58 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 82506 ']' 00:14:41.547 19:04:58 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:41.547 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:41.547 19:04:58 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:41.547 19:04:58 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:41.547 19:04:58 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:41.547 19:04:58 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:41.547 19:04:58 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:14:41.547 [2024-12-05 19:04:58.493631] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 22.11.4 initialization... 
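The cc map declared at xnvme.sh@48-50 above is what turns the conserve_cpu setting into the optional -c argument of bdev_xnvme_create, so each io_mechanism is exercised once per setting. A reconstruction of the shape (not the verbatim source):

declare -A cc=( ["false"]="" ["true"]="-c" )
# with conserve_cpu=true this expands to the '-c' seen in the rpc_cmd trace below;
# with conserve_cpu=false it expands to the empty '' argument traced earlier
rpc_cmd bdev_xnvme_create /dev/ng0n1 xnvme_bdev io_uring_cmd "${cc[$conserve_cpu]}"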
00:14:41.547 [2024-12-05 19:04:58.493786] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82506 ] 00:14:41.547 [2024-12-05 19:04:58.643378] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:41.547 [2024-12-05 19:04:58.674297] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:41.808 19:04:59 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:41.808 19:04:59 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:14:41.808 19:04:59 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/ng0n1 xnvme_bdev io_uring_cmd -c 00:14:41.808 19:04:59 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:41.808 19:04:59 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:41.808 xnvme_bdev 00:14:41.808 19:04:59 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:41.808 19:04:59 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:14:41.808 19:04:59 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:14:41.808 19:04:59 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:41.808 19:04:59 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:41.808 19:04:59 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:14:42.068 19:04:59 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:42.068 19:04:59 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:14:42.068 19:04:59 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:14:42.068 19:04:59 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:14:42.068 19:04:59 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:42.068 19:04:59 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:42.068 19:04:59 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:14:42.068 19:04:59 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:42.068 19:04:59 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/ng0n1 == \/\d\e\v\/\n\g\0\n\1 ]] 00:14:42.068 19:04:59 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:14:42.068 19:04:59 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:14:42.068 19:04:59 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:42.068 19:04:59 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:14:42.068 19:04:59 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:42.068 19:04:59 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:42.068 19:04:59 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ io_uring_cmd == \i\o\_\u\r\i\n\g\_\c\m\d ]] 00:14:42.068 19:04:59 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:14:42.068 19:04:59 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:14:42.068 
19:04:59 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:14:42.068 19:04:59 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:42.068 19:04:59 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:42.068 19:04:59 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:42.068 19:04:59 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ true == \t\r\u\e ]] 00:14:42.068 19:04:59 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:14:42.068 19:04:59 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:42.068 19:04:59 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:42.068 19:04:59 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:42.068 19:04:59 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 82506 00:14:42.068 19:04:59 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 82506 ']' 00:14:42.068 19:04:59 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 82506 00:14:42.068 19:04:59 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:14:42.068 19:04:59 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:42.068 19:04:59 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 82506 00:14:42.068 19:04:59 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:42.068 19:04:59 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:42.068 19:04:59 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 82506' 00:14:42.068 killing process with pid 82506 00:14:42.068 19:04:59 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 82506 00:14:42.068 19:04:59 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 82506 00:14:42.348 00:14:42.348 real 0m1.414s 00:14:42.348 user 0m1.476s 00:14:42.348 sys 0m0.422s 00:14:42.348 ************************************ 00:14:42.348 19:04:59 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:42.348 19:04:59 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:42.348 END TEST xnvme_rpc 00:14:42.348 ************************************ 00:14:42.348 19:04:59 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:14:42.348 19:04:59 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:14:42.348 19:04:59 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:42.348 19:04:59 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:42.348 ************************************ 00:14:42.348 START TEST xnvme_bdevperf 00:14:42.348 ************************************ 00:14:42.348 19:04:59 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:14:42.348 19:04:59 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:14:42.348 19:04:59 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=io_uring_cmd 00:14:42.348 19:04:59 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:42.348 19:04:59 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:14:42.349 19:04:59 nvme_xnvme.xnvme_bdevperf -- 
xnvme/xnvme.sh@17 -- # gen_conf 00:14:42.349 19:04:59 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:14:42.349 19:04:59 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:42.609 { 00:14:42.609 "subsystems": [ 00:14:42.609 { 00:14:42.609 "subsystem": "bdev", 00:14:42.609 "config": [ 00:14:42.609 { 00:14:42.609 "params": { 00:14:42.609 "io_mechanism": "io_uring_cmd", 00:14:42.609 "conserve_cpu": true, 00:14:42.609 "filename": "/dev/ng0n1", 00:14:42.609 "name": "xnvme_bdev" 00:14:42.609 }, 00:14:42.609 "method": "bdev_xnvme_create" 00:14:42.609 }, 00:14:42.609 { 00:14:42.609 "method": "bdev_wait_for_examine" 00:14:42.609 } 00:14:42.609 ] 00:14:42.609 } 00:14:42.609 ] 00:14:42.609 } 00:14:42.609 [2024-12-05 19:04:59.956998] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 22.11.4 initialization... 00:14:42.609 [2024-12-05 19:04:59.957138] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82567 ] 00:14:42.609 [2024-12-05 19:05:00.103992] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:42.609 [2024-12-05 19:05:00.132354] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:42.870 Running I/O for 5 seconds... 00:14:44.758 35264.00 IOPS, 137.75 MiB/s [2024-12-05T19:05:03.261Z] 35040.00 IOPS, 136.88 MiB/s [2024-12-05T19:05:04.650Z] 35072.00 IOPS, 137.00 MiB/s [2024-12-05T19:05:05.595Z] 35328.00 IOPS, 138.00 MiB/s [2024-12-05T19:05:05.595Z] 35558.40 IOPS, 138.90 MiB/s 00:14:48.036 Latency(us) 00:14:48.036 [2024-12-05T19:05:05.595Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:48.036 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:14:48.036 xnvme_bdev : 5.01 35528.90 138.78 0.00 0.00 1797.01 938.93 3982.57 00:14:48.036 [2024-12-05T19:05:05.595Z] =================================================================================================================== 00:14:48.036 [2024-12-05T19:05:05.595Z] Total : 35528.90 138.78 0.00 0.00 1797.01 938.93 3982.57 00:14:48.036 19:05:05 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:48.036 19:05:05 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:14:48.036 19:05:05 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:14:48.036 19:05:05 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:14:48.036 19:05:05 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:48.036 { 00:14:48.036 "subsystems": [ 00:14:48.036 { 00:14:48.036 "subsystem": "bdev", 00:14:48.036 "config": [ 00:14:48.036 { 00:14:48.036 "params": { 00:14:48.036 "io_mechanism": "io_uring_cmd", 00:14:48.036 "conserve_cpu": true, 00:14:48.036 "filename": "/dev/ng0n1", 00:14:48.036 "name": "xnvme_bdev" 00:14:48.036 }, 00:14:48.036 "method": "bdev_xnvme_create" 00:14:48.036 }, 00:14:48.036 { 00:14:48.036 "method": "bdev_wait_for_examine" 00:14:48.036 } 00:14:48.036 ] 00:14:48.036 } 00:14:48.036 ] 00:14:48.036 } 00:14:48.036 [2024-12-05 19:05:05.511271] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 22.11.4 initialization... 
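For reference, the xnvme_rpc flow exercised above reduces to three plain RPC calls; a sketch using scripts/rpc.py directly, under the assumption that the test's rpc_cmd wrapper forwards these verbatim to the target's default /var/tmp/spdk.sock socket:

  scripts/rpc.py bdev_xnvme_create /dev/ng0n1 xnvme_bdev io_uring_cmd -c   # -c enables conserve_cpu
  scripts/rpc.py framework_get_config bdev \
      | jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename'
  scripts/rpc.py bdev_xnvme_delete xnvme_bdev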
00:14:48.036 [2024-12-05 19:05:05.511403] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82630 ] 00:14:48.296 [2024-12-05 19:05:05.657364] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:48.296 [2024-12-05 19:05:05.686538] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:48.296 Running I/O for 5 seconds... 00:14:50.248 41786.00 IOPS, 163.23 MiB/s [2024-12-05T19:05:09.193Z] 41234.00 IOPS, 161.07 MiB/s [2024-12-05T19:05:10.138Z] 39996.33 IOPS, 156.24 MiB/s [2024-12-05T19:05:11.082Z] 39308.50 IOPS, 153.55 MiB/s 00:14:53.523 Latency(us) 00:14:53.523 [2024-12-05T19:05:11.082Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:53.523 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:14:53.523 xnvme_bdev : 5.00 38873.54 151.85 0.00 0.00 1641.82 576.59 8872.57 00:14:53.523 [2024-12-05T19:05:11.083Z] =================================================================================================================== 00:14:53.524 [2024-12-05T19:05:11.083Z] Total : 38873.54 151.85 0.00 0.00 1641.82 576.59 8872.57 00:14:53.524 19:05:10 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:53.524 19:05:10 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w unmap -t 5 -T xnvme_bdev -o 4096 00:14:53.524 19:05:10 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:14:53.524 19:05:10 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:14:53.524 19:05:10 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:53.524 { 00:14:53.524 "subsystems": [ 00:14:53.524 { 00:14:53.524 "subsystem": "bdev", 00:14:53.524 "config": [ 00:14:53.524 { 00:14:53.524 "params": { 00:14:53.524 "io_mechanism": "io_uring_cmd", 00:14:53.524 "conserve_cpu": true, 00:14:53.524 "filename": "/dev/ng0n1", 00:14:53.524 "name": "xnvme_bdev" 00:14:53.524 }, 00:14:53.524 "method": "bdev_xnvme_create" 00:14:53.524 }, 00:14:53.524 { 00:14:53.524 "method": "bdev_wait_for_examine" 00:14:53.524 } 00:14:53.524 ] 00:14:53.524 } 00:14:53.524 ] 00:14:53.524 } 00:14:53.524 [2024-12-05 19:05:11.035622] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 22.11.4 initialization... 00:14:53.524 [2024-12-05 19:05:11.035772] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82699 ] 00:14:53.785 [2024-12-05 19:05:11.182929] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:53.785 [2024-12-05 19:05:11.211911] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:53.785 Running I/O for 5 seconds... 
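All four bdevperf passes in this block share one invocation, differing only in the -w workload (randread, randwrite, unmap, write_zeroes); a sketch with the JSON config written to a temporary file (/tmp/xnvme.json is a hypothetical name standing in for the test's /dev/fd/62 substitution):

  cat > /tmp/xnvme.json <<'JSON'
  { "subsystems": [ { "subsystem": "bdev", "config": [
      { "method": "bdev_xnvme_create",
        "params": { "io_mechanism": "io_uring_cmd", "conserve_cpu": true,
                    "filename": "/dev/ng0n1", "name": "xnvme_bdev" } },
      { "method": "bdev_wait_for_examine" } ] } ] }
  JSON
  build/examples/bdevperf --json /tmp/xnvme.json -q 64 -w randread -t 5 -T xnvme_bdev -o 4096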
00:14:56.112 74496.00 IOPS, 291.00 MiB/s [2024-12-05T19:05:14.610Z] 73856.00 IOPS, 288.50 MiB/s [2024-12-05T19:05:15.607Z] 77866.67 IOPS, 304.17 MiB/s [2024-12-05T19:05:16.559Z] 81968.00 IOPS, 320.19 MiB/s 00:14:59.000 Latency(us) 00:14:59.000 [2024-12-05T19:05:16.559Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:59.000 Job: xnvme_bdev (Core Mask 0x1, workload: unmap, depth: 64, IO size: 4096) 00:14:59.000 xnvme_bdev : 5.00 81134.43 316.93 0.00 0.00 785.40 392.27 3150.77 00:14:59.000 [2024-12-05T19:05:16.559Z] =================================================================================================================== 00:14:59.000 [2024-12-05T19:05:16.559Z] Total : 81134.43 316.93 0.00 0.00 785.40 392.27 3150.77 00:14:59.000 19:05:16 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:59.000 19:05:16 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w write_zeroes -t 5 -T xnvme_bdev -o 4096 00:14:59.000 19:05:16 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:14:59.000 19:05:16 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:14:59.000 19:05:16 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:59.261 { 00:14:59.261 "subsystems": [ 00:14:59.261 { 00:14:59.261 "subsystem": "bdev", 00:14:59.261 "config": [ 00:14:59.261 { 00:14:59.261 "params": { 00:14:59.261 "io_mechanism": "io_uring_cmd", 00:14:59.261 "conserve_cpu": true, 00:14:59.261 "filename": "/dev/ng0n1", 00:14:59.261 "name": "xnvme_bdev" 00:14:59.261 }, 00:14:59.261 "method": "bdev_xnvme_create" 00:14:59.261 }, 00:14:59.261 { 00:14:59.261 "method": "bdev_wait_for_examine" 00:14:59.261 } 00:14:59.261 ] 00:14:59.261 } 00:14:59.261 ] 00:14:59.261 } 00:14:59.261 [2024-12-05 19:05:16.593003] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 22.11.4 initialization... 00:14:59.261 [2024-12-05 19:05:16.593128] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82763 ] 00:14:59.261 [2024-12-05 19:05:16.735420] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:59.261 [2024-12-05 19:05:16.766720] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:59.521 Running I/O for 5 seconds... 
00:15:01.398 48476.00 IOPS, 189.36 MiB/s [2024-12-05T19:05:19.892Z] 46718.50 IOPS, 182.49 MiB/s [2024-12-05T19:05:21.270Z] 45491.33 IOPS, 177.70 MiB/s [2024-12-05T19:05:22.211Z] 43845.00 IOPS, 171.27 MiB/s 00:15:04.652 Latency(us) 00:15:04.652 [2024-12-05T19:05:22.211Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:04.652 Job: xnvme_bdev (Core Mask 0x1, workload: write_zeroes, depth: 64, IO size: 4096) 00:15:04.652 xnvme_bdev : 5.00 42670.08 166.68 0.00 0.00 1494.62 144.94 23290.49 00:15:04.652 [2024-12-05T19:05:22.211Z] =================================================================================================================== 00:15:04.652 [2024-12-05T19:05:22.211Z] Total : 42670.08 166.68 0.00 0.00 1494.62 144.94 23290.49 00:15:04.652 00:15:04.652 real 0m22.246s 00:15:04.652 user 0m12.709s 00:15:04.652 sys 0m7.274s 00:15:04.652 19:05:22 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:04.652 ************************************ 00:15:04.652 END TEST xnvme_bdevperf 00:15:04.652 ************************************ 00:15:04.652 19:05:22 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:15:04.652 19:05:22 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:15:04.652 19:05:22 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:15:04.652 19:05:22 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:04.652 19:05:22 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:04.652 ************************************ 00:15:04.652 START TEST xnvme_fio_plugin 00:15:04.652 ************************************ 00:15:04.652 19:05:22 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:15:04.652 19:05:22 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:15:04.652 19:05:22 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=io_uring_cmd_fio 00:15:04.652 19:05:22 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:15:04.914 19:05:22 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:04.914 19:05:22 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:04.914 19:05:22 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:15:04.914 19:05:22 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:15:04.914 19:05:22 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:15:04.914 19:05:22 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:15:04.914 19:05:22 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:15:04.914 19:05:22 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:04.914 19:05:22 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:15:04.914 19:05:22 nvme_xnvme.xnvme_fio_plugin -- 
common/autotest_common.sh@1345 -- # shift 00:15:04.914 19:05:22 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:15:04.914 19:05:22 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:15:04.914 19:05:22 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:04.914 19:05:22 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:15:04.914 19:05:22 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:15:04.914 19:05:22 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:15:04.914 19:05:22 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:15:04.914 19:05:22 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:15:04.914 19:05:22 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:15:04.914 19:05:22 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:04.914 { 00:15:04.914 "subsystems": [ 00:15:04.914 { 00:15:04.914 "subsystem": "bdev", 00:15:04.914 "config": [ 00:15:04.914 { 00:15:04.914 "params": { 00:15:04.914 "io_mechanism": "io_uring_cmd", 00:15:04.914 "conserve_cpu": true, 00:15:04.914 "filename": "/dev/ng0n1", 00:15:04.914 "name": "xnvme_bdev" 00:15:04.914 }, 00:15:04.914 "method": "bdev_xnvme_create" 00:15:04.914 }, 00:15:04.914 { 00:15:04.914 "method": "bdev_wait_for_examine" 00:15:04.914 } 00:15:04.914 ] 00:15:04.914 } 00:15:04.914 ] 00:15:04.914 } 00:15:04.914 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:15:04.914 fio-3.35 00:15:04.914 Starting 1 thread 00:15:11.504 00:15:11.504 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=82865: Thu Dec 5 19:05:27 2024 00:15:11.504 read: IOPS=35.0k, BW=137MiB/s (144MB/s)(685MiB/5001msec) 00:15:11.504 slat (usec): min=2, max=101, avg= 3.80, stdev= 2.01 00:15:11.504 clat (usec): min=855, max=3585, avg=1673.43, stdev=276.41 00:15:11.504 lat (usec): min=858, max=3598, avg=1677.23, stdev=276.67 00:15:11.504 clat percentiles (usec): 00:15:11.504 | 1.00th=[ 1156], 5.00th=[ 1287], 10.00th=[ 1352], 20.00th=[ 1434], 00:15:11.504 | 30.00th=[ 1516], 40.00th=[ 1582], 50.00th=[ 1647], 60.00th=[ 1713], 00:15:11.504 | 70.00th=[ 1795], 80.00th=[ 1893], 90.00th=[ 2024], 95.00th=[ 2147], 00:15:11.504 | 99.00th=[ 2474], 99.50th=[ 2606], 99.90th=[ 3064], 99.95th=[ 3228], 00:15:11.504 | 99.99th=[ 3523] 00:15:11.505 bw ( KiB/s): min=133632, max=147968, per=99.65%, avg=139662.22, stdev=4037.81, samples=9 00:15:11.505 iops : min=33408, max=36992, avg=34915.56, stdev=1009.45, samples=9 00:15:11.505 lat (usec) : 1000=0.04% 00:15:11.505 lat (msec) : 2=88.25%, 4=11.70% 00:15:11.505 cpu : usr=55.42%, sys=41.30%, ctx=21, majf=0, minf=1063 00:15:11.505 IO depths : 1=1.6%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:15:11.505 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:11.505 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=1.5%, >=64=0.0% 00:15:11.505 issued rwts: total=175232,0,0,0 
short=0,0,0,0 dropped=0,0,0,0 00:15:11.505 latency : target=0, window=0, percentile=100.00%, depth=64 00:15:11.505 00:15:11.505 Run status group 0 (all jobs): 00:15:11.505 READ: bw=137MiB/s (144MB/s), 137MiB/s-137MiB/s (144MB/s-144MB/s), io=685MiB (718MB), run=5001-5001msec 00:15:11.505 ----------------------------------------------------- 00:15:11.505 Suppressions used: 00:15:11.505 count bytes template 00:15:11.505 1 11 /usr/src/fio/parse.c 00:15:11.505 1 8 libtcmalloc_minimal.so 00:15:11.505 1 904 libcrypto.so 00:15:11.505 ----------------------------------------------------- 00:15:11.505 00:15:11.505 19:05:28 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:15:11.505 19:05:28 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:11.505 19:05:28 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:11.505 19:05:28 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:15:11.505 19:05:28 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:15:11.505 19:05:28 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:15:11.505 19:05:28 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:11.505 19:05:28 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:15:11.505 19:05:28 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:15:11.505 19:05:28 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:15:11.505 19:05:28 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:15:11.505 19:05:28 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:15:11.505 19:05:28 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:15:11.505 19:05:28 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:11.505 19:05:28 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:15:11.505 19:05:28 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:15:11.505 19:05:28 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:15:11.505 19:05:28 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:15:11.505 19:05:28 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:15:11.505 19:05:28 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:15:11.505 19:05:28 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 
--name xnvme_bdev 00:15:11.505 { 00:15:11.505 "subsystems": [ 00:15:11.505 { 00:15:11.505 "subsystem": "bdev", 00:15:11.505 "config": [ 00:15:11.505 { 00:15:11.505 "params": { 00:15:11.505 "io_mechanism": "io_uring_cmd", 00:15:11.505 "conserve_cpu": true, 00:15:11.505 "filename": "/dev/ng0n1", 00:15:11.505 "name": "xnvme_bdev" 00:15:11.505 }, 00:15:11.505 "method": "bdev_xnvme_create" 00:15:11.505 }, 00:15:11.505 { 00:15:11.505 "method": "bdev_wait_for_examine" 00:15:11.505 } 00:15:11.505 ] 00:15:11.505 } 00:15:11.505 ] 00:15:11.505 } 00:15:11.505 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:15:11.505 fio-3.35 00:15:11.505 Starting 1 thread 00:15:16.797 00:15:16.797 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=82950: Thu Dec 5 19:05:33 2024 00:15:16.797 write: IOPS=39.7k, BW=155MiB/s (162MB/s)(775MiB/5001msec); 0 zone resets 00:15:16.797 slat (usec): min=2, max=509, avg= 3.87, stdev= 2.78 00:15:16.797 clat (usec): min=292, max=5379, avg=1459.87, stdev=307.14 00:15:16.797 lat (usec): min=295, max=5383, avg=1463.75, stdev=307.63 00:15:16.797 clat percentiles (usec): 00:15:16.797 | 1.00th=[ 1012], 5.00th=[ 1090], 10.00th=[ 1139], 20.00th=[ 1205], 00:15:16.797 | 30.00th=[ 1270], 40.00th=[ 1336], 50.00th=[ 1418], 60.00th=[ 1483], 00:15:16.797 | 70.00th=[ 1582], 80.00th=[ 1680], 90.00th=[ 1844], 95.00th=[ 1991], 00:15:16.797 | 99.00th=[ 2409], 99.50th=[ 2638], 99.90th=[ 3294], 99.95th=[ 3752], 00:15:16.797 | 99.99th=[ 5014] 00:15:16.797 bw ( KiB/s): min=142224, max=181552, per=99.02%, avg=157060.44, stdev=13890.49, samples=9 00:15:16.797 iops : min=35556, max=45388, avg=39265.11, stdev=3472.62, samples=9 00:15:16.797 lat (usec) : 500=0.01%, 750=0.01%, 1000=0.74% 00:15:16.797 lat (msec) : 2=94.29%, 4=4.92%, 10=0.04% 00:15:16.797 cpu : usr=60.84%, sys=33.52%, ctx=74, majf=0, minf=1064 00:15:16.797 IO depths : 1=1.5%, 2=3.0%, 4=6.1%, 8=12.4%, 16=25.0%, 32=50.4%, >=64=1.6% 00:15:16.797 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:16.797 complete : 0=0.0%, 4=98.4%, 8=0.1%, 16=0.1%, 32=0.1%, 64=1.5%, >=64=0.0% 00:15:16.797 issued rwts: total=0,198299,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:16.797 latency : target=0, window=0, percentile=100.00%, depth=64 00:15:16.797 00:15:16.797 Run status group 0 (all jobs): 00:15:16.797 WRITE: bw=155MiB/s (162MB/s), 155MiB/s-155MiB/s (162MB/s-162MB/s), io=775MiB (812MB), run=5001-5001msec 00:15:17.058 ----------------------------------------------------- 00:15:17.058 Suppressions used: 00:15:17.058 count bytes template 00:15:17.058 1 11 /usr/src/fio/parse.c 00:15:17.058 1 8 libtcmalloc_minimal.so 00:15:17.058 1 904 libcrypto.so 00:15:17.058 ----------------------------------------------------- 00:15:17.058 00:15:17.058 00:15:17.058 real 0m12.234s 00:15:17.058 user 0m7.085s 00:15:17.058 sys 0m4.400s 00:15:17.058 19:05:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:17.058 ************************************ 00:15:17.058 END TEST xnvme_fio_plugin 00:15:17.058 ************************************ 00:15:17.058 19:05:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:15:17.058 19:05:34 nvme_xnvme -- xnvme/xnvme.sh@1 -- # killprocess 82506 00:15:17.058 19:05:34 nvme_xnvme -- common/autotest_common.sh@954 -- # '[' -z 82506 ']' 00:15:17.058 19:05:34 nvme_xnvme -- common/autotest_common.sh@958 -- # kill -0 82506 00:15:17.058 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: 
kill: (82506) - No such process 00:15:17.058 Process with pid 82506 is not found 00:15:17.058 19:05:34 nvme_xnvme -- common/autotest_common.sh@981 -- # echo 'Process with pid 82506 is not found' 00:15:17.058 00:15:17.058 real 2m57.949s 00:15:17.058 user 1m23.055s 00:15:17.058 sys 1m20.167s 00:15:17.058 19:05:34 nvme_xnvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:17.058 ************************************ 00:15:17.058 END TEST nvme_xnvme 00:15:17.058 ************************************ 00:15:17.058 19:05:34 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:17.058 19:05:34 -- spdk/autotest.sh@245 -- # run_test blockdev_xnvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:15:17.058 19:05:34 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:17.058 19:05:34 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:17.058 19:05:34 -- common/autotest_common.sh@10 -- # set +x 00:15:17.058 ************************************ 00:15:17.058 START TEST blockdev_xnvme 00:15:17.058 ************************************ 00:15:17.058 19:05:34 blockdev_xnvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:15:17.320 * Looking for test storage... 00:15:17.320 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:15:17.320 19:05:34 blockdev_xnvme -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:15:17.320 19:05:34 blockdev_xnvme -- common/autotest_common.sh@1711 -- # lcov --version 00:15:17.320 19:05:34 blockdev_xnvme -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:15:17.320 19:05:34 blockdev_xnvme -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:15:17.320 19:05:34 blockdev_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:17.320 19:05:34 blockdev_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:17.320 19:05:34 blockdev_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:17.320 19:05:34 blockdev_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:15:17.320 19:05:34 blockdev_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:15:17.320 19:05:34 blockdev_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:15:17.320 19:05:34 blockdev_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:15:17.320 19:05:34 blockdev_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:15:17.320 19:05:34 blockdev_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:15:17.320 19:05:34 blockdev_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:15:17.320 19:05:34 blockdev_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:17.320 19:05:34 blockdev_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:15:17.320 19:05:34 blockdev_xnvme -- scripts/common.sh@345 -- # : 1 00:15:17.320 19:05:34 blockdev_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:17.320 19:05:34 blockdev_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:17.320 19:05:34 blockdev_xnvme -- scripts/common.sh@365 -- # decimal 1 00:15:17.320 19:05:34 blockdev_xnvme -- scripts/common.sh@353 -- # local d=1 00:15:17.320 19:05:34 blockdev_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:17.320 19:05:34 blockdev_xnvme -- scripts/common.sh@355 -- # echo 1 00:15:17.320 19:05:34 blockdev_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:15:17.320 19:05:34 blockdev_xnvme -- scripts/common.sh@366 -- # decimal 2 00:15:17.320 19:05:34 blockdev_xnvme -- scripts/common.sh@353 -- # local d=2 00:15:17.320 19:05:34 blockdev_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:17.320 19:05:34 blockdev_xnvme -- scripts/common.sh@355 -- # echo 2 00:15:17.320 19:05:34 blockdev_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:15:17.320 19:05:34 blockdev_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:17.320 19:05:34 blockdev_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:17.320 19:05:34 blockdev_xnvme -- scripts/common.sh@368 -- # return 0 00:15:17.320 19:05:34 blockdev_xnvme -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:17.320 19:05:34 blockdev_xnvme -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:15:17.320 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:17.321 --rc genhtml_branch_coverage=1 00:15:17.321 --rc genhtml_function_coverage=1 00:15:17.321 --rc genhtml_legend=1 00:15:17.321 --rc geninfo_all_blocks=1 00:15:17.321 --rc geninfo_unexecuted_blocks=1 00:15:17.321 00:15:17.321 ' 00:15:17.321 19:05:34 blockdev_xnvme -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:15:17.321 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:17.321 --rc genhtml_branch_coverage=1 00:15:17.321 --rc genhtml_function_coverage=1 00:15:17.321 --rc genhtml_legend=1 00:15:17.321 --rc geninfo_all_blocks=1 00:15:17.321 --rc geninfo_unexecuted_blocks=1 00:15:17.321 00:15:17.321 ' 00:15:17.321 19:05:34 blockdev_xnvme -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:15:17.321 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:17.321 --rc genhtml_branch_coverage=1 00:15:17.321 --rc genhtml_function_coverage=1 00:15:17.321 --rc genhtml_legend=1 00:15:17.321 --rc geninfo_all_blocks=1 00:15:17.321 --rc geninfo_unexecuted_blocks=1 00:15:17.321 00:15:17.321 ' 00:15:17.321 19:05:34 blockdev_xnvme -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:15:17.321 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:17.321 --rc genhtml_branch_coverage=1 00:15:17.321 --rc genhtml_function_coverage=1 00:15:17.321 --rc genhtml_legend=1 00:15:17.321 --rc geninfo_all_blocks=1 00:15:17.321 --rc geninfo_unexecuted_blocks=1 00:15:17.321 00:15:17.321 ' 00:15:17.321 19:05:34 blockdev_xnvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:15:17.321 19:05:34 blockdev_xnvme -- bdev/nbd_common.sh@6 -- # set -e 00:15:17.321 19:05:34 blockdev_xnvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:15:17.321 19:05:34 blockdev_xnvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:15:17.321 19:05:34 blockdev_xnvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:15:17.321 19:05:34 blockdev_xnvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:15:17.321 19:05:34 blockdev_xnvme -- bdev/blockdev.sh@17 -- 
# export RPC_PIPE_TIMEOUT=30 00:15:17.321 19:05:34 blockdev_xnvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:15:17.321 19:05:34 blockdev_xnvme -- bdev/blockdev.sh@20 -- # : 00:15:17.321 19:05:34 blockdev_xnvme -- bdev/blockdev.sh@707 -- # QOS_DEV_1=Malloc_0 00:15:17.321 19:05:34 blockdev_xnvme -- bdev/blockdev.sh@708 -- # QOS_DEV_2=Null_1 00:15:17.321 19:05:34 blockdev_xnvme -- bdev/blockdev.sh@709 -- # QOS_RUN_TIME=5 00:15:17.321 19:05:34 blockdev_xnvme -- bdev/blockdev.sh@711 -- # uname -s 00:15:17.321 19:05:34 blockdev_xnvme -- bdev/blockdev.sh@711 -- # '[' Linux = Linux ']' 00:15:17.321 19:05:34 blockdev_xnvme -- bdev/blockdev.sh@713 -- # PRE_RESERVED_MEM=0 00:15:17.321 19:05:34 blockdev_xnvme -- bdev/blockdev.sh@719 -- # test_type=xnvme 00:15:17.321 19:05:34 blockdev_xnvme -- bdev/blockdev.sh@720 -- # crypto_device= 00:15:17.321 19:05:34 blockdev_xnvme -- bdev/blockdev.sh@721 -- # dek= 00:15:17.321 19:05:34 blockdev_xnvme -- bdev/blockdev.sh@722 -- # env_ctx= 00:15:17.321 19:05:34 blockdev_xnvme -- bdev/blockdev.sh@723 -- # wait_for_rpc= 00:15:17.321 19:05:34 blockdev_xnvme -- bdev/blockdev.sh@724 -- # '[' -n '' ']' 00:15:17.321 19:05:34 blockdev_xnvme -- bdev/blockdev.sh@727 -- # [[ xnvme == bdev ]] 00:15:17.321 19:05:34 blockdev_xnvme -- bdev/blockdev.sh@727 -- # [[ xnvme == crypto_* ]] 00:15:17.321 19:05:34 blockdev_xnvme -- bdev/blockdev.sh@730 -- # start_spdk_tgt 00:15:17.321 19:05:34 blockdev_xnvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=83079 00:15:17.321 19:05:34 blockdev_xnvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:15:17.321 19:05:34 blockdev_xnvme -- bdev/blockdev.sh@49 -- # waitforlisten 83079 00:15:17.321 19:05:34 blockdev_xnvme -- common/autotest_common.sh@835 -- # '[' -z 83079 ']' 00:15:17.321 19:05:34 blockdev_xnvme -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:17.321 19:05:34 blockdev_xnvme -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:17.321 19:05:34 blockdev_xnvme -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:17.321 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:17.321 19:05:34 blockdev_xnvme -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:17.321 19:05:34 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:17.321 19:05:34 blockdev_xnvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:15:17.321 [2024-12-05 19:05:34.815504] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 22.11.4 initialization... 
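The xnvme bdev setup traced just below reduces to roughly this flow, sketched under the assumption that rpc_cmd forwards to scripts/rpc.py; the socket path and namespace names are environment-specific:

  build/bin/spdk_tgt &                       # RPC served on /var/tmp/spdk.sock once up
  # after the RPC socket appears, register one xnvme bdev per non-zoned namespace
  for nvme in /dev/nvme*n*; do
      [[ -b $nvme ]] || continue
      scripts/rpc.py bdev_xnvme_create "$nvme" "${nvme##*/}" io_uring -c
  done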
00:15:17.321 [2024-12-05 19:05:34.815652] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83079 ] 00:15:17.583 [2024-12-05 19:05:34.962240] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:17.583 [2024-12-05 19:05:35.003412] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:18.156 19:05:35 blockdev_xnvme -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:18.156 19:05:35 blockdev_xnvme -- common/autotest_common.sh@868 -- # return 0 00:15:18.156 19:05:35 blockdev_xnvme -- bdev/blockdev.sh@731 -- # case "$test_type" in 00:15:18.156 19:05:35 blockdev_xnvme -- bdev/blockdev.sh@766 -- # setup_xnvme_conf 00:15:18.156 19:05:35 blockdev_xnvme -- bdev/blockdev.sh@88 -- # local io_mechanism=io_uring 00:15:18.156 19:05:35 blockdev_xnvme -- bdev/blockdev.sh@89 -- # local nvme nvmes 00:15:18.156 19:05:35 blockdev_xnvme -- bdev/blockdev.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:15:18.728 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:15:19.301 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:15:19.301 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:15:19.301 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:15:19.301 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:15:19.301 19:05:36 blockdev_xnvme -- bdev/blockdev.sh@92 -- # get_zoned_devs 00:15:19.301 19:05:36 blockdev_xnvme -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:15:19.301 19:05:36 blockdev_xnvme -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:15:19.301 19:05:36 blockdev_xnvme -- common/autotest_common.sh@1658 -- # zoned_ctrls=() 00:15:19.301 19:05:36 blockdev_xnvme -- common/autotest_common.sh@1658 -- # local -A zoned_ctrls 00:15:19.301 19:05:36 blockdev_xnvme -- common/autotest_common.sh@1659 -- # local nvme bdf ns 00:15:19.301 19:05:36 blockdev_xnvme -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:15:19.301 19:05:36 blockdev_xnvme -- common/autotest_common.sh@1669 -- # bdf=0000:00:12.0 00:15:19.301 19:05:36 blockdev_xnvme -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:15:19.301 19:05:36 blockdev_xnvme -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n1 00:15:19.301 19:05:36 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:15:19.301 19:05:36 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:15:19.301 19:05:36 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:15:19.301 19:05:36 blockdev_xnvme -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:15:19.301 19:05:36 blockdev_xnvme -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n2 00:15:19.301 19:05:36 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme0n2 00:15:19.301 19:05:36 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:15:19.301 19:05:36 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:15:19.301 19:05:36 blockdev_xnvme -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:15:19.301 19:05:36 blockdev_xnvme -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n3 00:15:19.301 19:05:36 
blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme0n3 00:15:19.301 19:05:36 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:15:19.301 19:05:36 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:15:19.301 19:05:36 blockdev_xnvme -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:15:19.301 19:05:36 blockdev_xnvme -- common/autotest_common.sh@1669 -- # bdf=0000:00:10.0 00:15:19.301 19:05:36 blockdev_xnvme -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:15:19.301 19:05:36 blockdev_xnvme -- common/autotest_common.sh@1671 -- # is_block_zoned nvme1n1 00:15:19.301 19:05:36 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:15:19.301 19:05:36 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:15:19.301 19:05:36 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:15:19.301 19:05:36 blockdev_xnvme -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:15:19.301 19:05:36 blockdev_xnvme -- common/autotest_common.sh@1669 -- # bdf=0000:00:11.0 00:15:19.301 19:05:36 blockdev_xnvme -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:15:19.301 19:05:36 blockdev_xnvme -- common/autotest_common.sh@1671 -- # is_block_zoned nvme2n1 00:15:19.301 19:05:36 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme2n1 00:15:19.301 19:05:36 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:15:19.301 19:05:36 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:15:19.301 19:05:36 blockdev_xnvme -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:15:19.301 19:05:36 blockdev_xnvme -- common/autotest_common.sh@1669 -- # bdf=0000:00:13.0 00:15:19.301 19:05:36 blockdev_xnvme -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:15:19.301 19:05:36 blockdev_xnvme -- common/autotest_common.sh@1671 -- # is_block_zoned nvme3c3n1 00:15:19.301 19:05:36 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme3c3n1 00:15:19.301 19:05:36 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:15:19.301 19:05:36 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:15:19.301 19:05:36 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:15:19.301 19:05:36 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme0n1 ]] 00:15:19.301 19:05:36 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:15:19.301 19:05:36 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:15:19.301 19:05:36 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:15:19.301 19:05:36 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme0n2 ]] 00:15:19.301 19:05:36 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:15:19.301 19:05:36 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:15:19.301 19:05:36 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:15:19.301 19:05:36 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme0n3 ]] 00:15:19.301 19:05:36 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:15:19.302 19:05:36 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme 
${nvme##*/} $io_mechanism -c") 00:15:19.302 19:05:36 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:15:19.302 19:05:36 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme1n1 ]] 00:15:19.302 19:05:36 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:15:19.302 19:05:36 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:15:19.302 19:05:36 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:15:19.302 19:05:36 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n1 ]] 00:15:19.302 19:05:36 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:15:19.302 19:05:36 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:15:19.302 19:05:36 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:15:19.302 19:05:36 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme3n1 ]] 00:15:19.302 19:05:36 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:15:19.302 19:05:36 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:15:19.302 19:05:36 blockdev_xnvme -- bdev/blockdev.sh@99 -- # (( 6 > 0 )) 00:15:19.302 19:05:36 blockdev_xnvme -- bdev/blockdev.sh@100 -- # rpc_cmd 00:15:19.302 19:05:36 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.302 19:05:36 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:19.302 19:05:36 blockdev_xnvme -- bdev/blockdev.sh@100 -- # printf '%s\n' 'bdev_xnvme_create /dev/nvme0n1 nvme0n1 io_uring -c' 'bdev_xnvme_create /dev/nvme0n2 nvme0n2 io_uring -c' 'bdev_xnvme_create /dev/nvme0n3 nvme0n3 io_uring -c' 'bdev_xnvme_create /dev/nvme1n1 nvme1n1 io_uring -c' 'bdev_xnvme_create /dev/nvme2n1 nvme2n1 io_uring -c' 'bdev_xnvme_create /dev/nvme3n1 nvme3n1 io_uring -c' 00:15:19.302 nvme0n1 00:15:19.302 nvme0n2 00:15:19.302 nvme0n3 00:15:19.302 nvme1n1 00:15:19.302 nvme2n1 00:15:19.564 nvme3n1 00:15:19.564 19:05:36 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.564 19:05:36 blockdev_xnvme -- bdev/blockdev.sh@774 -- # rpc_cmd bdev_wait_for_examine 00:15:19.564 19:05:36 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.564 19:05:36 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:19.564 19:05:36 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.564 19:05:36 blockdev_xnvme -- bdev/blockdev.sh@777 -- # cat 00:15:19.564 19:05:36 blockdev_xnvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n accel 00:15:19.564 19:05:36 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.564 19:05:36 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:19.564 19:05:36 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.564 19:05:36 blockdev_xnvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n bdev 00:15:19.564 19:05:36 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.564 19:05:36 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:19.564 19:05:36 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.564 19:05:36 blockdev_xnvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n iobuf 00:15:19.564 19:05:36 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.564 19:05:36 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:19.564 
19:05:36 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.564 19:05:36 blockdev_xnvme -- bdev/blockdev.sh@785 -- # mapfile -t bdevs 00:15:19.564 19:05:36 blockdev_xnvme -- bdev/blockdev.sh@785 -- # rpc_cmd bdev_get_bdevs 00:15:19.564 19:05:36 blockdev_xnvme -- bdev/blockdev.sh@785 -- # jq -r '.[] | select(.claimed == false)' 00:15:19.564 19:05:36 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.564 19:05:36 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:19.564 19:05:36 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.564 19:05:36 blockdev_xnvme -- bdev/blockdev.sh@786 -- # mapfile -t bdevs_name 00:15:19.565 19:05:36 blockdev_xnvme -- bdev/blockdev.sh@786 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "70864aa2-2ff0-43e0-a687-5af86ae72708"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "70864aa2-2ff0-43e0-a687-5af86ae72708",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme0n2",' ' "aliases": [' ' "c78d1565-5247-4ca0-8383-c8cfa8dfc553"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "c78d1565-5247-4ca0-8383-c8cfa8dfc553",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme0n3",' ' "aliases": [' ' "18778814-72aa-474b-88a9-2e0a9c926c32"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "18778814-72aa-474b-88a9-2e0a9c926c32",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' 
"6d2d86e1-7f04-47ce-b838-7927cd7aa555"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "6d2d86e1-7f04-47ce-b838-7927cd7aa555",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "a245fbc0-46e9-4aeb-9280-6b60d7c6f12d"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "a245fbc0-46e9-4aeb-9280-6b60d7c6f12d",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "3eb95208-e560-4b6b-922e-59a074cd9b59"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "3eb95208-e560-4b6b-922e-59a074cd9b59",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' 00:15:19.565 19:05:36 blockdev_xnvme -- bdev/blockdev.sh@786 -- # jq -r .name 00:15:19.565 19:05:37 blockdev_xnvme -- bdev/blockdev.sh@787 -- # bdev_list=("${bdevs_name[@]}") 00:15:19.565 19:05:37 blockdev_xnvme -- bdev/blockdev.sh@789 -- # hello_world_bdev=nvme0n1 00:15:19.565 19:05:37 blockdev_xnvme -- bdev/blockdev.sh@790 -- # trap - SIGINT SIGTERM EXIT 00:15:19.565 19:05:37 blockdev_xnvme -- bdev/blockdev.sh@791 -- # killprocess 83079 00:15:19.565 19:05:37 blockdev_xnvme -- common/autotest_common.sh@954 -- # '[' -z 83079 ']' 00:15:19.565 19:05:37 blockdev_xnvme -- common/autotest_common.sh@958 -- # kill -0 83079 00:15:19.565 19:05:37 blockdev_xnvme -- common/autotest_common.sh@959 -- # uname 00:15:19.565 19:05:37 blockdev_xnvme -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:19.565 19:05:37 blockdev_xnvme -- common/autotest_common.sh@960 -- # 
ps --no-headers -o comm= 83079 00:15:19.565 19:05:37 blockdev_xnvme -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:19.565 19:05:37 blockdev_xnvme -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:19.565 killing process with pid 83079 00:15:19.565 19:05:37 blockdev_xnvme -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83079' 00:15:19.565 19:05:37 blockdev_xnvme -- common/autotest_common.sh@973 -- # kill 83079 00:15:19.565 19:05:37 blockdev_xnvme -- common/autotest_common.sh@978 -- # wait 83079 00:15:19.827 19:05:37 blockdev_xnvme -- bdev/blockdev.sh@795 -- # trap cleanup SIGINT SIGTERM EXIT 00:15:19.827 19:05:37 blockdev_xnvme -- bdev/blockdev.sh@797 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:15:19.827 19:05:37 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:15:19.827 19:05:37 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:19.827 19:05:37 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:19.827 ************************************ 00:15:19.827 START TEST bdev_hello_world 00:15:19.827 ************************************ 00:15:19.827 19:05:37 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:15:20.087 [2024-12-05 19:05:37.421855] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 22.11.4 initialization... 00:15:20.087 [2024-12-05 19:05:37.421990] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83355 ] 00:15:20.087 [2024-12-05 19:05:37.569977] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:20.087 [2024-12-05 19:05:37.599381] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:20.348 [2024-12-05 19:05:37.835737] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:15:20.348 [2024-12-05 19:05:37.835800] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev nvme0n1 00:15:20.348 [2024-12-05 19:05:37.835824] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:15:20.348 [2024-12-05 19:05:37.838104] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:15:20.348 [2024-12-05 19:05:37.838813] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:15:20.348 [2024-12-05 19:05:37.838851] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:15:20.348 [2024-12-05 19:05:37.839405] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
00:15:20.348 00:15:20.348 [2024-12-05 19:05:37.839451] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:15:20.610 00:15:20.610 real 0m0.671s 00:15:20.610 user 0m0.343s 00:15:20.610 sys 0m0.183s 00:15:20.610 19:05:38 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:20.610 19:05:38 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:15:20.610 ************************************ 00:15:20.610 END TEST bdev_hello_world 00:15:20.610 ************************************ 00:15:20.610 19:05:38 blockdev_xnvme -- bdev/blockdev.sh@798 -- # run_test bdev_bounds bdev_bounds '' 00:15:20.610 19:05:38 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:20.610 19:05:38 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:20.610 19:05:38 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:20.610 ************************************ 00:15:20.610 START TEST bdev_bounds 00:15:20.610 ************************************ 00:15:20.610 19:05:38 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:15:20.610 Process bdevio pid: 83381 00:15:20.610 19:05:38 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=83381 00:15:20.610 19:05:38 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:15:20.610 19:05:38 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 83381' 00:15:20.610 19:05:38 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 83381 00:15:20.610 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:20.610 19:05:38 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 83381 ']' 00:15:20.610 19:05:38 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:20.610 19:05:38 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:20.610 19:05:38 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:20.610 19:05:38 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:20.610 19:05:38 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:15:20.610 19:05:38 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:15:20.872 [2024-12-05 19:05:38.169656] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 22.11.4 initialization... 
00:15:20.872 [2024-12-05 19:05:38.169799] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83381 ] 00:15:20.872 [2024-12-05 19:05:38.314700] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:20.872 [2024-12-05 19:05:38.350376] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:20.872 [2024-12-05 19:05:38.350833] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:15:20.872 [2024-12-05 19:05:38.354304] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:21.817 19:05:39 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:21.817 19:05:39 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@868 -- # return 0 00:15:21.817 19:05:39 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:15:21.817 I/O targets: 00:15:21.817 nvme0n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:15:21.817 nvme0n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:15:21.817 nvme0n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:15:21.817 nvme1n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:15:21.817 nvme2n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:15:21.817 nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:15:21.817 00:15:21.817 00:15:21.817 CUnit - A unit testing framework for C - Version 2.1-3 00:15:21.817 http://cunit.sourceforge.net/ 00:15:21.817 00:15:21.817 00:15:21.817 Suite: bdevio tests on: nvme3n1 00:15:21.817 Test: blockdev write read block ...passed 00:15:21.817 Test: blockdev write zeroes read block ...passed 00:15:21.817 Test: blockdev write zeroes read no split ...passed 00:15:21.817 Test: blockdev write zeroes read split ...passed 00:15:21.817 Test: blockdev write zeroes read split partial ...passed 00:15:21.817 Test: blockdev reset ...passed 00:15:21.817 Test: blockdev write read 8 blocks ...passed 00:15:21.817 Test: blockdev write read size > 128k ...passed 00:15:21.817 Test: blockdev write read invalid size ...passed 00:15:21.817 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:15:21.817 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:15:21.817 Test: blockdev write read max offset ...passed 00:15:21.817 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:15:21.817 Test: blockdev writev readv 8 blocks ...passed 00:15:21.817 Test: blockdev writev readv 30 x 1block ...passed 00:15:21.817 Test: blockdev writev readv block ...passed 00:15:21.817 Test: blockdev writev readv size > 128k ...passed 00:15:21.817 Test: blockdev writev readv size > 128k in two iovs ...passed 00:15:21.817 Test: blockdev comparev and writev ...passed 00:15:21.817 Test: blockdev nvme passthru rw ...passed 00:15:21.817 Test: blockdev nvme passthru vendor specific ...passed 00:15:21.817 Test: blockdev nvme admin passthru ...passed 00:15:21.817 Test: blockdev copy ...passed 00:15:21.817 Suite: bdevio tests on: nvme2n1 00:15:21.817 Test: blockdev write read block ...passed 00:15:21.817 Test: blockdev write zeroes read block ...passed 00:15:21.817 Test: blockdev write zeroes read no split ...passed 00:15:21.817 Test: blockdev write zeroes read split ...passed 00:15:21.817 Test: blockdev write zeroes read split partial ...passed 00:15:21.817 Test: blockdev reset ...passed 
00:15:21.817 Test: blockdev write read 8 blocks ...passed 00:15:21.817 Test: blockdev write read size > 128k ...passed 00:15:21.817 Test: blockdev write read invalid size ...passed 00:15:21.817 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:15:21.817 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:15:21.817 Test: blockdev write read max offset ...passed 00:15:21.817 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:15:21.817 Test: blockdev writev readv 8 blocks ...passed 00:15:21.817 Test: blockdev writev readv 30 x 1block ...passed 00:15:21.817 Test: blockdev writev readv block ...passed 00:15:21.817 Test: blockdev writev readv size > 128k ...passed 00:15:21.817 Test: blockdev writev readv size > 128k in two iovs ...passed 00:15:21.817 Test: blockdev comparev and writev ...passed 00:15:21.817 Test: blockdev nvme passthru rw ...passed 00:15:21.817 Test: blockdev nvme passthru vendor specific ...passed 00:15:21.817 Test: blockdev nvme admin passthru ...passed 00:15:21.817 Test: blockdev copy ...passed 00:15:21.817 Suite: bdevio tests on: nvme1n1 00:15:21.817 Test: blockdev write read block ...passed 00:15:21.817 Test: blockdev write zeroes read block ...passed 00:15:21.817 Test: blockdev write zeroes read no split ...passed 00:15:21.817 Test: blockdev write zeroes read split ...passed 00:15:21.817 Test: blockdev write zeroes read split partial ...passed 00:15:21.817 Test: blockdev reset ...passed 00:15:21.817 Test: blockdev write read 8 blocks ...passed 00:15:21.817 Test: blockdev write read size > 128k ...passed 00:15:21.817 Test: blockdev write read invalid size ...passed 00:15:21.817 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:15:21.817 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:15:21.817 Test: blockdev write read max offset ...passed 00:15:21.817 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:15:21.817 Test: blockdev writev readv 8 blocks ...passed 00:15:21.817 Test: blockdev writev readv 30 x 1block ...passed 00:15:21.817 Test: blockdev writev readv block ...passed 00:15:21.817 Test: blockdev writev readv size > 128k ...passed 00:15:21.817 Test: blockdev writev readv size > 128k in two iovs ...passed 00:15:21.817 Test: blockdev comparev and writev ...passed 00:15:21.817 Test: blockdev nvme passthru rw ...passed 00:15:21.817 Test: blockdev nvme passthru vendor specific ...passed 00:15:21.817 Test: blockdev nvme admin passthru ...passed 00:15:21.817 Test: blockdev copy ...passed 00:15:21.817 Suite: bdevio tests on: nvme0n3 00:15:21.817 Test: blockdev write read block ...passed 00:15:21.817 Test: blockdev write zeroes read block ...passed 00:15:21.817 Test: blockdev write zeroes read no split ...passed 00:15:21.817 Test: blockdev write zeroes read split ...passed 00:15:21.817 Test: blockdev write zeroes read split partial ...passed 00:15:21.817 Test: blockdev reset ...passed 00:15:21.817 Test: blockdev write read 8 blocks ...passed 00:15:21.817 Test: blockdev write read size > 128k ...passed 00:15:21.817 Test: blockdev write read invalid size ...passed 00:15:21.817 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:15:21.817 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:15:21.817 Test: blockdev write read max offset ...passed 00:15:21.817 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:15:21.817 Test: blockdev writev readv 8 blocks 
...passed 00:15:21.817 Test: blockdev writev readv 30 x 1block ...passed 00:15:21.817 Test: blockdev writev readv block ...passed 00:15:21.817 Test: blockdev writev readv size > 128k ...passed 00:15:21.817 Test: blockdev writev readv size > 128k in two iovs ...passed 00:15:21.817 Test: blockdev comparev and writev ...passed 00:15:21.817 Test: blockdev nvme passthru rw ...passed 00:15:21.817 Test: blockdev nvme passthru vendor specific ...passed 00:15:21.817 Test: blockdev nvme admin passthru ...passed 00:15:21.817 Test: blockdev copy ...passed 00:15:21.817 Suite: bdevio tests on: nvme0n2 00:15:21.817 Test: blockdev write read block ...passed 00:15:21.817 Test: blockdev write zeroes read block ...passed 00:15:21.817 Test: blockdev write zeroes read no split ...passed 00:15:21.817 Test: blockdev write zeroes read split ...passed 00:15:22.079 Test: blockdev write zeroes read split partial ...passed 00:15:22.079 Test: blockdev reset ...passed 00:15:22.079 Test: blockdev write read 8 blocks ...passed 00:15:22.079 Test: blockdev write read size > 128k ...passed 00:15:22.079 Test: blockdev write read invalid size ...passed 00:15:22.079 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:15:22.079 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:15:22.079 Test: blockdev write read max offset ...passed 00:15:22.079 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:15:22.079 Test: blockdev writev readv 8 blocks ...passed 00:15:22.079 Test: blockdev writev readv 30 x 1block ...passed 00:15:22.079 Test: blockdev writev readv block ...passed 00:15:22.079 Test: blockdev writev readv size > 128k ...passed 00:15:22.079 Test: blockdev writev readv size > 128k in two iovs ...passed 00:15:22.079 Test: blockdev comparev and writev ...passed 00:15:22.079 Test: blockdev nvme passthru rw ...passed 00:15:22.079 Test: blockdev nvme passthru vendor specific ...passed 00:15:22.079 Test: blockdev nvme admin passthru ...passed 00:15:22.079 Test: blockdev copy ...passed 00:15:22.079 Suite: bdevio tests on: nvme0n1 00:15:22.079 Test: blockdev write read block ...passed 00:15:22.079 Test: blockdev write zeroes read block ...passed 00:15:22.079 Test: blockdev write zeroes read no split ...passed 00:15:22.079 Test: blockdev write zeroes read split ...passed 00:15:22.079 Test: blockdev write zeroes read split partial ...passed 00:15:22.079 Test: blockdev reset ...passed 00:15:22.079 Test: blockdev write read 8 blocks ...passed 00:15:22.079 Test: blockdev write read size > 128k ...passed 00:15:22.079 Test: blockdev write read invalid size ...passed 00:15:22.079 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:15:22.079 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:15:22.079 Test: blockdev write read max offset ...passed 00:15:22.079 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:15:22.079 Test: blockdev writev readv 8 blocks ...passed 00:15:22.079 Test: blockdev writev readv 30 x 1block ...passed 00:15:22.079 Test: blockdev writev readv block ...passed 00:15:22.079 Test: blockdev writev readv size > 128k ...passed 00:15:22.079 Test: blockdev writev readv size > 128k in two iovs ...passed 00:15:22.079 Test: blockdev comparev and writev ...passed 00:15:22.079 Test: blockdev nvme passthru rw ...passed 00:15:22.079 Test: blockdev nvme passthru vendor specific ...passed 00:15:22.079 Test: blockdev nvme admin passthru ...passed 00:15:22.079 Test: blockdev copy ...passed 
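[annotation] Each of the six suites above runs the same 23 checks, which is consistent with the totals in the run summary that follows. A quick sanity check (illustrative shell, not part of the harness):

  suites=6
  tests_per_suite=23   # count of "Test: ..." lines in any one suite above
  echo $(( suites * tests_per_suite ))   # 138, matching "tests 138 138 138 0 0"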
00:15:22.079 00:15:22.079 Run Summary: Type Total Ran Passed Failed Inactive 00:15:22.079 suites 6 6 n/a 0 0 00:15:22.079 tests 138 138 138 0 0 00:15:22.079 asserts 780 780 780 0 n/a 00:15:22.079 00:15:22.079 Elapsed time = 0.616 seconds 00:15:22.079 0 00:15:22.079 19:05:39 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 83381 00:15:22.079 19:05:39 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 83381 ']' 00:15:22.079 19:05:39 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 83381 00:15:22.079 19:05:39 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@959 -- # uname 00:15:22.079 19:05:39 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:22.079 19:05:39 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83381 00:15:22.079 killing process with pid 83381 00:15:22.079 19:05:39 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:22.079 19:05:39 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:22.079 19:05:39 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83381' 00:15:22.080 19:05:39 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@973 -- # kill 83381 00:15:22.080 19:05:39 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@978 -- # wait 83381 00:15:22.341 19:05:39 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:15:22.341 00:15:22.341 real 0m1.580s 00:15:22.341 user 0m3.880s 00:15:22.341 sys 0m0.378s 00:15:22.341 19:05:39 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:22.341 19:05:39 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:15:22.341 ************************************ 00:15:22.341 END TEST bdev_bounds 00:15:22.341 ************************************ 00:15:22.341 19:05:39 blockdev_xnvme -- bdev/blockdev.sh@799 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' '' 00:15:22.341 19:05:39 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:15:22.341 19:05:39 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:22.341 19:05:39 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:22.341 ************************************ 00:15:22.341 START TEST bdev_nbd 00:15:22.341 ************************************ 00:15:22.341 19:05:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' '' 00:15:22.341 19:05:39 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:15:22.341 19:05:39 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:15:22.341 19:05:39 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:22.341 19:05:39 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:15:22.341 19:05:39 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:15:22.341 19:05:39 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:15:22.341 19:05:39 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=6 
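[annotation] Before driving the NBD devices, the harness launches bdev_svc and blocks in waitforlisten until the app's RPC socket answers; that wait is traced in the lines that follow. A rough sketch of the pattern, assuming a generic helper name and retry budget rather than the real autotest_common.sh source:

  wait_for_rpc() {
      local pid=$1 sock=$2 i
      for (( i = 0; i < 100; i++ )); do
          kill -0 "$pid" 2>/dev/null || return 1   # app exited before listening
          if [ -S "$sock" ] && \
             /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$sock" \
                 rpc_get_methods >/dev/null 2>&1; then
              return 0                             # socket is up and answering RPCs
          fi
          sleep 0.1
      done
      return 1                                     # timed out waiting
  }
  # e.g. wait_for_rpc "$nbd_pid" /var/tmp/spdk-nbd.sock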
00:15:22.341 19:05:39 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:15:22.341 19:05:39 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:15:22.341 19:05:39 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:15:22.341 19:05:39 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=6 00:15:22.341 19:05:39 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:15:22.341 19:05:39 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:15:22.341 19:05:39 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:15:22.341 19:05:39 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:15:22.341 19:05:39 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=83439 00:15:22.341 19:05:39 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:15:22.341 19:05:39 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 83439 /var/tmp/spdk-nbd.sock 00:15:22.341 19:05:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 83439 ']' 00:15:22.341 19:05:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:15:22.341 19:05:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:22.341 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:15:22.341 19:05:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:15:22.341 19:05:39 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:15:22.341 19:05:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:22.341 19:05:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:15:22.341 [2024-12-05 19:05:39.830813] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 22.11.4 initialization... 
00:15:22.341 [2024-12-05 19:05:39.830951] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:22.604 [2024-12-05 19:05:39.977377] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:22.604 [2024-12-05 19:05:40.006662] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:23.177 19:05:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:23.177 19:05:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:15:23.177 19:05:40 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' 00:15:23.177 19:05:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:23.177 19:05:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:15:23.177 19:05:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:15:23.177 19:05:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' 00:15:23.177 19:05:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:23.177 19:05:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:15:23.177 19:05:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:15:23.177 19:05:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:15:23.177 19:05:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:15:23.177 19:05:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:15:23.177 19:05:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:15:23.177 19:05:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 00:15:23.438 19:05:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:15:23.438 19:05:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:15:23.438 19:05:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:15:23.438 19:05:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:15:23.438 19:05:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:15:23.438 19:05:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:23.439 19:05:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:23.439 19:05:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:15:23.439 19:05:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:15:23.439 19:05:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:23.439 19:05:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:23.439 19:05:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:23.439 
1+0 records in 00:15:23.439 1+0 records out 00:15:23.439 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000937026 s, 4.4 MB/s 00:15:23.439 19:05:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:23.439 19:05:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:15:23.439 19:05:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:23.439 19:05:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:23.439 19:05:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:15:23.439 19:05:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:15:23.439 19:05:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:15:23.439 19:05:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n2 00:15:23.699 19:05:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:15:23.699 19:05:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:15:23.699 19:05:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:15:23.700 19:05:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:15:23.700 19:05:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:15:23.700 19:05:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:23.700 19:05:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:23.700 19:05:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:15:23.700 19:05:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:15:23.700 19:05:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:23.700 19:05:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:23.700 19:05:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:23.700 1+0 records in 00:15:23.700 1+0 records out 00:15:23.700 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00116516 s, 3.5 MB/s 00:15:23.700 19:05:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:23.700 19:05:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:15:23.700 19:05:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:23.700 19:05:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:23.700 19:05:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:15:23.700 19:05:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:15:23.700 19:05:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:15:23.700 19:05:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n3 00:15:23.960 19:05:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:15:23.960 19:05:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:15:23.960 19:05:41 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:15:23.960 19:05:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd2 00:15:23.960 19:05:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:15:23.960 19:05:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:23.960 19:05:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:23.960 19:05:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd2 /proc/partitions 00:15:23.960 19:05:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:15:23.960 19:05:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:23.960 19:05:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:23.960 19:05:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:23.960 1+0 records in 00:15:23.960 1+0 records out 00:15:23.960 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00113383 s, 3.6 MB/s 00:15:23.960 19:05:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:23.960 19:05:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:15:23.960 19:05:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:23.960 19:05:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:23.960 19:05:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:15:23.961 19:05:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:15:23.961 19:05:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:15:23.961 19:05:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 00:15:24.221 19:05:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:15:24.221 19:05:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:15:24.221 19:05:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:15:24.221 19:05:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd3 00:15:24.221 19:05:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:15:24.221 19:05:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:24.221 19:05:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:24.221 19:05:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd3 /proc/partitions 00:15:24.221 19:05:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:15:24.221 19:05:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:24.221 19:05:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:24.221 19:05:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:24.221 1+0 records in 00:15:24.221 1+0 records out 00:15:24.221 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00151414 s, 2.7 MB/s 00:15:24.221 19:05:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # 
stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:24.221 19:05:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:15:24.221 19:05:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:24.221 19:05:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:24.221 19:05:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:15:24.221 19:05:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:15:24.221 19:05:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:15:24.221 19:05:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 00:15:24.480 19:05:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:15:24.480 19:05:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:15:24.480 19:05:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:15:24.480 19:05:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd4 00:15:24.480 19:05:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:15:24.480 19:05:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:24.480 19:05:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:24.480 19:05:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd4 /proc/partitions 00:15:24.480 19:05:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:15:24.480 19:05:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:24.480 19:05:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:24.480 19:05:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:24.480 1+0 records in 00:15:24.480 1+0 records out 00:15:24.480 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00107882 s, 3.8 MB/s 00:15:24.480 19:05:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:24.480 19:05:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:15:24.480 19:05:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:24.480 19:05:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:24.480 19:05:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:15:24.480 19:05:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:15:24.480 19:05:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:15:24.480 19:05:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 00:15:24.739 19:05:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:15:24.739 19:05:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:15:24.739 19:05:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:15:24.739 19:05:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd5 00:15:24.739 19:05:42 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:15:24.739 19:05:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:24.739 19:05:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:24.739 19:05:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd5 /proc/partitions 00:15:24.739 19:05:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:15:24.739 19:05:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:24.739 19:05:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:24.739 19:05:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:24.739 1+0 records in 00:15:24.739 1+0 records out 00:15:24.739 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00100521 s, 4.1 MB/s 00:15:24.739 19:05:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:24.739 19:05:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:15:24.739 19:05:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:24.739 19:05:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:24.739 19:05:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:15:24.739 19:05:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:15:24.739 19:05:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:15:24.739 19:05:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:15:24.998 19:05:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:15:24.998 { 00:15:24.998 "nbd_device": "/dev/nbd0", 00:15:24.998 "bdev_name": "nvme0n1" 00:15:24.998 }, 00:15:24.998 { 00:15:24.998 "nbd_device": "/dev/nbd1", 00:15:24.998 "bdev_name": "nvme0n2" 00:15:24.998 }, 00:15:24.998 { 00:15:24.998 "nbd_device": "/dev/nbd2", 00:15:24.998 "bdev_name": "nvme0n3" 00:15:24.998 }, 00:15:24.998 { 00:15:24.998 "nbd_device": "/dev/nbd3", 00:15:24.998 "bdev_name": "nvme1n1" 00:15:24.998 }, 00:15:24.998 { 00:15:24.998 "nbd_device": "/dev/nbd4", 00:15:24.998 "bdev_name": "nvme2n1" 00:15:24.998 }, 00:15:24.998 { 00:15:24.998 "nbd_device": "/dev/nbd5", 00:15:24.998 "bdev_name": "nvme3n1" 00:15:24.998 } 00:15:24.998 ]' 00:15:24.998 19:05:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:15:24.998 19:05:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:15:24.998 19:05:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:15:24.998 { 00:15:24.998 "nbd_device": "/dev/nbd0", 00:15:24.998 "bdev_name": "nvme0n1" 00:15:24.998 }, 00:15:24.998 { 00:15:24.998 "nbd_device": "/dev/nbd1", 00:15:24.998 "bdev_name": "nvme0n2" 00:15:24.998 }, 00:15:24.998 { 00:15:24.998 "nbd_device": "/dev/nbd2", 00:15:24.998 "bdev_name": "nvme0n3" 00:15:24.998 }, 00:15:24.998 { 00:15:24.998 "nbd_device": "/dev/nbd3", 00:15:24.998 "bdev_name": "nvme1n1" 00:15:24.998 }, 00:15:24.998 { 00:15:24.998 "nbd_device": "/dev/nbd4", 00:15:24.998 "bdev_name": "nvme2n1" 00:15:24.998 }, 00:15:24.998 { 00:15:24.998 "nbd_device": 
"/dev/nbd5", 00:15:24.998 "bdev_name": "nvme3n1" 00:15:24.998 } 00:15:24.998 ]' 00:15:24.998 19:05:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:15:24.998 19:05:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:24.998 19:05:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:15:24.998 19:05:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:24.998 19:05:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:15:24.998 19:05:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:24.998 19:05:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:15:25.256 19:05:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:25.256 19:05:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:25.256 19:05:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:25.256 19:05:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:25.256 19:05:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:25.256 19:05:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:25.256 19:05:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:25.256 19:05:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:25.256 19:05:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:25.256 19:05:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:15:25.515 19:05:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:15:25.515 19:05:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:15:25.515 19:05:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:15:25.515 19:05:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:25.515 19:05:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:25.515 19:05:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:15:25.515 19:05:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:25.515 19:05:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:25.515 19:05:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:25.515 19:05:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:15:25.774 19:05:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:15:25.774 19:05:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:15:25.774 19:05:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:15:25.774 19:05:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:25.774 19:05:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:25.774 19:05:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep 
-q -w nbd2 /proc/partitions 00:15:25.774 19:05:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:25.774 19:05:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:25.774 19:05:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:25.774 19:05:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:15:26.033 19:05:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:15:26.033 19:05:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:15:26.033 19:05:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:15:26.033 19:05:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:26.033 19:05:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:26.033 19:05:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:15:26.033 19:05:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:26.033 19:05:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:26.033 19:05:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:26.033 19:05:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:15:26.292 19:05:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:15:26.292 19:05:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:15:26.292 19:05:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:15:26.292 19:05:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:26.292 19:05:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:26.292 19:05:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:15:26.292 19:05:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:26.292 19:05:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:26.292 19:05:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:26.292 19:05:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:15:26.549 19:05:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:15:26.549 19:05:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:15:26.549 19:05:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:15:26.549 19:05:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:26.549 19:05:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:26.549 19:05:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:15:26.549 19:05:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:26.549 19:05:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:26.549 19:05:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:15:26.549 19:05:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:26.549 19:05:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:15:26.549 19:05:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:15:26.549 19:05:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:15:26.549 19:05:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:15:26.806 19:05:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:15:26.806 19:05:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:15:26.806 19:05:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:15:26.806 19:05:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:15:26.806 19:05:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:15:26.806 19:05:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:15:26.806 19:05:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:15:26.806 19:05:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:15:26.806 19:05:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:15:26.806 19:05:44 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:15:26.806 19:05:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:26.806 19:05:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:15:26.806 19:05:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:15:26.806 19:05:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:15:26.806 19:05:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:15:26.806 19:05:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:15:26.806 19:05:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:26.806 19:05:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:15:26.806 19:05:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:26.806 19:05:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:15:26.806 19:05:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:26.806 19:05:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:15:26.806 19:05:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:26.806 19:05:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:15:26.806 19:05:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 /dev/nbd0 00:15:26.806 /dev/nbd0 00:15:26.806 19:05:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:26.806 19:05:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:26.806 19:05:44 blockdev_xnvme.bdev_nbd -- 
common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:15:26.806 19:05:44 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:15:26.806 19:05:44 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:26.806 19:05:44 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:26.806 19:05:44 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:15:26.806 19:05:44 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:15:26.806 19:05:44 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:26.806 19:05:44 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:26.806 19:05:44 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:26.806 1+0 records in 00:15:26.806 1+0 records out 00:15:26.806 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000626026 s, 6.5 MB/s 00:15:26.806 19:05:44 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:26.806 19:05:44 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:15:26.806 19:05:44 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:26.806 19:05:44 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:26.806 19:05:44 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:15:26.806 19:05:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:26.806 19:05:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:15:26.806 19:05:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n2 /dev/nbd1 00:15:27.062 /dev/nbd1 00:15:27.062 19:05:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:15:27.062 19:05:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:15:27.062 19:05:44 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:15:27.062 19:05:44 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:15:27.062 19:05:44 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:27.062 19:05:44 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:27.062 19:05:44 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:15:27.062 19:05:44 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:15:27.062 19:05:44 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:27.062 19:05:44 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:27.062 19:05:44 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:27.062 1+0 records in 00:15:27.062 1+0 records out 00:15:27.062 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00042104 s, 9.7 MB/s 00:15:27.062 19:05:44 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:27.062 19:05:44 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:15:27.062 19:05:44 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:27.062 19:05:44 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:27.062 19:05:44 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:15:27.062 19:05:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:27.062 19:05:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:15:27.062 19:05:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n3 /dev/nbd10 00:15:27.319 /dev/nbd10 00:15:27.319 19:05:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:15:27.319 19:05:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:15:27.319 19:05:44 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd10 00:15:27.319 19:05:44 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:15:27.319 19:05:44 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:27.319 19:05:44 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:27.319 19:05:44 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd10 /proc/partitions 00:15:27.319 19:05:44 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:15:27.319 19:05:44 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:27.319 19:05:44 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:27.319 19:05:44 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:27.319 1+0 records in 00:15:27.319 1+0 records out 00:15:27.319 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000954063 s, 4.3 MB/s 00:15:27.319 19:05:44 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:27.319 19:05:44 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:15:27.319 19:05:44 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:27.319 19:05:44 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:27.319 19:05:44 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:15:27.319 19:05:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:27.319 19:05:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:15:27.319 19:05:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 /dev/nbd11 00:15:27.576 /dev/nbd11 00:15:27.576 19:05:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:15:27.576 19:05:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:15:27.576 19:05:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd11 00:15:27.576 19:05:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:15:27.576 19:05:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:27.576 19:05:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:27.576 19:05:45 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd11 /proc/partitions 00:15:27.576 19:05:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:15:27.576 19:05:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:27.576 19:05:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:27.576 19:05:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:27.576 1+0 records in 00:15:27.576 1+0 records out 00:15:27.576 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000902859 s, 4.5 MB/s 00:15:27.576 19:05:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:27.576 19:05:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:15:27.576 19:05:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:27.576 19:05:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:27.576 19:05:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:15:27.576 19:05:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:27.576 19:05:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:15:27.576 19:05:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 /dev/nbd12 00:15:27.832 /dev/nbd12 00:15:27.832 19:05:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:15:27.832 19:05:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:15:27.832 19:05:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd12 00:15:27.832 19:05:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:15:27.832 19:05:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:27.832 19:05:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:27.832 19:05:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd12 /proc/partitions 00:15:27.832 19:05:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:15:27.832 19:05:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:27.832 19:05:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:27.832 19:05:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:27.832 1+0 records in 00:15:27.832 1+0 records out 00:15:27.832 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000858846 s, 4.8 MB/s 00:15:27.832 19:05:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:27.832 19:05:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:15:27.832 19:05:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:27.832 19:05:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:27.832 19:05:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:15:27.832 19:05:45 blockdev_xnvme.bdev_nbd -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:27.832 19:05:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:15:27.832 19:05:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 /dev/nbd13 00:15:28.091 /dev/nbd13 00:15:28.091 19:05:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:15:28.091 19:05:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:15:28.091 19:05:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd13 00:15:28.091 19:05:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:15:28.091 19:05:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:28.091 19:05:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:28.091 19:05:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd13 /proc/partitions 00:15:28.091 19:05:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:15:28.091 19:05:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:28.091 19:05:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:28.091 19:05:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:28.091 1+0 records in 00:15:28.091 1+0 records out 00:15:28.091 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00113279 s, 3.6 MB/s 00:15:28.091 19:05:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:28.091 19:05:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:15:28.091 19:05:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:28.091 19:05:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:28.091 19:05:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:15:28.091 19:05:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:28.091 19:05:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:15:28.091 19:05:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:15:28.091 19:05:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:28.091 19:05:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:15:28.351 19:05:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:15:28.351 { 00:15:28.351 "nbd_device": "/dev/nbd0", 00:15:28.351 "bdev_name": "nvme0n1" 00:15:28.351 }, 00:15:28.351 { 00:15:28.351 "nbd_device": "/dev/nbd1", 00:15:28.351 "bdev_name": "nvme0n2" 00:15:28.351 }, 00:15:28.351 { 00:15:28.351 "nbd_device": "/dev/nbd10", 00:15:28.351 "bdev_name": "nvme0n3" 00:15:28.351 }, 00:15:28.351 { 00:15:28.351 "nbd_device": "/dev/nbd11", 00:15:28.351 "bdev_name": "nvme1n1" 00:15:28.351 }, 00:15:28.351 { 00:15:28.351 "nbd_device": "/dev/nbd12", 00:15:28.351 "bdev_name": "nvme2n1" 00:15:28.351 }, 00:15:28.351 { 00:15:28.351 "nbd_device": "/dev/nbd13", 00:15:28.351 "bdev_name": "nvme3n1" 00:15:28.351 } 00:15:28.351 ]' 00:15:28.351 19:05:45 blockdev_xnvme.bdev_nbd 
-- bdev/nbd_common.sh@64 -- # echo '[ 00:15:28.351 { 00:15:28.351 "nbd_device": "/dev/nbd0", 00:15:28.351 "bdev_name": "nvme0n1" 00:15:28.351 }, 00:15:28.351 { 00:15:28.351 "nbd_device": "/dev/nbd1", 00:15:28.351 "bdev_name": "nvme0n2" 00:15:28.351 }, 00:15:28.351 { 00:15:28.351 "nbd_device": "/dev/nbd10", 00:15:28.351 "bdev_name": "nvme0n3" 00:15:28.351 }, 00:15:28.351 { 00:15:28.351 "nbd_device": "/dev/nbd11", 00:15:28.351 "bdev_name": "nvme1n1" 00:15:28.351 }, 00:15:28.351 { 00:15:28.351 "nbd_device": "/dev/nbd12", 00:15:28.351 "bdev_name": "nvme2n1" 00:15:28.351 }, 00:15:28.351 { 00:15:28.351 "nbd_device": "/dev/nbd13", 00:15:28.351 "bdev_name": "nvme3n1" 00:15:28.351 } 00:15:28.351 ]' 00:15:28.351 19:05:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:15:28.351 19:05:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:15:28.351 /dev/nbd1 00:15:28.351 /dev/nbd10 00:15:28.351 /dev/nbd11 00:15:28.351 /dev/nbd12 00:15:28.351 /dev/nbd13' 00:15:28.351 19:05:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:15:28.351 /dev/nbd1 00:15:28.351 /dev/nbd10 00:15:28.351 /dev/nbd11 00:15:28.351 /dev/nbd12 00:15:28.351 /dev/nbd13' 00:15:28.351 19:05:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:15:28.351 19:05:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=6 00:15:28.351 19:05:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 6 00:15:28.351 19:05:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=6 00:15:28.351 19:05:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:15:28.351 19:05:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:15:28.351 19:05:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:15:28.351 19:05:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:15:28.351 19:05:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:15:28.351 19:05:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:15:28.351 19:05:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:15:28.351 19:05:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:15:28.351 256+0 records in 00:15:28.351 256+0 records out 00:15:28.351 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00748553 s, 140 MB/s 00:15:28.351 19:05:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:15:28.351 19:05:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:15:28.611 256+0 records in 00:15:28.611 256+0 records out 00:15:28.611 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.245455 s, 4.3 MB/s 00:15:28.611 19:05:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:15:28.611 19:05:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:15:28.871 256+0 records in 00:15:28.871 256+0 records out 00:15:28.871 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.163797 s, 
6.4 MB/s 00:15:28.871 19:05:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:15:28.871 19:05:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:15:28.871 256+0 records in 00:15:28.871 256+0 records out 00:15:28.871 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.121632 s, 8.6 MB/s 00:15:28.871 19:05:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:15:28.871 19:05:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:15:29.129 256+0 records in 00:15:29.129 256+0 records out 00:15:29.129 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.229368 s, 4.6 MB/s 00:15:29.129 19:05:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:15:29.129 19:05:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:15:29.389 256+0 records in 00:15:29.389 256+0 records out 00:15:29.389 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.17101 s, 6.1 MB/s 00:15:29.389 19:05:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:15:29.389 19:05:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:15:29.389 256+0 records in 00:15:29.389 256+0 records out 00:15:29.389 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.167963 s, 6.2 MB/s 00:15:29.389 19:05:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:15:29.389 19:05:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:15:29.389 19:05:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:15:29.389 19:05:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:15:29.389 19:05:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:15:29.389 19:05:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:15:29.389 19:05:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:15:29.389 19:05:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:15:29.389 19:05:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:15:29.389 19:05:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:15:29.389 19:05:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:15:29.389 19:05:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:15:29.389 19:05:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:15:29.389 19:05:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:15:29.389 19:05:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:15:29.389 
19:05:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:15:29.389 19:05:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:15:29.650 19:05:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:15:29.650 19:05:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:15:29.650 19:05:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:15:29.650 19:05:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:15:29.650 19:05:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:29.650 19:05:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:15:29.650 19:05:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:29.650 19:05:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:15:29.650 19:05:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:29.650 19:05:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:15:29.650 19:05:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:29.650 19:05:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:29.650 19:05:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:29.651 19:05:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:29.651 19:05:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:29.651 19:05:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:29.651 19:05:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:29.651 19:05:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:29.651 19:05:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:29.651 19:05:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:15:29.909 19:05:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:15:29.909 19:05:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:15:29.909 19:05:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:15:29.909 19:05:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:29.909 19:05:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:29.909 19:05:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:15:29.909 19:05:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:29.909 19:05:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:29.909 19:05:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:29.909 19:05:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk 
/dev/nbd10 00:15:30.167 19:05:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:15:30.167 19:05:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:15:30.167 19:05:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:15:30.167 19:05:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:30.167 19:05:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:30.167 19:05:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:15:30.167 19:05:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:30.167 19:05:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:30.167 19:05:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:30.167 19:05:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:15:30.425 19:05:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:15:30.425 19:05:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:15:30.425 19:05:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:15:30.425 19:05:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:30.425 19:05:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:30.425 19:05:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:15:30.425 19:05:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:30.425 19:05:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:30.425 19:05:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:30.425 19:05:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:15:30.684 19:05:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:15:30.684 19:05:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:15:30.684 19:05:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:15:30.684 19:05:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:30.684 19:05:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:30.684 19:05:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:15:30.684 19:05:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:30.684 19:05:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:30.684 19:05:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:30.684 19:05:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:15:30.684 19:05:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:15:30.941 19:05:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:15:30.941 19:05:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:15:30.941 19:05:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:30.941 19:05:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:30.941 
19:05:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:15:30.941 19:05:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:30.941 19:05:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:30.941 19:05:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:15:30.941 19:05:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:30.941 19:05:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:15:30.941 19:05:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:15:30.941 19:05:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:15:30.941 19:05:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:15:30.941 19:05:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:15:30.941 19:05:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:15:30.941 19:05:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:15:30.941 19:05:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:15:30.941 19:05:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:15:30.942 19:05:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:15:30.942 19:05:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:15:30.942 19:05:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:15:30.942 19:05:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:15:30.942 19:05:48 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:15:30.942 19:05:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:30.942 19:05:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:15:30.942 19:05:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:15:31.199 malloc_lvol_verify 00:15:31.199 19:05:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:15:31.457 54b3dacc-5203-4bb2-8d4f-3f7ee2e95ba5 00:15:31.457 19:05:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:15:31.715 ee589f3f-aaa5-4541-ad8d-82c17f976b9a 00:15:31.715 19:05:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:15:31.973 /dev/nbd0 00:15:31.973 19:05:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:15:31.973 19:05:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:15:31.973 19:05:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:15:31.973 19:05:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:15:31.973 19:05:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:15:31.973 mke2fs 1.47.0 (5-Feb-2023) 00:15:31.973 Discarding device blocks: 0/4096 
done 00:15:31.973 Creating filesystem with 4096 1k blocks and 1024 inodes 00:15:31.973 00:15:31.973 Allocating group tables: 0/1 done 00:15:31.973 Writing inode tables: 0/1 done 00:15:31.973 Creating journal (1024 blocks): done 00:15:31.973 Writing superblocks and filesystem accounting information: 0/1 done 00:15:31.973 00:15:31.973 19:05:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:15:31.973 19:05:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:31.973 19:05:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:15:31.973 19:05:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:31.973 19:05:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:15:31.973 19:05:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:31.973 19:05:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:15:32.232 19:05:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:32.233 19:05:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:32.233 19:05:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:32.233 19:05:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:32.233 19:05:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:32.233 19:05:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:32.233 19:05:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:32.233 19:05:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:32.233 19:05:49 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 83439 00:15:32.233 19:05:49 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 83439 ']' 00:15:32.233 19:05:49 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 83439 00:15:32.233 19:05:49 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@959 -- # uname 00:15:32.233 19:05:49 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:32.233 19:05:49 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83439 00:15:32.233 19:05:49 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:32.233 killing process with pid 83439 00:15:32.233 19:05:49 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:32.233 19:05:49 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83439' 00:15:32.233 19:05:49 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@973 -- # kill 83439 00:15:32.233 19:05:49 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@978 -- # wait 83439 00:15:32.233 19:05:49 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:15:32.233 00:15:32.233 real 0m9.952s 00:15:32.233 user 0m13.635s 00:15:32.233 sys 0m3.714s 00:15:32.233 19:05:49 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:32.233 ************************************ 00:15:32.233 END TEST bdev_nbd 00:15:32.233 ************************************ 00:15:32.233 19:05:49 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 
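The block above is nbd_common.sh's nbd_dd_data_verify pass: it seeds a 1 MiB temp file from /dev/urandom, copies it onto all six NBD devices with O_DIRECT, then byte-compares each device against the file. A minimal sketch of that round-trip, with the device list and sizes taken from the log; treat it as an illustration, not the harness code itself (the harness keeps the temp file under test/bdev/):

#!/usr/bin/env bash
set -euo pipefail

tmp_file=/tmp/nbdrandtest
nbd_list=(/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13)

# write phase: 256 x 4 KiB = 1 MiB of random data, copied to every NBD device
dd if=/dev/urandom of="$tmp_file" bs=4096 count=256
for dev in "${nbd_list[@]}"; do
  dd if="$tmp_file" of="$dev" bs=4096 count=256 oflag=direct
done

# verify phase: byte-compare the first 1 MiB of each device against the file
for dev in "${nbd_list[@]}"; do
  cmp -b -n 1M "$tmp_file" "$dev"
done
rm "$tmp_file"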
00:15:32.233 19:05:49 blockdev_xnvme -- bdev/blockdev.sh@800 -- # [[ y == y ]] 00:15:32.233 19:05:49 blockdev_xnvme -- bdev/blockdev.sh@801 -- # '[' xnvme = nvme ']' 00:15:32.233 19:05:49 blockdev_xnvme -- bdev/blockdev.sh@801 -- # '[' xnvme = gpt ']' 00:15:32.233 19:05:49 blockdev_xnvme -- bdev/blockdev.sh@805 -- # run_test bdev_fio fio_test_suite '' 00:15:32.233 19:05:49 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:32.233 19:05:49 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:32.233 19:05:49 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:32.233 ************************************ 00:15:32.233 START TEST bdev_fio 00:15:32.233 ************************************ 00:15:32.233 19:05:49 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1129 -- # fio_test_suite '' 00:15:32.233 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:15:32.233 19:05:49 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@330 -- # local env_context 00:15:32.233 19:05:49 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@334 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:15:32.233 19:05:49 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@335 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:15:32.233 19:05:49 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # echo '' 00:15:32.233 19:05:49 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # sed s/--env-context=// 00:15:32.233 19:05:49 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # env_context= 00:15:32.233 19:05:49 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@339 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:15:32.233 19:05:49 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1284 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:15:32.233 19:05:49 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1285 -- # local workload=verify 00:15:32.233 19:05:49 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1286 -- # local bdev_type=AIO 00:15:32.233 19:05:49 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1287 -- # local env_context= 00:15:32.233 19:05:49 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1288 -- # local fio_dir=/usr/src/fio 00:15:32.233 19:05:49 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1290 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:15:32.233 19:05:49 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -z verify ']' 00:15:32.233 19:05:49 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1299 -- # '[' -n '' ']' 00:15:32.233 19:05:49 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1303 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:15:32.233 19:05:49 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1305 -- # cat 00:15:32.233 19:05:49 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1317 -- # '[' verify == verify ']' 00:15:32.233 19:05:49 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1318 -- # cat 00:15:32.233 19:05:49 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1327 -- # '[' AIO == AIO ']' 00:15:32.233 19:05:49 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1328 -- # /usr/src/fio/fio --version 00:15:32.493 19:05:49 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1328 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 00:15:32.493 19:05:49 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1329 -- # echo serialize_overlap=1 00:15:32.493 19:05:49 blockdev_xnvme.bdev_fio -- 
bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:15:32.493 19:05:49 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme0n1]' 00:15:32.493 19:05:49 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme0n1 00:15:32.493 19:05:49 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:15:32.493 19:05:49 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme0n2]' 00:15:32.493 19:05:49 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme0n2 00:15:32.493 19:05:49 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:15:32.493 19:05:49 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme0n3]' 00:15:32.493 19:05:49 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme0n3 00:15:32.493 19:05:49 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:15:32.493 19:05:49 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme1n1]' 00:15:32.493 19:05:49 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme1n1 00:15:32.493 19:05:49 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:15:32.493 19:05:49 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme2n1]' 00:15:32.493 19:05:49 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme2n1 00:15:32.493 19:05:49 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:15:32.493 19:05:49 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme3n1]' 00:15:32.493 19:05:49 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme3n1 00:15:32.493 19:05:49 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@346 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:15:32.493 19:05:49 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@348 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:15:32.493 19:05:49 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1105 -- # '[' 11 -le 1 ']' 00:15:32.493 19:05:49 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:32.493 19:05:49 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:15:32.493 ************************************ 00:15:32.493 START TEST bdev_fio_rw_verify 00:15:32.493 ************************************ 00:15:32.493 19:05:49 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1129 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:15:32.493 19:05:49 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 
--aux-path=/home/vagrant/spdk_repo/spdk/../output 00:15:32.493 19:05:49 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:15:32.493 19:05:49 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:15:32.493 19:05:49 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # local sanitizers 00:15:32.493 19:05:49 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:32.493 19:05:49 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # shift 00:15:32.493 19:05:49 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # local asan_lib= 00:15:32.493 19:05:49 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:15:32.493 19:05:49 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:32.493 19:05:49 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # grep libasan 00:15:32.493 19:05:49 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:15:32.493 19:05:49 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:15:32.493 19:05:49 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:15:32.493 19:05:49 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1351 -- # break 00:15:32.493 19:05:49 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:15:32.493 19:05:49 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:15:32.493 job_nvme0n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:15:32.493 job_nvme0n2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:15:32.493 job_nvme0n3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:15:32.493 job_nvme1n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:15:32.493 job_nvme2n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:15:32.493 job_nvme3n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:15:32.493 fio-3.35 00:15:32.493 Starting 6 threads 00:15:44.769 00:15:44.769 job_nvme0n1: (groupid=0, jobs=6): err= 0: pid=83828: Thu Dec 5 19:06:00 2024 00:15:44.769 read: IOPS=19.4k, BW=75.8MiB/s (79.5MB/s)(759MiB/10002msec) 00:15:44.769 slat (usec): min=2, max=1923, avg= 6.33, stdev=13.75 00:15:44.769 clat (usec): min=81, max=11376, avg=966.22, stdev=715.21 00:15:44.769 lat (usec): min=85, max=11393, avg=972.55, stdev=716.33 
00:15:44.769 clat percentiles (usec): 00:15:44.769 | 50.000th=[ 758], 99.000th=[ 3228], 99.900th=[ 4424], 99.990th=[ 6587], 00:15:44.769 | 99.999th=[11338] 00:15:44.769 write: IOPS=19.7k, BW=76.8MiB/s (80.6MB/s)(769MiB/10002msec); 0 zone resets 00:15:44.769 slat (usec): min=10, max=3701, avg=35.38, stdev=114.18 00:15:44.769 clat (usec): min=71, max=7675, avg=1190.24, stdev=807.96 00:15:44.769 lat (usec): min=93, max=7709, avg=1225.62, stdev=821.56 00:15:44.769 clat percentiles (usec): 00:15:44.769 | 50.000th=[ 988], 99.000th=[ 3752], 99.900th=[ 5211], 99.990th=[ 6652], 00:15:44.769 | 99.999th=[ 7635] 00:15:44.769 bw ( KiB/s): min=49109, max=167584, per=100.00%, avg=79989.84, stdev=5892.76, samples=114 00:15:44.769 iops : min=12275, max=41896, avg=19996.89, stdev=1473.21, samples=114 00:15:44.769 lat (usec) : 100=0.03%, 250=6.91%, 500=18.36%, 750=18.24%, 1000=12.27% 00:15:44.769 lat (msec) : 2=32.11%, 4=11.59%, 10=0.48%, 20=0.01% 00:15:44.769 cpu : usr=42.49%, sys=33.60%, ctx=6528, majf=0, minf=19873 00:15:44.769 IO depths : 1=11.6%, 2=24.0%, 4=50.9%, 8=13.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:44.769 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:44.769 complete : 0=0.0%, 4=89.1%, 8=10.9%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:44.769 issued rwts: total=194201,196755,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:44.769 latency : target=0, window=0, percentile=100.00%, depth=8 00:15:44.769 00:15:44.769 Run status group 0 (all jobs): 00:15:44.769 READ: bw=75.8MiB/s (79.5MB/s), 75.8MiB/s-75.8MiB/s (79.5MB/s-79.5MB/s), io=759MiB (795MB), run=10002-10002msec 00:15:44.769 WRITE: bw=76.8MiB/s (80.6MB/s), 76.8MiB/s-76.8MiB/s (80.6MB/s-80.6MB/s), io=769MiB (806MB), run=10002-10002msec 00:15:44.769 ----------------------------------------------------- 00:15:44.769 Suppressions used: 00:15:44.769 count bytes template 00:15:44.769 6 48 /usr/src/fio/parse.c 00:15:44.769 2419 232224 /usr/src/fio/iolog.c 00:15:44.769 1 8 libtcmalloc_minimal.so 00:15:44.769 1 904 libcrypto.so 00:15:44.769 ----------------------------------------------------- 00:15:44.769 00:15:44.769 ************************************ 00:15:44.769 END TEST bdev_fio_rw_verify 00:15:44.769 ************************************ 00:15:44.769 00:15:44.769 real 0m11.090s 00:15:44.769 user 0m26.181s 00:15:44.769 sys 0m20.455s 00:15:44.769 19:06:00 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:44.769 19:06:00 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 00:15:44.769 19:06:00 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@349 -- # rm -f 00:15:44.769 19:06:00 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:15:44.769 19:06:00 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@353 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:15:44.769 19:06:00 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1284 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:15:44.769 19:06:00 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1285 -- # local workload=trim 00:15:44.769 19:06:00 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1286 -- # local bdev_type= 00:15:44.769 19:06:00 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1287 -- # local env_context= 00:15:44.769 19:06:00 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1288 -- # local fio_dir=/usr/src/fio 00:15:44.769 19:06:00 blockdev_xnvme.bdev_fio -- 
common/autotest_common.sh@1290 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:15:44.769 19:06:00 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -z trim ']' 00:15:44.769 19:06:00 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1299 -- # '[' -n '' ']' 00:15:44.769 19:06:00 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1303 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:15:44.769 19:06:00 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1305 -- # cat 00:15:44.769 19:06:00 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1317 -- # '[' trim == verify ']' 00:15:44.769 19:06:00 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1332 -- # '[' trim == trim ']' 00:15:44.769 19:06:00 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1333 -- # echo rw=trimwrite 00:15:44.769 19:06:00 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "70864aa2-2ff0-43e0-a687-5af86ae72708"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "70864aa2-2ff0-43e0-a687-5af86ae72708",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme0n2",' ' "aliases": [' ' "c78d1565-5247-4ca0-8383-c8cfa8dfc553"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "c78d1565-5247-4ca0-8383-c8cfa8dfc553",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme0n3",' ' "aliases": [' ' "18778814-72aa-474b-88a9-2e0a9c926c32"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "18778814-72aa-474b-88a9-2e0a9c926c32",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' 
"seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "6d2d86e1-7f04-47ce-b838-7927cd7aa555"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "6d2d86e1-7f04-47ce-b838-7927cd7aa555",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "ali 19:06:00 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:15:44.769 ases": [' ' "a245fbc0-46e9-4aeb-9280-6b60d7c6f12d"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "a245fbc0-46e9-4aeb-9280-6b60d7c6f12d",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "3eb95208-e560-4b6b-922e-59a074cd9b59"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "3eb95208-e560-4b6b-922e-59a074cd9b59",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' 00:15:44.769 19:06:00 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # [[ -n '' ]] 00:15:44.769 19:06:00 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@360 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:15:44.769 /home/vagrant/spdk_repo/spdk 00:15:44.770 19:06:00 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@361 -- # popd 00:15:44.770 19:06:00 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@362 -- # trap - SIGINT SIGTERM EXIT 00:15:44.770 19:06:00 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@363 -- # return 0 00:15:44.770 00:15:44.770 real 0m11.249s 00:15:44.770 user 0m26.253s 
00:15:44.770 sys 0m20.529s 00:15:44.770 19:06:00 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:44.770 ************************************ 00:15:44.770 END TEST bdev_fio 00:15:44.770 ************************************ 00:15:44.770 19:06:00 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:15:44.770 19:06:01 blockdev_xnvme -- bdev/blockdev.sh@812 -- # trap cleanup SIGINT SIGTERM EXIT 00:15:44.770 19:06:01 blockdev_xnvme -- bdev/blockdev.sh@814 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:15:44.770 19:06:01 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:15:44.770 19:06:01 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:44.770 19:06:01 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:44.770 ************************************ 00:15:44.770 START TEST bdev_verify 00:15:44.770 ************************************ 00:15:44.770 19:06:01 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:15:44.770 [2024-12-05 19:06:01.136361] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 22.11.4 initialization... 00:15:44.770 [2024-12-05 19:06:01.136510] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83997 ] 00:15:44.770 [2024-12-05 19:06:01.285018] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:15:44.770 [2024-12-05 19:06:01.315306] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:44.770 [2024-12-05 19:06:01.315390] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:44.770 Running I/O for 5 seconds... 
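The bdev_fio suite above drives stock fio against SPDK bdevs through the spdk_bdev external ioengine built as build/fio/spdk_bdev. A condensed sketch of the invocation, with flags lifted from the log and paths shortened; the libasan preload is only required on sanitizer builds such as this one:

SPDK=/home/vagrant/spdk_repo/spdk
LD_PRELOAD="/usr/lib64/libasan.so.8 $SPDK/build/fio/spdk_bdev" \
/usr/src/fio/fio \
  --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 \
  --verify_state_save=0 \
  --spdk_json_conf="$SPDK/test/bdev/bdev.json" \
  --spdk_mem=0 --aux-path="$SPDK/../output" \
  "$SPDK/test/bdev/bdev.fio"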
00:15:46.282 23264.00 IOPS, 90.88 MiB/s [2024-12-05T19:06:05.227Z] 23584.00 IOPS, 92.12 MiB/s [2024-12-05T19:06:05.798Z] 23680.00 IOPS, 92.50 MiB/s [2024-12-05T19:06:06.739Z] 23608.00 IOPS, 92.22 MiB/s [2024-12-05T19:06:06.739Z] 23468.80 IOPS, 91.67 MiB/s 00:15:49.180 Latency(us) 00:15:49.180 [2024-12-05T19:06:06.739Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:49.180 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:15:49.181 Verification LBA range: start 0x0 length 0x80000 00:15:49.181 nvme0n1 : 5.06 1794.64 7.01 0.00 0.00 71195.96 13712.15 65737.65 00:15:49.181 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:15:49.181 Verification LBA range: start 0x80000 length 0x80000 00:15:49.181 nvme0n1 : 5.05 1797.92 7.02 0.00 0.00 71074.72 6326.74 67754.14 00:15:49.181 Job: nvme0n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:15:49.181 Verification LBA range: start 0x0 length 0x80000 00:15:49.181 nvme0n2 : 5.04 1803.24 7.04 0.00 0.00 70721.24 10586.58 68157.44 00:15:49.181 Job: nvme0n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:15:49.181 Verification LBA range: start 0x80000 length 0x80000 00:15:49.181 nvme0n2 : 5.07 1793.01 7.00 0.00 0.00 71151.79 11998.13 68964.04 00:15:49.181 Job: nvme0n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:15:49.181 Verification LBA range: start 0x0 length 0x80000 00:15:49.181 nvme0n3 : 5.07 1818.76 7.10 0.00 0.00 69982.41 8620.50 69367.34 00:15:49.181 Job: nvme0n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:15:49.181 Verification LBA range: start 0x80000 length 0x80000 00:15:49.181 nvme0n3 : 5.03 1780.28 6.95 0.00 0.00 71540.80 12300.60 63317.86 00:15:49.181 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:15:49.181 Verification LBA range: start 0x0 length 0xbd0bd 00:15:49.181 nvme1n1 : 5.05 2529.61 9.88 0.00 0.00 50199.49 6956.90 58074.98 00:15:49.181 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:15:49.181 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:15:49.181 nvme1n1 : 5.08 2495.90 9.75 0.00 0.00 50921.17 3856.54 63721.16 00:15:49.181 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:15:49.181 Verification LBA range: start 0x0 length 0xa0000 00:15:49.181 nvme2n1 : 5.08 1916.51 7.49 0.00 0.00 66095.22 2003.89 66140.95 00:15:49.181 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:15:49.181 Verification LBA range: start 0xa0000 length 0xa0000 00:15:49.181 nvme2n1 : 5.07 1842.68 7.20 0.00 0.00 68889.41 7813.91 68560.74 00:15:49.181 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:15:49.181 Verification LBA range: start 0x0 length 0x20000 00:15:49.181 nvme3n1 : 5.08 1815.05 7.09 0.00 0.00 69720.75 6654.42 67350.84 00:15:49.181 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:15:49.181 Verification LBA range: start 0x20000 length 0x20000 00:15:49.181 nvme3n1 : 5.08 1813.91 7.09 0.00 0.00 69857.42 4461.49 61301.37 00:15:49.181 [2024-12-05T19:06:06.740Z] =================================================================================================================== 00:15:49.181 [2024-12-05T19:06:06.740Z] Total : 23201.53 90.63 0.00 0.00 65781.73 2003.89 69367.34 00:15:49.441 00:15:49.441 real 0m5.869s 00:15:49.441 user 0m9.217s 00:15:49.441 sys 0m1.596s 00:15:49.441 19:06:06 blockdev_xnvme.bdev_verify -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:15:49.441 19:06:06 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:15:49.441 ************************************ 00:15:49.441 END TEST bdev_verify 00:15:49.441 ************************************ 00:15:49.441 19:06:06 blockdev_xnvme -- bdev/blockdev.sh@815 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:15:49.441 19:06:06 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:15:49.441 19:06:06 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:49.441 19:06:06 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:49.702 ************************************ 00:15:49.702 START TEST bdev_verify_big_io 00:15:49.702 ************************************ 00:15:49.702 19:06:06 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:15:49.702 [2024-12-05 19:06:07.072361] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 22.11.4 initialization... 00:15:49.702 [2024-12-05 19:06:07.072753] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84082 ] 00:15:49.702 [2024-12-05 19:06:07.219943] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:15:49.702 [2024-12-05 19:06:07.249942] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:49.702 [2024-12-05 19:06:07.249999] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:50.272 Running I/O for 5 seconds... 
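bdev_verify (finished above) and bdev_verify_big_io (starting here) run the same bdevperf example binary; the only difference visible in the log is the I/O size, -o 4096 versus -o 65536. A sketch of the big-I/O form with the flags copied from the log:

SPDK=/home/vagrant/spdk_repo/spdk
# -q queue depth, -o I/O size in bytes (4096 in bdev_verify), -w workload,
# -t run time in seconds, -m reactor core mask; -C and the trailing '' are
# passed through from the harness as-is.
"$SPDK/build/examples/bdevperf" \
  --json "$SPDK/test/bdev/bdev.json" \
  -q 128 -o 65536 -w verify -t 5 -C -m 0x3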
00:15:56.117 1880.00 IOPS, 117.50 MiB/s [2024-12-05T19:06:13.676Z] 2596.00 IOPS, 162.25 MiB/s [2024-12-05T19:06:13.938Z] 2818.67 IOPS, 176.17 MiB/s 00:15:56.379 Latency(us) 00:15:56.379 [2024-12-05T19:06:13.938Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:56.379 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:15:56.379 Verification LBA range: start 0x0 length 0x8000 00:15:56.379 nvme0n1 : 6.01 69.19 4.32 0.00 0.00 1774510.81 21677.29 2439149.10 00:15:56.379 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:15:56.379 Verification LBA range: start 0x8000 length 0x8000 00:15:56.379 nvme0n1 : 5.75 136.27 8.52 0.00 0.00 894612.44 87919.06 1006632.96 00:15:56.379 Job: nvme0n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:15:56.379 Verification LBA range: start 0x0 length 0x8000 00:15:56.379 nvme0n2 : 5.94 121.25 7.58 0.00 0.00 965745.78 143574.25 1226027.32 00:15:56.379 Job: nvme0n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:15:56.379 Verification LBA range: start 0x8000 length 0x8000 00:15:56.379 nvme0n2 : 5.84 131.40 8.21 0.00 0.00 917544.96 138734.67 735616.39 00:15:56.379 Job: nvme0n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:15:56.379 Verification LBA range: start 0x0 length 0x8000 00:15:56.379 nvme0n3 : 5.98 82.90 5.18 0.00 0.00 1350680.70 58478.28 2942465.58 00:15:56.379 Job: nvme0n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:15:56.379 Verification LBA range: start 0x8000 length 0x8000 00:15:56.379 nvme0n3 : 5.86 144.74 9.05 0.00 0.00 818822.86 101631.21 806596.92 00:15:56.379 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:15:56.379 Verification LBA range: start 0x0 length 0xbd0b 00:15:56.379 nvme1n1 : 5.94 107.65 6.73 0.00 0.00 1002507.97 30650.68 2168132.53 00:15:56.379 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:15:56.379 Verification LBA range: start 0xbd0b length 0xbd0b 00:15:56.379 nvme1n1 : 5.86 169.23 10.58 0.00 0.00 681334.66 9880.81 1329271.73 00:15:56.379 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:15:56.379 Verification LBA range: start 0x0 length 0xa000 00:15:56.379 nvme2n1 : 6.08 127.60 7.97 0.00 0.00 813004.54 1606.89 2439149.10 00:15:56.379 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:15:56.379 Verification LBA range: start 0xa000 length 0xa000 00:15:56.379 nvme2n1 : 5.86 141.87 8.87 0.00 0.00 790697.84 16131.94 864671.90 00:15:56.379 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:15:56.379 Verification LBA range: start 0x0 length 0x2000 00:15:56.379 nvme3n1 : 6.28 234.34 14.65 0.00 0.00 423484.44 894.82 1871304.86 00:15:56.379 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:15:56.379 Verification LBA range: start 0x2000 length 0x2000 00:15:56.379 nvme3n1 : 5.87 155.32 9.71 0.00 0.00 711510.16 2167.73 1309913.40 00:15:56.379 [2024-12-05T19:06:13.938Z] =================================================================================================================== 00:15:56.379 [2024-12-05T19:06:13.938Z] Total : 1621.77 101.36 0.00 0.00 837033.05 894.82 2942465.58 00:15:56.640 00:15:56.640 real 0m7.065s 00:15:56.640 user 0m12.991s 00:15:56.640 sys 0m0.464s 00:15:56.640 19:06:14 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:56.640 19:06:14 
blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:15:56.640 ************************************ 00:15:56.640 END TEST bdev_verify_big_io 00:15:56.640 ************************************ 00:15:56.640 19:06:14 blockdev_xnvme -- bdev/blockdev.sh@816 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:15:56.640 19:06:14 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:15:56.640 19:06:14 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:56.640 19:06:14 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:56.640 ************************************ 00:15:56.640 START TEST bdev_write_zeroes 00:15:56.640 ************************************ 00:15:56.640 19:06:14 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:15:56.902 [2024-12-05 19:06:14.214141] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 22.11.4 initialization... 00:15:56.902 [2024-12-05 19:06:14.214482] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84186 ] 00:15:56.902 [2024-12-05 19:06:14.356993] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:56.902 [2024-12-05 19:06:14.388556] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:57.164 Running I/O for 1 seconds... 
00:15:58.548 70432.00 IOPS, 275.12 MiB/s 00:15:58.548 Latency(us) 00:15:58.548 [2024-12-05T19:06:16.107Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:58.548 Job: nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:15:58.548 nvme0n1 : 1.02 11561.68 45.16 0.00 0.00 11059.51 7461.02 23794.61 00:15:58.548 Job: nvme0n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:15:58.548 nvme0n2 : 1.02 11543.46 45.09 0.00 0.00 11067.84 7511.43 22786.36 00:15:58.548 Job: nvme0n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:15:58.548 nvme0n3 : 1.02 11529.74 45.04 0.00 0.00 11066.85 7511.43 22887.19 00:15:58.548 Job: nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:15:58.548 nvme1n1 : 1.03 12079.11 47.18 0.00 0.00 10540.90 4537.11 21878.94 00:15:58.548 Job: nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:15:58.548 nvme2n1 : 1.03 11567.40 45.19 0.00 0.00 10939.02 3881.75 22887.19 00:15:58.548 Job: nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:15:58.548 nvme3n1 : 1.03 11470.41 44.81 0.00 0.00 11021.14 3982.57 22685.54 00:15:58.548 [2024-12-05T19:06:16.107Z] =================================================================================================================== 00:15:58.548 [2024-12-05T19:06:16.107Z] Total : 69751.79 272.47 0.00 0.00 10945.53 3881.75 23794.61 00:15:58.548 00:15:58.548 real 0m1.764s 00:15:58.548 user 0m1.076s 00:15:58.548 sys 0m0.496s 00:15:58.548 19:06:15 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:58.548 ************************************ 00:15:58.548 END TEST bdev_write_zeroes 00:15:58.548 ************************************ 00:15:58.548 19:06:15 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:15:58.548 19:06:15 blockdev_xnvme -- bdev/blockdev.sh@819 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:15:58.548 19:06:15 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:15:58.548 19:06:15 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:58.548 19:06:15 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:58.548 ************************************ 00:15:58.548 START TEST bdev_json_nonenclosed 00:15:58.548 ************************************ 00:15:58.548 19:06:15 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:15:58.548 [2024-12-05 19:06:16.049962] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 22.11.4 initialization... 
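The headline write_zeroes figure above is internally consistent: 70432 IOPS at the 4096-byte I/O size works out to 70432 * 4096 / 2^20, which is about 275.12 MiB/s, matching the reported MiB/s column. As a quick check:

echo 'scale=2; 70432 * 4096 / (1024 * 1024)' | bc   # prints 275.12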
00:15:58.548 [2024-12-05 19:06:16.050107] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84225 ] 00:15:58.809 [2024-12-05 19:06:16.198318] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:58.809 [2024-12-05 19:06:16.226942] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:58.809 [2024-12-05 19:06:16.227062] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:15:58.809 [2024-12-05 19:06:16.227082] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:15:58.809 [2024-12-05 19:06:16.227096] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:15:58.809 ************************************ 00:15:58.809 END TEST bdev_json_nonenclosed 00:15:58.809 ************************************ 00:15:58.809 00:15:58.809 real 0m0.331s 00:15:58.809 user 0m0.131s 00:15:58.809 sys 0m0.096s 00:15:58.809 19:06:16 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:58.809 19:06:16 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:15:58.809 19:06:16 blockdev_xnvme -- bdev/blockdev.sh@822 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:15:58.809 19:06:16 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:15:58.809 19:06:16 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:58.809 19:06:16 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:59.071 ************************************ 00:15:59.071 START TEST bdev_json_nonarray 00:15:59.071 ************************************ 00:15:59.071 19:06:16 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:15:59.071 [2024-12-05 19:06:16.451336] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 22.11.4 initialization... 00:15:59.071 [2024-12-05 19:06:16.451467] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84250 ] 00:15:59.071 [2024-12-05 19:06:16.598690] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:59.332 [2024-12-05 19:06:16.628625] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:59.332 [2024-12-05 19:06:16.628731] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
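The two JSON negative tests above feed bdevperf deliberately malformed configs and expect exactly the json_config errors recorded in the log. The log never prints the fixture files, so the reconstructions below are assumptions that merely match the error strings, sketched as bash heredocs:

# Hypothetical nonenclosed.json: top-level key without the enclosing object
cat > nonenclosed.json <<'EOF'
"subsystems": []
EOF
# expected: "Invalid JSON configuration: not enclosed in {}."

# Hypothetical nonarray.json: "subsystems" present but not an array
cat > nonarray.json <<'EOF'
{ "subsystems": {} }
EOF
# expected: "Invalid JSON configuration: 'subsystems' should be an array."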
00:15:59.332 [2024-12-05 19:06:16.628752] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:15:59.332 [2024-12-05 19:06:16.628766] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:15:59.332 00:15:59.332 real 0m0.326s 00:15:59.332 user 0m0.128s 00:15:59.332 sys 0m0.092s 00:15:59.332 ************************************ 00:15:59.332 END TEST bdev_json_nonarray 00:15:59.332 ************************************ 00:15:59.332 19:06:16 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:59.332 19:06:16 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:15:59.332 19:06:16 blockdev_xnvme -- bdev/blockdev.sh@824 -- # [[ xnvme == bdev ]] 00:15:59.332 19:06:16 blockdev_xnvme -- bdev/blockdev.sh@832 -- # [[ xnvme == gpt ]] 00:15:59.332 19:06:16 blockdev_xnvme -- bdev/blockdev.sh@836 -- # [[ xnvme == crypto_sw ]] 00:15:59.332 19:06:16 blockdev_xnvme -- bdev/blockdev.sh@848 -- # trap - SIGINT SIGTERM EXIT 00:15:59.332 19:06:16 blockdev_xnvme -- bdev/blockdev.sh@849 -- # cleanup 00:15:59.332 19:06:16 blockdev_xnvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:15:59.332 19:06:16 blockdev_xnvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:15:59.332 19:06:16 blockdev_xnvme -- bdev/blockdev.sh@26 -- # [[ xnvme == rbd ]] 00:15:59.332 19:06:16 blockdev_xnvme -- bdev/blockdev.sh@30 -- # [[ xnvme == daos ]] 00:15:59.332 19:06:16 blockdev_xnvme -- bdev/blockdev.sh@34 -- # [[ xnvme = \g\p\t ]] 00:15:59.332 19:06:16 blockdev_xnvme -- bdev/blockdev.sh@40 -- # [[ xnvme == xnvme ]] 00:15:59.332 19:06:16 blockdev_xnvme -- bdev/blockdev.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:15:59.906 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:16:09.916 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:16:13.212 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:16:13.212 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:16:13.212 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:16:13.212 00:16:13.212 real 0m55.929s 00:16:13.212 user 1m11.859s 00:16:13.212 sys 1m0.308s 00:16:13.212 19:06:30 blockdev_xnvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:13.212 ************************************ 00:16:13.212 END TEST blockdev_xnvme 00:16:13.212 ************************************ 00:16:13.212 19:06:30 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:13.212 19:06:30 -- spdk/autotest.sh@247 -- # run_test ublk /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:16:13.212 19:06:30 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:16:13.212 19:06:30 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:13.212 19:06:30 -- common/autotest_common.sh@10 -- # set +x 00:16:13.212 ************************************ 00:16:13.212 START TEST ublk 00:16:13.212 ************************************ 00:16:13.212 19:06:30 ublk -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:16:13.212 * Looking for test storage... 
00:16:13.212 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:16:13.212 19:06:30 ublk -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:16:13.212 19:06:30 ublk -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:16:13.212 19:06:30 ublk -- common/autotest_common.sh@1711 -- # lcov --version 00:16:13.212 19:06:30 ublk -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:16:13.212 19:06:30 ublk -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:13.212 19:06:30 ublk -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:13.212 19:06:30 ublk -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:13.212 19:06:30 ublk -- scripts/common.sh@336 -- # IFS=.-: 00:16:13.212 19:06:30 ublk -- scripts/common.sh@336 -- # read -ra ver1 00:16:13.212 19:06:30 ublk -- scripts/common.sh@337 -- # IFS=.-: 00:16:13.212 19:06:30 ublk -- scripts/common.sh@337 -- # read -ra ver2 00:16:13.212 19:06:30 ublk -- scripts/common.sh@338 -- # local 'op=<' 00:16:13.212 19:06:30 ublk -- scripts/common.sh@340 -- # ver1_l=2 00:16:13.212 19:06:30 ublk -- scripts/common.sh@341 -- # ver2_l=1 00:16:13.212 19:06:30 ublk -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:13.212 19:06:30 ublk -- scripts/common.sh@344 -- # case "$op" in 00:16:13.212 19:06:30 ublk -- scripts/common.sh@345 -- # : 1 00:16:13.212 19:06:30 ublk -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:13.212 19:06:30 ublk -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:16:13.213 19:06:30 ublk -- scripts/common.sh@365 -- # decimal 1 00:16:13.213 19:06:30 ublk -- scripts/common.sh@353 -- # local d=1 00:16:13.213 19:06:30 ublk -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:13.213 19:06:30 ublk -- scripts/common.sh@355 -- # echo 1 00:16:13.213 19:06:30 ublk -- scripts/common.sh@365 -- # ver1[v]=1 00:16:13.213 19:06:30 ublk -- scripts/common.sh@366 -- # decimal 2 00:16:13.213 19:06:30 ublk -- scripts/common.sh@353 -- # local d=2 00:16:13.213 19:06:30 ublk -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:13.213 19:06:30 ublk -- scripts/common.sh@355 -- # echo 2 00:16:13.213 19:06:30 ublk -- scripts/common.sh@366 -- # ver2[v]=2 00:16:13.213 19:06:30 ublk -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:13.213 19:06:30 ublk -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:13.213 19:06:30 ublk -- scripts/common.sh@368 -- # return 0 00:16:13.213 19:06:30 ublk -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:13.213 19:06:30 ublk -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:16:13.213 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:13.213 --rc genhtml_branch_coverage=1 00:16:13.213 --rc genhtml_function_coverage=1 00:16:13.213 --rc genhtml_legend=1 00:16:13.213 --rc geninfo_all_blocks=1 00:16:13.213 --rc geninfo_unexecuted_blocks=1 00:16:13.213 00:16:13.213 ' 00:16:13.213 19:06:30 ublk -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:16:13.213 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:13.213 --rc genhtml_branch_coverage=1 00:16:13.213 --rc genhtml_function_coverage=1 00:16:13.213 --rc genhtml_legend=1 00:16:13.213 --rc geninfo_all_blocks=1 00:16:13.213 --rc geninfo_unexecuted_blocks=1 00:16:13.213 00:16:13.213 ' 00:16:13.213 19:06:30 ublk -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:16:13.213 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:13.213 --rc genhtml_branch_coverage=1 00:16:13.213 --rc 
genhtml_function_coverage=1 00:16:13.213 --rc genhtml_legend=1 00:16:13.213 --rc geninfo_all_blocks=1 00:16:13.213 --rc geninfo_unexecuted_blocks=1 00:16:13.213 00:16:13.213 ' 00:16:13.213 19:06:30 ublk -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:16:13.213 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:13.213 --rc genhtml_branch_coverage=1 00:16:13.213 --rc genhtml_function_coverage=1 00:16:13.213 --rc genhtml_legend=1 00:16:13.213 --rc geninfo_all_blocks=1 00:16:13.213 --rc geninfo_unexecuted_blocks=1 00:16:13.213 00:16:13.213 ' 00:16:13.213 19:06:30 ublk -- ublk/ublk.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:16:13.213 19:06:30 ublk -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:16:13.213 19:06:30 ublk -- lvol/common.sh@7 -- # MALLOC_BS=512 00:16:13.213 19:06:30 ublk -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:16:13.213 19:06:30 ublk -- lvol/common.sh@9 -- # AIO_BS=4096 00:16:13.213 19:06:30 ublk -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:16:13.213 19:06:30 ublk -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:16:13.213 19:06:30 ublk -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:16:13.213 19:06:30 ublk -- lvol/common.sh@14 -- # LVS_DEFAULT_CAPACITY=130023424 00:16:13.213 19:06:30 ublk -- ublk/ublk.sh@11 -- # [[ -z '' ]] 00:16:13.213 19:06:30 ublk -- ublk/ublk.sh@12 -- # NUM_DEVS=4 00:16:13.213 19:06:30 ublk -- ublk/ublk.sh@13 -- # NUM_QUEUE=4 00:16:13.213 19:06:30 ublk -- ublk/ublk.sh@14 -- # QUEUE_DEPTH=512 00:16:13.213 19:06:30 ublk -- ublk/ublk.sh@15 -- # MALLOC_SIZE_MB=128 00:16:13.213 19:06:30 ublk -- ublk/ublk.sh@17 -- # STOP_DISKS=1 00:16:13.213 19:06:30 ublk -- ublk/ublk.sh@27 -- # MALLOC_BS=4096 00:16:13.213 19:06:30 ublk -- ublk/ublk.sh@28 -- # FILE_SIZE=134217728 00:16:13.213 19:06:30 ublk -- ublk/ublk.sh@29 -- # MAX_DEV_ID=3 00:16:13.213 19:06:30 ublk -- ublk/ublk.sh@133 -- # modprobe ublk_drv 00:16:13.213 19:06:30 ublk -- ublk/ublk.sh@136 -- # run_test test_save_ublk_config test_save_config 00:16:13.213 19:06:30 ublk -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:16:13.213 19:06:30 ublk -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:13.213 19:06:30 ublk -- common/autotest_common.sh@10 -- # set +x 00:16:13.213 ************************************ 00:16:13.213 START TEST test_save_ublk_config 00:16:13.213 ************************************ 00:16:13.213 19:06:30 ublk.test_save_ublk_config -- common/autotest_common.sh@1129 -- # test_save_config 00:16:13.213 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:13.213 19:06:30 ublk.test_save_ublk_config -- ublk/ublk.sh@100 -- # local tgtpid blkpath config 00:16:13.213 19:06:30 ublk.test_save_ublk_config -- ublk/ublk.sh@103 -- # tgtpid=84551 00:16:13.213 19:06:30 ublk.test_save_ublk_config -- ublk/ublk.sh@104 -- # trap 'killprocess $tgtpid' EXIT 00:16:13.213 19:06:30 ublk.test_save_ublk_config -- ublk/ublk.sh@106 -- # waitforlisten 84551 00:16:13.213 19:06:30 ublk.test_save_ublk_config -- common/autotest_common.sh@835 -- # '[' -z 84551 ']' 00:16:13.213 19:06:30 ublk.test_save_ublk_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:13.213 19:06:30 ublk.test_save_ublk_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:13.213 19:06:30 ublk.test_save_ublk_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
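test_save_config drives a configuration round-trip: bring up a target, create a ublk disk over the RPC socket, capture the live state with save_config, then boot a second target from that capture and check that the same /dev/ublkb0 reappears. Reduced to a standalone sketch (paths match the repo layout in this log; sizes and queue settings mirror the dump below; readiness polling and cleanup are simplified stand-ins for the suite's waitforlisten/killprocess helpers, and the suite itself pipes the dump through /dev/fd/63 rather than a file, as noted further down):

    # first target: build the ublk device by hand over the RPC socket
    ./build/bin/spdk_tgt -L ublk &
    tgtpid=$!
    ./scripts/rpc.py ublk_create_target
    ./scripts/rpc.py bdev_malloc_create -b malloc0 32 4096
    ./scripts/rpc.py ublk_start_disk malloc0 0 -q 1 -d 128
    ./scripts/rpc.py save_config > ublk.json    # dump every subsystem's config
    kill "$tgtpid"; wait "$tgtpid"

    # second target: replay the dump and expect the same disk back
    ./build/bin/spdk_tgt -L ublk -c ublk.json &
    ./scripts/rpc.py ublk_get_disks             # should list /dev/ublkb0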
00:16:13.213 19:06:30 ublk.test_save_ublk_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:13.213 19:06:30 ublk.test_save_ublk_config -- ublk/ublk.sh@102 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk 00:16:13.213 19:06:30 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:16:13.473 [2024-12-05 19:06:30.844956] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 22.11.4 initialization... 00:16:13.473 [2024-12-05 19:06:30.845234] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84551 ] 00:16:13.473 [2024-12-05 19:06:30.992015] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:13.473 [2024-12-05 19:06:31.016872] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:14.414 19:06:31 ublk.test_save_ublk_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:14.414 19:06:31 ublk.test_save_ublk_config -- common/autotest_common.sh@868 -- # return 0 00:16:14.414 19:06:31 ublk.test_save_ublk_config -- ublk/ublk.sh@107 -- # blkpath=/dev/ublkb0 00:16:14.414 19:06:31 ublk.test_save_ublk_config -- ublk/ublk.sh@108 -- # rpc_cmd 00:16:14.414 19:06:31 ublk.test_save_ublk_config -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.414 19:06:31 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:16:14.414 [2024-12-05 19:06:31.708277] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:16:14.414 [2024-12-05 19:06:31.709241] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:16:14.414 malloc0 00:16:14.414 [2024-12-05 19:06:31.740397] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:16:14.414 [2024-12-05 19:06:31.740481] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:16:14.414 [2024-12-05 19:06:31.740494] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:16:14.414 [2024-12-05 19:06:31.740509] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:16:14.414 [2024-12-05 19:06:31.749379] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:16:14.414 [2024-12-05 19:06:31.749427] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:16:14.414 [2024-12-05 19:06:31.756297] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:16:14.414 [2024-12-05 19:06:31.756430] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:16:14.414 [2024-12-05 19:06:31.773283] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:16:14.414 0 00:16:14.414 19:06:31 ublk.test_save_ublk_config -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.414 19:06:31 ublk.test_save_ublk_config -- ublk/ublk.sh@115 -- # rpc_cmd save_config 00:16:14.414 19:06:31 ublk.test_save_ublk_config -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.414 19:06:31 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:16:14.674 19:06:32 ublk.test_save_ublk_config -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.674 19:06:32 ublk.test_save_ublk_config -- ublk/ublk.sh@115 -- # config='{ 00:16:14.674 "subsystems": [ 00:16:14.674 { 00:16:14.674 "subsystem": 
"fsdev", 00:16:14.674 "config": [ 00:16:14.674 { 00:16:14.674 "method": "fsdev_set_opts", 00:16:14.674 "params": { 00:16:14.674 "fsdev_io_pool_size": 65535, 00:16:14.674 "fsdev_io_cache_size": 256 00:16:14.674 } 00:16:14.674 } 00:16:14.674 ] 00:16:14.674 }, 00:16:14.674 { 00:16:14.674 "subsystem": "keyring", 00:16:14.674 "config": [] 00:16:14.674 }, 00:16:14.674 { 00:16:14.674 "subsystem": "iobuf", 00:16:14.674 "config": [ 00:16:14.674 { 00:16:14.674 "method": "iobuf_set_options", 00:16:14.674 "params": { 00:16:14.674 "small_pool_count": 8192, 00:16:14.674 "large_pool_count": 1024, 00:16:14.674 "small_bufsize": 8192, 00:16:14.674 "large_bufsize": 135168, 00:16:14.674 "enable_numa": false 00:16:14.674 } 00:16:14.674 } 00:16:14.674 ] 00:16:14.674 }, 00:16:14.674 { 00:16:14.674 "subsystem": "sock", 00:16:14.674 "config": [ 00:16:14.674 { 00:16:14.674 "method": "sock_set_default_impl", 00:16:14.674 "params": { 00:16:14.674 "impl_name": "posix" 00:16:14.674 } 00:16:14.674 }, 00:16:14.674 { 00:16:14.674 "method": "sock_impl_set_options", 00:16:14.674 "params": { 00:16:14.674 "impl_name": "ssl", 00:16:14.674 "recv_buf_size": 4096, 00:16:14.674 "send_buf_size": 4096, 00:16:14.674 "enable_recv_pipe": true, 00:16:14.674 "enable_quickack": false, 00:16:14.674 "enable_placement_id": 0, 00:16:14.674 "enable_zerocopy_send_server": true, 00:16:14.674 "enable_zerocopy_send_client": false, 00:16:14.674 "zerocopy_threshold": 0, 00:16:14.674 "tls_version": 0, 00:16:14.674 "enable_ktls": false 00:16:14.674 } 00:16:14.674 }, 00:16:14.674 { 00:16:14.674 "method": "sock_impl_set_options", 00:16:14.674 "params": { 00:16:14.674 "impl_name": "posix", 00:16:14.674 "recv_buf_size": 2097152, 00:16:14.674 "send_buf_size": 2097152, 00:16:14.674 "enable_recv_pipe": true, 00:16:14.674 "enable_quickack": false, 00:16:14.674 "enable_placement_id": 0, 00:16:14.674 "enable_zerocopy_send_server": true, 00:16:14.674 "enable_zerocopy_send_client": false, 00:16:14.674 "zerocopy_threshold": 0, 00:16:14.674 "tls_version": 0, 00:16:14.674 "enable_ktls": false 00:16:14.674 } 00:16:14.674 } 00:16:14.674 ] 00:16:14.674 }, 00:16:14.674 { 00:16:14.674 "subsystem": "vmd", 00:16:14.674 "config": [] 00:16:14.674 }, 00:16:14.674 { 00:16:14.674 "subsystem": "accel", 00:16:14.674 "config": [ 00:16:14.674 { 00:16:14.674 "method": "accel_set_options", 00:16:14.674 "params": { 00:16:14.674 "small_cache_size": 128, 00:16:14.674 "large_cache_size": 16, 00:16:14.674 "task_count": 2048, 00:16:14.674 "sequence_count": 2048, 00:16:14.674 "buf_count": 2048 00:16:14.674 } 00:16:14.674 } 00:16:14.674 ] 00:16:14.674 }, 00:16:14.674 { 00:16:14.674 "subsystem": "bdev", 00:16:14.674 "config": [ 00:16:14.674 { 00:16:14.674 "method": "bdev_set_options", 00:16:14.674 "params": { 00:16:14.674 "bdev_io_pool_size": 65535, 00:16:14.674 "bdev_io_cache_size": 256, 00:16:14.674 "bdev_auto_examine": true, 00:16:14.674 "iobuf_small_cache_size": 128, 00:16:14.674 "iobuf_large_cache_size": 16 00:16:14.674 } 00:16:14.674 }, 00:16:14.674 { 00:16:14.674 "method": "bdev_raid_set_options", 00:16:14.674 "params": { 00:16:14.674 "process_window_size_kb": 1024, 00:16:14.674 "process_max_bandwidth_mb_sec": 0 00:16:14.674 } 00:16:14.674 }, 00:16:14.674 { 00:16:14.674 "method": "bdev_iscsi_set_options", 00:16:14.674 "params": { 00:16:14.674 "timeout_sec": 30 00:16:14.674 } 00:16:14.674 }, 00:16:14.674 { 00:16:14.674 "method": "bdev_nvme_set_options", 00:16:14.674 "params": { 00:16:14.674 "action_on_timeout": "none", 00:16:14.674 "timeout_us": 0, 00:16:14.674 "timeout_admin_us": 0, 
00:16:14.674 "keep_alive_timeout_ms": 10000, 00:16:14.674 "arbitration_burst": 0, 00:16:14.674 "low_priority_weight": 0, 00:16:14.674 "medium_priority_weight": 0, 00:16:14.674 "high_priority_weight": 0, 00:16:14.674 "nvme_adminq_poll_period_us": 10000, 00:16:14.674 "nvme_ioq_poll_period_us": 0, 00:16:14.674 "io_queue_requests": 0, 00:16:14.674 "delay_cmd_submit": true, 00:16:14.674 "transport_retry_count": 4, 00:16:14.674 "bdev_retry_count": 3, 00:16:14.674 "transport_ack_timeout": 0, 00:16:14.674 "ctrlr_loss_timeout_sec": 0, 00:16:14.674 "reconnect_delay_sec": 0, 00:16:14.674 "fast_io_fail_timeout_sec": 0, 00:16:14.674 "disable_auto_failback": false, 00:16:14.674 "generate_uuids": false, 00:16:14.674 "transport_tos": 0, 00:16:14.674 "nvme_error_stat": false, 00:16:14.674 "rdma_srq_size": 0, 00:16:14.674 "io_path_stat": false, 00:16:14.674 "allow_accel_sequence": false, 00:16:14.674 "rdma_max_cq_size": 0, 00:16:14.674 "rdma_cm_event_timeout_ms": 0, 00:16:14.674 "dhchap_digests": [ 00:16:14.674 "sha256", 00:16:14.674 "sha384", 00:16:14.674 "sha512" 00:16:14.674 ], 00:16:14.674 "dhchap_dhgroups": [ 00:16:14.675 "null", 00:16:14.675 "ffdhe2048", 00:16:14.675 "ffdhe3072", 00:16:14.675 "ffdhe4096", 00:16:14.675 "ffdhe6144", 00:16:14.675 "ffdhe8192" 00:16:14.675 ] 00:16:14.675 } 00:16:14.675 }, 00:16:14.675 { 00:16:14.675 "method": "bdev_nvme_set_hotplug", 00:16:14.675 "params": { 00:16:14.675 "period_us": 100000, 00:16:14.675 "enable": false 00:16:14.675 } 00:16:14.675 }, 00:16:14.675 { 00:16:14.675 "method": "bdev_malloc_create", 00:16:14.675 "params": { 00:16:14.675 "name": "malloc0", 00:16:14.675 "num_blocks": 8192, 00:16:14.675 "block_size": 4096, 00:16:14.675 "physical_block_size": 4096, 00:16:14.675 "uuid": "f40a4223-a5d6-4f49-a772-36e0d7b050aa", 00:16:14.675 "optimal_io_boundary": 0, 00:16:14.675 "md_size": 0, 00:16:14.675 "dif_type": 0, 00:16:14.675 "dif_is_head_of_md": false, 00:16:14.675 "dif_pi_format": 0 00:16:14.675 } 00:16:14.675 }, 00:16:14.675 { 00:16:14.675 "method": "bdev_wait_for_examine" 00:16:14.675 } 00:16:14.675 ] 00:16:14.675 }, 00:16:14.675 { 00:16:14.675 "subsystem": "scsi", 00:16:14.675 "config": null 00:16:14.675 }, 00:16:14.675 { 00:16:14.675 "subsystem": "scheduler", 00:16:14.675 "config": [ 00:16:14.675 { 00:16:14.675 "method": "framework_set_scheduler", 00:16:14.675 "params": { 00:16:14.675 "name": "static" 00:16:14.675 } 00:16:14.675 } 00:16:14.675 ] 00:16:14.675 }, 00:16:14.675 { 00:16:14.675 "subsystem": "vhost_scsi", 00:16:14.675 "config": [] 00:16:14.675 }, 00:16:14.675 { 00:16:14.675 "subsystem": "vhost_blk", 00:16:14.675 "config": [] 00:16:14.675 }, 00:16:14.675 { 00:16:14.675 "subsystem": "ublk", 00:16:14.675 "config": [ 00:16:14.675 { 00:16:14.675 "method": "ublk_create_target", 00:16:14.675 "params": { 00:16:14.675 "cpumask": "1" 00:16:14.675 } 00:16:14.675 }, 00:16:14.675 { 00:16:14.675 "method": "ublk_start_disk", 00:16:14.675 "params": { 00:16:14.675 "bdev_name": "malloc0", 00:16:14.675 "ublk_id": 0, 00:16:14.675 "num_queues": 1, 00:16:14.675 "queue_depth": 128 00:16:14.675 } 00:16:14.675 } 00:16:14.675 ] 00:16:14.675 }, 00:16:14.675 { 00:16:14.675 "subsystem": "nbd", 00:16:14.675 "config": [] 00:16:14.675 }, 00:16:14.675 { 00:16:14.675 "subsystem": "nvmf", 00:16:14.675 "config": [ 00:16:14.675 { 00:16:14.675 "method": "nvmf_set_config", 00:16:14.675 "params": { 00:16:14.675 "discovery_filter": "match_any", 00:16:14.675 "admin_cmd_passthru": { 00:16:14.675 "identify_ctrlr": false 00:16:14.675 }, 00:16:14.675 "dhchap_digests": [ 00:16:14.675 "sha256", 
00:16:14.675 "sha384", 00:16:14.675 "sha512" 00:16:14.675 ], 00:16:14.675 "dhchap_dhgroups": [ 00:16:14.675 "null", 00:16:14.675 "ffdhe2048", 00:16:14.675 "ffdhe3072", 00:16:14.675 "ffdhe4096", 00:16:14.675 "ffdhe6144", 00:16:14.675 "ffdhe8192" 00:16:14.675 ] 00:16:14.675 } 00:16:14.675 }, 00:16:14.675 { 00:16:14.675 "method": "nvmf_set_max_subsystems", 00:16:14.675 "params": { 00:16:14.675 "max_subsystems": 1024 00:16:14.675 } 00:16:14.675 }, 00:16:14.675 { 00:16:14.675 "method": "nvmf_set_crdt", 00:16:14.675 "params": { 00:16:14.675 "crdt1": 0, 00:16:14.675 "crdt2": 0, 00:16:14.675 "crdt3": 0 00:16:14.675 } 00:16:14.675 } 00:16:14.675 ] 00:16:14.675 }, 00:16:14.675 { 00:16:14.675 "subsystem": "iscsi", 00:16:14.675 "config": [ 00:16:14.675 { 00:16:14.675 "method": "iscsi_set_options", 00:16:14.675 "params": { 00:16:14.675 "node_base": "iqn.2016-06.io.spdk", 00:16:14.675 "max_sessions": 128, 00:16:14.675 "max_connections_per_session": 2, 00:16:14.675 "max_queue_depth": 64, 00:16:14.675 "default_time2wait": 2, 00:16:14.675 "default_time2retain": 20, 00:16:14.675 "first_burst_length": 8192, 00:16:14.675 "immediate_data": true, 00:16:14.675 "allow_duplicated_isid": false, 00:16:14.675 "error_recovery_level": 0, 00:16:14.675 "nop_timeout": 60, 00:16:14.675 "nop_in_interval": 30, 00:16:14.675 "disable_chap": false, 00:16:14.675 "require_chap": false, 00:16:14.675 "mutual_chap": false, 00:16:14.675 "chap_group": 0, 00:16:14.675 "max_large_datain_per_connection": 64, 00:16:14.675 "max_r2t_per_connection": 4, 00:16:14.675 "pdu_pool_size": 36864, 00:16:14.675 "immediate_data_pool_size": 16384, 00:16:14.675 "data_out_pool_size": 2048 00:16:14.675 } 00:16:14.675 } 00:16:14.675 ] 00:16:14.675 } 00:16:14.675 ] 00:16:14.675 }' 00:16:14.675 19:06:32 ublk.test_save_ublk_config -- ublk/ublk.sh@116 -- # killprocess 84551 00:16:14.675 19:06:32 ublk.test_save_ublk_config -- common/autotest_common.sh@954 -- # '[' -z 84551 ']' 00:16:14.675 19:06:32 ublk.test_save_ublk_config -- common/autotest_common.sh@958 -- # kill -0 84551 00:16:14.675 19:06:32 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # uname 00:16:14.675 19:06:32 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:14.675 19:06:32 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84551 00:16:14.675 killing process with pid 84551 00:16:14.675 19:06:32 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:14.675 19:06:32 ublk.test_save_ublk_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:14.675 19:06:32 ublk.test_save_ublk_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84551' 00:16:14.675 19:06:32 ublk.test_save_ublk_config -- common/autotest_common.sh@973 -- # kill 84551 00:16:14.675 19:06:32 ublk.test_save_ublk_config -- common/autotest_common.sh@978 -- # wait 84551 00:16:14.934 [2024-12-05 19:06:32.372287] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:16:14.934 [2024-12-05 19:06:32.410293] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:16:14.934 [2024-12-05 19:06:32.410455] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:16:14.934 [2024-12-05 19:06:32.419282] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:16:14.934 [2024-12-05 19:06:32.419374] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from 
tailq 00:16:14.934 [2024-12-05 19:06:32.419384] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:16:14.935 [2024-12-05 19:06:32.419423] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:16:14.935 [2024-12-05 19:06:32.419580] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:16:15.507 19:06:32 ublk.test_save_ublk_config -- ublk/ublk.sh@119 -- # tgtpid=84589 00:16:15.507 19:06:32 ublk.test_save_ublk_config -- ublk/ublk.sh@121 -- # waitforlisten 84589 00:16:15.507 19:06:32 ublk.test_save_ublk_config -- common/autotest_common.sh@835 -- # '[' -z 84589 ']' 00:16:15.507 19:06:32 ublk.test_save_ublk_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:15.507 19:06:32 ublk.test_save_ublk_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:15.508 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:15.508 19:06:32 ublk.test_save_ublk_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:15.508 19:06:32 ublk.test_save_ublk_config -- ublk/ublk.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk -c /dev/fd/63 00:16:15.508 19:06:32 ublk.test_save_ublk_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:15.508 19:06:32 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:16:15.508 19:06:32 ublk.test_save_ublk_config -- ublk/ublk.sh@118 -- # echo '{ 00:16:15.508 "subsystems": [ 00:16:15.508 { 00:16:15.508 "subsystem": "fsdev", 00:16:15.508 "config": [ 00:16:15.508 { 00:16:15.508 "method": "fsdev_set_opts", 00:16:15.508 "params": { 00:16:15.508 "fsdev_io_pool_size": 65535, 00:16:15.508 "fsdev_io_cache_size": 256 00:16:15.508 } 00:16:15.508 } 00:16:15.508 ] 00:16:15.508 }, 00:16:15.508 { 00:16:15.508 "subsystem": "keyring", 00:16:15.508 "config": [] 00:16:15.508 }, 00:16:15.508 { 00:16:15.508 "subsystem": "iobuf", 00:16:15.508 "config": [ 00:16:15.508 { 00:16:15.508 "method": "iobuf_set_options", 00:16:15.508 "params": { 00:16:15.508 "small_pool_count": 8192, 00:16:15.508 "large_pool_count": 1024, 00:16:15.508 "small_bufsize": 8192, 00:16:15.508 "large_bufsize": 135168, 00:16:15.508 "enable_numa": false 00:16:15.508 } 00:16:15.508 } 00:16:15.508 ] 00:16:15.508 }, 00:16:15.508 { 00:16:15.508 "subsystem": "sock", 00:16:15.508 "config": [ 00:16:15.508 { 00:16:15.508 "method": "sock_set_default_impl", 00:16:15.508 "params": { 00:16:15.508 "impl_name": "posix" 00:16:15.508 } 00:16:15.508 }, 00:16:15.508 { 00:16:15.508 "method": "sock_impl_set_options", 00:16:15.508 "params": { 00:16:15.508 "impl_name": "ssl", 00:16:15.508 "recv_buf_size": 4096, 00:16:15.508 "send_buf_size": 4096, 00:16:15.508 "enable_recv_pipe": true, 00:16:15.508 "enable_quickack": false, 00:16:15.508 "enable_placement_id": 0, 00:16:15.508 "enable_zerocopy_send_server": true, 00:16:15.508 "enable_zerocopy_send_client": false, 00:16:15.508 "zerocopy_threshold": 0, 00:16:15.508 "tls_version": 0, 00:16:15.508 "enable_ktls": false 00:16:15.508 } 00:16:15.508 }, 00:16:15.508 { 00:16:15.508 "method": "sock_impl_set_options", 00:16:15.508 "params": { 00:16:15.508 "impl_name": "posix", 00:16:15.508 "recv_buf_size": 2097152, 00:16:15.508 "send_buf_size": 2097152, 00:16:15.508 "enable_recv_pipe": true, 00:16:15.508 "enable_quickack": false, 00:16:15.508 "enable_placement_id": 0, 00:16:15.508 "enable_zerocopy_send_server": true, 00:16:15.508 "enable_zerocopy_send_client": false, 00:16:15.508 "zerocopy_threshold": 0, 
00:16:15.508 "tls_version": 0, 00:16:15.508 "enable_ktls": false 00:16:15.508 } 00:16:15.508 } 00:16:15.508 ] 00:16:15.508 }, 00:16:15.508 { 00:16:15.508 "subsystem": "vmd", 00:16:15.508 "config": [] 00:16:15.508 }, 00:16:15.508 { 00:16:15.508 "subsystem": "accel", 00:16:15.508 "config": [ 00:16:15.508 { 00:16:15.508 "method": "accel_set_options", 00:16:15.508 "params": { 00:16:15.508 "small_cache_size": 128, 00:16:15.508 "large_cache_size": 16, 00:16:15.508 "task_count": 2048, 00:16:15.508 "sequence_count": 2048, 00:16:15.508 "buf_count": 2048 00:16:15.508 } 00:16:15.508 } 00:16:15.508 ] 00:16:15.508 }, 00:16:15.508 { 00:16:15.508 "subsystem": "bdev", 00:16:15.508 "config": [ 00:16:15.508 { 00:16:15.508 "method": "bdev_set_options", 00:16:15.508 "params": { 00:16:15.508 "bdev_io_pool_size": 65535, 00:16:15.508 "bdev_io_cache_size": 256, 00:16:15.508 "bdev_auto_examine": true, 00:16:15.508 "iobuf_small_cache_size": 128, 00:16:15.508 "iobuf_large_cache_size": 16 00:16:15.508 } 00:16:15.508 }, 00:16:15.508 { 00:16:15.508 "method": "bdev_raid_set_options", 00:16:15.508 "params": { 00:16:15.508 "process_window_size_kb": 1024, 00:16:15.508 "process_max_bandwidth_mb_sec": 0 00:16:15.508 } 00:16:15.508 }, 00:16:15.508 { 00:16:15.508 "method": "bdev_iscsi_set_options", 00:16:15.508 "params": { 00:16:15.508 "timeout_sec": 30 00:16:15.508 } 00:16:15.508 }, 00:16:15.508 { 00:16:15.508 "method": "bdev_nvme_set_options", 00:16:15.508 "params": { 00:16:15.508 "action_on_timeout": "none", 00:16:15.508 "timeout_us": 0, 00:16:15.508 "timeout_admin_us": 0, 00:16:15.508 "keep_alive_timeout_ms": 10000, 00:16:15.508 "arbitration_burst": 0, 00:16:15.508 "low_priority_weight": 0, 00:16:15.508 "medium_priority_weight": 0, 00:16:15.508 "high_priority_weight": 0, 00:16:15.508 "nvme_adminq_poll_period_us": 10000, 00:16:15.508 "nvme_ioq_poll_period_us": 0, 00:16:15.508 "io_queue_requests": 0, 00:16:15.508 "delay_cmd_submit": true, 00:16:15.508 "transport_retry_count": 4, 00:16:15.508 "bdev_retry_count": 3, 00:16:15.508 "transport_ack_timeout": 0, 00:16:15.508 "ctrlr_loss_timeout_sec": 0, 00:16:15.508 "reconnect_delay_sec": 0, 00:16:15.508 "fast_io_fail_timeout_sec": 0, 00:16:15.508 "disable_auto_failback": false, 00:16:15.508 "generate_uuids": false, 00:16:15.508 "transport_tos": 0, 00:16:15.508 "nvme_error_stat": false, 00:16:15.508 "rdma_srq_size": 0, 00:16:15.508 "io_path_stat": false, 00:16:15.508 "allow_accel_sequence": false, 00:16:15.508 "rdma_max_cq_size": 0, 00:16:15.508 "rdma_cm_event_timeout_ms": 0, 00:16:15.508 "dhchap_digests": [ 00:16:15.508 "sha256", 00:16:15.508 "sha384", 00:16:15.508 "sha512" 00:16:15.508 ], 00:16:15.508 "dhchap_dhgroups": [ 00:16:15.508 "null", 00:16:15.508 "ffdhe2048", 00:16:15.508 "ffdhe3072", 00:16:15.508 "ffdhe4096", 00:16:15.508 "ffdhe6144", 00:16:15.508 "ffdhe8192" 00:16:15.508 ] 00:16:15.508 } 00:16:15.508 }, 00:16:15.508 { 00:16:15.508 "method": "bdev_nvme_set_hotplug", 00:16:15.508 "params": { 00:16:15.508 "period_us": 100000, 00:16:15.508 "enable": false 00:16:15.508 } 00:16:15.508 }, 00:16:15.508 { 00:16:15.508 "method": "bdev_malloc_create", 00:16:15.508 "params": { 00:16:15.508 "name": "malloc0", 00:16:15.508 "num_blocks": 8192, 00:16:15.508 "block_size": 4096, 00:16:15.508 "physical_block_size": 4096, 00:16:15.508 "uuid": "f40a4223-a5d6-4f49-a772-36e0d7b050aa", 00:16:15.508 "optimal_io_boundary": 0, 00:16:15.508 "md_size": 0, 00:16:15.508 "dif_type": 0, 00:16:15.508 "dif_is_head_of_md": false, 00:16:15.508 "dif_pi_format": 0 00:16:15.508 } 00:16:15.508 }, 00:16:15.508 
{ 00:16:15.508 "method": "bdev_wait_for_examine" 00:16:15.508 } 00:16:15.508 ] 00:16:15.508 }, 00:16:15.508 { 00:16:15.508 "subsystem": "scsi", 00:16:15.508 "config": null 00:16:15.508 }, 00:16:15.508 { 00:16:15.508 "subsystem": "scheduler", 00:16:15.508 "config": [ 00:16:15.508 { 00:16:15.508 "method": "framework_set_scheduler", 00:16:15.508 "params": { 00:16:15.508 "name": "static" 00:16:15.508 } 00:16:15.508 } 00:16:15.508 ] 00:16:15.508 }, 00:16:15.508 { 00:16:15.508 "subsystem": "vhost_scsi", 00:16:15.508 "config": [] 00:16:15.508 }, 00:16:15.508 { 00:16:15.508 "subsystem": "vhost_blk", 00:16:15.508 "config": [] 00:16:15.508 }, 00:16:15.508 { 00:16:15.508 "subsystem": "ublk", 00:16:15.508 "config": [ 00:16:15.508 { 00:16:15.508 "method": "ublk_create_target", 00:16:15.508 "params": { 00:16:15.508 "cpumask": "1" 00:16:15.508 } 00:16:15.508 }, 00:16:15.508 { 00:16:15.508 "method": "ublk_start_disk", 00:16:15.508 "params": { 00:16:15.508 "bdev_name": "malloc0", 00:16:15.508 "ublk_id": 0, 00:16:15.508 "num_queues": 1, 00:16:15.508 "queue_depth": 128 00:16:15.508 } 00:16:15.508 } 00:16:15.508 ] 00:16:15.508 }, 00:16:15.508 { 00:16:15.508 "subsystem": "nbd", 00:16:15.508 "config": [] 00:16:15.508 }, 00:16:15.508 { 00:16:15.508 "subsystem": "nvmf", 00:16:15.508 "config": [ 00:16:15.508 { 00:16:15.508 "method": "nvmf_set_config", 00:16:15.508 "params": { 00:16:15.508 "discovery_filter": "match_any", 00:16:15.508 "admin_cmd_passthru": { 00:16:15.508 "identify_ctrlr": false 00:16:15.508 }, 00:16:15.508 "dhchap_digests": [ 00:16:15.508 "sha256", 00:16:15.508 "sha384", 00:16:15.508 "sha512" 00:16:15.508 ], 00:16:15.508 "dhchap_dhgroups": [ 00:16:15.508 "null", 00:16:15.508 "ffdhe2048", 00:16:15.508 "ffdhe3072", 00:16:15.508 "ffdhe4096", 00:16:15.508 "ffdhe6144", 00:16:15.508 "ffdhe8192" 00:16:15.508 ] 00:16:15.508 } 00:16:15.508 }, 00:16:15.508 { 00:16:15.508 "method": "nvmf_set_max_subsystems", 00:16:15.508 "params": { 00:16:15.508 "max_subsystems": 1024 00:16:15.508 } 00:16:15.508 }, 00:16:15.508 { 00:16:15.508 "method": "nvmf_set_crdt", 00:16:15.508 "params": { 00:16:15.508 "crdt1": 0, 00:16:15.509 "crdt2": 0, 00:16:15.509 "crdt3": 0 00:16:15.509 } 00:16:15.509 } 00:16:15.509 ] 00:16:15.509 }, 00:16:15.509 { 00:16:15.509 "subsystem": "iscsi", 00:16:15.509 "config": [ 00:16:15.509 { 00:16:15.509 "method": "iscsi_set_options", 00:16:15.509 "params": { 00:16:15.509 "node_base": "iqn.2016-06.io.spdk", 00:16:15.509 "max_sessions": 128, 00:16:15.509 "max_connections_per_session": 2, 00:16:15.509 "max_queue_depth": 64, 00:16:15.509 "default_time2wait": 2, 00:16:15.509 "default_time2retain": 20, 00:16:15.509 "first_burst_length": 8192, 00:16:15.509 "immediate_data": true, 00:16:15.509 "allow_duplicated_isid": false, 00:16:15.509 "error_recovery_level": 0, 00:16:15.509 "nop_timeout": 60, 00:16:15.509 "nop_in_interval": 30, 00:16:15.509 "disable_chap": false, 00:16:15.509 "require_chap": false, 00:16:15.509 "mutual_chap": false, 00:16:15.509 "chap_group": 0, 00:16:15.509 "max_large_datain_per_connection": 64, 00:16:15.509 "max_r2t_per_connection": 4, 00:16:15.509 "pdu_pool_size": 36864, 00:16:15.509 "immediate_data_pool_size": 16384, 00:16:15.509 "data_out_pool_size": 2048 00:16:15.509 } 00:16:15.509 } 00:16:15.509 ] 00:16:15.509 } 00:16:15.509 ] 00:16:15.509 }' 00:16:15.509 [2024-12-05 19:06:32.980089] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 22.11.4 initialization... 
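As the command trace above shows, the second target never reads a config file from disk: ublk.sh echoes the captured JSON into spdk_tgt through process substitution, so the target opens it as /dev/fd/63. The same pattern in isolation (sketch; $config stands for the save_config output captured from the first target):

    # hand a config string to spdk_tgt as though it were a file on disk;
    # bash replaces <(...) with a /dev/fd/NN path backed by the echo
    ./build/bin/spdk_tgt -L ublk -c <(echo "$config") &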
00:16:15.509 [2024-12-05 19:06:32.980398] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84589 ] 00:16:15.771 [2024-12-05 19:06:33.127496] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:15.771 [2024-12-05 19:06:33.148298] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:16.033 [2024-12-05 19:06:33.487271] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:16:16.033 [2024-12-05 19:06:33.487571] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:16:16.033 [2024-12-05 19:06:33.495405] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:16:16.033 [2024-12-05 19:06:33.495477] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:16:16.033 [2024-12-05 19:06:33.495484] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:16:16.033 [2024-12-05 19:06:33.495494] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:16:16.033 [2024-12-05 19:06:33.504351] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:16:16.033 [2024-12-05 19:06:33.504373] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:16:16.033 [2024-12-05 19:06:33.511285] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:16:16.033 [2024-12-05 19:06:33.511392] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:16:16.033 [2024-12-05 19:06:33.528280] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:16:16.293 19:06:33 ublk.test_save_ublk_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:16.293 19:06:33 ublk.test_save_ublk_config -- common/autotest_common.sh@868 -- # return 0 00:16:16.294 19:06:33 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # jq -r '.[0].ublk_device' 00:16:16.294 19:06:33 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # rpc_cmd ublk_get_disks 00:16:16.294 19:06:33 ublk.test_save_ublk_config -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.294 19:06:33 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:16:16.294 19:06:33 ublk.test_save_ublk_config -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.294 19:06:33 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # [[ /dev/ublkb0 == \/\d\e\v\/\u\b\l\k\b\0 ]] 00:16:16.294 19:06:33 ublk.test_save_ublk_config -- ublk/ublk.sh@123 -- # [[ -b /dev/ublkb0 ]] 00:16:16.294 19:06:33 ublk.test_save_ublk_config -- ublk/ublk.sh@125 -- # killprocess 84589 00:16:16.294 19:06:33 ublk.test_save_ublk_config -- common/autotest_common.sh@954 -- # '[' -z 84589 ']' 00:16:16.294 19:06:33 ublk.test_save_ublk_config -- common/autotest_common.sh@958 -- # kill -0 84589 00:16:16.294 19:06:33 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # uname 00:16:16.294 19:06:33 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:16.294 19:06:33 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84589 00:16:16.294 killing process with pid 84589 00:16:16.294 19:06:33 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:16.294 
19:06:33 ublk.test_save_ublk_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:16.294 19:06:33 ublk.test_save_ublk_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84589' 00:16:16.294 19:06:33 ublk.test_save_ublk_config -- common/autotest_common.sh@973 -- # kill 84589 00:16:16.294 19:06:33 ublk.test_save_ublk_config -- common/autotest_common.sh@978 -- # wait 84589 00:16:16.866 [2024-12-05 19:06:34.143347] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:16:16.866 [2024-12-05 19:06:34.176388] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:16:16.866 [2024-12-05 19:06:34.176546] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:16:16.866 [2024-12-05 19:06:34.181272] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:16:16.867 [2024-12-05 19:06:34.181358] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:16:16.867 [2024-12-05 19:06:34.181372] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:16:16.867 [2024-12-05 19:06:34.181404] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:16:16.867 [2024-12-05 19:06:34.181571] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:16:17.138 19:06:34 ublk.test_save_ublk_config -- ublk/ublk.sh@126 -- # trap - EXIT 00:16:17.138 00:16:17.138 real 0m3.887s 00:16:17.138 user 0m2.679s 00:16:17.138 sys 0m1.834s 00:16:17.138 19:06:34 ublk.test_save_ublk_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:17.138 19:06:34 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:16:17.138 ************************************ 00:16:17.138 END TEST test_save_ublk_config 00:16:17.138 ************************************ 00:16:17.399 19:06:34 ublk -- ublk/ublk.sh@139 -- # spdk_pid=84640 00:16:17.399 19:06:34 ublk -- ublk/ublk.sh@140 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:17.399 19:06:34 ublk -- ublk/ublk.sh@141 -- # waitforlisten 84640 00:16:17.399 19:06:34 ublk -- common/autotest_common.sh@835 -- # '[' -z 84640 ']' 00:16:17.399 19:06:34 ublk -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:17.399 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:17.399 19:06:34 ublk -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:17.399 19:06:34 ublk -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:17.399 19:06:34 ublk -- ublk/ublk.sh@138 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:16:17.399 19:06:34 ublk -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:17.399 19:06:34 ublk -- common/autotest_common.sh@10 -- # set +x 00:16:17.399 [2024-12-05 19:06:34.790339] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 22.11.4 initialization... 
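For the main ublk suite the target is launched with -m 0x3, so two reactors come up (cores 0 and 1) instead of the single core used in the config test. A simplified stand-in for the spdk_tgt launch plus the suite's waitforlisten readiness gate (sketch; the real helper retries with a bounded max_retries rather than forever):

    # pin the target to cores 0-1 and wait until the RPC socket answers
    ./build/bin/spdk_tgt -m 0x3 -L ublk &
    spdk_pid=$!
    until ./scripts/rpc.py spdk_get_version >/dev/null 2>&1; do
        sleep 0.1   # retry until /var/tmp/spdk.sock is accepting RPCs
    done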
00:16:17.399 [2024-12-05 19:06:34.790499] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84640 ] 00:16:17.399 [2024-12-05 19:06:34.934866] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:16:17.659 [2024-12-05 19:06:34.965955] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:17.659 [2024-12-05 19:06:34.966058] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:18.226 19:06:35 ublk -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:18.226 19:06:35 ublk -- common/autotest_common.sh@868 -- # return 0 00:16:18.226 19:06:35 ublk -- ublk/ublk.sh@143 -- # run_test test_create_ublk test_create_ublk 00:16:18.226 19:06:35 ublk -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:16:18.226 19:06:35 ublk -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:18.226 19:06:35 ublk -- common/autotest_common.sh@10 -- # set +x 00:16:18.226 ************************************ 00:16:18.226 START TEST test_create_ublk 00:16:18.226 ************************************ 00:16:18.226 19:06:35 ublk.test_create_ublk -- common/autotest_common.sh@1129 -- # test_create_ublk 00:16:18.226 19:06:35 ublk.test_create_ublk -- ublk/ublk.sh@33 -- # rpc_cmd ublk_create_target 00:16:18.226 19:06:35 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.226 19:06:35 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:18.226 [2024-12-05 19:06:35.641271] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:16:18.226 [2024-12-05 19:06:35.642324] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:16:18.226 19:06:35 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.226 19:06:35 ublk.test_create_ublk -- ublk/ublk.sh@33 -- # ublk_target= 00:16:18.226 19:06:35 ublk.test_create_ublk -- ublk/ublk.sh@35 -- # rpc_cmd bdev_malloc_create 128 4096 00:16:18.226 19:06:35 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.226 19:06:35 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:18.226 19:06:35 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.226 19:06:35 ublk.test_create_ublk -- ublk/ublk.sh@35 -- # malloc_name=Malloc0 00:16:18.226 19:06:35 ublk.test_create_ublk -- ublk/ublk.sh@37 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:16:18.226 19:06:35 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.226 19:06:35 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:18.226 [2024-12-05 19:06:35.697395] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 num_queues 4 queue_depth 512 00:16:18.226 [2024-12-05 19:06:35.697793] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:16:18.226 [2024-12-05 19:06:35.697816] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:16:18.226 [2024-12-05 19:06:35.697825] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:16:18.226 [2024-12-05 19:06:35.705284] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:16:18.226 [2024-12-05 19:06:35.705317] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:16:18.226 
[2024-12-05 19:06:35.713296] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:16:18.226 [2024-12-05 19:06:35.713923] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:16:18.226 [2024-12-05 19:06:35.737277] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:16:18.226 19:06:35 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.226 19:06:35 ublk.test_create_ublk -- ublk/ublk.sh@37 -- # ublk_id=0 00:16:18.226 19:06:35 ublk.test_create_ublk -- ublk/ublk.sh@38 -- # ublk_path=/dev/ublkb0 00:16:18.226 19:06:35 ublk.test_create_ublk -- ublk/ublk.sh@39 -- # rpc_cmd ublk_get_disks -n 0 00:16:18.226 19:06:35 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.226 19:06:35 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:18.226 19:06:35 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.226 19:06:35 ublk.test_create_ublk -- ublk/ublk.sh@39 -- # ublk_dev='[ 00:16:18.226 { 00:16:18.226 "ublk_device": "/dev/ublkb0", 00:16:18.226 "id": 0, 00:16:18.226 "queue_depth": 512, 00:16:18.226 "num_queues": 4, 00:16:18.226 "bdev_name": "Malloc0" 00:16:18.226 } 00:16:18.226 ]' 00:16:18.226 19:06:35 ublk.test_create_ublk -- ublk/ublk.sh@41 -- # jq -r '.[0].ublk_device' 00:16:18.485 19:06:35 ublk.test_create_ublk -- ublk/ublk.sh@41 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:16:18.485 19:06:35 ublk.test_create_ublk -- ublk/ublk.sh@42 -- # jq -r '.[0].id' 00:16:18.485 19:06:35 ublk.test_create_ublk -- ublk/ublk.sh@42 -- # [[ 0 = \0 ]] 00:16:18.485 19:06:35 ublk.test_create_ublk -- ublk/ublk.sh@43 -- # jq -r '.[0].queue_depth' 00:16:18.485 19:06:35 ublk.test_create_ublk -- ublk/ublk.sh@43 -- # [[ 512 = \5\1\2 ]] 00:16:18.485 19:06:35 ublk.test_create_ublk -- ublk/ublk.sh@44 -- # jq -r '.[0].num_queues' 00:16:18.485 19:06:35 ublk.test_create_ublk -- ublk/ublk.sh@44 -- # [[ 4 = \4 ]] 00:16:18.485 19:06:35 ublk.test_create_ublk -- ublk/ublk.sh@45 -- # jq -r '.[0].bdev_name' 00:16:18.485 19:06:35 ublk.test_create_ublk -- ublk/ublk.sh@45 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:16:18.485 19:06:35 ublk.test_create_ublk -- ublk/ublk.sh@48 -- # run_fio_test /dev/ublkb0 0 134217728 write 0xcc '--time_based --runtime=10' 00:16:18.485 19:06:35 ublk.test_create_ublk -- lvol/common.sh@40 -- # local file=/dev/ublkb0 00:16:18.485 19:06:35 ublk.test_create_ublk -- lvol/common.sh@41 -- # local offset=0 00:16:18.485 19:06:35 ublk.test_create_ublk -- lvol/common.sh@42 -- # local size=134217728 00:16:18.485 19:06:35 ublk.test_create_ublk -- lvol/common.sh@43 -- # local rw=write 00:16:18.485 19:06:35 ublk.test_create_ublk -- lvol/common.sh@44 -- # local pattern=0xcc 00:16:18.485 19:06:35 ublk.test_create_ublk -- lvol/common.sh@45 -- # local 'extra_params=--time_based --runtime=10' 00:16:18.485 19:06:35 ublk.test_create_ublk -- lvol/common.sh@47 -- # local pattern_template= fio_template= 00:16:18.485 19:06:35 ublk.test_create_ublk -- lvol/common.sh@48 -- # [[ -n 0xcc ]] 00:16:18.485 19:06:35 ublk.test_create_ublk -- lvol/common.sh@49 -- # pattern_template='--do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 00:16:18.485 19:06:35 ublk.test_create_ublk -- lvol/common.sh@52 -- # fio_template='fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 
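run_fio_test expands to the single fio command traced above. For readability, the same invocation rendered as a job file (parameters copied verbatim from the traced template; ioengine=psync and iodepth=1 are fio defaults, visible in the output that follows):

    ; write 0xcc across the 128 MiB ublk device for 10 seconds,
    ; verifying the pattern inline as it is written
    [fio_test]
    filename=/dev/ublkb0
    offset=0
    size=134217728
    rw=write
    direct=1
    time_based
    runtime=10
    do_verify=1
    verify=pattern
    verify_pattern=0xcc
    verify_state_save=0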
00:16:18.485 19:06:35 ublk.test_create_ublk -- lvol/common.sh@53 -- # fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0 00:16:18.485 fio: verification read phase will never start because write phase uses all of runtime 00:16:18.485 fio_test: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=psync, iodepth=1 00:16:18.485 fio-3.35 00:16:18.485 Starting 1 process 00:16:30.739 00:16:30.739 fio_test: (groupid=0, jobs=1): err= 0: pid=84684: Thu Dec 5 19:06:46 2024 00:16:30.739 write: IOPS=17.4k, BW=67.9MiB/s (71.2MB/s)(679MiB/10001msec); 0 zone resets 00:16:30.739 clat (usec): min=31, max=3762, avg=56.78, stdev=80.42 00:16:30.739 lat (usec): min=31, max=3763, avg=57.19, stdev=80.43 00:16:30.739 clat percentiles (usec): 00:16:30.739 | 1.00th=[ 36], 5.00th=[ 42], 10.00th=[ 48], 20.00th=[ 51], 00:16:30.739 | 30.00th=[ 52], 40.00th=[ 53], 50.00th=[ 55], 60.00th=[ 56], 00:16:30.739 | 70.00th=[ 57], 80.00th=[ 59], 90.00th=[ 61], 95.00th=[ 64], 00:16:30.739 | 99.00th=[ 75], 99.50th=[ 80], 99.90th=[ 1254], 99.95th=[ 2474], 00:16:30.739 | 99.99th=[ 3392] 00:16:30.739 bw ( KiB/s): min=66008, max=85888, per=100.00%, avg=69680.42, stdev=4828.43, samples=19 00:16:30.739 iops : min=16502, max=21472, avg=17420.11, stdev=1207.11, samples=19 00:16:30.739 lat (usec) : 50=17.56%, 100=82.21%, 250=0.09%, 500=0.02%, 750=0.01% 00:16:30.739 lat (usec) : 1000=0.01% 00:16:30.739 lat (msec) : 2=0.04%, 4=0.07% 00:16:30.739 cpu : usr=2.81%, sys=11.51%, ctx=173958, majf=0, minf=796 00:16:30.739 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:30.739 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:30.739 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:30.739 issued rwts: total=0,173945,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:30.739 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:30.739 00:16:30.739 Run status group 0 (all jobs): 00:16:30.739 WRITE: bw=67.9MiB/s (71.2MB/s), 67.9MiB/s-67.9MiB/s (71.2MB/s-71.2MB/s), io=679MiB (712MB), run=10001-10001msec 00:16:30.739 00:16:30.739 Disk stats (read/write): 00:16:30.739 ublkb0: ios=0/172172, merge=0/0, ticks=0/8558, in_queue=8559, util=99.09% 00:16:30.739 19:06:46 ublk.test_create_ublk -- ublk/ublk.sh@51 -- # rpc_cmd ublk_stop_disk 0 00:16:30.739 19:06:46 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.739 19:06:46 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:30.739 [2024-12-05 19:06:46.159797] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:16:30.739 [2024-12-05 19:06:46.195308] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:16:30.739 [2024-12-05 19:06:46.195946] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:16:30.739 [2024-12-05 19:06:46.203280] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:16:30.739 [2024-12-05 19:06:46.203531] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:16:30.739 [2024-12-05 19:06:46.203544] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:16:30.739 19:06:46 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.739 19:06:46 ublk.test_create_ublk -- ublk/ublk.sh@53 -- # NOT rpc_cmd ublk_stop_disk 0 
00:16:30.739 19:06:46 ublk.test_create_ublk -- common/autotest_common.sh@652 -- # local es=0 00:16:30.739 19:06:46 ublk.test_create_ublk -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd ublk_stop_disk 0 00:16:30.739 19:06:46 ublk.test_create_ublk -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:16:30.739 19:06:46 ublk.test_create_ublk -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:30.739 19:06:46 ublk.test_create_ublk -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:16:30.739 19:06:46 ublk.test_create_ublk -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:30.739 19:06:46 ublk.test_create_ublk -- common/autotest_common.sh@655 -- # rpc_cmd ublk_stop_disk 0 00:16:30.740 19:06:46 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.740 19:06:46 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:30.740 [2024-12-05 19:06:46.219360] ublk.c:1087:ublk_stop_disk: *ERROR*: no ublk dev with ublk_id=0 00:16:30.740 request: 00:16:30.740 { 00:16:30.740 "ublk_id": 0, 00:16:30.740 "method": "ublk_stop_disk", 00:16:30.740 "req_id": 1 00:16:30.740 } 00:16:30.740 Got JSON-RPC error response 00:16:30.740 response: 00:16:30.740 { 00:16:30.740 "code": -19, 00:16:30.740 "message": "No such device" 00:16:30.740 } 00:16:30.740 19:06:46 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:16:30.740 19:06:46 ublk.test_create_ublk -- common/autotest_common.sh@655 -- # es=1 00:16:30.740 19:06:46 ublk.test_create_ublk -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:30.740 19:06:46 ublk.test_create_ublk -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:30.740 19:06:46 ublk.test_create_ublk -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:30.740 19:06:46 ublk.test_create_ublk -- ublk/ublk.sh@54 -- # rpc_cmd ublk_destroy_target 00:16:30.740 19:06:46 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.740 19:06:46 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:30.740 [2024-12-05 19:06:46.235345] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:16:30.740 [2024-12-05 19:06:46.237236] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:16:30.740 [2024-12-05 19:06:46.237273] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:16:30.740 19:06:46 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.740 19:06:46 ublk.test_create_ublk -- ublk/ublk.sh@56 -- # rpc_cmd bdev_malloc_delete Malloc0 00:16:30.740 19:06:46 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.740 19:06:46 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:30.740 19:06:46 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.740 19:06:46 ublk.test_create_ublk -- ublk/ublk.sh@57 -- # check_leftover_devices 00:16:30.740 19:06:46 ublk.test_create_ublk -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 00:16:30.740 19:06:46 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.740 19:06:46 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:30.740 19:06:46 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.740 19:06:46 ublk.test_create_ublk -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:16:30.740 19:06:46 ublk.test_create_ublk -- lvol/common.sh@26 -- # jq length 00:16:30.740 19:06:46 ublk.test_create_ublk -- 
lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:16:30.740 19:06:46 ublk.test_create_ublk -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:16:30.740 19:06:46 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.740 19:06:46 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:30.740 19:06:46 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.740 19:06:46 ublk.test_create_ublk -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:16:30.740 19:06:46 ublk.test_create_ublk -- lvol/common.sh@28 -- # jq length 00:16:30.740 ************************************ 00:16:30.740 END TEST test_create_ublk 00:16:30.740 ************************************ 00:16:30.740 19:06:46 ublk.test_create_ublk -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:16:30.740 00:16:30.740 real 0m10.781s 00:16:30.740 user 0m0.578s 00:16:30.740 sys 0m1.234s 00:16:30.740 19:06:46 ublk.test_create_ublk -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:30.740 19:06:46 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:30.740 19:06:46 ublk -- ublk/ublk.sh@144 -- # run_test test_create_multi_ublk test_create_multi_ublk 00:16:30.740 19:06:46 ublk -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:16:30.740 19:06:46 ublk -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:30.740 19:06:46 ublk -- common/autotest_common.sh@10 -- # set +x 00:16:30.740 ************************************ 00:16:30.740 START TEST test_create_multi_ublk 00:16:30.740 ************************************ 00:16:30.740 19:06:46 ublk.test_create_multi_ublk -- common/autotest_common.sh@1129 -- # test_create_multi_ublk 00:16:30.740 19:06:46 ublk.test_create_multi_ublk -- ublk/ublk.sh@62 -- # rpc_cmd ublk_create_target 00:16:30.740 19:06:46 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.740 19:06:46 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:30.740 [2024-12-05 19:06:46.463269] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:16:30.740 [2024-12-05 19:06:46.464402] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:16:30.740 19:06:46 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.740 19:06:46 ublk.test_create_multi_ublk -- ublk/ublk.sh@62 -- # ublk_target= 00:16:30.740 19:06:46 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # seq 0 3 00:16:30.740 19:06:46 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:30.740 19:06:46 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc0 128 4096 00:16:30.740 19:06:46 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.740 19:06:46 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:30.740 19:06:46 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.740 19:06:46 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc0 00:16:30.740 19:06:46 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:16:30.740 19:06:46 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.740 19:06:46 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:30.740 [2024-12-05 19:06:46.547435] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 num_queues 4 queue_depth 512 
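test_create_multi_ublk repeats the single-disk bring-up for ublk IDs 0 through MAX_DEV_ID (3), pairing each disk with its own malloc bdev. Condensed into a loop (sketch; the sizes and queue settings are the suite variables traced at the top of the ublk run: 128 MiB malloc bdevs with 4096-byte blocks, 4 queues of depth 512 per disk):

    # expose four malloc bdevs as /dev/ublkb0 .. /dev/ublkb3
    for i in $(seq 0 3); do
        ./scripts/rpc.py bdev_malloc_create -b "Malloc$i" 128 4096
        ./scripts/rpc.py ublk_start_disk "Malloc$i" "$i" -q 4 -d 512
    done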
00:16:30.740 [2024-12-05 19:06:46.547759] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:16:30.740 [2024-12-05 19:06:46.547773] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:16:30.740 [2024-12-05 19:06:46.547778] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:16:30.740 [2024-12-05 19:06:46.571280] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:16:30.740 [2024-12-05 19:06:46.571300] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:16:30.740 [2024-12-05 19:06:46.583277] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:16:30.740 [2024-12-05 19:06:46.583789] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:16:30.740 [2024-12-05 19:06:46.623285] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:16:30.740 19:06:46 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.740 19:06:46 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=0 00:16:30.740 19:06:46 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:30.740 19:06:46 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc1 128 4096 00:16:30.740 19:06:46 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.740 19:06:46 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:30.740 19:06:46 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.740 19:06:46 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc1 00:16:30.740 19:06:46 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc1 1 -q 4 -d 512 00:16:30.740 19:06:46 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.740 19:06:46 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:30.740 [2024-12-05 19:06:46.731372] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk1: bdev Malloc1 num_queues 4 queue_depth 512 00:16:30.740 [2024-12-05 19:06:46.731685] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc1 via ublk 1 00:16:30.740 [2024-12-05 19:06:46.731696] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:16:30.740 [2024-12-05 19:06:46.731702] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:16:30.740 [2024-12-05 19:06:46.743286] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:16:30.740 [2024-12-05 19:06:46.743308] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:16:30.740 [2024-12-05 19:06:46.755287] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:16:30.740 [2024-12-05 19:06:46.755829] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:16:30.740 [2024-12-05 19:06:46.791275] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:16:30.740 19:06:46 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.740 19:06:46 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=1 00:16:30.740 19:06:46 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:30.740 19:06:46 
ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc2 128 4096 00:16:30.740 19:06:46 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.740 19:06:46 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:30.741 19:06:46 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.741 19:06:46 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc2 00:16:30.741 19:06:46 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc2 2 -q 4 -d 512 00:16:30.741 19:06:46 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.741 19:06:46 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:30.741 [2024-12-05 19:06:46.899368] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk2: bdev Malloc2 num_queues 4 queue_depth 512 00:16:30.741 [2024-12-05 19:06:46.899682] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc2 via ublk 2 00:16:30.741 [2024-12-05 19:06:46.899695] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk2: add to tailq 00:16:30.741 [2024-12-05 19:06:46.899701] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV 00:16:30.741 [2024-12-05 19:06:46.911284] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV completed 00:16:30.741 [2024-12-05 19:06:46.911301] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS 00:16:30.741 [2024-12-05 19:06:46.923281] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:16:30.741 [2024-12-05 19:06:46.923805] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV 00:16:30.741 [2024-12-05 19:06:46.959277] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV completed 00:16:30.741 19:06:46 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.741 19:06:46 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=2 00:16:30.741 19:06:46 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:30.741 19:06:46 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc3 128 4096 00:16:30.741 19:06:46 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.741 19:06:46 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:30.741 19:06:47 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.741 19:06:47 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc3 00:16:30.741 19:06:47 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc3 3 -q 4 -d 512 00:16:30.741 19:06:47 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.741 19:06:47 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:30.741 [2024-12-05 19:06:47.067380] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk3: bdev Malloc3 num_queues 4 queue_depth 512 00:16:30.741 [2024-12-05 19:06:47.067705] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc3 via ublk 3 00:16:30.741 [2024-12-05 19:06:47.067717] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk3: add to tailq 00:16:30.741 [2024-12-05 19:06:47.067724] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV 00:16:30.741 [2024-12-05 
19:06:47.079289] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV completed 00:16:30.741 [2024-12-05 19:06:47.079312] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS 00:16:30.741 [2024-12-05 19:06:47.091274] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:16:30.741 [2024-12-05 19:06:47.091793] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV 00:16:30.741 [2024-12-05 19:06:47.104297] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV completed 00:16:30.741 19:06:47 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.741 19:06:47 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=3 00:16:30.741 19:06:47 ublk.test_create_multi_ublk -- ublk/ublk.sh@71 -- # rpc_cmd ublk_get_disks 00:16:30.741 19:06:47 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.741 19:06:47 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:30.741 19:06:47 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.741 19:06:47 ublk.test_create_multi_ublk -- ublk/ublk.sh@71 -- # ublk_dev='[ 00:16:30.741 { 00:16:30.741 "ublk_device": "/dev/ublkb0", 00:16:30.741 "id": 0, 00:16:30.741 "queue_depth": 512, 00:16:30.741 "num_queues": 4, 00:16:30.741 "bdev_name": "Malloc0" 00:16:30.741 }, 00:16:30.741 { 00:16:30.741 "ublk_device": "/dev/ublkb1", 00:16:30.741 "id": 1, 00:16:30.741 "queue_depth": 512, 00:16:30.741 "num_queues": 4, 00:16:30.741 "bdev_name": "Malloc1" 00:16:30.741 }, 00:16:30.741 { 00:16:30.741 "ublk_device": "/dev/ublkb2", 00:16:30.741 "id": 2, 00:16:30.741 "queue_depth": 512, 00:16:30.741 "num_queues": 4, 00:16:30.741 "bdev_name": "Malloc2" 00:16:30.741 }, 00:16:30.741 { 00:16:30.741 "ublk_device": "/dev/ublkb3", 00:16:30.741 "id": 3, 00:16:30.741 "queue_depth": 512, 00:16:30.741 "num_queues": 4, 00:16:30.741 "bdev_name": "Malloc3" 00:16:30.741 } 00:16:30.741 ]' 00:16:30.741 19:06:47 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # seq 0 3 00:16:30.741 19:06:47 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:30.741 19:06:47 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[0].ublk_device' 00:16:30.741 19:06:47 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:16:30.741 19:06:47 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[0].id' 00:16:30.741 19:06:47 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 0 = \0 ]] 00:16:30.741 19:06:47 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[0].queue_depth' 00:16:30.741 19:06:47 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:16:30.741 19:06:47 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[0].num_queues' 00:16:30.741 19:06:47 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:16:30.741 19:06:47 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[0].bdev_name' 00:16:30.741 19:06:47 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:16:30.741 19:06:47 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:30.741 19:06:47 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[1].ublk_device' 00:16:30.741 19:06:47 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb1 = \/\d\e\v\/\u\b\l\k\b\1 ]] 
00:16:30.741 19:06:47 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[1].id' 00:16:30.741 19:06:47 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 1 = \1 ]] 00:16:30.741 19:06:47 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[1].queue_depth' 00:16:30.741 19:06:47 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:16:30.741 19:06:47 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[1].num_queues' 00:16:30.741 19:06:47 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:16:30.741 19:06:47 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[1].bdev_name' 00:16:30.741 19:06:47 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc1 = \M\a\l\l\o\c\1 ]] 00:16:30.741 19:06:47 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:30.741 19:06:47 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[2].ublk_device' 00:16:30.741 19:06:47 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb2 = \/\d\e\v\/\u\b\l\k\b\2 ]] 00:16:30.741 19:06:47 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[2].id' 00:16:30.741 19:06:47 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 2 = \2 ]] 00:16:30.741 19:06:47 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[2].queue_depth' 00:16:30.741 19:06:47 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:16:30.741 19:06:47 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[2].num_queues' 00:16:30.741 19:06:47 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:16:30.741 19:06:47 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[2].bdev_name' 00:16:30.741 19:06:47 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc2 = \M\a\l\l\o\c\2 ]] 00:16:30.741 19:06:47 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:30.741 19:06:47 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[3].ublk_device' 00:16:30.741 19:06:47 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb3 = \/\d\e\v\/\u\b\l\k\b\3 ]] 00:16:30.741 19:06:47 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[3].id' 00:16:30.741 19:06:47 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 3 = \3 ]] 00:16:30.741 19:06:47 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[3].queue_depth' 00:16:30.741 19:06:47 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:16:30.741 19:06:47 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[3].num_queues' 00:16:30.741 19:06:47 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:16:30.741 19:06:47 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[3].bdev_name' 00:16:30.741 19:06:47 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc3 = \M\a\l\l\o\c\3 ]] 00:16:30.741 19:06:47 ublk.test_create_multi_ublk -- ublk/ublk.sh@84 -- # [[ 1 = \1 ]] 00:16:30.741 19:06:47 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # seq 0 3 00:16:30.741 19:06:47 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:30.741 19:06:47 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 0 00:16:30.742 19:06:47 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.742 19:06:47 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:30.742 [2024-12-05 19:06:47.767344] ublk.c: 469:ublk_ctrl_cmd_submit: 
*DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:16:30.742 [2024-12-05 19:06:47.807268] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:16:30.742 [2024-12-05 19:06:47.808158] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:16:30.742 [2024-12-05 19:06:47.815283] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:16:30.742 [2024-12-05 19:06:47.815520] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:16:30.742 [2024-12-05 19:06:47.815532] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:16:30.742 19:06:47 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.742 19:06:47 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:30.742 19:06:47 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 1 00:16:30.742 19:06:47 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.742 19:06:47 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:30.742 [2024-12-05 19:06:47.831367] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:16:30.742 [2024-12-05 19:06:47.867325] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed 00:16:30.742 [2024-12-05 19:06:47.868096] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:16:30.742 [2024-12-05 19:06:47.874306] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:16:30.742 [2024-12-05 19:06:47.874547] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:16:30.742 [2024-12-05 19:06:47.874558] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:16:30.742 19:06:47 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.742 19:06:47 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:30.742 19:06:47 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 2 00:16:30.742 19:06:47 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.742 19:06:47 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:30.742 [2024-12-05 19:06:47.891343] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV 00:16:30.742 [2024-12-05 19:06:47.931817] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV completed 00:16:30.742 [2024-12-05 19:06:47.932840] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV 00:16:30.742 [2024-12-05 19:06:47.944304] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV completed 00:16:30.742 [2024-12-05 19:06:47.944535] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk2: remove from tailq 00:16:30.742 [2024-12-05 19:06:47.944542] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 2 stopped 00:16:30.742 19:06:47 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.742 19:06:47 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:30.742 19:06:47 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 3 00:16:30.742 19:06:47 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.742 19:06:47 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 
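Teardown runs in reverse: each disk is stopped (UBLK_CMD_STOP_DEV, then UBLK_CMD_DEL_DEV), and only then is the target destroyed. A condensed sketch of the loop being traced here (same rpc.py shorthand as above; the -t 120 timeout mirrors the ublk_destroy_target invocation logged just below):

  for i in $(seq 0 3); do
    rpc.py ublk_stop_disk "$i"       # STOP_DEV + DEL_DEV for each device in turn
  done
  rpc.py -t 120 ublk_destroy_target  # tear down the target once no disks remain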
00:16:30.742 [2024-12-05 19:06:47.959337] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV 00:16:30.742 [2024-12-05 19:06:47.993737] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV completed 00:16:30.742 [2024-12-05 19:06:47.994780] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV 00:16:30.742 [2024-12-05 19:06:47.999282] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV completed 00:16:30.742 [2024-12-05 19:06:47.999506] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk3: remove from tailq 00:16:30.742 [2024-12-05 19:06:47.999516] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 3 stopped 00:16:30.742 19:06:48 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.742 19:06:48 ublk.test_create_multi_ublk -- ublk/ublk.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 120 ublk_destroy_target 00:16:30.742 [2024-12-05 19:06:48.151332] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:16:30.742 [2024-12-05 19:06:48.152620] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:16:30.742 [2024-12-05 19:06:48.152648] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:16:30.742 19:06:48 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # seq 0 3 00:16:30.742 19:06:48 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:30.742 19:06:48 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc0 00:16:30.742 19:06:48 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.742 19:06:48 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:30.742 19:06:48 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.742 19:06:48 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:30.742 19:06:48 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc1 00:16:30.742 19:06:48 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.742 19:06:48 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:31.000 19:06:48 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.000 19:06:48 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:31.000 19:06:48 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc2 00:16:31.000 19:06:48 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.000 19:06:48 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:31.000 19:06:48 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.000 19:06:48 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:31.000 19:06:48 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc3 00:16:31.000 19:06:48 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.000 19:06:48 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:31.000 19:06:48 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.000 19:06:48 ublk.test_create_multi_ublk -- ublk/ublk.sh@96 -- # check_leftover_devices 00:16:31.000 19:06:48 ublk.test_create_multi_ublk -- lvol/common.sh@25 -- # 
rpc_cmd bdev_get_bdevs 00:16:31.000 19:06:48 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.000 19:06:48 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:31.000 19:06:48 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.000 19:06:48 ublk.test_create_multi_ublk -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:16:31.000 19:06:48 ublk.test_create_multi_ublk -- lvol/common.sh@26 -- # jq length 00:16:31.000 19:06:48 ublk.test_create_multi_ublk -- lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:16:31.000 19:06:48 ublk.test_create_multi_ublk -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:16:31.000 19:06:48 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.000 19:06:48 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:31.000 19:06:48 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.000 19:06:48 ublk.test_create_multi_ublk -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:16:31.000 19:06:48 ublk.test_create_multi_ublk -- lvol/common.sh@28 -- # jq length 00:16:31.258 19:06:48 ublk.test_create_multi_ublk -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:16:31.258 00:16:31.258 real 0m2.113s 00:16:31.258 user 0m0.763s 00:16:31.258 sys 0m0.143s 00:16:31.258 19:06:48 ublk.test_create_multi_ublk -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:31.258 19:06:48 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:31.258 ************************************ 00:16:31.258 END TEST test_create_multi_ublk 00:16:31.258 ************************************ 00:16:31.258 19:06:48 ublk -- ublk/ublk.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:16:31.258 19:06:48 ublk -- ublk/ublk.sh@147 -- # cleanup 00:16:31.258 19:06:48 ublk -- ublk/ublk.sh@130 -- # killprocess 84640 00:16:31.258 19:06:48 ublk -- common/autotest_common.sh@954 -- # '[' -z 84640 ']' 00:16:31.258 19:06:48 ublk -- common/autotest_common.sh@958 -- # kill -0 84640 00:16:31.258 19:06:48 ublk -- common/autotest_common.sh@959 -- # uname 00:16:31.258 19:06:48 ublk -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:31.258 19:06:48 ublk -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84640 00:16:31.258 19:06:48 ublk -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:31.258 killing process with pid 84640 00:16:31.258 19:06:48 ublk -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:31.258 19:06:48 ublk -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84640' 00:16:31.258 19:06:48 ublk -- common/autotest_common.sh@973 -- # kill 84640 00:16:31.258 19:06:48 ublk -- common/autotest_common.sh@978 -- # wait 84640 00:16:31.518 [2024-12-05 19:06:48.844704] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:16:31.518 [2024-12-05 19:06:48.844780] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:16:31.778 00:16:31.778 real 0m18.640s 00:16:31.778 user 0m28.682s 00:16:31.778 sys 0m7.588s 00:16:31.778 19:06:49 ublk -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:31.778 ************************************ 00:16:31.778 END TEST ublk 00:16:31.778 ************************************ 00:16:31.778 19:06:49 ublk -- common/autotest_common.sh@10 -- # set +x 00:16:31.778 19:06:49 -- spdk/autotest.sh@248 -- # run_test ublk_recovery /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:16:31.778 19:06:49 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 
']' 00:16:31.778 19:06:49 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:31.778 19:06:49 -- common/autotest_common.sh@10 -- # set +x 00:16:31.778 ************************************ 00:16:31.778 START TEST ublk_recovery 00:16:31.778 ************************************ 00:16:31.778 19:06:49 ublk_recovery -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:16:31.778 * Looking for test storage... 00:16:31.778 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:16:31.778 19:06:49 ublk_recovery -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:16:31.778 19:06:49 ublk_recovery -- common/autotest_common.sh@1711 -- # lcov --version 00:16:31.778 19:06:49 ublk_recovery -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:16:32.039 19:06:49 ublk_recovery -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:16:32.039 19:06:49 ublk_recovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:32.039 19:06:49 ublk_recovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:32.039 19:06:49 ublk_recovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:32.039 19:06:49 ublk_recovery -- scripts/common.sh@336 -- # IFS=.-: 00:16:32.039 19:06:49 ublk_recovery -- scripts/common.sh@336 -- # read -ra ver1 00:16:32.039 19:06:49 ublk_recovery -- scripts/common.sh@337 -- # IFS=.-: 00:16:32.039 19:06:49 ublk_recovery -- scripts/common.sh@337 -- # read -ra ver2 00:16:32.039 19:06:49 ublk_recovery -- scripts/common.sh@338 -- # local 'op=<' 00:16:32.039 19:06:49 ublk_recovery -- scripts/common.sh@340 -- # ver1_l=2 00:16:32.039 19:06:49 ublk_recovery -- scripts/common.sh@341 -- # ver2_l=1 00:16:32.039 19:06:49 ublk_recovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:32.039 19:06:49 ublk_recovery -- scripts/common.sh@344 -- # case "$op" in 00:16:32.039 19:06:49 ublk_recovery -- scripts/common.sh@345 -- # : 1 00:16:32.039 19:06:49 ublk_recovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:32.039 19:06:49 ublk_recovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:32.039 19:06:49 ublk_recovery -- scripts/common.sh@365 -- # decimal 1 00:16:32.039 19:06:49 ublk_recovery -- scripts/common.sh@353 -- # local d=1 00:16:32.039 19:06:49 ublk_recovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:32.039 19:06:49 ublk_recovery -- scripts/common.sh@355 -- # echo 1 00:16:32.039 19:06:49 ublk_recovery -- scripts/common.sh@365 -- # ver1[v]=1 00:16:32.039 19:06:49 ublk_recovery -- scripts/common.sh@366 -- # decimal 2 00:16:32.039 19:06:49 ublk_recovery -- scripts/common.sh@353 -- # local d=2 00:16:32.039 19:06:49 ublk_recovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:32.039 19:06:49 ublk_recovery -- scripts/common.sh@355 -- # echo 2 00:16:32.039 19:06:49 ublk_recovery -- scripts/common.sh@366 -- # ver2[v]=2 00:16:32.039 19:06:49 ublk_recovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:32.039 19:06:49 ublk_recovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:32.039 19:06:49 ublk_recovery -- scripts/common.sh@368 -- # return 0 00:16:32.039 19:06:49 ublk_recovery -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:32.039 19:06:49 ublk_recovery -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:16:32.039 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:32.039 --rc genhtml_branch_coverage=1 00:16:32.039 --rc genhtml_function_coverage=1 00:16:32.039 --rc genhtml_legend=1 00:16:32.039 --rc geninfo_all_blocks=1 00:16:32.039 --rc geninfo_unexecuted_blocks=1 00:16:32.039 00:16:32.039 ' 00:16:32.039 19:06:49 ublk_recovery -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:16:32.039 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:32.039 --rc genhtml_branch_coverage=1 00:16:32.039 --rc genhtml_function_coverage=1 00:16:32.039 --rc genhtml_legend=1 00:16:32.039 --rc geninfo_all_blocks=1 00:16:32.039 --rc geninfo_unexecuted_blocks=1 00:16:32.039 00:16:32.039 ' 00:16:32.039 19:06:49 ublk_recovery -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:16:32.039 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:32.039 --rc genhtml_branch_coverage=1 00:16:32.039 --rc genhtml_function_coverage=1 00:16:32.039 --rc genhtml_legend=1 00:16:32.039 --rc geninfo_all_blocks=1 00:16:32.039 --rc geninfo_unexecuted_blocks=1 00:16:32.039 00:16:32.039 ' 00:16:32.039 19:06:49 ublk_recovery -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:16:32.039 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:32.039 --rc genhtml_branch_coverage=1 00:16:32.039 --rc genhtml_function_coverage=1 00:16:32.039 --rc genhtml_legend=1 00:16:32.039 --rc geninfo_all_blocks=1 00:16:32.039 --rc geninfo_unexecuted_blocks=1 00:16:32.039 00:16:32.039 ' 00:16:32.039 19:06:49 ublk_recovery -- ublk/ublk_recovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:16:32.039 19:06:49 ublk_recovery -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:16:32.039 19:06:49 ublk_recovery -- lvol/common.sh@7 -- # MALLOC_BS=512 00:16:32.039 19:06:49 ublk_recovery -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:16:32.039 19:06:49 ublk_recovery -- lvol/common.sh@9 -- # AIO_BS=4096 00:16:32.039 19:06:49 ublk_recovery -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:16:32.039 19:06:49 ublk_recovery -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:16:32.040 19:06:49 ublk_recovery -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:16:32.040 19:06:49 ublk_recovery -- lvol/common.sh@14 
-- # LVS_DEFAULT_CAPACITY=130023424 00:16:32.040 19:06:49 ublk_recovery -- ublk/ublk_recovery.sh@11 -- # modprobe ublk_drv 00:16:32.040 19:06:49 ublk_recovery -- ublk/ublk_recovery.sh@19 -- # spdk_pid=85005 00:16:32.040 19:06:49 ublk_recovery -- ublk/ublk_recovery.sh@20 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:32.040 19:06:49 ublk_recovery -- ublk/ublk_recovery.sh@21 -- # waitforlisten 85005 00:16:32.040 19:06:49 ublk_recovery -- common/autotest_common.sh@835 -- # '[' -z 85005 ']' 00:16:32.040 19:06:49 ublk_recovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:32.040 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:32.040 19:06:49 ublk_recovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:32.040 19:06:49 ublk_recovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:32.040 19:06:49 ublk_recovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:32.040 19:06:49 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:16:32.040 19:06:49 ublk_recovery -- ublk/ublk_recovery.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:16:32.040 [2024-12-05 19:06:49.466452] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 22.11.4 initialization... 00:16:32.040 [2024-12-05 19:06:49.466569] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85005 ] 00:16:32.298 [2024-12-05 19:06:49.608150] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:16:32.298 [2024-12-05 19:06:49.631935] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:32.298 [2024-12-05 19:06:49.631990] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:32.864 19:06:50 ublk_recovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:32.864 19:06:50 ublk_recovery -- common/autotest_common.sh@868 -- # return 0 00:16:32.864 19:06:50 ublk_recovery -- ublk/ublk_recovery.sh@23 -- # rpc_cmd ublk_create_target 00:16:32.864 19:06:50 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.864 19:06:50 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:16:32.864 [2024-12-05 19:06:50.260272] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:16:32.864 [2024-12-05 19:06:50.261558] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:16:32.864 19:06:50 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.864 19:06:50 ublk_recovery -- ublk/ublk_recovery.sh@24 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:16:32.864 19:06:50 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.864 19:06:50 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:16:32.864 malloc0 00:16:32.864 19:06:50 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.864 19:06:50 ublk_recovery -- ublk/ublk_recovery.sh@25 -- # rpc_cmd ublk_start_disk malloc0 1 -q 2 -d 128 00:16:32.864 19:06:50 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.864 19:06:50 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:16:32.864 [2024-12-05 19:06:50.300373] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk1: bdev malloc0 num_queues 
2 queue_depth 128 00:16:32.864 [2024-12-05 19:06:50.300461] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 1 00:16:32.864 [2024-12-05 19:06:50.300468] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:16:32.864 [2024-12-05 19:06:50.300476] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:16:32.864 [2024-12-05 19:06:50.309370] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:16:32.864 [2024-12-05 19:06:50.309394] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:16:32.864 [2024-12-05 19:06:50.316273] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:16:32.864 [2024-12-05 19:06:50.316394] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:16:32.864 [2024-12-05 19:06:50.331278] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:16:32.864 1 00:16:32.864 19:06:50 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.864 19:06:50 ublk_recovery -- ublk/ublk_recovery.sh@27 -- # sleep 1 00:16:33.798 19:06:51 ublk_recovery -- ublk/ublk_recovery.sh@31 -- # fio_proc=85038 00:16:33.798 19:06:51 ublk_recovery -- ublk/ublk_recovery.sh@33 -- # sleep 5 00:16:33.798 19:06:51 ublk_recovery -- ublk/ublk_recovery.sh@30 -- # taskset -c 2-3 fio --name=fio_test --filename=/dev/ublkb1 --numjobs=1 --iodepth=128 --ioengine=libaio --rw=randrw --direct=1 --time_based --runtime=60 00:16:34.056 fio_test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:34.056 fio-3.35 00:16:34.056 Starting 1 process 00:16:39.318 19:06:56 ublk_recovery -- ublk/ublk_recovery.sh@36 -- # kill -9 85005 00:16:39.318 19:06:56 ublk_recovery -- ublk/ublk_recovery.sh@38 -- # sleep 5 00:16:44.604 /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh: line 38: 85005 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x3 -L ublk 00:16:44.604 19:07:01 ublk_recovery -- ublk/ublk_recovery.sh@42 -- # spdk_pid=85151 00:16:44.604 19:07:01 ublk_recovery -- ublk/ublk_recovery.sh@43 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:44.604 19:07:01 ublk_recovery -- ublk/ublk_recovery.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:16:44.604 19:07:01 ublk_recovery -- ublk/ublk_recovery.sh@44 -- # waitforlisten 85151 00:16:44.604 19:07:01 ublk_recovery -- common/autotest_common.sh@835 -- # '[' -z 85151 ']' 00:16:44.604 19:07:01 ublk_recovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:44.604 19:07:01 ublk_recovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:44.604 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:44.604 19:07:01 ublk_recovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:44.604 19:07:01 ublk_recovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:44.604 19:07:01 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:16:44.604 [2024-12-05 19:07:01.432415] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 22.11.4 initialization... 
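The recovery scenario exercised here is: start a disk, put it under I/O, kill the target hard, then hand the still-live kernel device to a fresh target process. A compressed sketch of the sequence the script runs (PIDs and variable names are illustrative; the fio command line and RPC calls are the ones traced in this log):

  taskset -c 2-3 fio --name=fio_test --filename=/dev/ublkb1 --numjobs=1 --iodepth=128 \
    --ioengine=libaio --rw=randrw --direct=1 --time_based --runtime=60 &
  fio_proc=$!
  kill -9 "$spdk_pid"                        # simulate a target crash mid-I/O
  "$SPDK_BIN_DIR/spdk_tgt" -m 0x3 -L ublk &  # start a replacement target
  rpc.py ublk_create_target
  rpc.py bdev_malloc_create -b malloc0 64 4096
  rpc.py ublk_recover_disk malloc0 1         # re-attach /dev/ublkb1 to the new target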
00:16:44.604 [2024-12-05 19:07:01.432556] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85151 ] 00:16:44.604 [2024-12-05 19:07:01.575995] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:16:44.604 [2024-12-05 19:07:01.601782] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:44.604 [2024-12-05 19:07:01.601885] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:44.862 19:07:02 ublk_recovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:44.862 19:07:02 ublk_recovery -- common/autotest_common.sh@868 -- # return 0 00:16:44.862 19:07:02 ublk_recovery -- ublk/ublk_recovery.sh@47 -- # rpc_cmd ublk_create_target 00:16:44.862 19:07:02 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.862 19:07:02 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:16:44.862 [2024-12-05 19:07:02.276270] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:16:44.862 [2024-12-05 19:07:02.277201] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:16:44.862 19:07:02 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.862 19:07:02 ublk_recovery -- ublk/ublk_recovery.sh@48 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:16:44.862 19:07:02 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.862 19:07:02 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:16:44.862 malloc0 00:16:44.862 19:07:02 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.862 19:07:02 ublk_recovery -- ublk/ublk_recovery.sh@49 -- # rpc_cmd ublk_recover_disk malloc0 1 00:16:44.862 19:07:02 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.862 19:07:02 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:16:44.862 [2024-12-05 19:07:02.308374] ublk.c:2106:ublk_start_disk_recovery: *NOTICE*: Recovering ublk 1 with bdev malloc0 00:16:44.862 [2024-12-05 19:07:02.308412] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:16:44.862 [2024-12-05 19:07:02.308419] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:16:44.862 [2024-12-05 19:07:02.316308] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:16:44.862 [2024-12-05 19:07:02.316325] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: Ublk 1 device state 2 00:16:44.863 [2024-12-05 19:07:02.316351] ublk.c:2035:ublk_ctrl_start_recovery: *DEBUG*: Recovering ublk 1, num queues 2, queue depth 128, flags 0xda 00:16:44.863 [2024-12-05 19:07:02.316402] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY 00:16:44.863 1 00:16:44.863 19:07:02 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.863 19:07:02 ublk_recovery -- ublk/ublk_recovery.sh@52 -- # wait 85038 00:16:44.863 [2024-12-05 19:07:02.324278] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY completed 00:16:44.863 [2024-12-05 19:07:02.330610] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY 00:16:44.863 [2024-12-05 19:07:02.338444] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY completed 00:16:44.863 [2024-12-05 
19:07:02.338461] ublk.c: 413:ublk_ctrl_process_cqe: *NOTICE*: Ublk 1 recover done successfully 00:17:41.091 00:17:41.091 fio_test: (groupid=0, jobs=1): err= 0: pid=85041: Thu Dec 5 19:07:51 2024 00:17:41.091 read: IOPS=25.7k, BW=100MiB/s (105MB/s)(6019MiB/60002msec) 00:17:41.091 slat (nsec): min=965, max=347160, avg=5070.59, stdev=1677.87 00:17:41.091 clat (usec): min=593, max=6001.8k, avg=2485.13, stdev=41290.96 00:17:41.091 lat (usec): min=598, max=6001.8k, avg=2490.20, stdev=41290.96 00:17:41.091 clat percentiles (usec): 00:17:41.091 | 1.00th=[ 1762], 5.00th=[ 1876], 10.00th=[ 1893], 20.00th=[ 1926], 00:17:41.091 | 30.00th=[ 1942], 40.00th=[ 1958], 50.00th=[ 1975], 60.00th=[ 2008], 00:17:41.091 | 70.00th=[ 2073], 80.00th=[ 2442], 90.00th=[ 2573], 95.00th=[ 3195], 00:17:41.091 | 99.00th=[ 5276], 99.50th=[ 5735], 99.90th=[ 7898], 99.95th=[12256], 00:17:41.091 | 99.99th=[13173] 00:17:41.091 bw ( KiB/s): min=24440, max=125768, per=100.00%, avg=113075.04, stdev=17443.77, samples=108 00:17:41.091 iops : min= 6110, max=31442, avg=28268.76, stdev=4360.94, samples=108 00:17:41.091 write: IOPS=25.6k, BW=100MiB/s (105MB/s)(6012MiB/60002msec); 0 zone resets 00:17:41.091 slat (nsec): min=977, max=1081.4k, avg=5109.70, stdev=1899.45 00:17:41.091 clat (usec): min=614, max=6001.9k, avg=2491.03, stdev=35863.03 00:17:41.091 lat (usec): min=618, max=6001.9k, avg=2496.14, stdev=35863.03 00:17:41.091 clat percentiles (usec): 00:17:41.091 | 1.00th=[ 1795], 5.00th=[ 1958], 10.00th=[ 1991], 20.00th=[ 2008], 00:17:41.091 | 30.00th=[ 2024], 40.00th=[ 2057], 50.00th=[ 2073], 60.00th=[ 2089], 00:17:41.091 | 70.00th=[ 2147], 80.00th=[ 2507], 90.00th=[ 2671], 95.00th=[ 3097], 00:17:41.091 | 99.00th=[ 5342], 99.50th=[ 5800], 99.90th=[ 7767], 99.95th=[11994], 00:17:41.091 | 99.99th=[13304] 00:17:41.091 bw ( KiB/s): min=24104, max=125608, per=100.00%, avg=112933.19, stdev=17381.75, samples=108 00:17:41.091 iops : min= 6026, max=31402, avg=28233.30, stdev=4345.44, samples=108 00:17:41.091 lat (usec) : 750=0.01% 00:17:41.091 lat (msec) : 2=37.64%, 4=59.22%, 10=3.08%, 20=0.05%, >=2000=0.01% 00:17:41.091 cpu : usr=5.83%, sys=27.15%, ctx=104236, majf=0, minf=14 00:17:41.091 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0% 00:17:41.091 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:41.091 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:41.091 issued rwts: total=1540849,1538993,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:41.091 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:41.091 00:17:41.091 Run status group 0 (all jobs): 00:17:41.091 READ: bw=100MiB/s (105MB/s), 100MiB/s-100MiB/s (105MB/s-105MB/s), io=6019MiB (6311MB), run=60002-60002msec 00:17:41.091 WRITE: bw=100MiB/s (105MB/s), 100MiB/s-100MiB/s (105MB/s-105MB/s), io=6012MiB (6304MB), run=60002-60002msec 00:17:41.091 00:17:41.091 Disk stats (read/write): 00:17:41.091 ublkb1: ios=1537563/1535614, merge=0/0, ticks=3723894/3602136, in_queue=7326031, util=99.89% 00:17:41.091 19:07:51 ublk_recovery -- ublk/ublk_recovery.sh@55 -- # rpc_cmd ublk_stop_disk 1 00:17:41.091 19:07:51 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:41.091 19:07:51 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:17:41.091 [2024-12-05 19:07:51.601072] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:17:41.091 [2024-12-05 19:07:51.635386] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed 
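The pass signal for the recovery test is the fio summary above: the job completes its full 60 s run across the target restart (err=0, roughly 25.7k read / 25.6k write IOPS, ublkb1 util 99.89%), so no I/O was lost. A minimal sketch of the corresponding assertion (fio_proc is the PID saved earlier in this log):

  wait "$fio_proc"                  # blocks until the backgrounded 60 s fio run finishes
  echo "fio exited with status $?"  # a non-zero status here would indicate failed I/O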
00:17:41.091 [2024-12-05 19:07:51.635536] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:17:41.091 [2024-12-05 19:07:51.638269] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:17:41.091 [2024-12-05 19:07:51.638372] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:17:41.091 [2024-12-05 19:07:51.638383] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:17:41.091 19:07:51 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:41.091 19:07:51 ublk_recovery -- ublk/ublk_recovery.sh@56 -- # rpc_cmd ublk_destroy_target 00:17:41.091 19:07:51 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:41.091 19:07:51 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:17:41.091 [2024-12-05 19:07:51.646346] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:17:41.091 [2024-12-05 19:07:51.647539] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:17:41.091 [2024-12-05 19:07:51.647570] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:17:41.091 19:07:51 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:41.091 19:07:51 ublk_recovery -- ublk/ublk_recovery.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:17:41.091 19:07:51 ublk_recovery -- ublk/ublk_recovery.sh@59 -- # cleanup 00:17:41.091 19:07:51 ublk_recovery -- ublk/ublk_recovery.sh@14 -- # killprocess 85151 00:17:41.091 19:07:51 ublk_recovery -- common/autotest_common.sh@954 -- # '[' -z 85151 ']' 00:17:41.091 19:07:51 ublk_recovery -- common/autotest_common.sh@958 -- # kill -0 85151 00:17:41.091 19:07:51 ublk_recovery -- common/autotest_common.sh@959 -- # uname 00:17:41.091 19:07:51 ublk_recovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:41.091 19:07:51 ublk_recovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85151 00:17:41.091 killing process with pid 85151 00:17:41.091 19:07:51 ublk_recovery -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:41.091 19:07:51 ublk_recovery -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:41.091 19:07:51 ublk_recovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85151' 00:17:41.091 19:07:51 ublk_recovery -- common/autotest_common.sh@973 -- # kill 85151 00:17:41.091 19:07:51 ublk_recovery -- common/autotest_common.sh@978 -- # wait 85151 00:17:41.091 [2024-12-05 19:07:51.846380] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:17:41.091 [2024-12-05 19:07:51.846427] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:17:41.091 00:17:41.091 real 1m2.877s 00:17:41.091 user 1m41.644s 00:17:41.091 sys 0m33.309s 00:17:41.091 19:07:52 ublk_recovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:41.091 19:07:52 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:17:41.091 ************************************ 00:17:41.091 END TEST ublk_recovery 00:17:41.091 ************************************ 00:17:41.091 19:07:52 -- spdk/autotest.sh@251 -- # [[ 0 -eq 1 ]] 00:17:41.091 19:07:52 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:17:41.091 19:07:52 -- spdk/autotest.sh@260 -- # timing_exit lib 00:17:41.091 19:07:52 -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:41.091 19:07:52 -- common/autotest_common.sh@10 -- # set +x 00:17:41.091 19:07:52 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:17:41.091 19:07:52 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 00:17:41.091 19:07:52 -- spdk/autotest.sh@276 -- # '[' 0 -eq 1 ']' 
00:17:41.091 19:07:52 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:17:41.091 19:07:52 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:17:41.091 19:07:52 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:17:41.091 19:07:52 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:17:41.091 19:07:52 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:17:41.091 19:07:52 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:17:41.091 19:07:52 -- spdk/autotest.sh@342 -- # '[' 1 -eq 1 ']' 00:17:41.091 19:07:52 -- spdk/autotest.sh@343 -- # run_test ftl /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:17:41.091 19:07:52 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:17:41.091 19:07:52 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:41.091 19:07:52 -- common/autotest_common.sh@10 -- # set +x 00:17:41.092 ************************************ 00:17:41.092 START TEST ftl 00:17:41.092 ************************************ 00:17:41.092 19:07:52 ftl -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:17:41.092 * Looking for test storage... 00:17:41.092 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:17:41.092 19:07:52 ftl -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:17:41.092 19:07:52 ftl -- common/autotest_common.sh@1711 -- # lcov --version 00:17:41.092 19:07:52 ftl -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:17:41.092 19:07:52 ftl -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:17:41.092 19:07:52 ftl -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:41.092 19:07:52 ftl -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:41.092 19:07:52 ftl -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:41.092 19:07:52 ftl -- scripts/common.sh@336 -- # IFS=.-: 00:17:41.092 19:07:52 ftl -- scripts/common.sh@336 -- # read -ra ver1 00:17:41.092 19:07:52 ftl -- scripts/common.sh@337 -- # IFS=.-: 00:17:41.092 19:07:52 ftl -- scripts/common.sh@337 -- # read -ra ver2 00:17:41.092 19:07:52 ftl -- scripts/common.sh@338 -- # local 'op=<' 00:17:41.092 19:07:52 ftl -- scripts/common.sh@340 -- # ver1_l=2 00:17:41.092 19:07:52 ftl -- scripts/common.sh@341 -- # ver2_l=1 00:17:41.092 19:07:52 ftl -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:41.092 19:07:52 ftl -- scripts/common.sh@344 -- # case "$op" in 00:17:41.092 19:07:52 ftl -- scripts/common.sh@345 -- # : 1 00:17:41.092 19:07:52 ftl -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:41.092 19:07:52 ftl -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:41.092 19:07:52 ftl -- scripts/common.sh@365 -- # decimal 1 00:17:41.092 19:07:52 ftl -- scripts/common.sh@353 -- # local d=1 00:17:41.092 19:07:52 ftl -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:41.092 19:07:52 ftl -- scripts/common.sh@355 -- # echo 1 00:17:41.092 19:07:52 ftl -- scripts/common.sh@365 -- # ver1[v]=1 00:17:41.092 19:07:52 ftl -- scripts/common.sh@366 -- # decimal 2 00:17:41.092 19:07:52 ftl -- scripts/common.sh@353 -- # local d=2 00:17:41.092 19:07:52 ftl -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:41.092 19:07:52 ftl -- scripts/common.sh@355 -- # echo 2 00:17:41.092 19:07:52 ftl -- scripts/common.sh@366 -- # ver2[v]=2 00:17:41.092 19:07:52 ftl -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:41.092 19:07:52 ftl -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:41.092 19:07:52 ftl -- scripts/common.sh@368 -- # return 0 00:17:41.092 19:07:52 ftl -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:41.092 19:07:52 ftl -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:17:41.092 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:41.092 --rc genhtml_branch_coverage=1 00:17:41.092 --rc genhtml_function_coverage=1 00:17:41.092 --rc genhtml_legend=1 00:17:41.092 --rc geninfo_all_blocks=1 00:17:41.092 --rc geninfo_unexecuted_blocks=1 00:17:41.092 00:17:41.092 ' 00:17:41.092 19:07:52 ftl -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:17:41.092 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:41.092 --rc genhtml_branch_coverage=1 00:17:41.092 --rc genhtml_function_coverage=1 00:17:41.092 --rc genhtml_legend=1 00:17:41.092 --rc geninfo_all_blocks=1 00:17:41.092 --rc geninfo_unexecuted_blocks=1 00:17:41.092 00:17:41.092 ' 00:17:41.092 19:07:52 ftl -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:17:41.092 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:41.092 --rc genhtml_branch_coverage=1 00:17:41.092 --rc genhtml_function_coverage=1 00:17:41.092 --rc genhtml_legend=1 00:17:41.092 --rc geninfo_all_blocks=1 00:17:41.092 --rc geninfo_unexecuted_blocks=1 00:17:41.092 00:17:41.092 ' 00:17:41.092 19:07:52 ftl -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:17:41.092 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:41.092 --rc genhtml_branch_coverage=1 00:17:41.092 --rc genhtml_function_coverage=1 00:17:41.092 --rc genhtml_legend=1 00:17:41.092 --rc geninfo_all_blocks=1 00:17:41.092 --rc geninfo_unexecuted_blocks=1 00:17:41.092 00:17:41.092 ' 00:17:41.092 19:07:52 ftl -- ftl/ftl.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:17:41.092 19:07:52 ftl -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:17:41.092 19:07:52 ftl -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:17:41.092 19:07:52 ftl -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:17:41.092 19:07:52 ftl -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
00:17:41.092 19:07:52 ftl -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:17:41.092 19:07:52 ftl -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:41.092 19:07:52 ftl -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:17:41.092 19:07:52 ftl -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:17:41.092 19:07:52 ftl -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:41.092 19:07:52 ftl -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:41.092 19:07:52 ftl -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:17:41.092 19:07:52 ftl -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:17:41.092 19:07:52 ftl -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:17:41.092 19:07:52 ftl -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:17:41.092 19:07:52 ftl -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:17:41.092 19:07:52 ftl -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:17:41.092 19:07:52 ftl -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:41.092 19:07:52 ftl -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:41.092 19:07:52 ftl -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:17:41.092 19:07:52 ftl -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:17:41.092 19:07:52 ftl -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:17:41.092 19:07:52 ftl -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:17:41.092 19:07:52 ftl -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:17:41.092 19:07:52 ftl -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:17:41.092 19:07:52 ftl -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:17:41.092 19:07:52 ftl -- ftl/common.sh@23 -- # spdk_ini_pid= 00:17:41.092 19:07:52 ftl -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:17:41.092 19:07:52 ftl -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:17:41.092 19:07:52 ftl -- ftl/ftl.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:41.092 19:07:52 ftl -- ftl/ftl.sh@31 -- # trap at_ftl_exit SIGINT SIGTERM EXIT 00:17:41.092 19:07:52 ftl -- ftl/ftl.sh@34 -- # PCI_ALLOWED= 00:17:41.092 19:07:52 ftl -- ftl/ftl.sh@34 -- # PCI_BLOCKED= 00:17:41.092 19:07:52 ftl -- ftl/ftl.sh@34 -- # DRIVER_OVERRIDE= 00:17:41.092 19:07:52 ftl -- ftl/ftl.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:17:41.092 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:17:41.092 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:17:41.092 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:17:41.092 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:17:41.092 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:17:41.092 19:07:52 ftl -- ftl/ftl.sh@37 -- # spdk_tgt_pid=85947 00:17:41.092 19:07:52 ftl -- ftl/ftl.sh@38 -- # waitforlisten 85947 00:17:41.093 19:07:52 ftl -- ftl/ftl.sh@36 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:17:41.093 19:07:52 ftl -- common/autotest_common.sh@835 -- # '[' -z 85947 ']' 00:17:41.093 19:07:52 ftl -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:41.093 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:41.093 19:07:52 ftl -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:41.093 19:07:52 ftl -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:41.093 19:07:52 ftl -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:41.093 19:07:52 ftl -- common/autotest_common.sh@10 -- # set +x 00:17:41.093 [2024-12-05 19:07:52.932373] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 22.11.4 initialization... 00:17:41.093 [2024-12-05 19:07:52.932495] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85947 ] 00:17:41.093 [2024-12-05 19:07:53.077400] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:41.093 [2024-12-05 19:07:53.097419] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:41.093 19:07:53 ftl -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:41.093 19:07:53 ftl -- common/autotest_common.sh@868 -- # return 0 00:17:41.093 19:07:53 ftl -- ftl/ftl.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_set_options -d 00:17:41.093 19:07:53 ftl -- ftl/ftl.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:17:41.093 19:07:54 ftl -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:17:41.093 19:07:54 ftl -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config -j /dev/fd/62 00:17:41.093 19:07:54 ftl -- ftl/ftl.sh@46 -- # cache_size=1310720 00:17:41.093 19:07:54 ftl -- ftl/ftl.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:17:41.093 19:07:54 ftl -- ftl/ftl.sh@47 -- # jq -r '.[] | select(.md_size==64 and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:17:41.093 19:07:55 ftl -- ftl/ftl.sh@47 -- # cache_disks=0000:00:10.0 00:17:41.093 19:07:55 ftl -- ftl/ftl.sh@48 -- # for disk in $cache_disks 00:17:41.093 19:07:55 ftl -- ftl/ftl.sh@49 -- # nv_cache=0000:00:10.0 00:17:41.093 19:07:55 ftl -- ftl/ftl.sh@50 -- # break 00:17:41.093 19:07:55 ftl -- ftl/ftl.sh@53 -- # '[' -z 0000:00:10.0 ']' 00:17:41.093 19:07:55 ftl -- ftl/ftl.sh@59 -- # base_size=1310720 00:17:41.093 19:07:55 ftl -- ftl/ftl.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:17:41.093 19:07:55 ftl -- ftl/ftl.sh@60 -- # jq -r '.[] | select(.driver_specific.nvme[0].pci_address!="0000:00:10.0" and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:17:41.093 19:07:55 ftl -- ftl/ftl.sh@60 -- # base_disks=0000:00:11.0 00:17:41.093 19:07:55 ftl -- ftl/ftl.sh@61 -- # for disk in $base_disks 00:17:41.093 19:07:55 ftl -- ftl/ftl.sh@62 -- # device=0000:00:11.0 00:17:41.093 19:07:55 ftl -- ftl/ftl.sh@63 -- # break 00:17:41.093 19:07:55 ftl -- ftl/ftl.sh@66 -- # killprocess 85947 00:17:41.093 19:07:55 ftl -- common/autotest_common.sh@954 -- # '[' -z 85947 ']' 00:17:41.093 19:07:55 ftl -- common/autotest_common.sh@958 -- # kill -0 85947 00:17:41.093 19:07:55 ftl -- common/autotest_common.sh@959 -- # uname 00:17:41.093 19:07:55 ftl -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:41.093 19:07:55 ftl -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85947 00:17:41.093 killing process with pid 85947 00:17:41.093 19:07:55 ftl -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:41.093 19:07:55 ftl -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:41.093 19:07:55 ftl -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85947' 00:17:41.093 19:07:55 ftl -- common/autotest_common.sh@973 -- # kill 85947 00:17:41.093 19:07:55 ftl -- common/autotest_common.sh@978 -- # wait 85947 00:17:41.093 19:07:55 ftl -- ftl/ftl.sh@68 -- # '[' -z 0000:00:11.0 ']' 00:17:41.093 19:07:55 ftl -- ftl/ftl.sh@73 -- # run_test ftl_fio_basic /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic 00:17:41.093 19:07:55 ftl -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:17:41.093 19:07:55 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:41.093 19:07:55 ftl -- common/autotest_common.sh@10 -- # set +x 00:17:41.093 ************************************ 00:17:41.093 START TEST ftl_fio_basic 00:17:41.093 ************************************ 00:17:41.093 19:07:55 ftl.ftl_fio_basic -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic 00:17:41.093 * Looking for test storage... 00:17:41.093 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:17:41.093 19:07:55 ftl.ftl_fio_basic -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:17:41.093 19:07:55 ftl.ftl_fio_basic -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:17:41.093 19:07:55 ftl.ftl_fio_basic -- common/autotest_common.sh@1711 -- # lcov --version 00:17:41.093 19:07:55 ftl.ftl_fio_basic -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:17:41.093 19:07:55 ftl.ftl_fio_basic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:41.093 19:07:55 ftl.ftl_fio_basic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:41.093 19:07:55 ftl.ftl_fio_basic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:41.093 19:07:55 ftl.ftl_fio_basic -- scripts/common.sh@336 -- # IFS=.-: 00:17:41.093 19:07:55 ftl.ftl_fio_basic -- scripts/common.sh@336 -- # read -ra ver1 00:17:41.093 19:07:55 ftl.ftl_fio_basic -- scripts/common.sh@337 -- # IFS=.-: 00:17:41.093 19:07:55 ftl.ftl_fio_basic -- scripts/common.sh@337 -- # read -ra ver2 00:17:41.093 19:07:55 ftl.ftl_fio_basic -- scripts/common.sh@338 -- # local 'op=<' 00:17:41.093 19:07:55 ftl.ftl_fio_basic -- scripts/common.sh@340 -- # ver1_l=2 00:17:41.093 19:07:55 ftl.ftl_fio_basic -- scripts/common.sh@341 -- # ver2_l=1 00:17:41.093 19:07:55 ftl.ftl_fio_basic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:41.093 19:07:55 ftl.ftl_fio_basic -- scripts/common.sh@344 -- # case "$op" in 00:17:41.093 19:07:55 ftl.ftl_fio_basic -- scripts/common.sh@345 -- # : 1 00:17:41.093 19:07:55 ftl.ftl_fio_basic -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:41.093 19:07:55 ftl.ftl_fio_basic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:41.093 19:07:55 ftl.ftl_fio_basic -- scripts/common.sh@365 -- # decimal 1 00:17:41.093 19:07:55 ftl.ftl_fio_basic -- scripts/common.sh@353 -- # local d=1 00:17:41.093 19:07:55 ftl.ftl_fio_basic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:41.093 19:07:55 ftl.ftl_fio_basic -- scripts/common.sh@355 -- # echo 1 00:17:41.093 19:07:55 ftl.ftl_fio_basic -- scripts/common.sh@365 -- # ver1[v]=1 00:17:41.093 19:07:55 ftl.ftl_fio_basic -- scripts/common.sh@366 -- # decimal 2 00:17:41.093 19:07:55 ftl.ftl_fio_basic -- scripts/common.sh@353 -- # local d=2 00:17:41.093 19:07:55 ftl.ftl_fio_basic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:41.093 19:07:55 ftl.ftl_fio_basic -- scripts/common.sh@355 -- # echo 2 00:17:41.093 19:07:55 ftl.ftl_fio_basic -- scripts/common.sh@366 -- # ver2[v]=2 00:17:41.093 19:07:55 ftl.ftl_fio_basic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:41.093 19:07:55 ftl.ftl_fio_basic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:41.093 19:07:55 ftl.ftl_fio_basic -- scripts/common.sh@368 -- # return 0 00:17:41.093 19:07:55 ftl.ftl_fio_basic -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:41.093 19:07:55 ftl.ftl_fio_basic -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:17:41.093 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:41.093 --rc genhtml_branch_coverage=1 00:17:41.093 --rc genhtml_function_coverage=1 00:17:41.093 --rc genhtml_legend=1 00:17:41.093 --rc geninfo_all_blocks=1 00:17:41.093 --rc geninfo_unexecuted_blocks=1 00:17:41.093 00:17:41.093 ' 00:17:41.093 19:07:55 ftl.ftl_fio_basic -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:17:41.093 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:41.093 --rc genhtml_branch_coverage=1 00:17:41.093 --rc genhtml_function_coverage=1 00:17:41.093 --rc genhtml_legend=1 00:17:41.093 --rc geninfo_all_blocks=1 00:17:41.093 --rc geninfo_unexecuted_blocks=1 00:17:41.093 00:17:41.093 ' 00:17:41.093 19:07:55 ftl.ftl_fio_basic -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:17:41.094 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:41.094 --rc genhtml_branch_coverage=1 00:17:41.094 --rc genhtml_function_coverage=1 00:17:41.094 --rc genhtml_legend=1 00:17:41.094 --rc geninfo_all_blocks=1 00:17:41.094 --rc geninfo_unexecuted_blocks=1 00:17:41.094 00:17:41.094 ' 00:17:41.094 19:07:55 ftl.ftl_fio_basic -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:17:41.094 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:41.094 --rc genhtml_branch_coverage=1 00:17:41.094 --rc genhtml_function_coverage=1 00:17:41.094 --rc genhtml_legend=1 00:17:41.094 --rc geninfo_all_blocks=1 00:17:41.094 --rc geninfo_unexecuted_blocks=1 00:17:41.094 00:17:41.094 ' 00:17:41.094 19:07:55 ftl.ftl_fio_basic -- ftl/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:17:41.094 19:07:55 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 00:17:41.094 19:07:55 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:17:41.094 19:07:55 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:17:41.094 19:07:55 ftl.ftl_fio_basic -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
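[Annotation] The lcov probe traced above (scripts/common.sh@333-368) is a plain element-wise version comparison: both version strings are split on '.', '-' and ':' into arrays and compared field by field, with missing fields counting as 0. Here 'lt 1.15 2' succeeds, i.e. the installed lcov predates 2.x, so the older '--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' spellings are exported in LCOV_OPTS and LCOV. A condensed sketch of that comparison follows; it is a reconstruction from the trace, not the verbatim scripts/common.sh source:

    lt() {
        local -a ver1 ver2
        IFS='.-:' read -ra ver1 <<< "$1"
        IFS='.-:' read -ra ver2 <<< "$2"
        local v
        for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
            ((${ver1[v]:-0} < ${ver2[v]:-0})) && return 0   # first lower field decides
            ((${ver1[v]:-0} > ${ver2[v]:-0})) && return 1
        done
        return 1                                            # equal is not less-than
    }
    lt 1.15 2 && echo 'lcov older than 2.x'                 # the branch taken in this run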
00:17:41.094 19:07:55 ftl.ftl_fio_basic -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:17:41.094 19:07:55 ftl.ftl_fio_basic -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:41.094 19:07:55 ftl.ftl_fio_basic -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:17:41.094 19:07:55 ftl.ftl_fio_basic -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:17:41.094 19:07:55 ftl.ftl_fio_basic -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:41.094 19:07:55 ftl.ftl_fio_basic -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:41.094 19:07:55 ftl.ftl_fio_basic -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:17:41.094 19:07:55 ftl.ftl_fio_basic -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:17:41.094 19:07:55 ftl.ftl_fio_basic -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:17:41.094 19:07:55 ftl.ftl_fio_basic -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:17:41.094 19:07:55 ftl.ftl_fio_basic -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:17:41.094 19:07:55 ftl.ftl_fio_basic -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:17:41.094 19:07:55 ftl.ftl_fio_basic -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:41.094 19:07:55 ftl.ftl_fio_basic -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:41.094 19:07:55 ftl.ftl_fio_basic -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:17:41.094 19:07:55 ftl.ftl_fio_basic -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:17:41.094 19:07:55 ftl.ftl_fio_basic -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:17:41.094 19:07:55 ftl.ftl_fio_basic -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:17:41.094 19:07:55 ftl.ftl_fio_basic -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:17:41.094 19:07:55 ftl.ftl_fio_basic -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:17:41.094 19:07:55 ftl.ftl_fio_basic -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:17:41.094 19:07:55 ftl.ftl_fio_basic -- ftl/common.sh@23 -- # spdk_ini_pid= 00:17:41.094 19:07:55 ftl.ftl_fio_basic -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:17:41.094 19:07:55 ftl.ftl_fio_basic -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:17:41.094 19:07:55 ftl.ftl_fio_basic -- ftl/fio.sh@11 -- # declare -A suite 00:17:41.094 19:07:55 ftl.ftl_fio_basic -- ftl/fio.sh@12 -- # suite['basic']='randw-verify randw-verify-j2 randw-verify-depth128' 00:17:41.094 19:07:55 ftl.ftl_fio_basic -- ftl/fio.sh@13 -- # suite['extended']='drive-prep randw-verify-qd128-ext randw-verify-qd2048-ext randw randr randrw unmap' 00:17:41.094 19:07:55 ftl.ftl_fio_basic -- ftl/fio.sh@14 -- # suite['nightly']='drive-prep randw-verify-qd256-nght randw-verify-qd256-nght randw-verify-qd256-nght' 00:17:41.094 19:07:55 ftl.ftl_fio_basic -- ftl/fio.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:41.094 19:07:55 ftl.ftl_fio_basic -- ftl/fio.sh@23 -- # device=0000:00:11.0 00:17:41.094 19:07:55 ftl.ftl_fio_basic -- ftl/fio.sh@24 -- # cache_device=0000:00:10.0 00:17:41.094 19:07:55 ftl.ftl_fio_basic -- ftl/fio.sh@25 -- # tests='randw-verify randw-verify-j2 
randw-verify-depth128' 00:17:41.094 19:07:55 ftl.ftl_fio_basic -- ftl/fio.sh@26 -- # uuid= 00:17:41.094 19:07:55 ftl.ftl_fio_basic -- ftl/fio.sh@27 -- # timeout=240 00:17:41.094 19:07:55 ftl.ftl_fio_basic -- ftl/fio.sh@29 -- # [[ y != y ]] 00:17:41.094 19:07:55 ftl.ftl_fio_basic -- ftl/fio.sh@34 -- # '[' -z 'randw-verify randw-verify-j2 randw-verify-depth128' ']' 00:17:41.094 19:07:55 ftl.ftl_fio_basic -- ftl/fio.sh@39 -- # export FTL_BDEV_NAME=ftl0 00:17:41.094 19:07:55 ftl.ftl_fio_basic -- ftl/fio.sh@39 -- # FTL_BDEV_NAME=ftl0 00:17:41.094 19:07:55 ftl.ftl_fio_basic -- ftl/fio.sh@40 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:17:41.094 19:07:55 ftl.ftl_fio_basic -- ftl/fio.sh@40 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:17:41.094 19:07:55 ftl.ftl_fio_basic -- ftl/fio.sh@42 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:17:41.094 19:07:55 ftl.ftl_fio_basic -- ftl/fio.sh@45 -- # svcpid=86063 00:17:41.094 19:07:55 ftl.ftl_fio_basic -- ftl/fio.sh@46 -- # waitforlisten 86063 00:17:41.094 19:07:55 ftl.ftl_fio_basic -- ftl/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 7 00:17:41.094 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:41.094 19:07:55 ftl.ftl_fio_basic -- common/autotest_common.sh@835 -- # '[' -z 86063 ']' 00:17:41.094 19:07:55 ftl.ftl_fio_basic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:41.094 19:07:55 ftl.ftl_fio_basic -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:41.094 19:07:55 ftl.ftl_fio_basic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:41.094 19:07:55 ftl.ftl_fio_basic -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:41.094 19:07:55 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:17:41.094 [2024-12-05 19:07:55.967824] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 22.11.4 initialization... 
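[Annotation] fio.sh@11-14, traced a few lines up, keep the suite-to-workload mapping in a bash associative array; the 'basic' argument forwarded from ftl.sh@73 selects the three verify workloads, and fio.sh@34 aborts if the lookup comes back empty. A minimal sketch of that lookup, assuming (as the invocation above shows) the suite name arrives as the third positional argument:

    declare -A suite
    suite['basic']='randw-verify randw-verify-j2 randw-verify-depth128'
    suite['extended']='drive-prep randw-verify-qd128-ext randw-verify-qd2048-ext randw randr randrw unmap'
    suite['nightly']='drive-prep randw-verify-qd256-nght randw-verify-qd256-nght randw-verify-qd256-nght'

    tests=${suite[$3]:-}                # basic -> 'randw-verify randw-verify-j2 randw-verify-depth128'
    [ -z "$tests" ] && { echo "invalid suite '$3'" >&2; exit 1; }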
00:17:41.094 [2024-12-05 19:07:55.968123] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86063 ] 00:17:41.094 [2024-12-05 19:07:56.114402] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:17:41.094 [2024-12-05 19:07:56.147111] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:41.094 [2024-12-05 19:07:56.147507] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:41.094 [2024-12-05 19:07:56.147448] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:41.094 19:07:56 ftl.ftl_fio_basic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:41.094 19:07:56 ftl.ftl_fio_basic -- common/autotest_common.sh@868 -- # return 0 00:17:41.094 19:07:56 ftl.ftl_fio_basic -- ftl/fio.sh@48 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:17:41.094 19:07:56 ftl.ftl_fio_basic -- ftl/common.sh@54 -- # local name=nvme0 00:17:41.094 19:07:56 ftl.ftl_fio_basic -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:17:41.094 19:07:56 ftl.ftl_fio_basic -- ftl/common.sh@56 -- # local size=103424 00:17:41.094 19:07:56 ftl.ftl_fio_basic -- ftl/common.sh@59 -- # local base_bdev 00:17:41.094 19:07:56 ftl.ftl_fio_basic -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:17:41.094 19:07:57 ftl.ftl_fio_basic -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:17:41.094 19:07:57 ftl.ftl_fio_basic -- ftl/common.sh@62 -- # local base_size 00:17:41.095 19:07:57 ftl.ftl_fio_basic -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:17:41.095 19:07:57 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:17:41.095 19:07:57 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:17:41.095 19:07:57 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:17:41.095 19:07:57 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:17:41.095 19:07:57 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:17:41.095 19:07:57 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:17:41.095 { 00:17:41.095 "name": "nvme0n1", 00:17:41.095 "aliases": [ 00:17:41.095 "5a55a55c-ac4b-443f-b744-65f8d1f03f7f" 00:17:41.095 ], 00:17:41.095 "product_name": "NVMe disk", 00:17:41.095 "block_size": 4096, 00:17:41.095 "num_blocks": 1310720, 00:17:41.095 "uuid": "5a55a55c-ac4b-443f-b744-65f8d1f03f7f", 00:17:41.095 "numa_id": -1, 00:17:41.095 "assigned_rate_limits": { 00:17:41.095 "rw_ios_per_sec": 0, 00:17:41.095 "rw_mbytes_per_sec": 0, 00:17:41.095 "r_mbytes_per_sec": 0, 00:17:41.095 "w_mbytes_per_sec": 0 00:17:41.095 }, 00:17:41.095 "claimed": false, 00:17:41.095 "zoned": false, 00:17:41.095 "supported_io_types": { 00:17:41.095 "read": true, 00:17:41.095 "write": true, 00:17:41.095 "unmap": true, 00:17:41.095 "flush": true, 00:17:41.095 "reset": true, 00:17:41.095 "nvme_admin": true, 00:17:41.095 "nvme_io": true, 00:17:41.095 "nvme_io_md": false, 00:17:41.095 "write_zeroes": true, 00:17:41.095 "zcopy": false, 00:17:41.095 "get_zone_info": false, 00:17:41.095 "zone_management": false, 00:17:41.095 "zone_append": false, 00:17:41.095 "compare": true, 00:17:41.095 "compare_and_write": false, 00:17:41.095 "abort": true, 00:17:41.095 
"seek_hole": false, 00:17:41.095 "seek_data": false, 00:17:41.095 "copy": true, 00:17:41.095 "nvme_iov_md": false 00:17:41.095 }, 00:17:41.095 "driver_specific": { 00:17:41.095 "nvme": [ 00:17:41.095 { 00:17:41.095 "pci_address": "0000:00:11.0", 00:17:41.095 "trid": { 00:17:41.095 "trtype": "PCIe", 00:17:41.095 "traddr": "0000:00:11.0" 00:17:41.095 }, 00:17:41.095 "ctrlr_data": { 00:17:41.095 "cntlid": 0, 00:17:41.095 "vendor_id": "0x1b36", 00:17:41.095 "model_number": "QEMU NVMe Ctrl", 00:17:41.095 "serial_number": "12341", 00:17:41.095 "firmware_revision": "8.0.0", 00:17:41.095 "subnqn": "nqn.2019-08.org.qemu:12341", 00:17:41.095 "oacs": { 00:17:41.095 "security": 0, 00:17:41.095 "format": 1, 00:17:41.095 "firmware": 0, 00:17:41.095 "ns_manage": 1 00:17:41.095 }, 00:17:41.095 "multi_ctrlr": false, 00:17:41.095 "ana_reporting": false 00:17:41.095 }, 00:17:41.095 "vs": { 00:17:41.095 "nvme_version": "1.4" 00:17:41.095 }, 00:17:41.095 "ns_data": { 00:17:41.095 "id": 1, 00:17:41.095 "can_share": false 00:17:41.095 } 00:17:41.095 } 00:17:41.095 ], 00:17:41.095 "mp_policy": "active_passive" 00:17:41.095 } 00:17:41.095 } 00:17:41.095 ]' 00:17:41.095 19:07:57 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:17:41.095 19:07:57 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:17:41.095 19:07:57 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:17:41.095 19:07:57 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=1310720 00:17:41.095 19:07:57 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:17:41.095 19:07:57 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 5120 00:17:41.095 19:07:57 ftl.ftl_fio_basic -- ftl/common.sh@63 -- # base_size=5120 00:17:41.095 19:07:57 ftl.ftl_fio_basic -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:17:41.095 19:07:57 ftl.ftl_fio_basic -- ftl/common.sh@67 -- # clear_lvols 00:17:41.095 19:07:57 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:17:41.095 19:07:57 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:17:41.095 19:07:57 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # stores= 00:17:41.095 19:07:57 ftl.ftl_fio_basic -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:17:41.095 19:07:57 ftl.ftl_fio_basic -- ftl/common.sh@68 -- # lvs=176315a6-7f27-4ebd-abc8-2cb891891970 00:17:41.095 19:07:57 ftl.ftl_fio_basic -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 176315a6-7f27-4ebd-abc8-2cb891891970 00:17:41.095 19:07:58 ftl.ftl_fio_basic -- ftl/fio.sh@48 -- # split_bdev=29c92119-de4e-47c7-b89f-833249c9f4f4 00:17:41.095 19:07:58 ftl.ftl_fio_basic -- ftl/fio.sh@49 -- # create_nv_cache_bdev nvc0 0000:00:10.0 29c92119-de4e-47c7-b89f-833249c9f4f4 00:17:41.095 19:07:58 ftl.ftl_fio_basic -- ftl/common.sh@35 -- # local name=nvc0 00:17:41.095 19:07:58 ftl.ftl_fio_basic -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:17:41.095 19:07:58 ftl.ftl_fio_basic -- ftl/common.sh@37 -- # local base_bdev=29c92119-de4e-47c7-b89f-833249c9f4f4 00:17:41.095 19:07:58 ftl.ftl_fio_basic -- ftl/common.sh@38 -- # local cache_size= 00:17:41.095 19:07:58 ftl.ftl_fio_basic -- ftl/common.sh@41 -- # get_bdev_size 29c92119-de4e-47c7-b89f-833249c9f4f4 00:17:41.095 19:07:58 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bdev_name=29c92119-de4e-47c7-b89f-833249c9f4f4 
00:17:41.095 19:07:58 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:17:41.095 19:07:58 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:17:41.095 19:07:58 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:17:41.095 19:07:58 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 29c92119-de4e-47c7-b89f-833249c9f4f4 00:17:41.095 19:07:58 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:17:41.095 { 00:17:41.095 "name": "29c92119-de4e-47c7-b89f-833249c9f4f4", 00:17:41.095 "aliases": [ 00:17:41.095 "lvs/nvme0n1p0" 00:17:41.095 ], 00:17:41.095 "product_name": "Logical Volume", 00:17:41.095 "block_size": 4096, 00:17:41.095 "num_blocks": 26476544, 00:17:41.095 "uuid": "29c92119-de4e-47c7-b89f-833249c9f4f4", 00:17:41.095 "assigned_rate_limits": { 00:17:41.095 "rw_ios_per_sec": 0, 00:17:41.095 "rw_mbytes_per_sec": 0, 00:17:41.095 "r_mbytes_per_sec": 0, 00:17:41.095 "w_mbytes_per_sec": 0 00:17:41.095 }, 00:17:41.095 "claimed": false, 00:17:41.095 "zoned": false, 00:17:41.095 "supported_io_types": { 00:17:41.095 "read": true, 00:17:41.095 "write": true, 00:17:41.095 "unmap": true, 00:17:41.095 "flush": false, 00:17:41.095 "reset": true, 00:17:41.095 "nvme_admin": false, 00:17:41.095 "nvme_io": false, 00:17:41.096 "nvme_io_md": false, 00:17:41.096 "write_zeroes": true, 00:17:41.096 "zcopy": false, 00:17:41.096 "get_zone_info": false, 00:17:41.096 "zone_management": false, 00:17:41.096 "zone_append": false, 00:17:41.096 "compare": false, 00:17:41.096 "compare_and_write": false, 00:17:41.096 "abort": false, 00:17:41.096 "seek_hole": true, 00:17:41.096 "seek_data": true, 00:17:41.096 "copy": false, 00:17:41.096 "nvme_iov_md": false 00:17:41.096 }, 00:17:41.096 "driver_specific": { 00:17:41.096 "lvol": { 00:17:41.096 "lvol_store_uuid": "176315a6-7f27-4ebd-abc8-2cb891891970", 00:17:41.096 "base_bdev": "nvme0n1", 00:17:41.096 "thin_provision": true, 00:17:41.096 "num_allocated_clusters": 0, 00:17:41.096 "snapshot": false, 00:17:41.096 "clone": false, 00:17:41.096 "esnap_clone": false 00:17:41.096 } 00:17:41.096 } 00:17:41.096 } 00:17:41.096 ]' 00:17:41.096 19:07:58 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:17:41.096 19:07:58 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:17:41.096 19:07:58 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:17:41.096 19:07:58 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=26476544 00:17:41.096 19:07:58 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:17:41.096 19:07:58 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 103424 00:17:41.096 19:07:58 ftl.ftl_fio_basic -- ftl/common.sh@41 -- # local base_size=5171 00:17:41.096 19:07:58 ftl.ftl_fio_basic -- ftl/common.sh@44 -- # local nvc_bdev 00:17:41.096 19:07:58 ftl.ftl_fio_basic -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:17:41.096 19:07:58 ftl.ftl_fio_basic -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:17:41.096 19:07:58 ftl.ftl_fio_basic -- ftl/common.sh@47 -- # [[ -z '' ]] 00:17:41.096 19:07:58 ftl.ftl_fio_basic -- ftl/common.sh@48 -- # get_bdev_size 29c92119-de4e-47c7-b89f-833249c9f4f4 00:17:41.096 19:07:58 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bdev_name=29c92119-de4e-47c7-b89f-833249c9f4f4 00:17:41.096 19:07:58 
ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:17:41.096 19:07:58 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:17:41.096 19:07:58 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:17:41.096 19:07:58 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 29c92119-de4e-47c7-b89f-833249c9f4f4 00:17:41.356 19:07:58 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:17:41.356 { 00:17:41.356 "name": "29c92119-de4e-47c7-b89f-833249c9f4f4", 00:17:41.356 "aliases": [ 00:17:41.356 "lvs/nvme0n1p0" 00:17:41.356 ], 00:17:41.356 "product_name": "Logical Volume", 00:17:41.356 "block_size": 4096, 00:17:41.356 "num_blocks": 26476544, 00:17:41.356 "uuid": "29c92119-de4e-47c7-b89f-833249c9f4f4", 00:17:41.356 "assigned_rate_limits": { 00:17:41.357 "rw_ios_per_sec": 0, 00:17:41.357 "rw_mbytes_per_sec": 0, 00:17:41.357 "r_mbytes_per_sec": 0, 00:17:41.357 "w_mbytes_per_sec": 0 00:17:41.357 }, 00:17:41.357 "claimed": false, 00:17:41.357 "zoned": false, 00:17:41.357 "supported_io_types": { 00:17:41.357 "read": true, 00:17:41.357 "write": true, 00:17:41.357 "unmap": true, 00:17:41.357 "flush": false, 00:17:41.357 "reset": true, 00:17:41.357 "nvme_admin": false, 00:17:41.357 "nvme_io": false, 00:17:41.357 "nvme_io_md": false, 00:17:41.357 "write_zeroes": true, 00:17:41.357 "zcopy": false, 00:17:41.357 "get_zone_info": false, 00:17:41.357 "zone_management": false, 00:17:41.357 "zone_append": false, 00:17:41.357 "compare": false, 00:17:41.357 "compare_and_write": false, 00:17:41.357 "abort": false, 00:17:41.357 "seek_hole": true, 00:17:41.357 "seek_data": true, 00:17:41.357 "copy": false, 00:17:41.357 "nvme_iov_md": false 00:17:41.357 }, 00:17:41.357 "driver_specific": { 00:17:41.357 "lvol": { 00:17:41.357 "lvol_store_uuid": "176315a6-7f27-4ebd-abc8-2cb891891970", 00:17:41.357 "base_bdev": "nvme0n1", 00:17:41.357 "thin_provision": true, 00:17:41.357 "num_allocated_clusters": 0, 00:17:41.357 "snapshot": false, 00:17:41.357 "clone": false, 00:17:41.357 "esnap_clone": false 00:17:41.357 } 00:17:41.357 } 00:17:41.357 } 00:17:41.357 ]' 00:17:41.357 19:07:58 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:17:41.357 19:07:58 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:17:41.357 19:07:58 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:17:41.357 19:07:58 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=26476544 00:17:41.357 19:07:58 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:17:41.357 19:07:58 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 103424 00:17:41.357 19:07:58 ftl.ftl_fio_basic -- ftl/common.sh@48 -- # cache_size=5171 00:17:41.357 19:07:58 ftl.ftl_fio_basic -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:17:41.615 19:07:59 ftl.ftl_fio_basic -- ftl/fio.sh@49 -- # nv_cache=nvc0n1p0 00:17:41.615 19:07:59 ftl.ftl_fio_basic -- ftl/fio.sh@51 -- # l2p_percentage=60 00:17:41.615 19:07:59 ftl.ftl_fio_basic -- ftl/fio.sh@52 -- # '[' -eq 1 ']' 00:17:41.615 /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh: line 52: [: -eq: unary operator expected 00:17:41.615 19:07:59 ftl.ftl_fio_basic -- ftl/fio.sh@56 -- # get_bdev_size 29c92119-de4e-47c7-b89f-833249c9f4f4 00:17:41.615 19:07:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local 
bdev_name=29c92119-de4e-47c7-b89f-833249c9f4f4 00:17:41.615 19:07:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:17:41.615 19:07:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:17:41.615 19:07:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:17:41.615 19:07:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 29c92119-de4e-47c7-b89f-833249c9f4f4 00:17:41.874 19:07:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:17:41.874 { 00:17:41.874 "name": "29c92119-de4e-47c7-b89f-833249c9f4f4", 00:17:41.874 "aliases": [ 00:17:41.874 "lvs/nvme0n1p0" 00:17:41.874 ], 00:17:41.874 "product_name": "Logical Volume", 00:17:41.874 "block_size": 4096, 00:17:41.874 "num_blocks": 26476544, 00:17:41.874 "uuid": "29c92119-de4e-47c7-b89f-833249c9f4f4", 00:17:41.874 "assigned_rate_limits": { 00:17:41.874 "rw_ios_per_sec": 0, 00:17:41.875 "rw_mbytes_per_sec": 0, 00:17:41.875 "r_mbytes_per_sec": 0, 00:17:41.875 "w_mbytes_per_sec": 0 00:17:41.875 }, 00:17:41.875 "claimed": false, 00:17:41.875 "zoned": false, 00:17:41.875 "supported_io_types": { 00:17:41.875 "read": true, 00:17:41.875 "write": true, 00:17:41.875 "unmap": true, 00:17:41.875 "flush": false, 00:17:41.875 "reset": true, 00:17:41.875 "nvme_admin": false, 00:17:41.875 "nvme_io": false, 00:17:41.875 "nvme_io_md": false, 00:17:41.875 "write_zeroes": true, 00:17:41.875 "zcopy": false, 00:17:41.875 "get_zone_info": false, 00:17:41.875 "zone_management": false, 00:17:41.875 "zone_append": false, 00:17:41.875 "compare": false, 00:17:41.875 "compare_and_write": false, 00:17:41.875 "abort": false, 00:17:41.875 "seek_hole": true, 00:17:41.875 "seek_data": true, 00:17:41.875 "copy": false, 00:17:41.875 "nvme_iov_md": false 00:17:41.875 }, 00:17:41.875 "driver_specific": { 00:17:41.875 "lvol": { 00:17:41.875 "lvol_store_uuid": "176315a6-7f27-4ebd-abc8-2cb891891970", 00:17:41.875 "base_bdev": "nvme0n1", 00:17:41.875 "thin_provision": true, 00:17:41.875 "num_allocated_clusters": 0, 00:17:41.875 "snapshot": false, 00:17:41.875 "clone": false, 00:17:41.875 "esnap_clone": false 00:17:41.875 } 00:17:41.875 } 00:17:41.875 } 00:17:41.875 ]' 00:17:41.875 19:07:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:17:41.875 19:07:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:17:41.875 19:07:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:17:41.875 19:07:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=26476544 00:17:41.875 19:07:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:17:41.875 19:07:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 103424 00:17:41.875 19:07:59 ftl.ftl_fio_basic -- ftl/fio.sh@56 -- # l2p_dram_size_mb=60 00:17:41.875 19:07:59 ftl.ftl_fio_basic -- ftl/fio.sh@58 -- # '[' -z '' ']' 00:17:41.875 19:07:59 ftl.ftl_fio_basic -- ftl/fio.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 29c92119-de4e-47c7-b89f-833249c9f4f4 -c nvc0n1p0 --l2p_dram_limit 60 00:17:42.135 [2024-12-05 19:07:59.452825] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:42.135 [2024-12-05 19:07:59.452956] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:17:42.135 [2024-12-05 19:07:59.452973] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:42.135 
[2024-12-05 19:07:59.452982] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.135 [2024-12-05 19:07:59.453040] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:42.135 [2024-12-05 19:07:59.453049] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:42.135 [2024-12-05 19:07:59.453056] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:17:42.135 [2024-12-05 19:07:59.453065] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.135 [2024-12-05 19:07:59.453085] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:17:42.135 [2024-12-05 19:07:59.453323] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:17:42.135 [2024-12-05 19:07:59.453335] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:42.135 [2024-12-05 19:07:59.453343] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:42.135 [2024-12-05 19:07:59.453350] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.255 ms 00:17:42.135 [2024-12-05 19:07:59.453357] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.135 [2024-12-05 19:07:59.453409] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID c5819e88-0c9f-4060-a2b6-c4e0ef67ff4e 00:17:42.135 [2024-12-05 19:07:59.454444] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:42.135 [2024-12-05 19:07:59.454464] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:17:42.135 [2024-12-05 19:07:59.454474] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:17:42.135 [2024-12-05 19:07:59.454480] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.135 [2024-12-05 19:07:59.459730] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:42.135 [2024-12-05 19:07:59.459848] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:42.135 [2024-12-05 19:07:59.459862] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.183 ms 00:17:42.135 [2024-12-05 19:07:59.459870] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.135 [2024-12-05 19:07:59.459950] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:42.135 [2024-12-05 19:07:59.459961] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:42.135 [2024-12-05 19:07:59.459970] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.057 ms 00:17:42.135 [2024-12-05 19:07:59.459983] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.135 [2024-12-05 19:07:59.460026] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:42.135 [2024-12-05 19:07:59.460037] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:17:42.135 [2024-12-05 19:07:59.460046] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:17:42.135 [2024-12-05 19:07:59.460051] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.135 [2024-12-05 19:07:59.460077] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:42.135 [2024-12-05 19:07:59.461394] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:42.135 [2024-12-05 
19:07:59.461413] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:42.135 [2024-12-05 19:07:59.461420] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.324 ms 00:17:42.135 [2024-12-05 19:07:59.461427] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.135 [2024-12-05 19:07:59.461456] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:42.135 [2024-12-05 19:07:59.461464] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:17:42.135 [2024-12-05 19:07:59.461478] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:17:42.135 [2024-12-05 19:07:59.461493] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.135 [2024-12-05 19:07:59.461526] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:17:42.135 [2024-12-05 19:07:59.461645] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:17:42.135 [2024-12-05 19:07:59.461657] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:17:42.135 [2024-12-05 19:07:59.461668] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:17:42.135 [2024-12-05 19:07:59.461676] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:17:42.135 [2024-12-05 19:07:59.461685] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:17:42.135 [2024-12-05 19:07:59.461691] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:17:42.135 [2024-12-05 19:07:59.461699] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:17:42.135 [2024-12-05 19:07:59.461704] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:17:42.135 [2024-12-05 19:07:59.461711] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:17:42.135 [2024-12-05 19:07:59.461717] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:42.135 [2024-12-05 19:07:59.461724] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:17:42.135 [2024-12-05 19:07:59.461733] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.192 ms 00:17:42.135 [2024-12-05 19:07:59.461740] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.135 [2024-12-05 19:07:59.461814] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:42.135 [2024-12-05 19:07:59.461823] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:17:42.135 [2024-12-05 19:07:59.461831] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:17:42.135 [2024-12-05 19:07:59.461837] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.135 [2024-12-05 19:07:59.461925] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:17:42.135 [2024-12-05 19:07:59.461933] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:17:42.136 [2024-12-05 19:07:59.461940] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:42.136 [2024-12-05 19:07:59.461948] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:42.136 [2024-12-05 19:07:59.461954] ftl_layout.c: 130:dump_region: *NOTICE*: 
[FTL][ftl0] Region l2p 00:17:42.136 [2024-12-05 19:07:59.461960] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:17:42.136 [2024-12-05 19:07:59.461965] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:17:42.136 [2024-12-05 19:07:59.461972] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:17:42.136 [2024-12-05 19:07:59.461977] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:17:42.136 [2024-12-05 19:07:59.461983] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:42.136 [2024-12-05 19:07:59.461989] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:17:42.136 [2024-12-05 19:07:59.461995] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:17:42.136 [2024-12-05 19:07:59.462000] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:42.136 [2024-12-05 19:07:59.462008] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:17:42.136 [2024-12-05 19:07:59.462013] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:17:42.136 [2024-12-05 19:07:59.462020] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:42.136 [2024-12-05 19:07:59.462025] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:17:42.136 [2024-12-05 19:07:59.462031] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:17:42.136 [2024-12-05 19:07:59.462036] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:42.136 [2024-12-05 19:07:59.462042] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:17:42.136 [2024-12-05 19:07:59.462048] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:17:42.136 [2024-12-05 19:07:59.462055] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:42.136 [2024-12-05 19:07:59.462061] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:17:42.136 [2024-12-05 19:07:59.462068] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:17:42.136 [2024-12-05 19:07:59.462074] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:42.136 [2024-12-05 19:07:59.462081] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:17:42.136 [2024-12-05 19:07:59.462087] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:17:42.136 [2024-12-05 19:07:59.462094] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:42.136 [2024-12-05 19:07:59.462100] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:17:42.136 [2024-12-05 19:07:59.462109] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:17:42.136 [2024-12-05 19:07:59.462115] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:42.136 [2024-12-05 19:07:59.462123] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:17:42.136 [2024-12-05 19:07:59.462128] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:17:42.136 [2024-12-05 19:07:59.462135] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:42.136 [2024-12-05 19:07:59.462141] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:17:42.136 [2024-12-05 19:07:59.462148] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:17:42.136 [2024-12-05 19:07:59.462154] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:42.136 [2024-12-05 19:07:59.462161] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:17:42.136 [2024-12-05 19:07:59.462167] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:17:42.136 [2024-12-05 19:07:59.462174] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:42.136 [2024-12-05 19:07:59.462184] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:17:42.136 [2024-12-05 19:07:59.462192] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:17:42.136 [2024-12-05 19:07:59.462197] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:42.136 [2024-12-05 19:07:59.462204] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:17:42.136 [2024-12-05 19:07:59.462218] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:17:42.136 [2024-12-05 19:07:59.462227] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:42.136 [2024-12-05 19:07:59.462234] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:42.136 [2024-12-05 19:07:59.462243] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:17:42.136 [2024-12-05 19:07:59.462266] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:17:42.136 [2024-12-05 19:07:59.462274] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:17:42.136 [2024-12-05 19:07:59.462280] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:17:42.136 [2024-12-05 19:07:59.462288] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:17:42.136 [2024-12-05 19:07:59.462294] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:17:42.136 [2024-12-05 19:07:59.462304] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:17:42.136 [2024-12-05 19:07:59.462312] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:42.136 [2024-12-05 19:07:59.462320] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:17:42.136 [2024-12-05 19:07:59.462327] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:17:42.136 [2024-12-05 19:07:59.462334] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:17:42.136 [2024-12-05 19:07:59.462340] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:17:42.136 [2024-12-05 19:07:59.462348] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:17:42.136 [2024-12-05 19:07:59.462354] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:17:42.136 [2024-12-05 19:07:59.462362] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:17:42.136 [2024-12-05 19:07:59.462369] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 
blk_offs:0x7120 blk_sz:0x40 00:17:42.136 [2024-12-05 19:07:59.462376] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:17:42.136 [2024-12-05 19:07:59.462383] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:17:42.136 [2024-12-05 19:07:59.462390] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:17:42.136 [2024-12-05 19:07:59.462396] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:17:42.136 [2024-12-05 19:07:59.462403] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:17:42.136 [2024-12-05 19:07:59.462410] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:17:42.136 [2024-12-05 19:07:59.462417] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:17:42.136 [2024-12-05 19:07:59.462424] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:42.136 [2024-12-05 19:07:59.462433] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:17:42.137 [2024-12-05 19:07:59.462441] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:17:42.137 [2024-12-05 19:07:59.462449] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:17:42.137 [2024-12-05 19:07:59.462455] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:17:42.137 [2024-12-05 19:07:59.462471] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:42.137 [2024-12-05 19:07:59.462477] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:17:42.137 [2024-12-05 19:07:59.462487] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.595 ms 00:17:42.137 [2024-12-05 19:07:59.462498] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.137 [2024-12-05 19:07:59.462553] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
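[Annotation] The bdev_ftl_create call at fio.sh@60 hands FTL the 103424 MiB thin lvol as base device and the 5171 MiB nvc0n1p0 split as write-buffer cache, with --l2p_dram_limit 60 capping the resident L2P (it surfaces a few entries below as "l2p maximum resident size is: 59 (of 60) MiB"). The layout dump above is internally consistent: 20971520 L2P entries at 4 bytes each is exactly the 80.00 MiB "Region l2p", and 20971520 four-KiB logical blocks is the 80 GiB the device exposes ("num_blocks": 20971520 in the ftl0 dump further down). A quick sketch of that arithmetic:

    echo $((20971520 * 4 / 1024 / 1024))      # 80  -> matches the 80.00 MiB l2p region
    echo $((20971520 * 4096 / 1024 ** 3))     # 80  -> ~80 GiB of logical address space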
00:17:42.137 [2024-12-05 19:07:59.462562] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:17:44.037 [2024-12-05 19:08:01.491323] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.037 [2024-12-05 19:08:01.491373] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:17:44.037 [2024-12-05 19:08:01.491389] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2028.756 ms 00:17:44.037 [2024-12-05 19:08:01.491408] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.037 [2024-12-05 19:08:01.500063] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.037 [2024-12-05 19:08:01.500099] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:44.037 [2024-12-05 19:08:01.500115] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.568 ms 00:17:44.037 [2024-12-05 19:08:01.500123] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.037 [2024-12-05 19:08:01.500243] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.037 [2024-12-05 19:08:01.500277] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:17:44.037 [2024-12-05 19:08:01.500289] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:17:44.037 [2024-12-05 19:08:01.500297] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.037 [2024-12-05 19:08:01.519511] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.037 [2024-12-05 19:08:01.519749] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:44.037 [2024-12-05 19:08:01.519779] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.165 ms 00:17:44.037 [2024-12-05 19:08:01.519787] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.037 [2024-12-05 19:08:01.519855] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.037 [2024-12-05 19:08:01.519865] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:44.037 [2024-12-05 19:08:01.519876] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:17:44.037 [2024-12-05 19:08:01.519883] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.037 [2024-12-05 19:08:01.520244] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.037 [2024-12-05 19:08:01.520275] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:44.037 [2024-12-05 19:08:01.520287] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.295 ms 00:17:44.037 [2024-12-05 19:08:01.520297] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.037 [2024-12-05 19:08:01.520426] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.037 [2024-12-05 19:08:01.520436] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:44.037 [2024-12-05 19:08:01.520447] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.098 ms 00:17:44.037 [2024-12-05 19:08:01.520455] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.037 [2024-12-05 19:08:01.526455] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.037 [2024-12-05 19:08:01.526581] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:44.037 [2024-12-05 
19:08:01.526602] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.967 ms 00:17:44.037 [2024-12-05 19:08:01.526615] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.037 [2024-12-05 19:08:01.536016] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:17:44.037 [2024-12-05 19:08:01.550787] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.037 [2024-12-05 19:08:01.550825] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:17:44.037 [2024-12-05 19:08:01.550836] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.072 ms 00:17:44.037 [2024-12-05 19:08:01.550846] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.037 [2024-12-05 19:08:01.587589] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.037 [2024-12-05 19:08:01.587641] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:17:44.037 [2024-12-05 19:08:01.587653] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.709 ms 00:17:44.037 [2024-12-05 19:08:01.587665] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.037 [2024-12-05 19:08:01.587850] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.037 [2024-12-05 19:08:01.587864] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:17:44.037 [2024-12-05 19:08:01.587873] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.137 ms 00:17:44.037 [2024-12-05 19:08:01.587882] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.037 [2024-12-05 19:08:01.591237] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.037 [2024-12-05 19:08:01.591298] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:17:44.037 [2024-12-05 19:08:01.591311] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.313 ms 00:17:44.037 [2024-12-05 19:08:01.591321] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.037 [2024-12-05 19:08:01.593828] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.037 [2024-12-05 19:08:01.593961] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:17:44.037 [2024-12-05 19:08:01.593977] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.466 ms 00:17:44.037 [2024-12-05 19:08:01.593987] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.037 [2024-12-05 19:08:01.594301] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.037 [2024-12-05 19:08:01.594320] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:17:44.037 [2024-12-05 19:08:01.594330] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.278 ms 00:17:44.037 [2024-12-05 19:08:01.594341] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.296 [2024-12-05 19:08:01.618966] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.296 [2024-12-05 19:08:01.619091] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:17:44.296 [2024-12-05 19:08:01.619129] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.594 ms 00:17:44.296 [2024-12-05 19:08:01.619159] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.296 [2024-12-05 19:08:01.627528] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.296 [2024-12-05 19:08:01.627852] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:17:44.296 [2024-12-05 19:08:01.627901] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.178 ms 00:17:44.296 [2024-12-05 19:08:01.627930] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.296 [2024-12-05 19:08:01.632434] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.296 [2024-12-05 19:08:01.632468] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:17:44.296 [2024-12-05 19:08:01.632478] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.411 ms 00:17:44.296 [2024-12-05 19:08:01.632486] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.296 [2024-12-05 19:08:01.635866] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.296 [2024-12-05 19:08:01.635901] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:17:44.296 [2024-12-05 19:08:01.635910] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.340 ms 00:17:44.296 [2024-12-05 19:08:01.635921] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.296 [2024-12-05 19:08:01.635966] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.296 [2024-12-05 19:08:01.635976] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:17:44.296 [2024-12-05 19:08:01.635985] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:17:44.296 [2024-12-05 19:08:01.635994] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.296 [2024-12-05 19:08:01.636063] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.296 [2024-12-05 19:08:01.636075] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:17:44.296 [2024-12-05 19:08:01.636083] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:17:44.296 [2024-12-05 19:08:01.636092] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.296 [2024-12-05 19:08:01.637071] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2183.786 ms, result 0 00:17:44.296 { 00:17:44.296 "name": "ftl0", 00:17:44.296 "uuid": "c5819e88-0c9f-4060-a2b6-c4e0ef67ff4e" 00:17:44.296 } 00:17:44.296 19:08:01 ftl.ftl_fio_basic -- ftl/fio.sh@65 -- # waitforbdev ftl0 00:17:44.296 19:08:01 ftl.ftl_fio_basic -- common/autotest_common.sh@903 -- # local bdev_name=ftl0 00:17:44.296 19:08:01 ftl.ftl_fio_basic -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:17:44.296 19:08:01 ftl.ftl_fio_basic -- common/autotest_common.sh@905 -- # local i 00:17:44.296 19:08:01 ftl.ftl_fio_basic -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:17:44.296 19:08:01 ftl.ftl_fio_basic -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:17:44.296 19:08:01 ftl.ftl_fio_basic -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:17:44.554 19:08:01 ftl.ftl_fio_basic -- common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:17:44.554 [ 00:17:44.554 { 00:17:44.554 "name": "ftl0", 00:17:44.554 "aliases": [ 00:17:44.554 "c5819e88-0c9f-4060-a2b6-c4e0ef67ff4e" 00:17:44.554 ], 00:17:44.554 "product_name": "FTL disk", 00:17:44.554 
"block_size": 4096, 00:17:44.554 "num_blocks": 20971520, 00:17:44.554 "uuid": "c5819e88-0c9f-4060-a2b6-c4e0ef67ff4e", 00:17:44.554 "assigned_rate_limits": { 00:17:44.554 "rw_ios_per_sec": 0, 00:17:44.554 "rw_mbytes_per_sec": 0, 00:17:44.554 "r_mbytes_per_sec": 0, 00:17:44.554 "w_mbytes_per_sec": 0 00:17:44.554 }, 00:17:44.554 "claimed": false, 00:17:44.554 "zoned": false, 00:17:44.554 "supported_io_types": { 00:17:44.554 "read": true, 00:17:44.554 "write": true, 00:17:44.554 "unmap": true, 00:17:44.554 "flush": true, 00:17:44.554 "reset": false, 00:17:44.554 "nvme_admin": false, 00:17:44.554 "nvme_io": false, 00:17:44.554 "nvme_io_md": false, 00:17:44.554 "write_zeroes": true, 00:17:44.554 "zcopy": false, 00:17:44.554 "get_zone_info": false, 00:17:44.554 "zone_management": false, 00:17:44.554 "zone_append": false, 00:17:44.554 "compare": false, 00:17:44.554 "compare_and_write": false, 00:17:44.554 "abort": false, 00:17:44.554 "seek_hole": false, 00:17:44.554 "seek_data": false, 00:17:44.554 "copy": false, 00:17:44.554 "nvme_iov_md": false 00:17:44.554 }, 00:17:44.554 "driver_specific": { 00:17:44.554 "ftl": { 00:17:44.554 "base_bdev": "29c92119-de4e-47c7-b89f-833249c9f4f4", 00:17:44.554 "cache": "nvc0n1p0" 00:17:44.554 } 00:17:44.554 } 00:17:44.554 } 00:17:44.554 ] 00:17:44.554 19:08:02 ftl.ftl_fio_basic -- common/autotest_common.sh@911 -- # return 0 00:17:44.554 19:08:02 ftl.ftl_fio_basic -- ftl/fio.sh@68 -- # echo '{"subsystems": [' 00:17:44.554 19:08:02 ftl.ftl_fio_basic -- ftl/fio.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:17:44.811 19:08:02 ftl.ftl_fio_basic -- ftl/fio.sh@70 -- # echo ']}' 00:17:44.811 19:08:02 ftl.ftl_fio_basic -- ftl/fio.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:17:45.070 [2024-12-05 19:08:02.454468] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:45.070 [2024-12-05 19:08:02.454505] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:17:45.070 [2024-12-05 19:08:02.454517] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:17:45.070 [2024-12-05 19:08:02.454524] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:45.070 [2024-12-05 19:08:02.454552] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:45.070 [2024-12-05 19:08:02.454977] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:45.070 [2024-12-05 19:08:02.454994] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:17:45.070 [2024-12-05 19:08:02.455005] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.406 ms 00:17:45.070 [2024-12-05 19:08:02.455013] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:45.070 [2024-12-05 19:08:02.455444] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:45.070 [2024-12-05 19:08:02.455480] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:17:45.070 [2024-12-05 19:08:02.455489] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.409 ms 00:17:45.070 [2024-12-05 19:08:02.455497] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:45.070 [2024-12-05 19:08:02.457914] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:45.070 [2024-12-05 19:08:02.458035] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:17:45.070 [2024-12-05 
19:08:02.458047] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.396 ms 00:17:45.070 [2024-12-05 19:08:02.458057] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:45.070 [2024-12-05 19:08:02.462921] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:45.070 [2024-12-05 19:08:02.462946] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:17:45.070 [2024-12-05 19:08:02.462954] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.838 ms 00:17:45.070 [2024-12-05 19:08:02.462962] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:45.070 [2024-12-05 19:08:02.464386] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:45.070 [2024-12-05 19:08:02.464490] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:17:45.070 [2024-12-05 19:08:02.464502] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.357 ms 00:17:45.070 [2024-12-05 19:08:02.464509] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:45.070 [2024-12-05 19:08:02.467954] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:45.070 [2024-12-05 19:08:02.468061] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:17:45.070 [2024-12-05 19:08:02.468075] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.414 ms 00:17:45.070 [2024-12-05 19:08:02.468085] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:45.070 [2024-12-05 19:08:02.468218] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:45.070 [2024-12-05 19:08:02.468228] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:17:45.070 [2024-12-05 19:08:02.468235] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.104 ms 00:17:45.070 [2024-12-05 19:08:02.468242] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:45.070 [2024-12-05 19:08:02.469353] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:45.070 [2024-12-05 19:08:02.469381] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:17:45.070 [2024-12-05 19:08:02.469388] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.077 ms 00:17:45.070 [2024-12-05 19:08:02.469395] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:45.070 [2024-12-05 19:08:02.470337] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:45.070 [2024-12-05 19:08:02.470365] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:17:45.070 [2024-12-05 19:08:02.470372] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.913 ms 00:17:45.070 [2024-12-05 19:08:02.470379] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:45.070 [2024-12-05 19:08:02.471129] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:45.070 [2024-12-05 19:08:02.471225] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:17:45.070 [2024-12-05 19:08:02.471235] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.720 ms 00:17:45.070 [2024-12-05 19:08:02.471242] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:45.070 [2024-12-05 19:08:02.471978] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:45.070 [2024-12-05 19:08:02.472004] mngt/ftl_mngt.c: 428:trace_step: 
*NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:17:45.070 [2024-12-05 19:08:02.472010] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.642 ms 00:17:45.070 [2024-12-05 19:08:02.472017] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:45.070 [2024-12-05 19:08:02.472050] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:17:45.070 [2024-12-05 19:08:02.472062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:17:45.070 [2024-12-05 19:08:02.472081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:17:45.070 [2024-12-05 19:08:02.472088] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:17:45.070 [2024-12-05 19:08:02.472094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:17:45.070 [2024-12-05 19:08:02.472103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:17:45.071 [2024-12-05 19:08:02.472109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:17:45.071 [2024-12-05 19:08:02.472116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:17:45.071 [2024-12-05 19:08:02.472121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:17:45.071 [2024-12-05 19:08:02.472128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:17:45.071 [2024-12-05 19:08:02.472134] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:17:45.071 [2024-12-05 19:08:02.472141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:17:45.071 [2024-12-05 19:08:02.472147] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:17:45.071 [2024-12-05 19:08:02.472154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:17:45.071 [2024-12-05 19:08:02.472160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:17:45.071 [2024-12-05 19:08:02.472166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:17:45.071 [2024-12-05 19:08:02.472172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:17:45.071 [2024-12-05 19:08:02.472179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:17:45.071 [2024-12-05 19:08:02.472195] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:17:45.071 [2024-12-05 19:08:02.472202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:17:45.071 [2024-12-05 19:08:02.472208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:17:45.071 [2024-12-05 19:08:02.472217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:17:45.071 [2024-12-05 19:08:02.472223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:17:45.071 [2024-12-05 
19:08:02.472230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:17:45.071 [2024-12-05 19:08:02.472236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:17:45.071 [2024-12-05 19:08:02.472245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:17:45.071 [2024-12-05 19:08:02.472261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:17:45.071 [2024-12-05 19:08:02.472269] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:17:45.071 [2024-12-05 19:08:02.472275] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:17:45.071 [2024-12-05 19:08:02.472282] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:17:45.071 [2024-12-05 19:08:02.472288] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:17:45.071 [2024-12-05 19:08:02.472295] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:17:45.071 [2024-12-05 19:08:02.472301] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:17:45.071 [2024-12-05 19:08:02.472309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:17:45.071 [2024-12-05 19:08:02.472314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:17:45.071 [2024-12-05 19:08:02.472324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:17:45.071 [2024-12-05 19:08:02.472331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:17:45.071 [2024-12-05 19:08:02.472341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:17:45.071 [2024-12-05 19:08:02.472347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:17:45.071 [2024-12-05 19:08:02.472354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:17:45.071 [2024-12-05 19:08:02.472360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:17:45.071 [2024-12-05 19:08:02.472366] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:17:45.071 [2024-12-05 19:08:02.472372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:17:45.071 [2024-12-05 19:08:02.472379] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:17:45.071 [2024-12-05 19:08:02.472385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:17:45.071 [2024-12-05 19:08:02.472391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:17:45.071 [2024-12-05 19:08:02.472397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:17:45.071 [2024-12-05 19:08:02.472404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 
00:17:45.071 [2024-12-05 19:08:02.472409] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:17:45.071 [2024-12-05 19:08:02.472417] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:17:45.071 [2024-12-05 19:08:02.472422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:17:45.071 [2024-12-05 19:08:02.472429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:17:45.071 [2024-12-05 19:08:02.472435] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:17:45.071 [2024-12-05 19:08:02.472445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:17:45.071 [2024-12-05 19:08:02.472450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:17:45.071 [2024-12-05 19:08:02.472457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:17:45.071 [2024-12-05 19:08:02.472462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:17:45.071 [2024-12-05 19:08:02.472469] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:17:45.071 [2024-12-05 19:08:02.472475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:17:45.071 [2024-12-05 19:08:02.472481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:17:45.071 [2024-12-05 19:08:02.472487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:17:45.071 [2024-12-05 19:08:02.472494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:17:45.071 [2024-12-05 19:08:02.472499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:17:45.071 [2024-12-05 19:08:02.472506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:17:45.071 [2024-12-05 19:08:02.472511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:17:45.071 [2024-12-05 19:08:02.472518] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:17:45.071 [2024-12-05 19:08:02.472524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:17:45.071 [2024-12-05 19:08:02.472532] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:17:45.071 [2024-12-05 19:08:02.472537] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:17:45.071 [2024-12-05 19:08:02.472548] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:17:45.071 [2024-12-05 19:08:02.472553] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:17:45.071 [2024-12-05 19:08:02.472560] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:17:45.071 [2024-12-05 19:08:02.472566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 
wr_cnt: 0 state: free 00:17:45.071 [2024-12-05 19:08:02.472573] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:17:45.071 [2024-12-05 19:08:02.472579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:17:45.071 [2024-12-05 19:08:02.472586] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:17:45.071 [2024-12-05 19:08:02.472592] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:17:45.071 [2024-12-05 19:08:02.472599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:17:45.071 [2024-12-05 19:08:02.472605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:17:45.071 [2024-12-05 19:08:02.472613] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:17:45.071 [2024-12-05 19:08:02.472619] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:17:45.071 [2024-12-05 19:08:02.472626] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:17:45.071 [2024-12-05 19:08:02.472636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:17:45.071 [2024-12-05 19:08:02.472643] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:17:45.071 [2024-12-05 19:08:02.472648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:17:45.071 [2024-12-05 19:08:02.472656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:17:45.071 [2024-12-05 19:08:02.472662] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:17:45.071 [2024-12-05 19:08:02.472669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:17:45.071 [2024-12-05 19:08:02.472674] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:17:45.071 [2024-12-05 19:08:02.472681] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:17:45.071 [2024-12-05 19:08:02.472687] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:17:45.071 [2024-12-05 19:08:02.472694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:17:45.072 [2024-12-05 19:08:02.472699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:17:45.072 [2024-12-05 19:08:02.472706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:17:45.072 [2024-12-05 19:08:02.472711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:17:45.072 [2024-12-05 19:08:02.472719] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:17:45.072 [2024-12-05 19:08:02.472725] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:17:45.072 [2024-12-05 19:08:02.472731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 97: 0 / 261120 wr_cnt: 0 state: free 00:17:45.072 [2024-12-05 19:08:02.472737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:17:45.072 [2024-12-05 19:08:02.472745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:17:45.072 [2024-12-05 19:08:02.472751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:17:45.072 [2024-12-05 19:08:02.472767] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:17:45.072 [2024-12-05 19:08:02.472775] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: c5819e88-0c9f-4060-a2b6-c4e0ef67ff4e 00:17:45.072 [2024-12-05 19:08:02.472784] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:17:45.072 [2024-12-05 19:08:02.472789] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:17:45.072 [2024-12-05 19:08:02.472796] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:17:45.072 [2024-12-05 19:08:02.472801] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:17:45.072 [2024-12-05 19:08:02.472809] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:17:45.072 [2024-12-05 19:08:02.472815] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:17:45.072 [2024-12-05 19:08:02.472822] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:17:45.072 [2024-12-05 19:08:02.472827] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:17:45.072 [2024-12-05 19:08:02.472833] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:17:45.072 [2024-12-05 19:08:02.472839] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:45.072 [2024-12-05 19:08:02.472846] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:17:45.072 [2024-12-05 19:08:02.472852] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.790 ms 00:17:45.072 [2024-12-05 19:08:02.472859] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:45.072 [2024-12-05 19:08:02.474281] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:45.072 [2024-12-05 19:08:02.474303] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:17:45.072 [2024-12-05 19:08:02.474311] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.398 ms 00:17:45.072 [2024-12-05 19:08:02.474318] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:45.072 [2024-12-05 19:08:02.474394] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:45.072 [2024-12-05 19:08:02.474402] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:17:45.072 [2024-12-05 19:08:02.474408] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:17:45.072 [2024-12-05 19:08:02.474426] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:45.072 [2024-12-05 19:08:02.479188] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:45.072 [2024-12-05 19:08:02.479221] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:45.072 [2024-12-05 19:08:02.479228] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:45.072 [2024-12-05 19:08:02.479236] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:45.072 
[2024-12-05 19:08:02.479296] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:45.072 [2024-12-05 19:08:02.479313] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:45.072 [2024-12-05 19:08:02.479320] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:45.072 [2024-12-05 19:08:02.479328] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:45.072 [2024-12-05 19:08:02.479414] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:45.072 [2024-12-05 19:08:02.479425] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:45.072 [2024-12-05 19:08:02.479432] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:45.072 [2024-12-05 19:08:02.479439] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:45.072 [2024-12-05 19:08:02.479460] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:45.072 [2024-12-05 19:08:02.479469] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:45.072 [2024-12-05 19:08:02.479474] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:45.072 [2024-12-05 19:08:02.479482] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:45.072 [2024-12-05 19:08:02.488024] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:45.072 [2024-12-05 19:08:02.488060] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:45.072 [2024-12-05 19:08:02.488068] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:45.072 [2024-12-05 19:08:02.488076] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:45.072 [2024-12-05 19:08:02.495140] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:45.072 [2024-12-05 19:08:02.495173] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:45.072 [2024-12-05 19:08:02.495191] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:45.072 [2024-12-05 19:08:02.495199] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:45.072 [2024-12-05 19:08:02.495281] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:45.072 [2024-12-05 19:08:02.495292] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:45.072 [2024-12-05 19:08:02.495298] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:45.072 [2024-12-05 19:08:02.495312] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:45.072 [2024-12-05 19:08:02.495376] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:45.072 [2024-12-05 19:08:02.495385] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:45.072 [2024-12-05 19:08:02.495391] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:45.072 [2024-12-05 19:08:02.495397] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:45.072 [2024-12-05 19:08:02.495461] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:45.072 [2024-12-05 19:08:02.495470] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:45.072 [2024-12-05 19:08:02.495477] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:45.072 [2024-12-05 19:08:02.495484] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:45.072 [2024-12-05 19:08:02.495524] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:45.072 [2024-12-05 19:08:02.495532] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:17:45.072 [2024-12-05 19:08:02.495538] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:45.072 [2024-12-05 19:08:02.495545] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:45.072 [2024-12-05 19:08:02.495584] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:45.072 [2024-12-05 19:08:02.495603] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:45.072 [2024-12-05 19:08:02.495610] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:45.072 [2024-12-05 19:08:02.495616] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:45.072 [2024-12-05 19:08:02.495663] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:45.072 [2024-12-05 19:08:02.495672] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:45.072 [2024-12-05 19:08:02.495678] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:45.072 [2024-12-05 19:08:02.495684] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:45.072 [2024-12-05 19:08:02.495816] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 41.321 ms, result 0 00:17:45.072 true 00:17:45.072 19:08:02 ftl.ftl_fio_basic -- ftl/fio.sh@75 -- # killprocess 86063 00:17:45.072 19:08:02 ftl.ftl_fio_basic -- common/autotest_common.sh@954 -- # '[' -z 86063 ']' 00:17:45.072 19:08:02 ftl.ftl_fio_basic -- common/autotest_common.sh@958 -- # kill -0 86063 00:17:45.072 19:08:02 ftl.ftl_fio_basic -- common/autotest_common.sh@959 -- # uname 00:17:45.072 19:08:02 ftl.ftl_fio_basic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:45.072 19:08:02 ftl.ftl_fio_basic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86063 00:17:45.072 killing process with pid 86063 00:17:45.072 19:08:02 ftl.ftl_fio_basic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:45.072 19:08:02 ftl.ftl_fio_basic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:45.072 19:08:02 ftl.ftl_fio_basic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86063' 00:17:45.072 19:08:02 ftl.ftl_fio_basic -- common/autotest_common.sh@973 -- # kill 86063 00:17:45.072 19:08:02 ftl.ftl_fio_basic -- common/autotest_common.sh@978 -- # wait 86063 00:17:55.042 19:08:10 ftl.ftl_fio_basic -- ftl/fio.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:17:55.042 19:08:10 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:17:55.042 19:08:10 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify 00:17:55.042 19:08:10 ftl.ftl_fio_basic -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:55.042 19:08:10 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:17:55.042 19:08:10 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:17:55.042 19:08:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:17:55.042 19:08:10 ftl.ftl_fio_basic -- 
common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:17:55.042 19:08:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:17:55.042 19:08:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local sanitizers 00:17:55.042 19:08:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:17:55.042 19:08:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # shift 00:17:55.042 19:08:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # local asan_lib= 00:17:55.042 19:08:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:17:55.042 19:08:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:17:55.042 19:08:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # grep libasan 00:17:55.042 19:08:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:17:55.042 19:08:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:17:55.042 19:08:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:17:55.042 19:08:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1351 -- # break 00:17:55.042 19:08:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:17:55.042 19:08:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:17:55.042 test: (g=0): rw=randwrite, bs=(R) 68.0KiB-68.0KiB, (W) 68.0KiB-68.0KiB, (T) 68.0KiB-68.0KiB, ioengine=spdk_bdev, iodepth=1 00:17:55.042 fio-3.35 00:17:55.042 Starting 1 thread 00:17:58.336 00:17:58.336 test: (groupid=0, jobs=1): err= 0: pid=86226: Thu Dec 5 19:08:15 2024 00:17:58.336 read: IOPS=1039, BW=69.0MiB/s (72.4MB/s)(255MiB/3688msec) 00:17:58.336 slat (nsec): min=4057, max=21899, avg=5369.66, stdev=1787.02 00:17:58.336 clat (usec): min=233, max=1444, avg=435.41, stdev=181.27 00:17:58.336 lat (usec): min=237, max=1448, avg=440.78, stdev=181.72 00:17:58.336 clat percentiles (usec): 00:17:58.336 | 1.00th=[ 262], 5.00th=[ 289], 10.00th=[ 293], 20.00th=[ 297], 00:17:58.336 | 30.00th=[ 302], 40.00th=[ 306], 50.00th=[ 330], 60.00th=[ 461], 00:17:58.336 | 70.00th=[ 523], 80.00th=[ 537], 90.00th=[ 758], 95.00th=[ 832], 00:17:58.336 | 99.00th=[ 996], 99.50th=[ 1074], 99.90th=[ 1270], 99.95th=[ 1352], 00:17:58.336 | 99.99th=[ 1450] 00:17:58.336 write: IOPS=1046, BW=69.5MiB/s (72.8MB/s)(256MiB/3686msec); 0 zone resets 00:17:58.336 slat (nsec): min=14722, max=53662, avg=21745.76, stdev=3852.06 00:17:58.336 clat (usec): min=257, max=1613, avg=484.16, stdev=210.92 00:17:58.336 lat (usec): min=281, max=1635, avg=505.90, stdev=209.75 00:17:58.336 clat percentiles (usec): 00:17:58.336 | 1.00th=[ 289], 5.00th=[ 306], 10.00th=[ 310], 20.00th=[ 314], 00:17:58.336 | 30.00th=[ 318], 40.00th=[ 326], 50.00th=[ 379], 60.00th=[ 529], 00:17:58.336 | 70.00th=[ 562], 80.00th=[ 611], 90.00th=[ 865], 95.00th=[ 914], 00:17:58.336 | 99.00th=[ 1123], 99.50th=[ 1205], 99.90th=[ 1450], 99.95th=[ 1516], 00:17:58.336 | 99.99th=[ 1614] 00:17:58.336 bw ( KiB/s): min=38624, max=101864, per=97.80%, avg=69573.71, stdev=24168.22, samples=7 00:17:58.336 iops : min= 568, max= 1498, avg=1023.14, stdev=355.41, samples=7 00:17:58.336 lat (usec) : 250=0.05%, 500=61.75%, 
750=26.56%, 1000=10.18% 00:17:58.336 lat (msec) : 2=1.46% 00:17:58.336 cpu : usr=99.32%, sys=0.05%, ctx=3, majf=0, minf=1326 00:17:58.336 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:58.336 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:58.336 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:58.336 issued rwts: total=3833,3856,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:58.336 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:58.336 00:17:58.336 Run status group 0 (all jobs): 00:17:58.336 READ: bw=69.0MiB/s (72.4MB/s), 69.0MiB/s-69.0MiB/s (72.4MB/s-72.4MB/s), io=255MiB (267MB), run=3688-3688msec 00:17:58.336 WRITE: bw=69.5MiB/s (72.8MB/s), 69.5MiB/s-69.5MiB/s (72.8MB/s-72.8MB/s), io=256MiB (269MB), run=3686-3686msec 00:17:58.910 ----------------------------------------------------- 00:17:58.910 Suppressions used: 00:17:58.910 count bytes template 00:17:58.910 1 5 /usr/src/fio/parse.c 00:17:58.910 1 8 libtcmalloc_minimal.so 00:17:58.910 1 904 libcrypto.so 00:17:58.910 ----------------------------------------------------- 00:17:58.910 00:17:58.910 19:08:16 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify 00:17:58.910 19:08:16 ftl.ftl_fio_basic -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:58.910 19:08:16 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:17:58.910 19:08:16 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:17:58.910 19:08:16 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify-j2 00:17:58.910 19:08:16 ftl.ftl_fio_basic -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:58.910 19:08:16 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:17:58.910 19:08:16 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:17:58.910 19:08:16 ftl.ftl_fio_basic -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:17:58.910 19:08:16 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:17:58.910 19:08:16 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:17:58.910 19:08:16 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local sanitizers 00:17:58.910 19:08:16 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:17:58.910 19:08:16 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # shift 00:17:58.910 19:08:16 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # local asan_lib= 00:17:58.910 19:08:16 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:17:58.910 19:08:16 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:17:58.910 19:08:16 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:17:58.910 19:08:16 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # grep libasan 00:17:58.910 19:08:16 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:17:58.910 19:08:16 ftl.ftl_fio_basic -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:17:58.910 19:08:16 ftl.ftl_fio_basic -- common/autotest_common.sh@1351 -- # break 00:17:58.910 19:08:16 ftl.ftl_fio_basic -- 
common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:17:58.910 19:08:16 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:17:59.172 first_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:17:59.172 second_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:17:59.172 fio-3.35 00:17:59.172 Starting 2 threads 00:18:25.797 00:18:25.797 first_half: (groupid=0, jobs=1): err= 0: pid=86318: Thu Dec 5 19:08:40 2024 00:18:25.797 read: IOPS=2823, BW=11.0MiB/s (11.6MB/s)(255MiB/23109msec) 00:18:25.797 slat (nsec): min=3165, max=28542, avg=5070.39, stdev=1135.71 00:18:25.797 clat (usec): min=547, max=482958, avg=33492.76, stdev=18779.50 00:18:25.797 lat (usec): min=553, max=482964, avg=33497.83, stdev=18779.56 00:18:25.797 clat percentiles (msec): 00:18:25.797 | 1.00th=[ 3], 5.00th=[ 26], 10.00th=[ 30], 20.00th=[ 31], 00:18:25.797 | 30.00th=[ 31], 40.00th=[ 32], 50.00th=[ 32], 60.00th=[ 32], 00:18:25.797 | 70.00th=[ 32], 80.00th=[ 33], 90.00th=[ 37], 95.00th=[ 42], 00:18:25.797 | 99.00th=[ 116], 99.50th=[ 142], 99.90th=[ 326], 99.95th=[ 414], 00:18:25.797 | 99.99th=[ 472] 00:18:25.797 write: IOPS=3887, BW=15.2MiB/s (15.9MB/s)(256MiB/16858msec); 0 zone resets 00:18:25.797 slat (usec): min=4, max=2310, avg= 7.18, stdev=17.38 00:18:25.797 clat (usec): min=350, max=77480, avg=11773.69, stdev=18394.03 00:18:25.797 lat (usec): min=356, max=77485, avg=11780.87, stdev=18394.08 00:18:25.797 clat percentiles (usec): 00:18:25.797 | 1.00th=[ 644], 5.00th=[ 725], 10.00th=[ 791], 20.00th=[ 922], 00:18:25.797 | 30.00th=[ 1106], 40.00th=[ 2704], 50.00th=[ 3949], 60.00th=[ 5014], 00:18:25.797 | 70.00th=[ 7439], 80.00th=[16909], 90.00th=[54789], 95.00th=[59507], 00:18:25.797 | 99.00th=[64750], 99.50th=[68682], 99.90th=[72877], 99.95th=[73925], 00:18:25.797 | 99.99th=[77071] 00:18:25.797 bw ( KiB/s): min= 1200, max=47992, per=80.32%, avg=22795.13, stdev=12910.63, samples=23 00:18:25.797 iops : min= 300, max=11998, avg=5698.78, stdev=3227.66, samples=23 00:18:25.797 lat (usec) : 500=0.01%, 750=3.51%, 1000=9.13% 00:18:25.797 lat (msec) : 2=6.38%, 4=6.80%, 10=11.30%, 20=6.54%, 50=49.09% 00:18:25.797 lat (msec) : 100=6.57%, 250=0.60%, 500=0.06% 00:18:25.797 cpu : usr=99.27%, sys=0.20%, ctx=34, majf=0, minf=5549 00:18:25.797 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:18:25.797 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:25.797 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:25.797 issued rwts: total=65239,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:25.797 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:25.797 second_half: (groupid=0, jobs=1): err= 0: pid=86319: Thu Dec 5 19:08:40 2024 00:18:25.797 read: IOPS=2804, BW=11.0MiB/s (11.5MB/s)(255MiB/23250msec) 00:18:25.797 slat (nsec): min=3092, max=55196, avg=4518.74, stdev=1314.24 00:18:25.797 clat (usec): min=587, max=490537, avg=33374.22, stdev=16369.13 00:18:25.797 lat (usec): min=593, max=490544, avg=33378.74, stdev=16369.23 00:18:25.797 clat percentiles (msec): 00:18:25.797 | 1.00th=[ 5], 5.00th=[ 26], 10.00th=[ 29], 20.00th=[ 31], 00:18:25.797 | 30.00th=[ 31], 40.00th=[ 31], 50.00th=[ 32], 60.00th=[ 32], 00:18:25.797 | 70.00th=[ 32], 80.00th=[ 33], 90.00th=[ 38], 
95.00th=[ 42], 00:18:25.797 | 99.00th=[ 117], 99.50th=[ 148], 99.90th=[ 178], 99.95th=[ 213], 00:18:25.797 | 99.99th=[ 485] 00:18:25.797 write: IOPS=3547, BW=13.9MiB/s (14.5MB/s)(256MiB/18474msec); 0 zone resets 00:18:25.797 slat (usec): min=3, max=3006, avg= 6.94, stdev=17.96 00:18:25.797 clat (usec): min=334, max=77150, avg=12187.73, stdev=18586.03 00:18:25.797 lat (usec): min=342, max=77155, avg=12194.67, stdev=18586.29 00:18:25.797 clat percentiles (usec): 00:18:25.797 | 1.00th=[ 594], 5.00th=[ 693], 10.00th=[ 742], 20.00th=[ 857], 00:18:25.797 | 30.00th=[ 1045], 40.00th=[ 1778], 50.00th=[ 3654], 60.00th=[ 4817], 00:18:25.797 | 70.00th=[11076], 80.00th=[18482], 90.00th=[55313], 95.00th=[59507], 00:18:25.797 | 99.00th=[64750], 99.50th=[68682], 99.90th=[71828], 99.95th=[72877], 00:18:25.797 | 99.99th=[77071] 00:18:25.797 bw ( KiB/s): min= 144, max=46232, per=83.97%, avg=23831.27, stdev=13752.99, samples=22 00:18:25.797 iops : min= 36, max=11558, avg=5957.82, stdev=3438.25, samples=22 00:18:25.797 lat (usec) : 500=0.02%, 750=5.25%, 1000=8.94% 00:18:25.797 lat (msec) : 2=6.58%, 4=6.28%, 10=8.01%, 20=7.87%, 50=49.51% 00:18:25.797 lat (msec) : 100=6.82%, 250=0.69%, 500=0.02% 00:18:25.797 cpu : usr=99.21%, sys=0.15%, ctx=43, majf=0, minf=5575 00:18:25.797 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:18:25.797 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:25.797 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:25.797 issued rwts: total=65198,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:25.797 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:25.797 00:18:25.797 Run status group 0 (all jobs): 00:18:25.797 READ: bw=21.9MiB/s (23.0MB/s), 11.0MiB/s-11.0MiB/s (11.5MB/s-11.6MB/s), io=510MiB (534MB), run=23109-23250msec 00:18:25.797 WRITE: bw=27.7MiB/s (29.1MB/s), 13.9MiB/s-15.2MiB/s (14.5MB/s-15.9MB/s), io=512MiB (537MB), run=16858-18474msec 00:18:25.797 ----------------------------------------------------- 00:18:25.797 Suppressions used: 00:18:25.797 count bytes template 00:18:25.797 2 10 /usr/src/fio/parse.c 00:18:25.797 1 96 /usr/src/fio/iolog.c 00:18:25.797 1 8 libtcmalloc_minimal.so 00:18:25.797 1 904 libcrypto.so 00:18:25.797 ----------------------------------------------------- 00:18:25.797 00:18:25.797 19:08:42 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify-j2 00:18:25.797 19:08:42 ftl.ftl_fio_basic -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:25.797 19:08:42 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:18:25.797 19:08:42 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:18:25.797 19:08:42 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify-depth128 00:18:25.797 19:08:42 ftl.ftl_fio_basic -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:25.797 19:08:42 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:18:25.797 19:08:42 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:18:25.797 19:08:42 ftl.ftl_fio_basic -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:18:25.797 19:08:42 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:18:25.797 19:08:42 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 
00:18:25.797 19:08:42 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local sanitizers 00:18:25.797 19:08:42 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:25.797 19:08:42 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # shift 00:18:25.797 19:08:42 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # local asan_lib= 00:18:25.797 19:08:42 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:18:25.797 19:08:42 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:25.797 19:08:42 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:18:25.797 19:08:42 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # grep libasan 00:18:25.797 19:08:42 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:18:25.797 19:08:42 ftl.ftl_fio_basic -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:18:25.797 19:08:42 ftl.ftl_fio_basic -- common/autotest_common.sh@1351 -- # break 00:18:25.798 19:08:42 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:18:25.798 19:08:42 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:18:25.798 test: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:18:25.798 fio-3.35 00:18:25.798 Starting 1 thread 00:18:40.693 00:18:40.693 test: (groupid=0, jobs=1): err= 0: pid=86620: Thu Dec 5 19:08:57 2024 00:18:40.693 read: IOPS=7643, BW=29.9MiB/s (31.3MB/s)(255MiB/8530msec) 00:18:40.693 slat (nsec): min=3136, max=33372, avg=4969.95, stdev=1082.89 00:18:40.693 clat (usec): min=537, max=39066, avg=16737.02, stdev=2710.85 00:18:40.693 lat (usec): min=541, max=39071, avg=16741.99, stdev=2710.84 00:18:40.693 clat percentiles (usec): 00:18:40.693 | 1.00th=[13566], 5.00th=[13829], 10.00th=[13960], 20.00th=[14353], 00:18:40.693 | 30.00th=[15533], 40.00th=[15795], 50.00th=[16057], 60.00th=[16188], 00:18:40.693 | 70.00th=[16712], 80.00th=[18744], 90.00th=[20579], 95.00th=[22152], 00:18:40.693 | 99.00th=[25560], 99.50th=[26346], 99.90th=[32375], 99.95th=[34341], 00:18:40.693 | 99.99th=[38536] 00:18:40.693 write: IOPS=11.8k, BW=45.9MiB/s (48.1MB/s)(256MiB/5575msec); 0 zone resets 00:18:40.693 slat (usec): min=4, max=2098, avg= 7.19, stdev= 9.25 00:18:40.693 clat (usec): min=435, max=62733, avg=10835.57, stdev=13299.46 00:18:40.693 lat (usec): min=441, max=62738, avg=10842.76, stdev=13299.46 00:18:40.693 clat percentiles (usec): 00:18:40.693 | 1.00th=[ 766], 5.00th=[ 1004], 10.00th=[ 1139], 20.00th=[ 1319], 00:18:40.693 | 30.00th=[ 1598], 40.00th=[ 2409], 50.00th=[ 6390], 60.00th=[ 8455], 00:18:40.693 | 70.00th=[10683], 80.00th=[14222], 90.00th=[36439], 95.00th=[39584], 00:18:40.693 | 99.00th=[52167], 99.50th=[55837], 99.90th=[59507], 99.95th=[60556], 00:18:40.693 | 99.99th=[61604] 00:18:40.693 bw ( KiB/s): min= 5680, max=77952, per=92.92%, avg=43690.67, stdev=16124.92, samples=12 00:18:40.693 iops : min= 1420, max=19488, avg=10922.67, stdev=4031.23, samples=12 00:18:40.693 lat (usec) : 500=0.01%, 750=0.43%, 1000=2.05% 00:18:40.693 lat (msec) : 2=15.80%, 4=2.91%, 10=12.67%, 20=51.77%, 50=13.58% 00:18:40.693 lat (msec) : 100=0.79% 00:18:40.693 cpu : usr=99.09%, 
sys=0.19%, ctx=26, majf=0, minf=5577 00:18:40.693 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:18:40.693 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:40.693 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:40.693 issued rwts: total=65202,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:40.693 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:40.693 00:18:40.693 Run status group 0 (all jobs): 00:18:40.693 READ: bw=29.9MiB/s (31.3MB/s), 29.9MiB/s-29.9MiB/s (31.3MB/s-31.3MB/s), io=255MiB (267MB), run=8530-8530msec 00:18:40.693 WRITE: bw=45.9MiB/s (48.1MB/s), 45.9MiB/s-45.9MiB/s (48.1MB/s-48.1MB/s), io=256MiB (268MB), run=5575-5575msec 00:18:40.954 ----------------------------------------------------- 00:18:40.954 Suppressions used: 00:18:40.954 count bytes template 00:18:40.954 1 5 /usr/src/fio/parse.c 00:18:40.954 2 192 /usr/src/fio/iolog.c 00:18:40.954 1 8 libtcmalloc_minimal.so 00:18:40.954 1 904 libcrypto.so 00:18:40.954 ----------------------------------------------------- 00:18:40.954 00:18:40.954 19:08:58 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify-depth128 00:18:40.954 19:08:58 ftl.ftl_fio_basic -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:40.954 19:08:58 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:18:40.954 19:08:58 ftl.ftl_fio_basic -- ftl/fio.sh@84 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:18:40.954 Remove shared memory files 00:18:40.954 19:08:58 ftl.ftl_fio_basic -- ftl/fio.sh@85 -- # remove_shm 00:18:40.954 19:08:58 ftl.ftl_fio_basic -- ftl/common.sh@204 -- # echo Remove shared memory files 00:18:40.954 19:08:58 ftl.ftl_fio_basic -- ftl/common.sh@205 -- # rm -f rm -f 00:18:40.954 19:08:58 ftl.ftl_fio_basic -- ftl/common.sh@206 -- # rm -f rm -f 00:18:40.954 19:08:58 ftl.ftl_fio_basic -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid69059 /dev/shm/spdk_tgt_trace.pid85005 00:18:40.954 19:08:58 ftl.ftl_fio_basic -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:18:40.954 19:08:58 ftl.ftl_fio_basic -- ftl/common.sh@209 -- # rm -f rm -f 00:18:40.954 ************************************ 00:18:40.954 END TEST ftl_fio_basic 00:18:40.954 ************************************ 00:18:40.954 00:18:40.954 real 1m2.643s 00:18:40.954 user 2m21.584s 00:18:40.954 sys 0m2.863s 00:18:40.954 19:08:58 ftl.ftl_fio_basic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:40.954 19:08:58 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:18:40.954 19:08:58 ftl -- ftl/ftl.sh@74 -- # run_test ftl_bdevperf /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0 00:18:40.954 19:08:58 ftl -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:18:40.955 19:08:58 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:40.955 19:08:58 ftl -- common/autotest_common.sh@10 -- # set +x 00:18:40.955 ************************************ 00:18:40.955 START TEST ftl_bdevperf 00:18:40.955 ************************************ 00:18:40.955 19:08:58 ftl.ftl_bdevperf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0 00:18:40.955 * Looking for test storage... 
00:18:40.955 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:18:40.955 19:08:58 ftl.ftl_bdevperf -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:18:40.955 19:08:58 ftl.ftl_bdevperf -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:18:40.955 19:08:58 ftl.ftl_bdevperf -- common/autotest_common.sh@1711 -- # lcov --version 00:18:41.216 19:08:58 ftl.ftl_bdevperf -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:18:41.216 19:08:58 ftl.ftl_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:41.216 19:08:58 ftl.ftl_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:41.216 19:08:58 ftl.ftl_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:41.216 19:08:58 ftl.ftl_bdevperf -- scripts/common.sh@336 -- # IFS=.-: 00:18:41.216 19:08:58 ftl.ftl_bdevperf -- scripts/common.sh@336 -- # read -ra ver1 00:18:41.216 19:08:58 ftl.ftl_bdevperf -- scripts/common.sh@337 -- # IFS=.-: 00:18:41.216 19:08:58 ftl.ftl_bdevperf -- scripts/common.sh@337 -- # read -ra ver2 00:18:41.216 19:08:58 ftl.ftl_bdevperf -- scripts/common.sh@338 -- # local 'op=<' 00:18:41.216 19:08:58 ftl.ftl_bdevperf -- scripts/common.sh@340 -- # ver1_l=2 00:18:41.216 19:08:58 ftl.ftl_bdevperf -- scripts/common.sh@341 -- # ver2_l=1 00:18:41.216 19:08:58 ftl.ftl_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:41.216 19:08:58 ftl.ftl_bdevperf -- scripts/common.sh@344 -- # case "$op" in 00:18:41.216 19:08:58 ftl.ftl_bdevperf -- scripts/common.sh@345 -- # : 1 00:18:41.216 19:08:58 ftl.ftl_bdevperf -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:41.216 19:08:58 ftl.ftl_bdevperf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:41.216 19:08:58 ftl.ftl_bdevperf -- scripts/common.sh@365 -- # decimal 1 00:18:41.216 19:08:58 ftl.ftl_bdevperf -- scripts/common.sh@353 -- # local d=1 00:18:41.216 19:08:58 ftl.ftl_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:41.216 19:08:58 ftl.ftl_bdevperf -- scripts/common.sh@355 -- # echo 1 00:18:41.216 19:08:58 ftl.ftl_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1 00:18:41.216 19:08:58 ftl.ftl_bdevperf -- scripts/common.sh@366 -- # decimal 2 00:18:41.216 19:08:58 ftl.ftl_bdevperf -- scripts/common.sh@353 -- # local d=2 00:18:41.216 19:08:58 ftl.ftl_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:41.216 19:08:58 ftl.ftl_bdevperf -- scripts/common.sh@355 -- # echo 2 00:18:41.216 19:08:58 ftl.ftl_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2 00:18:41.216 19:08:58 ftl.ftl_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:41.216 19:08:58 ftl.ftl_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:41.216 19:08:58 ftl.ftl_bdevperf -- scripts/common.sh@368 -- # return 0 00:18:41.216 19:08:58 ftl.ftl_bdevperf -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:41.216 19:08:58 ftl.ftl_bdevperf -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:18:41.216 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:41.216 --rc genhtml_branch_coverage=1 00:18:41.216 --rc genhtml_function_coverage=1 00:18:41.216 --rc genhtml_legend=1 00:18:41.216 --rc geninfo_all_blocks=1 00:18:41.216 --rc geninfo_unexecuted_blocks=1 00:18:41.216 00:18:41.216 ' 00:18:41.216 19:08:58 ftl.ftl_bdevperf -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:18:41.216 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:41.216 --rc genhtml_branch_coverage=1 00:18:41.216 
--rc genhtml_function_coverage=1 00:18:41.216 --rc genhtml_legend=1 00:18:41.216 --rc geninfo_all_blocks=1 00:18:41.216 --rc geninfo_unexecuted_blocks=1 00:18:41.216 00:18:41.216 ' 00:18:41.216 19:08:58 ftl.ftl_bdevperf -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:18:41.216 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:41.216 --rc genhtml_branch_coverage=1 00:18:41.216 --rc genhtml_function_coverage=1 00:18:41.216 --rc genhtml_legend=1 00:18:41.216 --rc geninfo_all_blocks=1 00:18:41.216 --rc geninfo_unexecuted_blocks=1 00:18:41.216 00:18:41.216 ' 00:18:41.216 19:08:58 ftl.ftl_bdevperf -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:18:41.216 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:41.216 --rc genhtml_branch_coverage=1 00:18:41.216 --rc genhtml_function_coverage=1 00:18:41.216 --rc genhtml_legend=1 00:18:41.216 --rc geninfo_all_blocks=1 00:18:41.216 --rc geninfo_unexecuted_blocks=1 00:18:41.216 00:18:41.216 ' 00:18:41.216 19:08:58 ftl.ftl_bdevperf -- ftl/bdevperf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:18:41.216 19:08:58 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 00:18:41.216 19:08:58 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:18:41.216 19:08:58 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:18:41.216 19:08:58 ftl.ftl_bdevperf -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:18:41.216 19:08:58 ftl.ftl_bdevperf -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:18:41.216 19:08:58 ftl.ftl_bdevperf -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:41.216 19:08:58 ftl.ftl_bdevperf -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:18:41.217 19:08:58 ftl.ftl_bdevperf -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:18:41.217 19:08:58 ftl.ftl_bdevperf -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:41.217 19:08:58 ftl.ftl_bdevperf -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:41.217 19:08:58 ftl.ftl_bdevperf -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:18:41.217 19:08:58 ftl.ftl_bdevperf -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:18:41.217 19:08:58 ftl.ftl_bdevperf -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:18:41.217 19:08:58 ftl.ftl_bdevperf -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:18:41.217 19:08:58 ftl.ftl_bdevperf -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:18:41.217 19:08:58 ftl.ftl_bdevperf -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:18:41.217 19:08:58 ftl.ftl_bdevperf -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:41.217 19:08:58 ftl.ftl_bdevperf -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:41.217 19:08:58 ftl.ftl_bdevperf -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:18:41.217 19:08:58 ftl.ftl_bdevperf -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:18:41.217 19:08:58 ftl.ftl_bdevperf -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:18:41.217 19:08:58 ftl.ftl_bdevperf -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:18:41.217 19:08:58 ftl.ftl_bdevperf -- ftl/common.sh@22 -- # export 
spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:18:41.217 19:08:58 ftl.ftl_bdevperf -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:18:41.217 19:08:58 ftl.ftl_bdevperf -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:18:41.217 19:08:58 ftl.ftl_bdevperf -- ftl/common.sh@23 -- # spdk_ini_pid= 00:18:41.217 19:08:58 ftl.ftl_bdevperf -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:18:41.217 19:08:58 ftl.ftl_bdevperf -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:18:41.217 19:08:58 ftl.ftl_bdevperf -- ftl/bdevperf.sh@11 -- # device=0000:00:11.0 00:18:41.217 19:08:58 ftl.ftl_bdevperf -- ftl/bdevperf.sh@12 -- # cache_device=0000:00:10.0 00:18:41.217 19:08:58 ftl.ftl_bdevperf -- ftl/bdevperf.sh@13 -- # use_append= 00:18:41.217 19:08:58 ftl.ftl_bdevperf -- ftl/bdevperf.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:41.217 19:08:58 ftl.ftl_bdevperf -- ftl/bdevperf.sh@15 -- # timeout=240 00:18:41.217 19:08:58 ftl.ftl_bdevperf -- ftl/bdevperf.sh@18 -- # bdevperf_pid=86859 00:18:41.217 19:08:58 ftl.ftl_bdevperf -- ftl/bdevperf.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0 00:18:41.217 19:08:58 ftl.ftl_bdevperf -- ftl/bdevperf.sh@20 -- # trap 'killprocess $bdevperf_pid; exit 1' SIGINT SIGTERM EXIT 00:18:41.217 19:08:58 ftl.ftl_bdevperf -- ftl/bdevperf.sh@21 -- # waitforlisten 86859 00:18:41.217 19:08:58 ftl.ftl_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 86859 ']' 00:18:41.217 19:08:58 ftl.ftl_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:41.217 19:08:58 ftl.ftl_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:41.217 19:08:58 ftl.ftl_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:41.217 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:41.217 19:08:58 ftl.ftl_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:41.217 19:08:58 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:18:41.217 [2024-12-05 19:08:58.660054] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 22.11.4 initialization... 
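The trace above starts bdevperf with -z (stay idle until told to run over RPC) and then blocks in waitforlisten until the application's RPC socket answers. A condensed sketch of that launch-and-wait pattern; the polling loop is a paraphrase of what the waitforlisten helper does, not its exact code, and /var/tmp/spdk.sock is the default RPC socket named in the trace:

    spdk=/home/vagrant/spdk_repo/spdk
    "$spdk"/build/examples/bdevperf -z -T ftl0 &     # flags copied from the trace
    bdevperf_pid=$!
    # The suite's killprocess helper is simplified to a plain kill here.
    trap 'kill $bdevperf_pid; exit 1' SIGINT SIGTERM EXIT
    # Poll the RPC server until it responds, which is what waitforlisten waits for.
    until "$spdk"/scripts/rpc.py -t 1 -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null; do
        sleep 0.1
    done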
00:18:41.217 [2024-12-05 19:08:58.660420] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86859 ] 00:18:41.478 [2024-12-05 19:08:58.814665] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:41.478 [2024-12-05 19:08:58.834720] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:42.051 19:08:59 ftl.ftl_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:42.051 19:08:59 ftl.ftl_bdevperf -- common/autotest_common.sh@868 -- # return 0 00:18:42.051 19:08:59 ftl.ftl_bdevperf -- ftl/bdevperf.sh@22 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:18:42.051 19:08:59 ftl.ftl_bdevperf -- ftl/common.sh@54 -- # local name=nvme0 00:18:42.051 19:08:59 ftl.ftl_bdevperf -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:18:42.051 19:08:59 ftl.ftl_bdevperf -- ftl/common.sh@56 -- # local size=103424 00:18:42.051 19:08:59 ftl.ftl_bdevperf -- ftl/common.sh@59 -- # local base_bdev 00:18:42.051 19:08:59 ftl.ftl_bdevperf -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:18:42.312 19:08:59 ftl.ftl_bdevperf -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:18:42.312 19:08:59 ftl.ftl_bdevperf -- ftl/common.sh@62 -- # local base_size 00:18:42.312 19:08:59 ftl.ftl_bdevperf -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:18:42.312 19:08:59 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:18:42.312 19:08:59 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:18:42.312 19:08:59 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # local bs 00:18:42.312 19:08:59 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:18:42.312 19:08:59 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:18:42.574 19:09:00 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:18:42.574 { 00:18:42.574 "name": "nvme0n1", 00:18:42.574 "aliases": [ 00:18:42.574 "d62f703d-c8ec-40a3-a1c2-64e915a28c39" 00:18:42.574 ], 00:18:42.574 "product_name": "NVMe disk", 00:18:42.574 "block_size": 4096, 00:18:42.574 "num_blocks": 1310720, 00:18:42.574 "uuid": "d62f703d-c8ec-40a3-a1c2-64e915a28c39", 00:18:42.574 "numa_id": -1, 00:18:42.574 "assigned_rate_limits": { 00:18:42.574 "rw_ios_per_sec": 0, 00:18:42.574 "rw_mbytes_per_sec": 0, 00:18:42.574 "r_mbytes_per_sec": 0, 00:18:42.574 "w_mbytes_per_sec": 0 00:18:42.574 }, 00:18:42.574 "claimed": true, 00:18:42.574 "claim_type": "read_many_write_one", 00:18:42.574 "zoned": false, 00:18:42.574 "supported_io_types": { 00:18:42.574 "read": true, 00:18:42.574 "write": true, 00:18:42.574 "unmap": true, 00:18:42.574 "flush": true, 00:18:42.574 "reset": true, 00:18:42.574 "nvme_admin": true, 00:18:42.574 "nvme_io": true, 00:18:42.574 "nvme_io_md": false, 00:18:42.574 "write_zeroes": true, 00:18:42.574 "zcopy": false, 00:18:42.574 "get_zone_info": false, 00:18:42.574 "zone_management": false, 00:18:42.574 "zone_append": false, 00:18:42.574 "compare": true, 00:18:42.574 "compare_and_write": false, 00:18:42.574 "abort": true, 00:18:42.574 "seek_hole": false, 00:18:42.574 "seek_data": false, 00:18:42.574 "copy": true, 00:18:42.574 "nvme_iov_md": false 00:18:42.574 }, 00:18:42.574 "driver_specific": { 00:18:42.574 
"nvme": [ 00:18:42.574 { 00:18:42.574 "pci_address": "0000:00:11.0", 00:18:42.574 "trid": { 00:18:42.574 "trtype": "PCIe", 00:18:42.574 "traddr": "0000:00:11.0" 00:18:42.574 }, 00:18:42.574 "ctrlr_data": { 00:18:42.574 "cntlid": 0, 00:18:42.574 "vendor_id": "0x1b36", 00:18:42.574 "model_number": "QEMU NVMe Ctrl", 00:18:42.574 "serial_number": "12341", 00:18:42.574 "firmware_revision": "8.0.0", 00:18:42.574 "subnqn": "nqn.2019-08.org.qemu:12341", 00:18:42.574 "oacs": { 00:18:42.574 "security": 0, 00:18:42.574 "format": 1, 00:18:42.574 "firmware": 0, 00:18:42.574 "ns_manage": 1 00:18:42.574 }, 00:18:42.574 "multi_ctrlr": false, 00:18:42.574 "ana_reporting": false 00:18:42.574 }, 00:18:42.574 "vs": { 00:18:42.574 "nvme_version": "1.4" 00:18:42.574 }, 00:18:42.574 "ns_data": { 00:18:42.574 "id": 1, 00:18:42.574 "can_share": false 00:18:42.574 } 00:18:42.574 } 00:18:42.574 ], 00:18:42.574 "mp_policy": "active_passive" 00:18:42.574 } 00:18:42.574 } 00:18:42.574 ]' 00:18:42.574 19:09:00 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:18:42.574 19:09:00 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:18:42.574 19:09:00 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:18:42.574 19:09:00 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=1310720 00:18:42.574 19:09:00 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:18:42.574 19:09:00 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 5120 00:18:42.574 19:09:00 ftl.ftl_bdevperf -- ftl/common.sh@63 -- # base_size=5120 00:18:42.574 19:09:00 ftl.ftl_bdevperf -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:18:42.574 19:09:00 ftl.ftl_bdevperf -- ftl/common.sh@67 -- # clear_lvols 00:18:42.574 19:09:00 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:18:42.574 19:09:00 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:18:42.836 19:09:00 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # stores=176315a6-7f27-4ebd-abc8-2cb891891970 00:18:42.836 19:09:00 ftl.ftl_bdevperf -- ftl/common.sh@29 -- # for lvs in $stores 00:18:42.836 19:09:00 ftl.ftl_bdevperf -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 176315a6-7f27-4ebd-abc8-2cb891891970 00:18:43.098 19:09:00 ftl.ftl_bdevperf -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:18:43.360 19:09:00 ftl.ftl_bdevperf -- ftl/common.sh@68 -- # lvs=bbc61751-165f-4e99-8350-a0a35923f718 00:18:43.360 19:09:00 ftl.ftl_bdevperf -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u bbc61751-165f-4e99-8350-a0a35923f718 00:18:43.360 19:09:00 ftl.ftl_bdevperf -- ftl/bdevperf.sh@22 -- # split_bdev=8d5cdf6e-5d2c-4814-a26d-99cd01e625f9 00:18:43.360 19:09:00 ftl.ftl_bdevperf -- ftl/bdevperf.sh@23 -- # create_nv_cache_bdev nvc0 0000:00:10.0 8d5cdf6e-5d2c-4814-a26d-99cd01e625f9 00:18:43.360 19:09:00 ftl.ftl_bdevperf -- ftl/common.sh@35 -- # local name=nvc0 00:18:43.360 19:09:00 ftl.ftl_bdevperf -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:18:43.360 19:09:00 ftl.ftl_bdevperf -- ftl/common.sh@37 -- # local base_bdev=8d5cdf6e-5d2c-4814-a26d-99cd01e625f9 00:18:43.360 19:09:00 ftl.ftl_bdevperf -- ftl/common.sh@38 -- # local cache_size= 00:18:43.360 19:09:00 ftl.ftl_bdevperf -- ftl/common.sh@41 -- # get_bdev_size 8d5cdf6e-5d2c-4814-a26d-99cd01e625f9 00:18:43.360 19:09:00 
ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # local bdev_name=8d5cdf6e-5d2c-4814-a26d-99cd01e625f9 00:18:43.360 19:09:00 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:18:43.360 19:09:00 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # local bs 00:18:43.360 19:09:00 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:18:43.360 19:09:00 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 8d5cdf6e-5d2c-4814-a26d-99cd01e625f9 00:18:43.622 19:09:01 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:18:43.622 { 00:18:43.622 "name": "8d5cdf6e-5d2c-4814-a26d-99cd01e625f9", 00:18:43.622 "aliases": [ 00:18:43.622 "lvs/nvme0n1p0" 00:18:43.622 ], 00:18:43.622 "product_name": "Logical Volume", 00:18:43.622 "block_size": 4096, 00:18:43.622 "num_blocks": 26476544, 00:18:43.622 "uuid": "8d5cdf6e-5d2c-4814-a26d-99cd01e625f9", 00:18:43.622 "assigned_rate_limits": { 00:18:43.622 "rw_ios_per_sec": 0, 00:18:43.622 "rw_mbytes_per_sec": 0, 00:18:43.622 "r_mbytes_per_sec": 0, 00:18:43.622 "w_mbytes_per_sec": 0 00:18:43.622 }, 00:18:43.622 "claimed": false, 00:18:43.622 "zoned": false, 00:18:43.622 "supported_io_types": { 00:18:43.622 "read": true, 00:18:43.622 "write": true, 00:18:43.622 "unmap": true, 00:18:43.622 "flush": false, 00:18:43.622 "reset": true, 00:18:43.622 "nvme_admin": false, 00:18:43.622 "nvme_io": false, 00:18:43.622 "nvme_io_md": false, 00:18:43.622 "write_zeroes": true, 00:18:43.622 "zcopy": false, 00:18:43.622 "get_zone_info": false, 00:18:43.622 "zone_management": false, 00:18:43.622 "zone_append": false, 00:18:43.622 "compare": false, 00:18:43.622 "compare_and_write": false, 00:18:43.622 "abort": false, 00:18:43.622 "seek_hole": true, 00:18:43.622 "seek_data": true, 00:18:43.622 "copy": false, 00:18:43.622 "nvme_iov_md": false 00:18:43.622 }, 00:18:43.622 "driver_specific": { 00:18:43.622 "lvol": { 00:18:43.622 "lvol_store_uuid": "bbc61751-165f-4e99-8350-a0a35923f718", 00:18:43.622 "base_bdev": "nvme0n1", 00:18:43.622 "thin_provision": true, 00:18:43.622 "num_allocated_clusters": 0, 00:18:43.622 "snapshot": false, 00:18:43.622 "clone": false, 00:18:43.622 "esnap_clone": false 00:18:43.622 } 00:18:43.622 } 00:18:43.622 } 00:18:43.622 ]' 00:18:43.622 19:09:01 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:18:43.622 19:09:01 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:18:43.622 19:09:01 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:18:43.622 19:09:01 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=26476544 00:18:43.622 19:09:01 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:18:43.622 19:09:01 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 103424 00:18:43.622 19:09:01 ftl.ftl_bdevperf -- ftl/common.sh@41 -- # local base_size=5171 00:18:43.623 19:09:01 ftl.ftl_bdevperf -- ftl/common.sh@44 -- # local nvc_bdev 00:18:43.623 19:09:01 ftl.ftl_bdevperf -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:18:43.884 19:09:01 ftl.ftl_bdevperf -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:18:43.884 19:09:01 ftl.ftl_bdevperf -- ftl/common.sh@47 -- # [[ -z '' ]] 00:18:43.884 19:09:01 ftl.ftl_bdevperf -- ftl/common.sh@48 -- # get_bdev_size 8d5cdf6e-5d2c-4814-a26d-99cd01e625f9 00:18:43.884 19:09:01 ftl.ftl_bdevperf -- 
common/autotest_common.sh@1382 -- # local bdev_name=8d5cdf6e-5d2c-4814-a26d-99cd01e625f9 00:18:43.884 19:09:01 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:18:43.884 19:09:01 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # local bs 00:18:43.884 19:09:01 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:18:43.884 19:09:01 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 8d5cdf6e-5d2c-4814-a26d-99cd01e625f9 00:18:44.143 19:09:01 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:18:44.143 { 00:18:44.143 "name": "8d5cdf6e-5d2c-4814-a26d-99cd01e625f9", 00:18:44.143 "aliases": [ 00:18:44.143 "lvs/nvme0n1p0" 00:18:44.143 ], 00:18:44.143 "product_name": "Logical Volume", 00:18:44.143 "block_size": 4096, 00:18:44.143 "num_blocks": 26476544, 00:18:44.143 "uuid": "8d5cdf6e-5d2c-4814-a26d-99cd01e625f9", 00:18:44.143 "assigned_rate_limits": { 00:18:44.143 "rw_ios_per_sec": 0, 00:18:44.143 "rw_mbytes_per_sec": 0, 00:18:44.143 "r_mbytes_per_sec": 0, 00:18:44.143 "w_mbytes_per_sec": 0 00:18:44.143 }, 00:18:44.143 "claimed": false, 00:18:44.143 "zoned": false, 00:18:44.143 "supported_io_types": { 00:18:44.143 "read": true, 00:18:44.143 "write": true, 00:18:44.143 "unmap": true, 00:18:44.143 "flush": false, 00:18:44.143 "reset": true, 00:18:44.143 "nvme_admin": false, 00:18:44.143 "nvme_io": false, 00:18:44.143 "nvme_io_md": false, 00:18:44.143 "write_zeroes": true, 00:18:44.143 "zcopy": false, 00:18:44.143 "get_zone_info": false, 00:18:44.143 "zone_management": false, 00:18:44.143 "zone_append": false, 00:18:44.143 "compare": false, 00:18:44.143 "compare_and_write": false, 00:18:44.143 "abort": false, 00:18:44.143 "seek_hole": true, 00:18:44.143 "seek_data": true, 00:18:44.143 "copy": false, 00:18:44.143 "nvme_iov_md": false 00:18:44.143 }, 00:18:44.143 "driver_specific": { 00:18:44.143 "lvol": { 00:18:44.143 "lvol_store_uuid": "bbc61751-165f-4e99-8350-a0a35923f718", 00:18:44.143 "base_bdev": "nvme0n1", 00:18:44.143 "thin_provision": true, 00:18:44.143 "num_allocated_clusters": 0, 00:18:44.143 "snapshot": false, 00:18:44.143 "clone": false, 00:18:44.143 "esnap_clone": false 00:18:44.143 } 00:18:44.143 } 00:18:44.143 } 00:18:44.143 ]' 00:18:44.143 19:09:01 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:18:44.143 19:09:01 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:18:44.143 19:09:01 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:18:44.143 19:09:01 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=26476544 00:18:44.143 19:09:01 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:18:44.143 19:09:01 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 103424 00:18:44.143 19:09:01 ftl.ftl_bdevperf -- ftl/common.sh@48 -- # cache_size=5171 00:18:44.143 19:09:01 ftl.ftl_bdevperf -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:18:44.402 19:09:01 ftl.ftl_bdevperf -- ftl/bdevperf.sh@23 -- # nv_cache=nvc0n1p0 00:18:44.402 19:09:01 ftl.ftl_bdevperf -- ftl/bdevperf.sh@25 -- # get_bdev_size 8d5cdf6e-5d2c-4814-a26d-99cd01e625f9 00:18:44.402 19:09:01 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # local bdev_name=8d5cdf6e-5d2c-4814-a26d-99cd01e625f9 00:18:44.402 19:09:01 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:18:44.402 19:09:01 ftl.ftl_bdevperf -- 
common/autotest_common.sh@1384 -- # local bs 00:18:44.402 19:09:01 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:18:44.402 19:09:01 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 8d5cdf6e-5d2c-4814-a26d-99cd01e625f9 00:18:44.660 19:09:02 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:18:44.660 { 00:18:44.660 "name": "8d5cdf6e-5d2c-4814-a26d-99cd01e625f9", 00:18:44.660 "aliases": [ 00:18:44.660 "lvs/nvme0n1p0" 00:18:44.660 ], 00:18:44.660 "product_name": "Logical Volume", 00:18:44.660 "block_size": 4096, 00:18:44.660 "num_blocks": 26476544, 00:18:44.660 "uuid": "8d5cdf6e-5d2c-4814-a26d-99cd01e625f9", 00:18:44.660 "assigned_rate_limits": { 00:18:44.660 "rw_ios_per_sec": 0, 00:18:44.660 "rw_mbytes_per_sec": 0, 00:18:44.660 "r_mbytes_per_sec": 0, 00:18:44.660 "w_mbytes_per_sec": 0 00:18:44.660 }, 00:18:44.660 "claimed": false, 00:18:44.660 "zoned": false, 00:18:44.660 "supported_io_types": { 00:18:44.660 "read": true, 00:18:44.660 "write": true, 00:18:44.660 "unmap": true, 00:18:44.660 "flush": false, 00:18:44.660 "reset": true, 00:18:44.660 "nvme_admin": false, 00:18:44.660 "nvme_io": false, 00:18:44.660 "nvme_io_md": false, 00:18:44.660 "write_zeroes": true, 00:18:44.660 "zcopy": false, 00:18:44.660 "get_zone_info": false, 00:18:44.660 "zone_management": false, 00:18:44.660 "zone_append": false, 00:18:44.660 "compare": false, 00:18:44.660 "compare_and_write": false, 00:18:44.660 "abort": false, 00:18:44.660 "seek_hole": true, 00:18:44.660 "seek_data": true, 00:18:44.660 "copy": false, 00:18:44.660 "nvme_iov_md": false 00:18:44.660 }, 00:18:44.660 "driver_specific": { 00:18:44.660 "lvol": { 00:18:44.660 "lvol_store_uuid": "bbc61751-165f-4e99-8350-a0a35923f718", 00:18:44.660 "base_bdev": "nvme0n1", 00:18:44.660 "thin_provision": true, 00:18:44.660 "num_allocated_clusters": 0, 00:18:44.660 "snapshot": false, 00:18:44.660 "clone": false, 00:18:44.660 "esnap_clone": false 00:18:44.660 } 00:18:44.660 } 00:18:44.660 } 00:18:44.660 ]' 00:18:44.660 19:09:02 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:18:44.660 19:09:02 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:18:44.660 19:09:02 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:18:44.660 19:09:02 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=26476544 00:18:44.660 19:09:02 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:18:44.660 19:09:02 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 103424 00:18:44.660 19:09:02 ftl.ftl_bdevperf -- ftl/bdevperf.sh@25 -- # l2p_dram_size_mb=20 00:18:44.660 19:09:02 ftl.ftl_bdevperf -- ftl/bdevperf.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 8d5cdf6e-5d2c-4814-a26d-99cd01e625f9 -c nvc0n1p0 --l2p_dram_limit 20 00:18:44.920 [2024-12-05 19:09:02.297101] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:44.920 [2024-12-05 19:09:02.297139] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:18:44.920 [2024-12-05 19:09:02.297151] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:18:44.920 [2024-12-05 19:09:02.297161] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:44.920 [2024-12-05 19:09:02.297198] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:44.920 [2024-12-05 19:09:02.297206] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:44.920 [2024-12-05 19:09:02.297215] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:18:44.920 [2024-12-05 19:09:02.297221] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:44.920 [2024-12-05 19:09:02.297236] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:18:44.920 [2024-12-05 19:09:02.297434] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:18:44.920 [2024-12-05 19:09:02.297448] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:44.920 [2024-12-05 19:09:02.297456] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:44.920 [2024-12-05 19:09:02.297464] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.214 ms 00:18:44.920 [2024-12-05 19:09:02.297470] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:44.920 [2024-12-05 19:09:02.297491] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 8130fbd3-8b00-4c46-a60c-eeebd15f328f 00:18:44.920 [2024-12-05 19:09:02.298426] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:44.920 [2024-12-05 19:09:02.298452] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:18:44.920 [2024-12-05 19:09:02.298460] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:18:44.920 [2024-12-05 19:09:02.298469] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:44.920 [2024-12-05 19:09:02.303126] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:44.920 [2024-12-05 19:09:02.303156] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:44.920 [2024-12-05 19:09:02.303164] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.605 ms 00:18:44.920 [2024-12-05 19:09:02.303173] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:44.920 [2024-12-05 19:09:02.303231] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:44.920 [2024-12-05 19:09:02.303239] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:44.920 [2024-12-05 19:09:02.303247] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:18:44.920 [2024-12-05 19:09:02.303268] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:44.920 [2024-12-05 19:09:02.303294] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:44.920 [2024-12-05 19:09:02.303302] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:18:44.920 [2024-12-05 19:09:02.303309] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:18:44.920 [2024-12-05 19:09:02.303316] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:44.920 [2024-12-05 19:09:02.303330] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:18:44.920 [2024-12-05 19:09:02.304567] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:44.920 [2024-12-05 19:09:02.304590] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:44.920 [2024-12-05 19:09:02.304601] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.238 ms 00:18:44.920 [2024-12-05 19:09:02.304610] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:44.920 [2024-12-05 19:09:02.304636] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:44.920 [2024-12-05 19:09:02.304643] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:18:44.920 [2024-12-05 19:09:02.304652] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:18:44.920 [2024-12-05 19:09:02.304660] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:44.920 [2024-12-05 19:09:02.304672] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:18:44.920 [2024-12-05 19:09:02.304866] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:18:44.920 [2024-12-05 19:09:02.304878] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:18:44.920 [2024-12-05 19:09:02.304889] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:18:44.920 [2024-12-05 19:09:02.304898] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:18:44.920 [2024-12-05 19:09:02.304909] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:18:44.920 [2024-12-05 19:09:02.304917] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:18:44.920 [2024-12-05 19:09:02.304922] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:18:44.920 [2024-12-05 19:09:02.304929] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:18:44.921 [2024-12-05 19:09:02.304935] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:18:44.921 [2024-12-05 19:09:02.304942] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:44.921 [2024-12-05 19:09:02.304949] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:18:44.921 [2024-12-05 19:09:02.304960] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.271 ms 00:18:44.921 [2024-12-05 19:09:02.304965] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:44.921 [2024-12-05 19:09:02.305031] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:44.921 [2024-12-05 19:09:02.305037] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:18:44.921 [2024-12-05 19:09:02.305044] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:18:44.921 [2024-12-05 19:09:02.305052] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:44.921 [2024-12-05 19:09:02.305130] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:18:44.921 [2024-12-05 19:09:02.305138] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:18:44.921 [2024-12-05 19:09:02.305145] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:44.921 [2024-12-05 19:09:02.305151] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:44.921 [2024-12-05 19:09:02.305161] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:18:44.921 [2024-12-05 19:09:02.305166] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:18:44.921 [2024-12-05 19:09:02.305172] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:18:44.921 
[2024-12-05 19:09:02.305178] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:18:44.921 [2024-12-05 19:09:02.305184] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:18:44.921 [2024-12-05 19:09:02.305189] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:44.921 [2024-12-05 19:09:02.305196] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:18:44.921 [2024-12-05 19:09:02.305202] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:18:44.921 [2024-12-05 19:09:02.305210] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:44.921 [2024-12-05 19:09:02.305217] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:18:44.921 [2024-12-05 19:09:02.305225] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:18:44.921 [2024-12-05 19:09:02.305231] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:44.921 [2024-12-05 19:09:02.305238] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:18:44.921 [2024-12-05 19:09:02.305244] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:18:44.921 [2024-12-05 19:09:02.305277] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:44.921 [2024-12-05 19:09:02.305284] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:18:44.921 [2024-12-05 19:09:02.305291] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:18:44.921 [2024-12-05 19:09:02.305297] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:44.921 [2024-12-05 19:09:02.305305] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:18:44.921 [2024-12-05 19:09:02.305311] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:18:44.921 [2024-12-05 19:09:02.305318] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:44.921 [2024-12-05 19:09:02.305324] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:18:44.921 [2024-12-05 19:09:02.305331] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:18:44.921 [2024-12-05 19:09:02.305337] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:44.921 [2024-12-05 19:09:02.305346] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:18:44.921 [2024-12-05 19:09:02.305352] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:18:44.921 [2024-12-05 19:09:02.305359] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:44.921 [2024-12-05 19:09:02.305366] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:18:44.921 [2024-12-05 19:09:02.305373] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:18:44.921 [2024-12-05 19:09:02.305378] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:44.921 [2024-12-05 19:09:02.305386] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:18:44.921 [2024-12-05 19:09:02.305391] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:18:44.921 [2024-12-05 19:09:02.305400] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:44.921 [2024-12-05 19:09:02.305406] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:18:44.921 [2024-12-05 19:09:02.305413] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 
offset: 113.62 MiB 00:18:44.921 [2024-12-05 19:09:02.305419] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:44.921 [2024-12-05 19:09:02.305427] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:18:44.921 [2024-12-05 19:09:02.305433] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:18:44.921 [2024-12-05 19:09:02.305440] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:44.921 [2024-12-05 19:09:02.305445] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:18:44.921 [2024-12-05 19:09:02.305454] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:18:44.921 [2024-12-05 19:09:02.305460] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:44.921 [2024-12-05 19:09:02.305468] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:44.921 [2024-12-05 19:09:02.305475] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:18:44.921 [2024-12-05 19:09:02.305482] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:18:44.921 [2024-12-05 19:09:02.305488] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:18:44.921 [2024-12-05 19:09:02.305498] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:18:44.921 [2024-12-05 19:09:02.305504] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:18:44.921 [2024-12-05 19:09:02.305520] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:18:44.921 [2024-12-05 19:09:02.305527] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:18:44.921 [2024-12-05 19:09:02.305537] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:44.921 [2024-12-05 19:09:02.305544] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:18:44.921 [2024-12-05 19:09:02.305552] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:18:44.921 [2024-12-05 19:09:02.305559] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:18:44.921 [2024-12-05 19:09:02.305567] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:18:44.921 [2024-12-05 19:09:02.305573] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:18:44.921 [2024-12-05 19:09:02.305581] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:18:44.921 [2024-12-05 19:09:02.305588] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:18:44.921 [2024-12-05 19:09:02.305600] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:18:44.921 [2024-12-05 19:09:02.305606] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:18:44.921 [2024-12-05 19:09:02.305612] upgrade/ftl_sb_v5.c: 
416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:18:44.921 [2024-12-05 19:09:02.305617] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:18:44.921 [2024-12-05 19:09:02.305624] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:18:44.921 [2024-12-05 19:09:02.305630] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:18:44.921 [2024-12-05 19:09:02.305637] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:18:44.921 [2024-12-05 19:09:02.305642] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:18:44.921 [2024-12-05 19:09:02.305651] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:44.921 [2024-12-05 19:09:02.305657] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:18:44.921 [2024-12-05 19:09:02.305664] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:18:44.921 [2024-12-05 19:09:02.305669] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:18:44.921 [2024-12-05 19:09:02.305676] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:18:44.921 [2024-12-05 19:09:02.305681] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:44.921 [2024-12-05 19:09:02.305689] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:18:44.921 [2024-12-05 19:09:02.305695] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.603 ms 00:18:44.921 [2024-12-05 19:09:02.305701] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:44.921 [2024-12-05 19:09:02.305725] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
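Before the scrub begins, the layout dump above pins down the geometry, and its numbers cross-check: 20971520 L2P entries at an address size of 4 bytes is exactly the 80.00 MiB l2p region, and at the 4096-byte block size those entries map 80 GiB of user-addressable space out of the 102400 MiB data_btm region (FTL holds back the remainder, largely as overprovisioning). The --l2p_dram_limit 20 passed to bdev_ftl_create means only a fraction of that 80 MiB table can stay resident, which the later "l2p maximum resident size is: 19 (of 20) MiB" notice confirms. A quick shell check of the arithmetic:

    echo $(( 20971520 * 4 / 1024 / 1024 ))      # 80    -> full L2P table in MiB ("Region l2p ... blocks: 80.00 MiB")
    echo $(( 20971520 * 4096 / 1024 / 1024 ))   # 81920 -> mapped space in MiB (80 GiB of the 102400 MiB data region)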
00:18:44.921 [2024-12-05 19:09:02.305733] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:18:49.127 [2024-12-05 19:09:05.994660] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:49.127 [2024-12-05 19:09:05.994992] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:18:49.127 [2024-12-05 19:09:05.995023] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3688.917 ms 00:18:49.127 [2024-12-05 19:09:05.995035] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:49.127 [2024-12-05 19:09:06.008872] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:49.127 [2024-12-05 19:09:06.008934] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:49.127 [2024-12-05 19:09:06.008948] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.688 ms 00:18:49.127 [2024-12-05 19:09:06.008962] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:49.127 [2024-12-05 19:09:06.009088] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:49.127 [2024-12-05 19:09:06.009102] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:18:49.127 [2024-12-05 19:09:06.009116] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.073 ms 00:18:49.127 [2024-12-05 19:09:06.009127] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:49.127 [2024-12-05 19:09:06.031481] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:49.127 [2024-12-05 19:09:06.031558] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:49.127 [2024-12-05 19:09:06.031577] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.313 ms 00:18:49.127 [2024-12-05 19:09:06.031592] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:49.127 [2024-12-05 19:09:06.031643] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:49.127 [2024-12-05 19:09:06.031664] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:49.127 [2024-12-05 19:09:06.031677] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:18:49.127 [2024-12-05 19:09:06.031691] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:49.127 [2024-12-05 19:09:06.032391] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:49.127 [2024-12-05 19:09:06.032448] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:49.127 [2024-12-05 19:09:06.032465] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.600 ms 00:18:49.127 [2024-12-05 19:09:06.032484] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:49.127 [2024-12-05 19:09:06.032647] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:49.127 [2024-12-05 19:09:06.032720] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:49.127 [2024-12-05 19:09:06.032741] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.131 ms 00:18:49.127 [2024-12-05 19:09:06.032763] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:49.127 [2024-12-05 19:09:06.041548] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:49.127 [2024-12-05 19:09:06.041608] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:49.128 [2024-12-05 
19:09:06.041629] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.721 ms 00:18:49.128 [2024-12-05 19:09:06.041642] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:49.128 [2024-12-05 19:09:06.051757] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 19 (of 20) MiB 00:18:49.128 [2024-12-05 19:09:06.059486] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:49.128 [2024-12-05 19:09:06.059538] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:18:49.128 [2024-12-05 19:09:06.059556] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.731 ms 00:18:49.128 [2024-12-05 19:09:06.059569] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:49.128 [2024-12-05 19:09:06.142420] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:49.128 [2024-12-05 19:09:06.142667] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:18:49.128 [2024-12-05 19:09:06.142699] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 82.815 ms 00:18:49.128 [2024-12-05 19:09:06.142713] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:49.128 [2024-12-05 19:09:06.143024] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:49.128 [2024-12-05 19:09:06.143054] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:18:49.128 [2024-12-05 19:09:06.143067] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.165 ms 00:18:49.128 [2024-12-05 19:09:06.143076] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:49.128 [2024-12-05 19:09:06.149387] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:49.128 [2024-12-05 19:09:06.149602] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:18:49.128 [2024-12-05 19:09:06.149629] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.285 ms 00:18:49.128 [2024-12-05 19:09:06.149639] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:49.128 [2024-12-05 19:09:06.154820] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:49.128 [2024-12-05 19:09:06.154868] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:18:49.128 [2024-12-05 19:09:06.154881] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.064 ms 00:18:49.128 [2024-12-05 19:09:06.154889] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:49.128 [2024-12-05 19:09:06.155210] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:49.128 [2024-12-05 19:09:06.155221] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:18:49.128 [2024-12-05 19:09:06.155236] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.277 ms 00:18:49.128 [2024-12-05 19:09:06.155309] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:49.128 [2024-12-05 19:09:06.194560] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:49.128 [2024-12-05 19:09:06.194616] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:18:49.128 [2024-12-05 19:09:06.194632] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.214 ms 00:18:49.128 [2024-12-05 19:09:06.194641] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:49.128 [2024-12-05 19:09:06.202012] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:49.128 [2024-12-05 19:09:06.202063] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:18:49.128 [2024-12-05 19:09:06.202077] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.305 ms 00:18:49.128 [2024-12-05 19:09:06.202086] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:49.128 [2024-12-05 19:09:06.208217] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:49.128 [2024-12-05 19:09:06.208284] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:18:49.128 [2024-12-05 19:09:06.208297] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.080 ms 00:18:49.128 [2024-12-05 19:09:06.208305] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:49.128 [2024-12-05 19:09:06.214565] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:49.128 [2024-12-05 19:09:06.214614] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:18:49.128 [2024-12-05 19:09:06.214631] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.208 ms 00:18:49.128 [2024-12-05 19:09:06.214640] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:49.128 [2024-12-05 19:09:06.214692] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:49.128 [2024-12-05 19:09:06.214706] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:18:49.128 [2024-12-05 19:09:06.214719] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:18:49.128 [2024-12-05 19:09:06.214727] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:49.128 [2024-12-05 19:09:06.214801] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:49.128 [2024-12-05 19:09:06.214810] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:18:49.128 [2024-12-05 19:09:06.214821] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:18:49.128 [2024-12-05 19:09:06.214830] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:49.128 [2024-12-05 19:09:06.216034] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3918.421 ms, result 0 00:18:49.128 { 00:18:49.128 "name": "ftl0", 00:18:49.128 "uuid": "8130fbd3-8b00-4c46-a60c-eeebd15f328f" 00:18:49.128 } 00:18:49.128 19:09:06 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # jq -r .name 00:18:49.128 19:09:06 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_stats -b ftl0 00:18:49.128 19:09:06 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # grep -qw ftl0 00:18:49.128 19:09:06 ftl.ftl_bdevperf -- ftl/bdevperf.sh@30 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 1 -w randwrite -t 4 -o 69632 00:18:49.128 [2024-12-05 19:09:06.618283] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:18:49.128 I/O size of 69632 is greater than zero copy threshold (65536). 00:18:49.128 Zero copy mechanism will not be used. 00:18:49.128 Running I/O for 4 seconds... 
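Every management step above is traced as an Action / name / duration / status quartet, and the closing summary (FTL startup, 3918.421 ms) is dominated by the NV cache scrub alone (3688.917 ms, roughly 94% of the total). Summing the per-step durations is a handy cross-check when hunting startup regressions; a sketch, assuming the console output was saved to a hypothetical file named ftl_startup.log (the summary line itself says "duration =", so it is not double-counted):

    grep -o 'duration: [0-9.]* ms' ftl_startup.log \
        | awk '{ total += $2 } END { printf "total: %.3f ms\n", total }'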
00:18:51.454 856.00 IOPS, 56.84 MiB/s
[2024-12-05T19:09:09.953Z] 953.00 IOPS, 63.29 MiB/s
[2024-12-05T19:09:10.889Z] 1107.33 IOPS, 73.53 MiB/s
[2024-12-05T19:09:10.889Z] 1710.75 IOPS, 113.60 MiB/s
00:18:53.330 Latency(us)
00:18:53.330 [2024-12-05T19:09:10.889Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:18:53.330 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 1, IO size: 69632)
00:18:53.330 ftl0 : 4.00 1710.24 113.57 0.00 0.00 615.86 150.45 2873.50
00:18:53.330 [2024-12-05T19:09:10.889Z] ===================================================================================================================
00:18:53.330 [2024-12-05T19:09:10.889Z] Total : 1710.24 113.57 0.00 0.00 615.86 150.45 2873.50
00:18:53.330 {
00:18:53.330 "results": [
00:18:53.330 {
00:18:53.330 "job": "ftl0",
00:18:53.330 "core_mask": "0x1",
00:18:53.330 "workload": "randwrite",
00:18:53.330 "status": "finished",
00:18:53.330 "queue_depth": 1,
00:18:53.330 "io_size": 69632,
00:18:53.330 "runtime": 4.001771,
00:18:53.330 "iops": 1710.2427900047255,
00:18:53.330 "mibps": 113.5708102737513,
00:18:53.330 "io_failed": 0,
00:18:53.330 "io_timeout": 0,
00:18:53.330 "avg_latency_us": 615.8596592186306,
00:18:53.330 "min_latency_us": 150.44923076923078,
00:18:53.330 "max_latency_us": 2873.5015384615385
00:18:53.330 }
00:18:53.330 ],
00:18:53.330 "core_count": 1
00:18:53.330 }
00:18:53.330 [2024-12-05 19:09:10.627408] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0
00:18:53.330 19:09:10 ftl.ftl_bdevperf -- ftl/bdevperf.sh@31 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w randwrite -t 4 -o 4096
[2024-12-05 19:09:10.732185] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0
00:18:55.219 Running I/O for 4 seconds...
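The first pass drove 69632-byte (68 KiB) random writes at queue depth 1; 69632 is 65536 + 4096, just over the 65536-byte zero-copy threshold, hence the earlier notice that zero copy is skipped. The reported IOPS and MiB/s columns are mutually consistent, an easy sanity check to apply to any of these runs:

    awk 'BEGIN { print 1710.2427900047255 * 69632 / 1048576 }'   # 113.571 MiB/s, matching the reported "mibps"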
00:18:55.219 6229.00 IOPS, 24.33 MiB/s
[2024-12-05T19:09:14.163Z] 6062.00 IOPS, 23.68 MiB/s
[2024-12-05T19:09:15.107Z] 6168.33 IOPS, 24.10 MiB/s
[2024-12-05T19:09:15.107Z] 6101.00 IOPS, 23.83 MiB/s
00:18:57.548 Latency(us)
00:18:57.548 [2024-12-05T19:09:15.107Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:18:57.548 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 128, IO size: 4096)
00:18:57.548 ftl0 : 4.03 6089.65 23.79 0.00 0.00 20950.33 201.65 43354.58
00:18:57.548 [2024-12-05T19:09:15.107Z] ===================================================================================================================
00:18:57.548 [2024-12-05T19:09:15.107Z] Total : 6089.65 23.79 0.00 0.00 20950.33 0.00 43354.58
[2024-12-05 19:09:14.768027] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0
00:18:57.548 {
00:18:57.548 "results": [
00:18:57.548 {
00:18:57.548 "job": "ftl0",
00:18:57.548 "core_mask": "0x1",
00:18:57.548 "workload": "randwrite",
00:18:57.548 "status": "finished",
00:18:57.548 "queue_depth": 128,
00:18:57.548 "io_size": 4096,
00:18:57.548 "runtime": 4.028474,
00:18:57.548 "iops": 6089.650820633322,
00:18:57.548 "mibps": 23.787698518098914,
00:18:57.548 "io_failed": 0,
00:18:57.548 "io_timeout": 0,
00:18:57.548 "avg_latency_us": 20950.333414441422,
00:18:57.548 "min_latency_us": 201.64923076923077,
00:18:57.548 "max_latency_us": 43354.584615384614
00:18:57.548 }
00:18:57.548 ],
00:18:57.548 "core_count": 1
00:18:57.548 }
00:18:57.548 19:09:14 ftl.ftl_bdevperf -- ftl/bdevperf.sh@32 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w verify -t 4 -o 4096
[2024-12-05 19:09:14.887456] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0
00:18:59.428 Running I/O for 4 seconds...
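For the queue-depth-128 pass above, Little's law (in-flight I/O = IOPS x mean latency) very nearly recovers the configured depth, a sign the run kept its queue saturated rather than starved. The verify pass launched next covers start 0x0, length 0x1400000, i.e. all 20971520 blocks the L2P maps:

    awk 'BEGIN { print 6089.650820633322 * 20950.333414441422 / 1e6 }'   # ~127.6 of 128 slots kept busy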
00:18:59.428 5009.00 IOPS, 19.57 MiB/s [2024-12-05T19:09:17.952Z] 5069.00 IOPS, 19.80 MiB/s [2024-12-05T19:09:19.335Z] 5019.33 IOPS, 19.61 MiB/s [2024-12-05T19:09:19.335Z] 4867.50 IOPS, 19.01 MiB/s 00:19:01.776 Latency(us) 00:19:01.776 [2024-12-05T19:09:19.335Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:01.776 Job: ftl0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:19:01.776 Verification LBA range: start 0x0 length 0x1400000 00:19:01.776 ftl0 : 4.01 4881.25 19.07 0.00 0.00 26151.21 307.20 46782.62 00:19:01.776 [2024-12-05T19:09:19.335Z] =================================================================================================================== 00:19:01.776 [2024-12-05T19:09:19.335Z] Total : 4881.25 19.07 0.00 0.00 26151.21 0.00 46782.62 00:19:01.776 { 00:19:01.776 "results": [ 00:19:01.776 { 00:19:01.776 "job": "ftl0", 00:19:01.776 "core_mask": "0x1", 00:19:01.776 "workload": "verify", 00:19:01.776 "status": "finished", 00:19:01.776 "verify_range": { 00:19:01.776 "start": 0, 00:19:01.776 "length": 20971520 00:19:01.776 }, 00:19:01.776 "queue_depth": 128, 00:19:01.776 "io_size": 4096, 00:19:01.776 "runtime": 4.012699, 00:19:01.776 "iops": 4881.253241272271, 00:19:01.776 "mibps": 19.06739547371981, 00:19:01.776 "io_failed": 0, 00:19:01.776 "io_timeout": 0, 00:19:01.776 "avg_latency_us": 26151.212052578045, 00:19:01.776 "min_latency_us": 307.2, 00:19:01.776 "max_latency_us": 46782.621538461535 00:19:01.776 } 00:19:01.776 ], 00:19:01.776 "core_count": 1 00:19:01.776 } 00:19:01.776 [2024-12-05 19:09:18.908915] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:19:01.776 19:09:18 ftl.ftl_bdevperf -- ftl/bdevperf.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_delete -b ftl0 00:19:01.776 [2024-12-05 19:09:19.125295] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:01.776 [2024-12-05 19:09:19.125355] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:19:01.776 [2024-12-05 19:09:19.125371] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:19:01.776 [2024-12-05 19:09:19.125381] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:01.776 [2024-12-05 19:09:19.125408] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:01.776 [2024-12-05 19:09:19.126176] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:01.776 [2024-12-05 19:09:19.126225] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:19:01.776 [2024-12-05 19:09:19.126237] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.752 ms 00:19:01.776 [2024-12-05 19:09:19.126266] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:01.776 [2024-12-05 19:09:19.129070] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:01.776 [2024-12-05 19:09:19.129125] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:19:01.777 [2024-12-05 19:09:19.129137] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.776 ms 00:19:01.777 [2024-12-05 19:09:19.129153] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:02.040 [2024-12-05 19:09:19.347551] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:02.040 [2024-12-05 19:09:19.347813] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 
00:19:02.040 [2024-12-05 19:09:19.347841] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 218.376 ms 00:19:02.040 [2024-12-05 19:09:19.347853] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:02.040 [2024-12-05 19:09:19.354086] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:02.040 [2024-12-05 19:09:19.354138] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:19:02.040 [2024-12-05 19:09:19.354151] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.187 ms 00:19:02.040 [2024-12-05 19:09:19.354162] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:02.040 [2024-12-05 19:09:19.357276] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:02.040 [2024-12-05 19:09:19.357335] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:19:02.040 [2024-12-05 19:09:19.357346] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.999 ms 00:19:02.040 [2024-12-05 19:09:19.357368] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:02.040 [2024-12-05 19:09:19.363609] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:02.040 [2024-12-05 19:09:19.363677] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:19:02.040 [2024-12-05 19:09:19.363690] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.194 ms 00:19:02.040 [2024-12-05 19:09:19.363725] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:02.040 [2024-12-05 19:09:19.363864] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:02.040 [2024-12-05 19:09:19.363883] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:19:02.040 [2024-12-05 19:09:19.363892] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.091 ms 00:19:02.040 [2024-12-05 19:09:19.363904] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:02.040 [2024-12-05 19:09:19.367556] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:02.040 [2024-12-05 19:09:19.367751] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:19:02.040 [2024-12-05 19:09:19.367770] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.634 ms 00:19:02.040 [2024-12-05 19:09:19.367780] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:02.040 [2024-12-05 19:09:19.370153] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:02.040 [2024-12-05 19:09:19.370216] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:19:02.040 [2024-12-05 19:09:19.370226] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.291 ms 00:19:02.040 [2024-12-05 19:09:19.370235] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:02.040 [2024-12-05 19:09:19.371836] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:02.040 [2024-12-05 19:09:19.371895] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:19:02.040 [2024-12-05 19:09:19.371905] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.536 ms 00:19:02.040 [2024-12-05 19:09:19.371918] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:02.040 [2024-12-05 19:09:19.373920] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:02.040 [2024-12-05 19:09:19.373984] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:19:02.040 [2024-12-05 19:09:19.373994] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.934 ms 00:19:02.040 [2024-12-05 19:09:19.374004] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:02.040 [2024-12-05 19:09:19.374049] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:19:02.040 [2024-12-05 19:09:19.374071] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:19:02.040
[Bands 2 through 100 condensed: each reports the identical line, 0 / 261120 wr_cnt: 0 state: free]
00:19:02.042 [2024-12-05 19:09:19.375094] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:19:02.042 [2024-12-05 19:09:19.375103] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 8130fbd3-8b00-4c46-a60c-eeebd15f328f 00:19:02.042 [2024-12-05 19:09:19.375123] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:19:02.042 [2024-12-05 19:09:19.375130] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:19:02.042 [2024-12-05 19:09:19.375141] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:19:02.042 [2024-12-05 19:09:19.375149] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:19:02.042 [2024-12-05 19:09:19.375161] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:19:02.042 [2024-12-05 19:09:19.375169] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:19:02.042 [2024-12-05 19:09:19.375181] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:19:02.042 [2024-12-05 19:09:19.375188] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:19:02.042 [2024-12-05 19:09:19.375198] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:19:02.042 [2024-12-05 19:09:19.375205] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:02.042 [2024-12-05 19:09:19.375221] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:19:02.042 [2024-12-05 19:09:19.375233] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.158 ms 00:19:02.042 [2024-12-05 19:09:19.375243] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:02.042 [2024-12-05 19:09:19.377601] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:02.042 [2024-12-05 19:09:19.377799] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:19:02.042 [2024-12-05 19:09:19.377819] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.325 ms 00:19:02.042 [2024-12-05 19:09:19.377829] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:02.042 [2024-12-05 19:09:19.377985] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:02.042 [2024-12-05 19:09:19.378002] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:19:02.042 [2024-12-05 19:09:19.378012] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.099 ms 00:19:02.042 [2024-12-05 19:09:19.378025] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:02.042 [2024-12-05 19:09:19.385963] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:02.042 [2024-12-05 19:09:19.386022] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:02.042 [2024-12-05 19:09:19.386035] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:02.042 [2024-12-05 19:09:19.386046] mngt/ftl_mngt.c: 431:trace_step:
*NOTICE*: [FTL][ftl0] status: 0 00:19:02.042 [2024-12-05 19:09:19.386112] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:02.042 [2024-12-05 19:09:19.386127] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:02.042 [2024-12-05 19:09:19.386141] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:02.042 [2024-12-05 19:09:19.386151] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:02.042 [2024-12-05 19:09:19.386224] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:02.042 [2024-12-05 19:09:19.386237] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:02.042 [2024-12-05 19:09:19.386302] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:02.042 [2024-12-05 19:09:19.386313] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:02.042 [2024-12-05 19:09:19.386328] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:02.042 [2024-12-05 19:09:19.386339] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:02.042 [2024-12-05 19:09:19.386350] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:02.042 [2024-12-05 19:09:19.386363] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:02.042 [2024-12-05 19:09:19.399869] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:02.042 [2024-12-05 19:09:19.399932] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:02.042 [2024-12-05 19:09:19.399944] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:02.042 [2024-12-05 19:09:19.399954] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:02.042 [2024-12-05 19:09:19.410911] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:02.042 [2024-12-05 19:09:19.410970] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:02.042 [2024-12-05 19:09:19.410984] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:02.042 [2024-12-05 19:09:19.410995] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:02.042 [2024-12-05 19:09:19.411063] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:02.042 [2024-12-05 19:09:19.411076] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:02.042 [2024-12-05 19:09:19.411084] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:02.042 [2024-12-05 19:09:19.411094] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:02.042 [2024-12-05 19:09:19.411138] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:02.042 [2024-12-05 19:09:19.411149] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:02.042 [2024-12-05 19:09:19.411157] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:02.042 [2024-12-05 19:09:19.411172] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:02.042 [2024-12-05 19:09:19.411245] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:02.042 [2024-12-05 19:09:19.411281] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:02.042 [2024-12-05 19:09:19.411289] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 
ms 00:19:02.042 [2024-12-05 19:09:19.411298] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:02.042 [2024-12-05 19:09:19.411328] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:02.042 [2024-12-05 19:09:19.411340] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:19:02.042 [2024-12-05 19:09:19.411348] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:02.042 [2024-12-05 19:09:19.411357] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:02.042 [2024-12-05 19:09:19.411404] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:02.042 [2024-12-05 19:09:19.411416] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:02.042 [2024-12-05 19:09:19.411424] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:02.042 [2024-12-05 19:09:19.411434] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:02.042 [2024-12-05 19:09:19.411481] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:02.042 [2024-12-05 19:09:19.411494] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:02.042 [2024-12-05 19:09:19.411502] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:02.042 [2024-12-05 19:09:19.411532] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:02.042 [2024-12-05 19:09:19.411663] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 286.354 ms, result 0 00:19:02.042 true 00:19:02.042 19:09:19 ftl.ftl_bdevperf -- ftl/bdevperf.sh@36 -- # killprocess 86859 00:19:02.042 19:09:19 ftl.ftl_bdevperf -- common/autotest_common.sh@954 -- # '[' -z 86859 ']' 00:19:02.042 19:09:19 ftl.ftl_bdevperf -- common/autotest_common.sh@958 -- # kill -0 86859 00:19:02.042 19:09:19 ftl.ftl_bdevperf -- common/autotest_common.sh@959 -- # uname 00:19:02.042 19:09:19 ftl.ftl_bdevperf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:02.042 19:09:19 ftl.ftl_bdevperf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86859 00:19:02.042 killing process with pid 86859 00:19:02.042 Received shutdown signal, test time was about 4.000000 seconds 00:19:02.042 00:19:02.042 Latency(us) 00:19:02.042 [2024-12-05T19:09:19.601Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:02.042 [2024-12-05T19:09:19.601Z] =================================================================================================================== 00:19:02.042 [2024-12-05T19:09:19.601Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:02.042 19:09:19 ftl.ftl_bdevperf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:02.042 19:09:19 ftl.ftl_bdevperf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:02.042 19:09:19 ftl.ftl_bdevperf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86859' 00:19:02.042 19:09:19 ftl.ftl_bdevperf -- common/autotest_common.sh@973 -- # kill 86859 00:19:02.042 19:09:19 ftl.ftl_bdevperf -- common/autotest_common.sh@978 -- # wait 86859 00:19:05.481 Remove shared memory files 00:19:05.481 19:09:22 ftl.ftl_bdevperf -- ftl/bdevperf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:19:05.481 19:09:22 ftl.ftl_bdevperf -- ftl/bdevperf.sh@39 -- # remove_shm 00:19:05.481 19:09:22 ftl.ftl_bdevperf -- ftl/common.sh@204 -- # echo Remove shared memory files 00:19:05.481 19:09:22 
ftl.ftl_bdevperf -- ftl/common.sh@205 -- # rm -f rm -f 00:19:05.481 19:09:22 ftl.ftl_bdevperf -- ftl/common.sh@206 -- # rm -f rm -f 00:19:05.481 19:09:22 ftl.ftl_bdevperf -- ftl/common.sh@207 -- # rm -f rm -f 00:19:05.481 19:09:22 ftl.ftl_bdevperf -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:19:05.481 19:09:22 ftl.ftl_bdevperf -- ftl/common.sh@209 -- # rm -f rm -f 00:19:05.481 ************************************ 00:19:05.481 END TEST ftl_bdevperf 00:19:05.481 ************************************ 00:19:05.481 00:19:05.481 real 0m23.968s 00:19:05.481 user 0m26.514s 00:19:05.481 sys 0m0.983s 00:19:05.481 19:09:22 ftl.ftl_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:05.481 19:09:22 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:19:05.481 19:09:22 ftl -- ftl/ftl.sh@75 -- # run_test ftl_trim /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0 00:19:05.481 19:09:22 ftl -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:19:05.481 19:09:22 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:05.481 19:09:22 ftl -- common/autotest_common.sh@10 -- # set +x 00:19:05.481 ************************************ 00:19:05.481 START TEST ftl_trim 00:19:05.481 ************************************ 00:19:05.481 19:09:22 ftl.ftl_trim -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0 00:19:05.481 * Looking for test storage... 00:19:05.481 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:19:05.481 19:09:22 ftl.ftl_trim -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:19:05.481 19:09:22 ftl.ftl_trim -- common/autotest_common.sh@1711 -- # lcov --version 00:19:05.481 19:09:22 ftl.ftl_trim -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:19:05.481 19:09:22 ftl.ftl_trim -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:19:05.481 19:09:22 ftl.ftl_trim -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:05.481 19:09:22 ftl.ftl_trim -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:05.481 19:09:22 ftl.ftl_trim -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:05.481 19:09:22 ftl.ftl_trim -- scripts/common.sh@336 -- # IFS=.-: 00:19:05.481 19:09:22 ftl.ftl_trim -- scripts/common.sh@336 -- # read -ra ver1 00:19:05.481 19:09:22 ftl.ftl_trim -- scripts/common.sh@337 -- # IFS=.-: 00:19:05.481 19:09:22 ftl.ftl_trim -- scripts/common.sh@337 -- # read -ra ver2 00:19:05.481 19:09:22 ftl.ftl_trim -- scripts/common.sh@338 -- # local 'op=<' 00:19:05.481 19:09:22 ftl.ftl_trim -- scripts/common.sh@340 -- # ver1_l=2 00:19:05.481 19:09:22 ftl.ftl_trim -- scripts/common.sh@341 -- # ver2_l=1 00:19:05.481 19:09:22 ftl.ftl_trim -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:05.481 19:09:22 ftl.ftl_trim -- scripts/common.sh@344 -- # case "$op" in 00:19:05.481 19:09:22 ftl.ftl_trim -- scripts/common.sh@345 -- # : 1 00:19:05.481 19:09:22 ftl.ftl_trim -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:05.481 19:09:22 ftl.ftl_trim -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:05.482 19:09:22 ftl.ftl_trim -- scripts/common.sh@365 -- # decimal 1 00:19:05.482 19:09:22 ftl.ftl_trim -- scripts/common.sh@353 -- # local d=1 00:19:05.482 19:09:22 ftl.ftl_trim -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:05.482 19:09:22 ftl.ftl_trim -- scripts/common.sh@355 -- # echo 1 00:19:05.482 19:09:22 ftl.ftl_trim -- scripts/common.sh@365 -- # ver1[v]=1 00:19:05.482 19:09:22 ftl.ftl_trim -- scripts/common.sh@366 -- # decimal 2 00:19:05.482 19:09:22 ftl.ftl_trim -- scripts/common.sh@353 -- # local d=2 00:19:05.482 19:09:22 ftl.ftl_trim -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:05.482 19:09:22 ftl.ftl_trim -- scripts/common.sh@355 -- # echo 2 00:19:05.482 19:09:22 ftl.ftl_trim -- scripts/common.sh@366 -- # ver2[v]=2 00:19:05.482 19:09:22 ftl.ftl_trim -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:05.482 19:09:22 ftl.ftl_trim -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:05.482 19:09:22 ftl.ftl_trim -- scripts/common.sh@368 -- # return 0 00:19:05.482 19:09:22 ftl.ftl_trim -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:05.482 19:09:22 ftl.ftl_trim -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:19:05.482 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:05.482 --rc genhtml_branch_coverage=1 00:19:05.482 --rc genhtml_function_coverage=1 00:19:05.482 --rc genhtml_legend=1 00:19:05.482 --rc geninfo_all_blocks=1 00:19:05.482 --rc geninfo_unexecuted_blocks=1 00:19:05.482 00:19:05.482 ' 00:19:05.482 19:09:22 ftl.ftl_trim -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:19:05.482 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:05.482 --rc genhtml_branch_coverage=1 00:19:05.482 --rc genhtml_function_coverage=1 00:19:05.482 --rc genhtml_legend=1 00:19:05.482 --rc geninfo_all_blocks=1 00:19:05.482 --rc geninfo_unexecuted_blocks=1 00:19:05.482 00:19:05.482 ' 00:19:05.482 19:09:22 ftl.ftl_trim -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:19:05.482 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:05.482 --rc genhtml_branch_coverage=1 00:19:05.482 --rc genhtml_function_coverage=1 00:19:05.482 --rc genhtml_legend=1 00:19:05.482 --rc geninfo_all_blocks=1 00:19:05.482 --rc geninfo_unexecuted_blocks=1 00:19:05.482 00:19:05.482 ' 00:19:05.482 19:09:22 ftl.ftl_trim -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:19:05.482 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:05.482 --rc genhtml_branch_coverage=1 00:19:05.482 --rc genhtml_function_coverage=1 00:19:05.482 --rc genhtml_legend=1 00:19:05.482 --rc geninfo_all_blocks=1 00:19:05.482 --rc geninfo_unexecuted_blocks=1 00:19:05.482 00:19:05.482 ' 00:19:05.482 19:09:22 ftl.ftl_trim -- ftl/trim.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:19:05.482 19:09:22 ftl.ftl_trim -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 00:19:05.482 19:09:22 ftl.ftl_trim -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:19:05.482 19:09:22 ftl.ftl_trim -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:19:05.482 19:09:22 ftl.ftl_trim -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
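The cmp_versions trace above performs a component-wise version compare: both strings are split on '.', '-' and ':' and the fields are compared numerically left to right, which is how lcov 1.15 is classified as older than 2. A reduced bash sketch of that logic, assuming purely numeric fields (the real scripts/common.sh handles more cases):

lt() {
    local IFS=.-:
    local -a v1=($1) v2=($2)                  # split each version string into fields
    local i a b
    for ((i = 0; i < ${#v1[@]} || i < ${#v2[@]}; i++)); do
        a=${v1[i]:-0} b=${v2[i]:-0}           # a missing field compares as 0
        ((a < b)) && return 0
        ((a > b)) && return 1
    done
    return 1                                  # equal versions are not "less than"
}
lt 1.15 2 && echo 'old lcov'                  # takes the same branch as the trace above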
00:19:05.482 19:09:22 ftl.ftl_trim -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:19:05.482 19:09:22 ftl.ftl_trim -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:05.482 19:09:22 ftl.ftl_trim -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:19:05.482 19:09:22 ftl.ftl_trim -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:19:05.482 19:09:22 ftl.ftl_trim -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:05.482 19:09:22 ftl.ftl_trim -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:05.482 19:09:22 ftl.ftl_trim -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:19:05.482 19:09:22 ftl.ftl_trim -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:19:05.482 19:09:22 ftl.ftl_trim -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:19:05.482 19:09:22 ftl.ftl_trim -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:19:05.482 19:09:22 ftl.ftl_trim -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:19:05.482 19:09:22 ftl.ftl_trim -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:19:05.482 19:09:22 ftl.ftl_trim -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:05.482 19:09:22 ftl.ftl_trim -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:05.482 19:09:22 ftl.ftl_trim -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:19:05.482 19:09:22 ftl.ftl_trim -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:19:05.482 19:09:22 ftl.ftl_trim -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:19:05.482 19:09:22 ftl.ftl_trim -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:19:05.482 19:09:22 ftl.ftl_trim -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:19:05.482 19:09:22 ftl.ftl_trim -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:19:05.482 19:09:22 ftl.ftl_trim -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:19:05.482 19:09:22 ftl.ftl_trim -- ftl/common.sh@23 -- # spdk_ini_pid= 00:19:05.482 19:09:22 ftl.ftl_trim -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:05.482 19:09:22 ftl.ftl_trim -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:05.482 19:09:22 ftl.ftl_trim -- ftl/trim.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:05.482 19:09:22 ftl.ftl_trim -- ftl/trim.sh@23 -- # device=0000:00:11.0 00:19:05.482 19:09:22 ftl.ftl_trim -- ftl/trim.sh@24 -- # cache_device=0000:00:10.0 00:19:05.482 19:09:22 ftl.ftl_trim -- ftl/trim.sh@25 -- # timeout=240 00:19:05.482 19:09:22 ftl.ftl_trim -- ftl/trim.sh@26 -- # data_size_in_blocks=65536 00:19:05.482 19:09:22 ftl.ftl_trim -- ftl/trim.sh@27 -- # unmap_size_in_blocks=1024 00:19:05.482 19:09:22 ftl.ftl_trim -- ftl/trim.sh@29 -- # [[ y != y ]] 00:19:05.482 19:09:22 ftl.ftl_trim -- ftl/trim.sh@34 -- # export FTL_BDEV_NAME=ftl0 00:19:05.482 19:09:22 ftl.ftl_trim -- ftl/trim.sh@34 -- # FTL_BDEV_NAME=ftl0 00:19:05.482 19:09:22 ftl.ftl_trim -- ftl/trim.sh@35 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:19:05.482 19:09:22 ftl.ftl_trim -- ftl/trim.sh@35 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:19:05.482 19:09:22 ftl.ftl_trim -- 
ftl/trim.sh@37 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:19:05.482 19:09:22 ftl.ftl_trim -- ftl/trim.sh@40 -- # svcpid=87200 00:19:05.482 19:09:22 ftl.ftl_trim -- ftl/trim.sh@41 -- # waitforlisten 87200 00:19:05.482 19:09:22 ftl.ftl_trim -- ftl/trim.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:19:05.482 19:09:22 ftl.ftl_trim -- common/autotest_common.sh@835 -- # '[' -z 87200 ']' 00:19:05.482 19:09:22 ftl.ftl_trim -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:05.482 19:09:22 ftl.ftl_trim -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:05.482 19:09:22 ftl.ftl_trim -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:05.482 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:05.482 19:09:22 ftl.ftl_trim -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:05.482 19:09:22 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:19:05.482 [2024-12-05 19:09:22.704177] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 22.11.4 initialization... 00:19:05.482 [2024-12-05 19:09:22.704600] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87200 ] 00:19:05.482 [2024-12-05 19:09:22.853414] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:19:05.482 [2024-12-05 19:09:22.884966] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:05.482 [2024-12-05 19:09:22.885278] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:05.483 [2024-12-05 19:09:22.885303] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:06.056 19:09:23 ftl.ftl_trim -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:06.056 19:09:23 ftl.ftl_trim -- common/autotest_common.sh@868 -- # return 0 00:19:06.056 19:09:23 ftl.ftl_trim -- ftl/trim.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:19:06.056 19:09:23 ftl.ftl_trim -- ftl/common.sh@54 -- # local name=nvme0 00:19:06.056 19:09:23 ftl.ftl_trim -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:19:06.056 19:09:23 ftl.ftl_trim -- ftl/common.sh@56 -- # local size=103424 00:19:06.056 19:09:23 ftl.ftl_trim -- ftl/common.sh@59 -- # local base_bdev 00:19:06.056 19:09:23 ftl.ftl_trim -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:19:06.317 19:09:23 ftl.ftl_trim -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:19:06.317 19:09:23 ftl.ftl_trim -- ftl/common.sh@62 -- # local base_size 00:19:06.317 19:09:23 ftl.ftl_trim -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:19:06.317 19:09:23 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:19:06.317 19:09:23 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:19:06.317 19:09:23 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:19:06.317 19:09:23 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:19:06.317 19:09:23 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:19:06.578 19:09:24 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:19:06.578 { 00:19:06.578 "name": "nvme0n1", 00:19:06.578 "aliases": [ 
00:19:06.578 "50461b71-dd39-408d-91cc-c33c4a58816f" 00:19:06.578 ], 00:19:06.578 "product_name": "NVMe disk", 00:19:06.578 "block_size": 4096, 00:19:06.578 "num_blocks": 1310720, 00:19:06.578 "uuid": "50461b71-dd39-408d-91cc-c33c4a58816f", 00:19:06.578 "numa_id": -1, 00:19:06.578 "assigned_rate_limits": { 00:19:06.578 "rw_ios_per_sec": 0, 00:19:06.578 "rw_mbytes_per_sec": 0, 00:19:06.578 "r_mbytes_per_sec": 0, 00:19:06.578 "w_mbytes_per_sec": 0 00:19:06.578 }, 00:19:06.578 "claimed": true, 00:19:06.578 "claim_type": "read_many_write_one", 00:19:06.578 "zoned": false, 00:19:06.578 "supported_io_types": { 00:19:06.578 "read": true, 00:19:06.578 "write": true, 00:19:06.578 "unmap": true, 00:19:06.578 "flush": true, 00:19:06.578 "reset": true, 00:19:06.578 "nvme_admin": true, 00:19:06.578 "nvme_io": true, 00:19:06.578 "nvme_io_md": false, 00:19:06.578 "write_zeroes": true, 00:19:06.578 "zcopy": false, 00:19:06.578 "get_zone_info": false, 00:19:06.578 "zone_management": false, 00:19:06.578 "zone_append": false, 00:19:06.578 "compare": true, 00:19:06.578 "compare_and_write": false, 00:19:06.578 "abort": true, 00:19:06.578 "seek_hole": false, 00:19:06.578 "seek_data": false, 00:19:06.578 "copy": true, 00:19:06.578 "nvme_iov_md": false 00:19:06.579 }, 00:19:06.579 "driver_specific": { 00:19:06.579 "nvme": [ 00:19:06.579 { 00:19:06.579 "pci_address": "0000:00:11.0", 00:19:06.579 "trid": { 00:19:06.579 "trtype": "PCIe", 00:19:06.579 "traddr": "0000:00:11.0" 00:19:06.579 }, 00:19:06.579 "ctrlr_data": { 00:19:06.579 "cntlid": 0, 00:19:06.579 "vendor_id": "0x1b36", 00:19:06.579 "model_number": "QEMU NVMe Ctrl", 00:19:06.579 "serial_number": "12341", 00:19:06.579 "firmware_revision": "8.0.0", 00:19:06.579 "subnqn": "nqn.2019-08.org.qemu:12341", 00:19:06.579 "oacs": { 00:19:06.579 "security": 0, 00:19:06.579 "format": 1, 00:19:06.579 "firmware": 0, 00:19:06.579 "ns_manage": 1 00:19:06.579 }, 00:19:06.579 "multi_ctrlr": false, 00:19:06.579 "ana_reporting": false 00:19:06.579 }, 00:19:06.579 "vs": { 00:19:06.579 "nvme_version": "1.4" 00:19:06.579 }, 00:19:06.579 "ns_data": { 00:19:06.579 "id": 1, 00:19:06.579 "can_share": false 00:19:06.579 } 00:19:06.579 } 00:19:06.579 ], 00:19:06.579 "mp_policy": "active_passive" 00:19:06.579 } 00:19:06.579 } 00:19:06.579 ]' 00:19:06.579 19:09:24 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:19:06.579 19:09:24 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bs=4096 00:19:06.579 19:09:24 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:19:06.579 19:09:24 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # nb=1310720 00:19:06.579 19:09:24 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:19:06.579 19:09:24 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 5120 00:19:06.579 19:09:24 ftl.ftl_trim -- ftl/common.sh@63 -- # base_size=5120 00:19:06.579 19:09:24 ftl.ftl_trim -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:19:06.579 19:09:24 ftl.ftl_trim -- ftl/common.sh@67 -- # clear_lvols 00:19:06.579 19:09:24 ftl.ftl_trim -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:19:06.579 19:09:24 ftl.ftl_trim -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:19:06.839 19:09:24 ftl.ftl_trim -- ftl/common.sh@28 -- # stores=bbc61751-165f-4e99-8350-a0a35923f718 00:19:06.839 19:09:24 ftl.ftl_trim -- ftl/common.sh@29 -- # for lvs in $stores 00:19:06.839 19:09:24 ftl.ftl_trim -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_lvol_delete_lvstore -u bbc61751-165f-4e99-8350-a0a35923f718 00:19:07.100 19:09:24 ftl.ftl_trim -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:19:07.361 19:09:24 ftl.ftl_trim -- ftl/common.sh@68 -- # lvs=ba7c6a1d-3b74-4163-8b11-286544eeb94f 00:19:07.361 19:09:24 ftl.ftl_trim -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u ba7c6a1d-3b74-4163-8b11-286544eeb94f 00:19:07.637 19:09:24 ftl.ftl_trim -- ftl/trim.sh@43 -- # split_bdev=5e60adcb-ad23-47e3-a47a-b33b20d647be 00:19:07.637 19:09:24 ftl.ftl_trim -- ftl/trim.sh@44 -- # create_nv_cache_bdev nvc0 0000:00:10.0 5e60adcb-ad23-47e3-a47a-b33b20d647be 00:19:07.637 19:09:24 ftl.ftl_trim -- ftl/common.sh@35 -- # local name=nvc0 00:19:07.637 19:09:24 ftl.ftl_trim -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:19:07.637 19:09:24 ftl.ftl_trim -- ftl/common.sh@37 -- # local base_bdev=5e60adcb-ad23-47e3-a47a-b33b20d647be 00:19:07.637 19:09:24 ftl.ftl_trim -- ftl/common.sh@38 -- # local cache_size= 00:19:07.638 19:09:25 ftl.ftl_trim -- ftl/common.sh@41 -- # get_bdev_size 5e60adcb-ad23-47e3-a47a-b33b20d647be 00:19:07.638 19:09:25 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=5e60adcb-ad23-47e3-a47a-b33b20d647be 00:19:07.638 19:09:25 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:19:07.638 19:09:25 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:19:07.638 19:09:25 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:19:07.638 19:09:25 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 5e60adcb-ad23-47e3-a47a-b33b20d647be 00:19:07.900 19:09:25 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:19:07.900 { 00:19:07.900 "name": "5e60adcb-ad23-47e3-a47a-b33b20d647be", 00:19:07.900 "aliases": [ 00:19:07.900 "lvs/nvme0n1p0" 00:19:07.900 ], 00:19:07.900 "product_name": "Logical Volume", 00:19:07.900 "block_size": 4096, 00:19:07.900 "num_blocks": 26476544, 00:19:07.900 "uuid": "5e60adcb-ad23-47e3-a47a-b33b20d647be", 00:19:07.900 "assigned_rate_limits": { 00:19:07.900 "rw_ios_per_sec": 0, 00:19:07.900 "rw_mbytes_per_sec": 0, 00:19:07.900 "r_mbytes_per_sec": 0, 00:19:07.900 "w_mbytes_per_sec": 0 00:19:07.900 }, 00:19:07.900 "claimed": false, 00:19:07.900 "zoned": false, 00:19:07.900 "supported_io_types": { 00:19:07.900 "read": true, 00:19:07.900 "write": true, 00:19:07.900 "unmap": true, 00:19:07.900 "flush": false, 00:19:07.900 "reset": true, 00:19:07.900 "nvme_admin": false, 00:19:07.900 "nvme_io": false, 00:19:07.900 "nvme_io_md": false, 00:19:07.900 "write_zeroes": true, 00:19:07.900 "zcopy": false, 00:19:07.900 "get_zone_info": false, 00:19:07.900 "zone_management": false, 00:19:07.900 "zone_append": false, 00:19:07.900 "compare": false, 00:19:07.900 "compare_and_write": false, 00:19:07.900 "abort": false, 00:19:07.900 "seek_hole": true, 00:19:07.900 "seek_data": true, 00:19:07.900 "copy": false, 00:19:07.900 "nvme_iov_md": false 00:19:07.900 }, 00:19:07.900 "driver_specific": { 00:19:07.900 "lvol": { 00:19:07.900 "lvol_store_uuid": "ba7c6a1d-3b74-4163-8b11-286544eeb94f", 00:19:07.900 "base_bdev": "nvme0n1", 00:19:07.900 "thin_provision": true, 00:19:07.900 "num_allocated_clusters": 0, 00:19:07.900 "snapshot": false, 00:19:07.900 "clone": false, 00:19:07.900 "esnap_clone": false 00:19:07.900 } 00:19:07.900 } 00:19:07.900 } 00:19:07.900 ]' 00:19:07.900 19:09:25 ftl.ftl_trim -- 
common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:19:07.900 19:09:25 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bs=4096 00:19:07.900 19:09:25 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:19:07.900 19:09:25 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # nb=26476544 00:19:07.900 19:09:25 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:19:07.900 19:09:25 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 103424 00:19:07.900 19:09:25 ftl.ftl_trim -- ftl/common.sh@41 -- # local base_size=5171 00:19:07.900 19:09:25 ftl.ftl_trim -- ftl/common.sh@44 -- # local nvc_bdev 00:19:07.900 19:09:25 ftl.ftl_trim -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:19:08.161 19:09:25 ftl.ftl_trim -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:19:08.161 19:09:25 ftl.ftl_trim -- ftl/common.sh@47 -- # [[ -z '' ]] 00:19:08.161 19:09:25 ftl.ftl_trim -- ftl/common.sh@48 -- # get_bdev_size 5e60adcb-ad23-47e3-a47a-b33b20d647be 00:19:08.161 19:09:25 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=5e60adcb-ad23-47e3-a47a-b33b20d647be 00:19:08.161 19:09:25 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:19:08.161 19:09:25 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:19:08.161 19:09:25 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:19:08.161 19:09:25 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 5e60adcb-ad23-47e3-a47a-b33b20d647be 00:19:08.422 19:09:25 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:19:08.422 { 00:19:08.422 "name": "5e60adcb-ad23-47e3-a47a-b33b20d647be", 00:19:08.422 "aliases": [ 00:19:08.422 "lvs/nvme0n1p0" 00:19:08.422 ], 00:19:08.422 "product_name": "Logical Volume", 00:19:08.422 "block_size": 4096, 00:19:08.422 "num_blocks": 26476544, 00:19:08.422 "uuid": "5e60adcb-ad23-47e3-a47a-b33b20d647be", 00:19:08.422 "assigned_rate_limits": { 00:19:08.422 "rw_ios_per_sec": 0, 00:19:08.422 "rw_mbytes_per_sec": 0, 00:19:08.422 "r_mbytes_per_sec": 0, 00:19:08.422 "w_mbytes_per_sec": 0 00:19:08.422 }, 00:19:08.422 "claimed": false, 00:19:08.422 "zoned": false, 00:19:08.422 "supported_io_types": { 00:19:08.422 "read": true, 00:19:08.422 "write": true, 00:19:08.422 "unmap": true, 00:19:08.422 "flush": false, 00:19:08.422 "reset": true, 00:19:08.422 "nvme_admin": false, 00:19:08.422 "nvme_io": false, 00:19:08.422 "nvme_io_md": false, 00:19:08.422 "write_zeroes": true, 00:19:08.422 "zcopy": false, 00:19:08.422 "get_zone_info": false, 00:19:08.422 "zone_management": false, 00:19:08.422 "zone_append": false, 00:19:08.422 "compare": false, 00:19:08.422 "compare_and_write": false, 00:19:08.422 "abort": false, 00:19:08.422 "seek_hole": true, 00:19:08.423 "seek_data": true, 00:19:08.423 "copy": false, 00:19:08.423 "nvme_iov_md": false 00:19:08.423 }, 00:19:08.423 "driver_specific": { 00:19:08.423 "lvol": { 00:19:08.423 "lvol_store_uuid": "ba7c6a1d-3b74-4163-8b11-286544eeb94f", 00:19:08.423 "base_bdev": "nvme0n1", 00:19:08.423 "thin_provision": true, 00:19:08.423 "num_allocated_clusters": 0, 00:19:08.423 "snapshot": false, 00:19:08.423 "clone": false, 00:19:08.423 "esnap_clone": false 00:19:08.423 } 00:19:08.423 } 00:19:08.423 } 00:19:08.423 ]' 00:19:08.423 19:09:25 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:19:08.423 19:09:25 ftl.ftl_trim -- 
common/autotest_common.sh@1387 -- # bs=4096 00:19:08.423 19:09:25 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:19:08.423 19:09:25 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # nb=26476544 00:19:08.423 19:09:25 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:19:08.423 19:09:25 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 103424 00:19:08.423 19:09:25 ftl.ftl_trim -- ftl/common.sh@48 -- # cache_size=5171 00:19:08.423 19:09:25 ftl.ftl_trim -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:19:08.684 19:09:25 ftl.ftl_trim -- ftl/trim.sh@44 -- # nv_cache=nvc0n1p0 00:19:08.684 19:09:25 ftl.ftl_trim -- ftl/trim.sh@46 -- # l2p_percentage=60 00:19:08.684 19:09:26 ftl.ftl_trim -- ftl/trim.sh@47 -- # get_bdev_size 5e60adcb-ad23-47e3-a47a-b33b20d647be 00:19:08.684 19:09:26 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=5e60adcb-ad23-47e3-a47a-b33b20d647be 00:19:08.684 19:09:26 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:19:08.684 19:09:26 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:19:08.684 19:09:26 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:19:08.684 19:09:26 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 5e60adcb-ad23-47e3-a47a-b33b20d647be 00:19:08.684 19:09:26 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:19:08.684 { 00:19:08.684 "name": "5e60adcb-ad23-47e3-a47a-b33b20d647be", 00:19:08.684 "aliases": [ 00:19:08.684 "lvs/nvme0n1p0" 00:19:08.684 ], 00:19:08.684 "product_name": "Logical Volume", 00:19:08.684 "block_size": 4096, 00:19:08.684 "num_blocks": 26476544, 00:19:08.684 "uuid": "5e60adcb-ad23-47e3-a47a-b33b20d647be", 00:19:08.684 "assigned_rate_limits": { 00:19:08.684 "rw_ios_per_sec": 0, 00:19:08.684 "rw_mbytes_per_sec": 0, 00:19:08.684 "r_mbytes_per_sec": 0, 00:19:08.684 "w_mbytes_per_sec": 0 00:19:08.684 }, 00:19:08.684 "claimed": false, 00:19:08.684 "zoned": false, 00:19:08.684 "supported_io_types": { 00:19:08.684 "read": true, 00:19:08.684 "write": true, 00:19:08.684 "unmap": true, 00:19:08.684 "flush": false, 00:19:08.684 "reset": true, 00:19:08.684 "nvme_admin": false, 00:19:08.684 "nvme_io": false, 00:19:08.684 "nvme_io_md": false, 00:19:08.684 "write_zeroes": true, 00:19:08.684 "zcopy": false, 00:19:08.684 "get_zone_info": false, 00:19:08.684 "zone_management": false, 00:19:08.684 "zone_append": false, 00:19:08.684 "compare": false, 00:19:08.684 "compare_and_write": false, 00:19:08.684 "abort": false, 00:19:08.684 "seek_hole": true, 00:19:08.684 "seek_data": true, 00:19:08.684 "copy": false, 00:19:08.684 "nvme_iov_md": false 00:19:08.684 }, 00:19:08.684 "driver_specific": { 00:19:08.684 "lvol": { 00:19:08.684 "lvol_store_uuid": "ba7c6a1d-3b74-4163-8b11-286544eeb94f", 00:19:08.684 "base_bdev": "nvme0n1", 00:19:08.684 "thin_provision": true, 00:19:08.684 "num_allocated_clusters": 0, 00:19:08.684 "snapshot": false, 00:19:08.684 "clone": false, 00:19:08.684 "esnap_clone": false 00:19:08.684 } 00:19:08.684 } 00:19:08.684 } 00:19:08.684 ]' 00:19:08.684 19:09:26 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:19:08.945 19:09:26 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bs=4096 00:19:08.945 19:09:26 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:19:08.945 19:09:26 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # 
nb=26476544 00:19:08.945 19:09:26 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:19:08.945 19:09:26 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 103424 00:19:08.945 19:09:26 ftl.ftl_trim -- ftl/trim.sh@47 -- # l2p_dram_size_mb=60 00:19:08.945 19:09:26 ftl.ftl_trim -- ftl/trim.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 5e60adcb-ad23-47e3-a47a-b33b20d647be -c nvc0n1p0 --core_mask 7 --l2p_dram_limit 60 --overprovisioning 10 00:19:08.945 [2024-12-05 19:09:26.463839] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:08.945 [2024-12-05 19:09:26.463881] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:19:08.945 [2024-12-05 19:09:26.463892] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:19:08.945 [2024-12-05 19:09:26.463902] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:08.945 [2024-12-05 19:09:26.468512] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:08.945 [2024-12-05 19:09:26.468635] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:08.945 [2024-12-05 19:09:26.468648] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.588 ms 00:19:08.945 [2024-12-05 19:09:26.468659] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:08.945 [2024-12-05 19:09:26.468961] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:19:08.945 [2024-12-05 19:09:26.469191] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:19:08.945 [2024-12-05 19:09:26.469211] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:08.945 [2024-12-05 19:09:26.469219] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:08.945 [2024-12-05 19:09:26.469230] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.261 ms 00:19:08.945 [2024-12-05 19:09:26.469237] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:08.945 [2024-12-05 19:09:26.469312] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 42d50fa2-6cab-4623-bd3d-509e916d40ae 00:19:08.945 [2024-12-05 19:09:26.470288] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:08.945 [2024-12-05 19:09:26.470314] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:19:08.945 [2024-12-05 19:09:26.470323] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:19:08.945 [2024-12-05 19:09:26.470330] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:08.945 [2024-12-05 19:09:26.475479] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:08.945 [2024-12-05 19:09:26.475583] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:08.945 [2024-12-05 19:09:26.475597] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.067 ms 00:19:08.945 [2024-12-05 19:09:26.475604] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:08.945 [2024-12-05 19:09:26.475703] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:08.945 [2024-12-05 19:09:26.475711] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:08.945 [2024-12-05 19:09:26.475721] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 0.049 ms 00:19:08.945 [2024-12-05 19:09:26.475726] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:08.945 [2024-12-05 19:09:26.475762] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:08.945 [2024-12-05 19:09:26.475769] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:19:08.945 [2024-12-05 19:09:26.475776] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:19:08.945 [2024-12-05 19:09:26.475782] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:08.945 [2024-12-05 19:09:26.475821] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:19:08.946 [2024-12-05 19:09:26.477091] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:08.946 [2024-12-05 19:09:26.477117] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:08.946 [2024-12-05 19:09:26.477126] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.275 ms 00:19:08.946 [2024-12-05 19:09:26.477134] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:08.946 [2024-12-05 19:09:26.477177] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:08.946 [2024-12-05 19:09:26.477185] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:19:08.946 [2024-12-05 19:09:26.477192] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:19:08.946 [2024-12-05 19:09:26.477200] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:08.946 [2024-12-05 19:09:26.477227] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:19:08.946 [2024-12-05 19:09:26.477363] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:19:08.946 [2024-12-05 19:09:26.477372] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:19:08.946 [2024-12-05 19:09:26.477382] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:19:08.946 [2024-12-05 19:09:26.477390] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:19:08.946 [2024-12-05 19:09:26.477398] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:19:08.946 [2024-12-05 19:09:26.477405] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:19:08.946 [2024-12-05 19:09:26.477411] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:19:08.946 [2024-12-05 19:09:26.477417] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:19:08.946 [2024-12-05 19:09:26.477426] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:19:08.946 [2024-12-05 19:09:26.477431] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:08.946 [2024-12-05 19:09:26.477438] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:19:08.946 [2024-12-05 19:09:26.477445] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.206 ms 00:19:08.946 [2024-12-05 19:09:26.477452] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:08.946 [2024-12-05 19:09:26.477539] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:08.946 
[2024-12-05 19:09:26.477552] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:19:08.946 [2024-12-05 19:09:26.477558] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 00:19:08.946 [2024-12-05 19:09:26.477565] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:08.946 [2024-12-05 19:09:26.477662] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:19:08.946 [2024-12-05 19:09:26.477671] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:19:08.946 [2024-12-05 19:09:26.477685] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:08.946 [2024-12-05 19:09:26.477694] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:08.946 [2024-12-05 19:09:26.477700] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:19:08.946 [2024-12-05 19:09:26.477706] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:19:08.946 [2024-12-05 19:09:26.477712] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:19:08.946 [2024-12-05 19:09:26.477718] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:19:08.946 [2024-12-05 19:09:26.477724] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:19:08.946 [2024-12-05 19:09:26.477730] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:08.946 [2024-12-05 19:09:26.477735] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:19:08.946 [2024-12-05 19:09:26.477741] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:19:08.946 [2024-12-05 19:09:26.477747] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:08.946 [2024-12-05 19:09:26.477756] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:19:08.946 [2024-12-05 19:09:26.477762] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:19:08.946 [2024-12-05 19:09:26.477769] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:08.946 [2024-12-05 19:09:26.477774] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:19:08.946 [2024-12-05 19:09:26.477782] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:19:08.946 [2024-12-05 19:09:26.477788] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:08.946 [2024-12-05 19:09:26.477795] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:19:08.946 [2024-12-05 19:09:26.477801] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:19:08.946 [2024-12-05 19:09:26.477817] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:08.946 [2024-12-05 19:09:26.477823] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:19:08.946 [2024-12-05 19:09:26.477830] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:19:08.946 [2024-12-05 19:09:26.477836] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:08.946 [2024-12-05 19:09:26.477843] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:19:08.946 [2024-12-05 19:09:26.477849] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:19:08.946 [2024-12-05 19:09:26.477855] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:08.946 [2024-12-05 19:09:26.477863] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] 
Region p2l3 00:19:08.946 [2024-12-05 19:09:26.477873] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:19:08.946 [2024-12-05 19:09:26.477878] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:08.946 [2024-12-05 19:09:26.477886] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:19:08.946 [2024-12-05 19:09:26.477892] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:19:08.946 [2024-12-05 19:09:26.477899] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:08.946 [2024-12-05 19:09:26.477905] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:19:08.946 [2024-12-05 19:09:26.477912] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:19:08.946 [2024-12-05 19:09:26.477918] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:08.946 [2024-12-05 19:09:26.477926] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:19:08.946 [2024-12-05 19:09:26.477931] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:19:08.946 [2024-12-05 19:09:26.477938] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:08.946 [2024-12-05 19:09:26.477944] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:19:08.946 [2024-12-05 19:09:26.477951] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:19:08.946 [2024-12-05 19:09:26.477956] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:08.946 [2024-12-05 19:09:26.477963] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:19:08.946 [2024-12-05 19:09:26.477970] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:19:08.946 [2024-12-05 19:09:26.477979] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:08.946 [2024-12-05 19:09:26.477985] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:08.946 [2024-12-05 19:09:26.477993] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:19:08.946 [2024-12-05 19:09:26.477999] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:19:08.946 [2024-12-05 19:09:26.478006] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:19:08.946 [2024-12-05 19:09:26.478012] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:19:08.946 [2024-12-05 19:09:26.478019] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:19:08.946 [2024-12-05 19:09:26.478024] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:19:08.946 [2024-12-05 19:09:26.478033] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:19:08.946 [2024-12-05 19:09:26.478040] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:08.946 [2024-12-05 19:09:26.478050] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:19:08.946 [2024-12-05 19:09:26.478057] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:19:08.946 [2024-12-05 19:09:26.478064] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 
blk_sz:0x80 00:19:08.946 [2024-12-05 19:09:26.478070] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:19:08.946 [2024-12-05 19:09:26.478078] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:19:08.946 [2024-12-05 19:09:26.478085] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:19:08.946 [2024-12-05 19:09:26.478094] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:19:08.946 [2024-12-05 19:09:26.478100] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:19:08.946 [2024-12-05 19:09:26.478108] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:19:08.946 [2024-12-05 19:09:26.478115] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:19:08.946 [2024-12-05 19:09:26.478122] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:19:08.946 [2024-12-05 19:09:26.478128] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:19:08.946 [2024-12-05 19:09:26.478135] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:19:08.946 [2024-12-05 19:09:26.478142] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:19:08.947 [2024-12-05 19:09:26.478148] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:19:08.947 [2024-12-05 19:09:26.478156] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:08.947 [2024-12-05 19:09:26.478163] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:19:08.947 [2024-12-05 19:09:26.478168] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:19:08.947 [2024-12-05 19:09:26.478175] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:19:08.947 [2024-12-05 19:09:26.478180] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:19:08.947 [2024-12-05 19:09:26.478187] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:08.947 [2024-12-05 19:09:26.478192] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:19:08.947 [2024-12-05 19:09:26.478200] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.574 ms 00:19:08.947 [2024-12-05 19:09:26.478215] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:08.947 [2024-12-05 19:09:26.478306] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region 
needs scrubbing, this may take a while. 00:19:08.947 [2024-12-05 19:09:26.478315] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:19:11.476 [2024-12-05 19:09:28.639995] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:11.476 [2024-12-05 19:09:28.640052] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:19:11.476 [2024-12-05 19:09:28.640071] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2161.675 ms 00:19:11.476 [2024-12-05 19:09:28.640080] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:11.476 [2024-12-05 19:09:28.648578] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:11.476 [2024-12-05 19:09:28.648617] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:11.476 [2024-12-05 19:09:28.648630] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.392 ms 00:19:11.476 [2024-12-05 19:09:28.648650] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:11.476 [2024-12-05 19:09:28.648780] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:11.476 [2024-12-05 19:09:28.648789] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:19:11.476 [2024-12-05 19:09:28.648802] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:19:11.476 [2024-12-05 19:09:28.648809] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:11.476 [2024-12-05 19:09:28.675923] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:11.476 [2024-12-05 19:09:28.675964] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:11.476 [2024-12-05 19:09:28.675978] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.079 ms 00:19:11.476 [2024-12-05 19:09:28.675986] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:11.476 [2024-12-05 19:09:28.676072] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:11.476 [2024-12-05 19:09:28.676086] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:11.476 [2024-12-05 19:09:28.676096] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:19:11.476 [2024-12-05 19:09:28.676103] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:11.476 [2024-12-05 19:09:28.676463] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:11.476 [2024-12-05 19:09:28.676479] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:11.476 [2024-12-05 19:09:28.676490] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.329 ms 00:19:11.476 [2024-12-05 19:09:28.676498] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:11.476 [2024-12-05 19:09:28.676628] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:11.476 [2024-12-05 19:09:28.676638] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:11.476 [2024-12-05 19:09:28.676651] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.097 ms 00:19:11.476 [2024-12-05 19:09:28.676660] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:11.476 [2024-12-05 19:09:28.682225] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:11.476 [2024-12-05 19:09:28.682408] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize 
reloc 00:19:11.476 [2024-12-05 19:09:28.682428] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.535 ms 00:19:11.476 [2024-12-05 19:09:28.682436] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:11.476 [2024-12-05 19:09:28.690705] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:19:11.476 [2024-12-05 19:09:28.705115] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:11.476 [2024-12-05 19:09:28.705148] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:19:11.476 [2024-12-05 19:09:28.705159] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.582 ms 00:19:11.476 [2024-12-05 19:09:28.705168] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:11.476 [2024-12-05 19:09:28.757138] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:11.476 [2024-12-05 19:09:28.757296] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:19:11.476 [2024-12-05 19:09:28.757312] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 51.900 ms 00:19:11.476 [2024-12-05 19:09:28.757327] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:11.476 [2024-12-05 19:09:28.757506] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:11.476 [2024-12-05 19:09:28.757526] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:19:11.476 [2024-12-05 19:09:28.757534] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.137 ms 00:19:11.476 [2024-12-05 19:09:28.757543] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:11.476 [2024-12-05 19:09:28.760555] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:11.476 [2024-12-05 19:09:28.760587] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:19:11.476 [2024-12-05 19:09:28.760597] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.968 ms 00:19:11.476 [2024-12-05 19:09:28.760607] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:11.476 [2024-12-05 19:09:28.762955] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:11.476 [2024-12-05 19:09:28.762988] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:19:11.476 [2024-12-05 19:09:28.762999] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.304 ms 00:19:11.476 [2024-12-05 19:09:28.763009] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:11.476 [2024-12-05 19:09:28.763348] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:11.476 [2024-12-05 19:09:28.763364] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:19:11.476 [2024-12-05 19:09:28.763373] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.290 ms 00:19:11.476 [2024-12-05 19:09:28.763384] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:11.476 [2024-12-05 19:09:28.788890] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:11.476 [2024-12-05 19:09:28.788946] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:19:11.476 [2024-12-05 19:09:28.788963] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.473 ms 00:19:11.476 [2024-12-05 19:09:28.788978] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:19:11.476 [2024-12-05 19:09:28.793157] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:11.476 [2024-12-05 19:09:28.793199] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:19:11.476 [2024-12-05 19:09:28.793211] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.099 ms 00:19:11.476 [2024-12-05 19:09:28.793235] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:11.476 [2024-12-05 19:09:28.796613] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:11.476 [2024-12-05 19:09:28.796652] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:19:11.476 [2024-12-05 19:09:28.796664] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.319 ms 00:19:11.476 [2024-12-05 19:09:28.796676] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:11.476 [2024-12-05 19:09:28.800223] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:11.476 [2024-12-05 19:09:28.800393] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:19:11.476 [2024-12-05 19:09:28.800414] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.483 ms 00:19:11.476 [2024-12-05 19:09:28.800428] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:11.476 [2024-12-05 19:09:28.800483] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:11.476 [2024-12-05 19:09:28.800497] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:19:11.476 [2024-12-05 19:09:28.800506] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:19:11.476 [2024-12-05 19:09:28.800517] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:11.476 [2024-12-05 19:09:28.800599] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:11.476 [2024-12-05 19:09:28.800611] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:19:11.476 [2024-12-05 19:09:28.800620] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:19:11.476 [2024-12-05 19:09:28.800631] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:11.476 [2024-12-05 19:09:28.801624] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:11.476 [2024-12-05 19:09:28.802639] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2337.385 ms, result 0 00:19:11.476 [2024-12-05 19:09:28.803307] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:11.476 { 00:19:11.476 "name": "ftl0", 00:19:11.476 "uuid": "42d50fa2-6cab-4623-bd3d-509e916d40ae" 00:19:11.476 } 00:19:11.476 19:09:28 ftl.ftl_trim -- ftl/trim.sh@51 -- # waitforbdev ftl0 00:19:11.476 19:09:28 ftl.ftl_trim -- common/autotest_common.sh@903 -- # local bdev_name=ftl0 00:19:11.476 19:09:28 ftl.ftl_trim -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:19:11.476 19:09:28 ftl.ftl_trim -- common/autotest_common.sh@905 -- # local i 00:19:11.476 19:09:28 ftl.ftl_trim -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:19:11.476 19:09:28 ftl.ftl_trim -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:19:11.476 19:09:28 ftl.ftl_trim -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:19:11.476 19:09:29 ftl.ftl_trim -- 
common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:19:11.735 [ 00:19:11.735 { 00:19:11.735 "name": "ftl0", 00:19:11.735 "aliases": [ 00:19:11.735 "42d50fa2-6cab-4623-bd3d-509e916d40ae" 00:19:11.735 ], 00:19:11.735 "product_name": "FTL disk", 00:19:11.735 "block_size": 4096, 00:19:11.735 "num_blocks": 23592960, 00:19:11.735 "uuid": "42d50fa2-6cab-4623-bd3d-509e916d40ae", 00:19:11.735 "assigned_rate_limits": { 00:19:11.735 "rw_ios_per_sec": 0, 00:19:11.735 "rw_mbytes_per_sec": 0, 00:19:11.735 "r_mbytes_per_sec": 0, 00:19:11.735 "w_mbytes_per_sec": 0 00:19:11.735 }, 00:19:11.735 "claimed": false, 00:19:11.735 "zoned": false, 00:19:11.735 "supported_io_types": { 00:19:11.735 "read": true, 00:19:11.735 "write": true, 00:19:11.735 "unmap": true, 00:19:11.735 "flush": true, 00:19:11.735 "reset": false, 00:19:11.735 "nvme_admin": false, 00:19:11.735 "nvme_io": false, 00:19:11.735 "nvme_io_md": false, 00:19:11.735 "write_zeroes": true, 00:19:11.735 "zcopy": false, 00:19:11.735 "get_zone_info": false, 00:19:11.735 "zone_management": false, 00:19:11.735 "zone_append": false, 00:19:11.735 "compare": false, 00:19:11.735 "compare_and_write": false, 00:19:11.735 "abort": false, 00:19:11.735 "seek_hole": false, 00:19:11.735 "seek_data": false, 00:19:11.735 "copy": false, 00:19:11.735 "nvme_iov_md": false 00:19:11.735 }, 00:19:11.735 "driver_specific": { 00:19:11.735 "ftl": { 00:19:11.735 "base_bdev": "5e60adcb-ad23-47e3-a47a-b33b20d647be", 00:19:11.735 "cache": "nvc0n1p0" 00:19:11.735 } 00:19:11.735 } 00:19:11.735 } 00:19:11.735 ] 00:19:11.735 19:09:29 ftl.ftl_trim -- common/autotest_common.sh@911 -- # return 0 00:19:11.735 19:09:29 ftl.ftl_trim -- ftl/trim.sh@54 -- # echo '{"subsystems": [' 00:19:11.735 19:09:29 ftl.ftl_trim -- ftl/trim.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:19:11.994 19:09:29 ftl.ftl_trim -- ftl/trim.sh@56 -- # echo ']}' 00:19:11.994 19:09:29 ftl.ftl_trim -- ftl/trim.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 00:19:12.252 19:09:29 ftl.ftl_trim -- ftl/trim.sh@59 -- # bdev_info='[ 00:19:12.252 { 00:19:12.252 "name": "ftl0", 00:19:12.252 "aliases": [ 00:19:12.252 "42d50fa2-6cab-4623-bd3d-509e916d40ae" 00:19:12.252 ], 00:19:12.252 "product_name": "FTL disk", 00:19:12.252 "block_size": 4096, 00:19:12.252 "num_blocks": 23592960, 00:19:12.252 "uuid": "42d50fa2-6cab-4623-bd3d-509e916d40ae", 00:19:12.252 "assigned_rate_limits": { 00:19:12.252 "rw_ios_per_sec": 0, 00:19:12.252 "rw_mbytes_per_sec": 0, 00:19:12.252 "r_mbytes_per_sec": 0, 00:19:12.252 "w_mbytes_per_sec": 0 00:19:12.252 }, 00:19:12.252 "claimed": false, 00:19:12.252 "zoned": false, 00:19:12.252 "supported_io_types": { 00:19:12.252 "read": true, 00:19:12.252 "write": true, 00:19:12.252 "unmap": true, 00:19:12.252 "flush": true, 00:19:12.252 "reset": false, 00:19:12.252 "nvme_admin": false, 00:19:12.252 "nvme_io": false, 00:19:12.252 "nvme_io_md": false, 00:19:12.252 "write_zeroes": true, 00:19:12.252 "zcopy": false, 00:19:12.252 "get_zone_info": false, 00:19:12.252 "zone_management": false, 00:19:12.252 "zone_append": false, 00:19:12.252 "compare": false, 00:19:12.252 "compare_and_write": false, 00:19:12.252 "abort": false, 00:19:12.252 "seek_hole": false, 00:19:12.252 "seek_data": false, 00:19:12.252 "copy": false, 00:19:12.252 "nvme_iov_md": false 00:19:12.252 }, 00:19:12.252 "driver_specific": { 00:19:12.252 "ftl": { 00:19:12.252 "base_bdev": "5e60adcb-ad23-47e3-a47a-b33b20d647be", 
00:19:12.252 "cache": "nvc0n1p0" 00:19:12.252 } 00:19:12.252 } 00:19:12.252 } 00:19:12.252 ]' 00:19:12.252 19:09:29 ftl.ftl_trim -- ftl/trim.sh@60 -- # jq '.[] .num_blocks' 00:19:12.252 19:09:29 ftl.ftl_trim -- ftl/trim.sh@60 -- # nb=23592960 00:19:12.252 19:09:29 ftl.ftl_trim -- ftl/trim.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:19:12.512 [2024-12-05 19:09:29.851801] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:12.512 [2024-12-05 19:09:29.851844] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:19:12.512 [2024-12-05 19:09:29.851859] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:19:12.512 [2024-12-05 19:09:29.851867] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.512 [2024-12-05 19:09:29.851901] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:19:12.512 [2024-12-05 19:09:29.852361] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:12.512 [2024-12-05 19:09:29.852386] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:19:12.512 [2024-12-05 19:09:29.852408] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.447 ms 00:19:12.512 [2024-12-05 19:09:29.852422] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.512 [2024-12-05 19:09:29.852980] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:12.512 [2024-12-05 19:09:29.853004] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:19:12.512 [2024-12-05 19:09:29.853013] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.528 ms 00:19:12.512 [2024-12-05 19:09:29.853022] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.512 [2024-12-05 19:09:29.856680] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:12.512 [2024-12-05 19:09:29.856702] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:19:12.512 [2024-12-05 19:09:29.856712] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.627 ms 00:19:12.512 [2024-12-05 19:09:29.856722] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.512 [2024-12-05 19:09:29.863683] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:12.512 [2024-12-05 19:09:29.863715] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:19:12.512 [2024-12-05 19:09:29.863725] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.916 ms 00:19:12.512 [2024-12-05 19:09:29.863736] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.512 [2024-12-05 19:09:29.865408] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:12.512 [2024-12-05 19:09:29.865444] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:19:12.512 [2024-12-05 19:09:29.865453] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.572 ms 00:19:12.512 [2024-12-05 19:09:29.865462] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.512 [2024-12-05 19:09:29.868988] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:12.512 [2024-12-05 19:09:29.869024] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:19:12.512 [2024-12-05 19:09:29.869034] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 3.479 ms 00:19:12.512 [2024-12-05 19:09:29.869045] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.512 [2024-12-05 19:09:29.869230] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:12.512 [2024-12-05 19:09:29.869241] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:19:12.512 [2024-12-05 19:09:29.869249] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.139 ms 00:19:12.512 [2024-12-05 19:09:29.869283] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.512 [2024-12-05 19:09:29.870797] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:12.512 [2024-12-05 19:09:29.870937] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:19:12.512 [2024-12-05 19:09:29.870953] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.478 ms 00:19:12.512 [2024-12-05 19:09:29.870964] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.512 [2024-12-05 19:09:29.872498] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:12.512 [2024-12-05 19:09:29.872528] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:19:12.512 [2024-12-05 19:09:29.872536] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.488 ms 00:19:12.512 [2024-12-05 19:09:29.872544] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.512 [2024-12-05 19:09:29.873527] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:12.512 [2024-12-05 19:09:29.873561] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:19:12.512 [2024-12-05 19:09:29.873569] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.944 ms 00:19:12.512 [2024-12-05 19:09:29.873578] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.512 [2024-12-05 19:09:29.874661] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:12.512 [2024-12-05 19:09:29.874694] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:19:12.512 [2024-12-05 19:09:29.874702] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.984 ms 00:19:12.512 [2024-12-05 19:09:29.874710] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.512 [2024-12-05 19:09:29.874753] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:19:12.513 [2024-12-05 19:09:29.874768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:19:12.513 [2024-12-05 19:09:29.874777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:19:12.513 [2024-12-05 19:09:29.874791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:19:12.513 [2024-12-05 19:09:29.874798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:19:12.513 [2024-12-05 19:09:29.874808] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:19:12.513 [2024-12-05 19:09:29.874815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:19:12.513 [2024-12-05 19:09:29.874825] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:19:12.513 [2024-12-05 19:09:29.874832] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:19:12.513 [2024-12-05 19:09:29.874841] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:19:12.513 [2024-12-05 19:09:29.874848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:19:12.513 [2024-12-05 19:09:29.874857] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:19:12.513 [2024-12-05 19:09:29.874865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:19:12.513 [2024-12-05 19:09:29.874873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:19:12.513 [2024-12-05 19:09:29.874880] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:19:12.513 [2024-12-05 19:09:29.874890] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:19:12.513 [2024-12-05 19:09:29.874897] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:19:12.513 [2024-12-05 19:09:29.874906] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:19:12.513 [2024-12-05 19:09:29.874913] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:19:12.513 [2024-12-05 19:09:29.874924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:19:12.513 [2024-12-05 19:09:29.874931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:19:12.513 [2024-12-05 19:09:29.874940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:19:12.513 [2024-12-05 19:09:29.874947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:19:12.513 [2024-12-05 19:09:29.874956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:19:12.513 [2024-12-05 19:09:29.874964] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:19:12.513 [2024-12-05 19:09:29.874972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:19:12.513 [2024-12-05 19:09:29.874979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:19:12.513 [2024-12-05 19:09:29.874989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:19:12.513 [2024-12-05 19:09:29.874996] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:19:12.513 [2024-12-05 19:09:29.875005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:19:12.513 [2024-12-05 19:09:29.875012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:19:12.513 [2024-12-05 19:09:29.875022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:19:12.513 [2024-12-05 19:09:29.875030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:19:12.513 [2024-12-05 
19:09:29.875039] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:19:12.513 [2024-12-05 19:09:29.875047] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:19:12.513 [2024-12-05 19:09:29.875058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:19:12.513 [2024-12-05 19:09:29.875065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:19:12.513 [2024-12-05 19:09:29.875074] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:19:12.513 [2024-12-05 19:09:29.875082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:19:12.513 [2024-12-05 19:09:29.875090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:19:12.513 [2024-12-05 19:09:29.875097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:19:12.513 [2024-12-05 19:09:29.875106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:19:12.513 [2024-12-05 19:09:29.875113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:19:12.513 [2024-12-05 19:09:29.875122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:19:12.513 [2024-12-05 19:09:29.875129] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:19:12.513 [2024-12-05 19:09:29.875138] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:19:12.513 [2024-12-05 19:09:29.875145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:19:12.513 [2024-12-05 19:09:29.875154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:19:12.513 [2024-12-05 19:09:29.875162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:19:12.513 [2024-12-05 19:09:29.875171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:19:12.513 [2024-12-05 19:09:29.875178] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:19:12.513 [2024-12-05 19:09:29.875189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:19:12.513 [2024-12-05 19:09:29.875196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:19:12.513 [2024-12-05 19:09:29.875205] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:19:12.513 [2024-12-05 19:09:29.875212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:19:12.513 [2024-12-05 19:09:29.875221] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:19:12.513 [2024-12-05 19:09:29.875229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:19:12.513 [2024-12-05 19:09:29.875239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 
00:19:12.513 [2024-12-05 19:09:29.875246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:19:12.513 [2024-12-05 19:09:29.875267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:19:12.513 [2024-12-05 19:09:29.875274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:19:12.513 [2024-12-05 19:09:29.875283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:19:12.513 [2024-12-05 19:09:29.875290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:19:12.513 [2024-12-05 19:09:29.875299] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:19:12.513 [2024-12-05 19:09:29.875306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:19:12.513 [2024-12-05 19:09:29.875316] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:19:12.513 [2024-12-05 19:09:29.875324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:19:12.513 [2024-12-05 19:09:29.875335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:19:12.513 [2024-12-05 19:09:29.875343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:19:12.513 [2024-12-05 19:09:29.875352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:19:12.513 [2024-12-05 19:09:29.875359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:19:12.513 [2024-12-05 19:09:29.875368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:19:12.513 [2024-12-05 19:09:29.875375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:19:12.513 [2024-12-05 19:09:29.875385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:19:12.513 [2024-12-05 19:09:29.875392] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:19:12.513 [2024-12-05 19:09:29.875401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:19:12.513 [2024-12-05 19:09:29.875409] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:19:12.513 [2024-12-05 19:09:29.875418] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:19:12.513 [2024-12-05 19:09:29.875425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:19:12.513 [2024-12-05 19:09:29.875434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:19:12.513 [2024-12-05 19:09:29.875441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:19:12.513 [2024-12-05 19:09:29.875450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:19:12.513 [2024-12-05 19:09:29.875458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 
wr_cnt: 0 state: free 00:19:12.513 [2024-12-05 19:09:29.875469] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:19:12.513 [2024-12-05 19:09:29.875490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:19:12.513 [2024-12-05 19:09:29.875500] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:19:12.513 [2024-12-05 19:09:29.875508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:19:12.514 [2024-12-05 19:09:29.875517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:19:12.514 [2024-12-05 19:09:29.875524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:19:12.514 [2024-12-05 19:09:29.875533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:19:12.514 [2024-12-05 19:09:29.875540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:19:12.514 [2024-12-05 19:09:29.875549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:19:12.514 [2024-12-05 19:09:29.875556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:19:12.514 [2024-12-05 19:09:29.875565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:19:12.514 [2024-12-05 19:09:29.875572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:19:12.514 [2024-12-05 19:09:29.875581] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:19:12.514 [2024-12-05 19:09:29.875588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:19:12.514 [2024-12-05 19:09:29.875597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:19:12.514 [2024-12-05 19:09:29.875610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:19:12.514 [2024-12-05 19:09:29.875620] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:19:12.514 [2024-12-05 19:09:29.875628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:19:12.514 [2024-12-05 19:09:29.875644] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:19:12.514 [2024-12-05 19:09:29.875651] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 42d50fa2-6cab-4623-bd3d-509e916d40ae 00:19:12.514 [2024-12-05 19:09:29.875661] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:19:12.514 [2024-12-05 19:09:29.875670] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:19:12.514 [2024-12-05 19:09:29.875679] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:19:12.514 [2024-12-05 19:09:29.875686] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:19:12.514 [2024-12-05 19:09:29.875696] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:19:12.514 [2024-12-05 19:09:29.875704] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:19:12.514 
[2024-12-05 19:09:29.875713] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:19:12.514 [2024-12-05 19:09:29.875719] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:19:12.514 [2024-12-05 19:09:29.875727] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:19:12.514 [2024-12-05 19:09:29.875734] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:12.514 [2024-12-05 19:09:29.875743] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:19:12.514 [2024-12-05 19:09:29.875751] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.982 ms 00:19:12.514 [2024-12-05 19:09:29.875761] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.514 [2024-12-05 19:09:29.877277] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:12.514 [2024-12-05 19:09:29.877295] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:19:12.514 [2024-12-05 19:09:29.877305] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.491 ms 00:19:12.514 [2024-12-05 19:09:29.877314] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.514 [2024-12-05 19:09:29.877415] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:12.514 [2024-12-05 19:09:29.877426] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:19:12.514 [2024-12-05 19:09:29.877435] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.057 ms 00:19:12.514 [2024-12-05 19:09:29.877443] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.514 [2024-12-05 19:09:29.883366] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:12.514 [2024-12-05 19:09:29.883490] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:12.514 [2024-12-05 19:09:29.883550] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:12.514 [2024-12-05 19:09:29.883606] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.514 [2024-12-05 19:09:29.883710] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:12.514 [2024-12-05 19:09:29.883802] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:12.514 [2024-12-05 19:09:29.883825] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:12.514 [2024-12-05 19:09:29.883847] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.514 [2024-12-05 19:09:29.883945] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:12.514 [2024-12-05 19:09:29.883974] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:12.514 [2024-12-05 19:09:29.883994] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:12.514 [2024-12-05 19:09:29.884014] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.514 [2024-12-05 19:09:29.884103] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:12.514 [2024-12-05 19:09:29.884169] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:12.514 [2024-12-05 19:09:29.884214] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:12.514 [2024-12-05 19:09:29.884240] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.514 [2024-12-05 19:09:29.893676] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl0] Rollback 00:19:12.514 [2024-12-05 19:09:29.893804] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:12.514 [2024-12-05 19:09:29.893854] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:12.514 [2024-12-05 19:09:29.893877] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.514 [2024-12-05 19:09:29.901801] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:12.514 [2024-12-05 19:09:29.901930] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:12.514 [2024-12-05 19:09:29.901980] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:12.514 [2024-12-05 19:09:29.902006] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.514 [2024-12-05 19:09:29.902100] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:12.514 [2024-12-05 19:09:29.902130] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:12.514 [2024-12-05 19:09:29.902247] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:12.514 [2024-12-05 19:09:29.902284] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.514 [2024-12-05 19:09:29.902358] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:12.514 [2024-12-05 19:09:29.902381] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:12.514 [2024-12-05 19:09:29.902401] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:12.514 [2024-12-05 19:09:29.902448] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.514 [2024-12-05 19:09:29.902554] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:12.514 [2024-12-05 19:09:29.902585] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:12.514 [2024-12-05 19:09:29.902634] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:12.514 [2024-12-05 19:09:29.902658] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.514 [2024-12-05 19:09:29.902724] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:12.514 [2024-12-05 19:09:29.902749] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:19:12.514 [2024-12-05 19:09:29.902768] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:12.514 [2024-12-05 19:09:29.902817] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.514 [2024-12-05 19:09:29.902884] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:12.514 [2024-12-05 19:09:29.902908] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:12.514 [2024-12-05 19:09:29.902965] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:12.514 [2024-12-05 19:09:29.902988] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.514 [2024-12-05 19:09:29.903061] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:12.514 [2024-12-05 19:09:29.903087] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:12.514 [2024-12-05 19:09:29.903106] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:12.514 [2024-12-05 19:09:29.903152] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.514 
[2024-12-05 19:09:29.903380] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 51.547 ms, result 0 00:19:12.514 true 00:19:12.514 19:09:29 ftl.ftl_trim -- ftl/trim.sh@63 -- # killprocess 87200 00:19:12.514 19:09:29 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 87200 ']' 00:19:12.514 19:09:29 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 87200 00:19:12.514 19:09:29 ftl.ftl_trim -- common/autotest_common.sh@959 -- # uname 00:19:12.514 19:09:29 ftl.ftl_trim -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:12.514 19:09:29 ftl.ftl_trim -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 87200 00:19:12.514 19:09:29 ftl.ftl_trim -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:12.514 19:09:29 ftl.ftl_trim -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:12.514 19:09:29 ftl.ftl_trim -- common/autotest_common.sh@972 -- # echo 'killing process with pid 87200' 00:19:12.514 killing process with pid 87200 00:19:12.514 19:09:29 ftl.ftl_trim -- common/autotest_common.sh@973 -- # kill 87200 00:19:12.514 19:09:29 ftl.ftl_trim -- common/autotest_common.sh@978 -- # wait 87200 00:19:17.780 19:09:35 ftl.ftl_trim -- ftl/trim.sh@66 -- # dd if=/dev/urandom bs=4K count=65536 00:19:18.717 65536+0 records in 00:19:18.717 65536+0 records out 00:19:18.717 268435456 bytes (268 MB, 256 MiB) copied, 0.802657 s, 334 MB/s 00:19:18.717 19:09:35 ftl.ftl_trim -- ftl/trim.sh@69 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:19:18.717 [2024-12-05 19:09:35.968527] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 22.11.4 initialization... 
00:19:18.717 [2024-12-05 19:09:35.968784] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87355 ] 00:19:18.717 [2024-12-05 19:09:36.123943] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:18.717 [2024-12-05 19:09:36.145884] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:18.717 [2024-12-05 19:09:36.236837] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:18.717 [2024-12-05 19:09:36.236912] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:18.978 [2024-12-05 19:09:36.394041] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:18.978 [2024-12-05 19:09:36.394097] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:19:18.978 [2024-12-05 19:09:36.394112] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:19:18.978 [2024-12-05 19:09:36.394121] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:18.978 [2024-12-05 19:09:36.396613] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:18.978 [2024-12-05 19:09:36.396810] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:18.978 [2024-12-05 19:09:36.396830] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.472 ms 00:19:18.979 [2024-12-05 19:09:36.396839] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:18.979 [2024-12-05 19:09:36.397318] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:19:18.979 [2024-12-05 19:09:36.397640] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:19:18.979 [2024-12-05 19:09:36.397675] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:18.979 [2024-12-05 19:09:36.397685] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:18.979 [2024-12-05 19:09:36.397696] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.390 ms 00:19:18.979 [2024-12-05 19:09:36.397704] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:18.979 [2024-12-05 19:09:36.400095] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:19:18.979 [2024-12-05 19:09:36.403853] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:18.979 [2024-12-05 19:09:36.403910] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:19:18.979 [2024-12-05 19:09:36.403925] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.761 ms 00:19:18.979 [2024-12-05 19:09:36.403934] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:18.979 [2024-12-05 19:09:36.404014] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:18.979 [2024-12-05 19:09:36.404025] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:19:18.979 [2024-12-05 19:09:36.404034] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:19:18.979 [2024-12-05 19:09:36.404041] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:18.979 [2024-12-05 19:09:36.411870] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:19:18.979 [2024-12-05 19:09:36.411913] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:18.979 [2024-12-05 19:09:36.411923] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.784 ms 00:19:18.979 [2024-12-05 19:09:36.411940] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:18.979 [2024-12-05 19:09:36.412077] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:18.979 [2024-12-05 19:09:36.412089] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:18.979 [2024-12-05 19:09:36.412102] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.072 ms 00:19:18.979 [2024-12-05 19:09:36.412112] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:18.979 [2024-12-05 19:09:36.412137] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:18.979 [2024-12-05 19:09:36.412147] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:19:18.979 [2024-12-05 19:09:36.412155] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:19:18.979 [2024-12-05 19:09:36.412162] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:18.979 [2024-12-05 19:09:36.412187] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:19:18.979 [2024-12-05 19:09:36.414158] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:18.979 [2024-12-05 19:09:36.414196] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:18.979 [2024-12-05 19:09:36.414206] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.978 ms 00:19:18.979 [2024-12-05 19:09:36.414219] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:18.979 [2024-12-05 19:09:36.414300] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:18.979 [2024-12-05 19:09:36.414313] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:19:18.979 [2024-12-05 19:09:36.414323] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:19:18.979 [2024-12-05 19:09:36.414331] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:18.979 [2024-12-05 19:09:36.414353] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:19:18.979 [2024-12-05 19:09:36.414376] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:19:18.979 [2024-12-05 19:09:36.414439] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:19:18.979 [2024-12-05 19:09:36.414459] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:19:18.979 [2024-12-05 19:09:36.414565] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:19:18.979 [2024-12-05 19:09:36.414576] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:19:18.979 [2024-12-05 19:09:36.414587] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:19:18.979 [2024-12-05 19:09:36.414601] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:19:18.979 [2024-12-05 19:09:36.414611] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:19:18.979 [2024-12-05 19:09:36.414622] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:19:18.979 [2024-12-05 19:09:36.414630] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:19:18.979 [2024-12-05 19:09:36.414637] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:19:18.979 [2024-12-05 19:09:36.414649] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:19:18.979 [2024-12-05 19:09:36.414660] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:18.979 [2024-12-05 19:09:36.414668] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:19:18.979 [2024-12-05 19:09:36.414676] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.308 ms 00:19:18.979 [2024-12-05 19:09:36.414683] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:18.979 [2024-12-05 19:09:36.414772] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:18.979 [2024-12-05 19:09:36.414788] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:19:18.979 [2024-12-05 19:09:36.414797] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:19:18.979 [2024-12-05 19:09:36.414806] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:18.979 [2024-12-05 19:09:36.414911] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:19:18.979 [2024-12-05 19:09:36.414929] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:19:18.979 [2024-12-05 19:09:36.414938] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:18.979 [2024-12-05 19:09:36.414947] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:18.979 [2024-12-05 19:09:36.414957] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:19:18.979 [2024-12-05 19:09:36.414966] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:19:18.979 [2024-12-05 19:09:36.414974] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:19:18.979 [2024-12-05 19:09:36.414986] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:19:18.979 [2024-12-05 19:09:36.414995] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:19:18.979 [2024-12-05 19:09:36.415003] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:18.979 [2024-12-05 19:09:36.415011] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:19:18.979 [2024-12-05 19:09:36.415019] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:19:18.979 [2024-12-05 19:09:36.415027] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:18.979 [2024-12-05 19:09:36.415035] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:19:18.979 [2024-12-05 19:09:36.415043] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:19:18.979 [2024-12-05 19:09:36.415050] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:18.979 [2024-12-05 19:09:36.415058] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:19:18.979 [2024-12-05 19:09:36.415066] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:19:18.979 [2024-12-05 19:09:36.415074] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:18.979 [2024-12-05 19:09:36.415082] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:19:18.979 [2024-12-05 19:09:36.415089] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:19:18.979 [2024-12-05 19:09:36.415097] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:18.979 [2024-12-05 19:09:36.415106] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:19:18.979 [2024-12-05 19:09:36.415119] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:19:18.979 [2024-12-05 19:09:36.415127] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:18.979 [2024-12-05 19:09:36.415135] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:19:18.979 [2024-12-05 19:09:36.415143] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:19:18.979 [2024-12-05 19:09:36.415151] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:18.979 [2024-12-05 19:09:36.415159] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:19:18.979 [2024-12-05 19:09:36.415166] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:19:18.979 [2024-12-05 19:09:36.415175] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:18.979 [2024-12-05 19:09:36.415183] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:19:18.979 [2024-12-05 19:09:36.415191] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:19:18.979 [2024-12-05 19:09:36.415198] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:18.979 [2024-12-05 19:09:36.415206] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:19:18.979 [2024-12-05 19:09:36.415214] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:19:18.979 [2024-12-05 19:09:36.415222] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:18.979 [2024-12-05 19:09:36.415230] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:19:18.979 [2024-12-05 19:09:36.415237] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:19:18.979 [2024-12-05 19:09:36.415261] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:18.979 [2024-12-05 19:09:36.415269] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:19:18.979 [2024-12-05 19:09:36.415276] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:19:18.979 [2024-12-05 19:09:36.415283] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:18.979 [2024-12-05 19:09:36.415289] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:19:18.979 [2024-12-05 19:09:36.415296] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:19:18.979 [2024-12-05 19:09:36.415304] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:18.979 [2024-12-05 19:09:36.415311] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:18.979 [2024-12-05 19:09:36.415318] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:19:18.979 [2024-12-05 19:09:36.415325] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:19:18.979 [2024-12-05 19:09:36.415331] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:19:18.979 
[2024-12-05 19:09:36.415339] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:19:18.979 [2024-12-05 19:09:36.415345] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:19:18.979 [2024-12-05 19:09:36.415352] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:19:18.979 [2024-12-05 19:09:36.415360] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:19:18.979 [2024-12-05 19:09:36.415370] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:18.979 [2024-12-05 19:09:36.415385] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:19:18.979 [2024-12-05 19:09:36.415393] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:19:18.979 [2024-12-05 19:09:36.415401] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:19:18.979 [2024-12-05 19:09:36.415409] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:19:18.979 [2024-12-05 19:09:36.415416] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:19:18.979 [2024-12-05 19:09:36.415423] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:19:18.979 [2024-12-05 19:09:36.415431] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:19:18.979 [2024-12-05 19:09:36.415444] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:19:18.979 [2024-12-05 19:09:36.415451] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:19:18.979 [2024-12-05 19:09:36.415458] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:19:18.979 [2024-12-05 19:09:36.415465] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:19:18.979 [2024-12-05 19:09:36.415472] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:19:18.979 [2024-12-05 19:09:36.415481] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:19:18.979 [2024-12-05 19:09:36.415488] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:19:18.979 [2024-12-05 19:09:36.415495] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:19:18.979 [2024-12-05 19:09:36.415506] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:18.979 [2024-12-05 19:09:36.415517] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:19:18.979 [2024-12-05 19:09:36.415524] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:19:18.979 [2024-12-05 19:09:36.415531] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:19:18.979 [2024-12-05 19:09:36.415538] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:19:18.979 [2024-12-05 19:09:36.415545] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:18.979 [2024-12-05 19:09:36.415554] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:19:18.979 [2024-12-05 19:09:36.415562] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.703 ms 00:19:18.979 [2024-12-05 19:09:36.415569] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:18.979 [2024-12-05 19:09:36.429431] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:18.979 [2024-12-05 19:09:36.429614] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:18.979 [2024-12-05 19:09:36.429682] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.808 ms 00:19:18.979 [2024-12-05 19:09:36.429705] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:18.979 [2024-12-05 19:09:36.429853] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:18.979 [2024-12-05 19:09:36.429886] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:19:18.980 [2024-12-05 19:09:36.429956] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:19:18.980 [2024-12-05 19:09:36.429978] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:18.980 [2024-12-05 19:09:36.450351] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:18.980 [2024-12-05 19:09:36.450541] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:18.980 [2024-12-05 19:09:36.450622] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.334 ms 00:19:18.980 [2024-12-05 19:09:36.450650] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:18.980 [2024-12-05 19:09:36.450774] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:18.980 [2024-12-05 19:09:36.450807] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:18.980 [2024-12-05 19:09:36.450838] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:19:18.980 [2024-12-05 19:09:36.450860] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:18.980 [2024-12-05 19:09:36.451485] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:18.980 [2024-12-05 19:09:36.451617] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:18.980 [2024-12-05 19:09:36.451683] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.503 ms 00:19:18.980 [2024-12-05 19:09:36.451709] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:18.980 [2024-12-05 19:09:36.451898] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:18.980 [2024-12-05 19:09:36.452009] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:18.980 [2024-12-05 19:09:36.452196] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.142 ms 00:19:18.980 [2024-12-05 19:09:36.452240] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:18.980 [2024-12-05 19:09:36.460405] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:18.980 [2024-12-05 19:09:36.460548] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:18.980 [2024-12-05 19:09:36.460601] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.086 ms 00:19:18.980 [2024-12-05 19:09:36.460631] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:18.980 [2024-12-05 19:09:36.464370] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:19:18.980 [2024-12-05 19:09:36.464529] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:19:18.980 [2024-12-05 19:09:36.464591] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:18.980 [2024-12-05 19:09:36.464612] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:19:18.980 [2024-12-05 19:09:36.464633] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.814 ms 00:19:18.980 [2024-12-05 19:09:36.464651] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:18.980 [2024-12-05 19:09:36.480219] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:18.980 [2024-12-05 19:09:36.480380] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:19:18.980 [2024-12-05 19:09:36.480442] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.491 ms 00:19:18.980 [2024-12-05 19:09:36.480465] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:18.980 [2024-12-05 19:09:36.483288] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:18.980 [2024-12-05 19:09:36.483424] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:19:18.980 [2024-12-05 19:09:36.483475] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.726 ms 00:19:18.980 [2024-12-05 19:09:36.483496] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:18.980 [2024-12-05 19:09:36.486420] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:18.980 [2024-12-05 19:09:36.486607] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:19:18.980 [2024-12-05 19:09:36.486673] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.536 ms 00:19:18.980 [2024-12-05 19:09:36.486696] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:18.980 [2024-12-05 19:09:36.487077] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:18.980 [2024-12-05 19:09:36.487128] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:19:18.980 [2024-12-05 19:09:36.487349] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.255 ms 00:19:18.980 [2024-12-05 19:09:36.487421] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:18.980 [2024-12-05 19:09:36.510334] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:18.980 [2024-12-05 19:09:36.510531] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:19:18.980 [2024-12-05 19:09:36.510591] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
22.839 ms 00:19:18.980 [2024-12-05 19:09:36.510616] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:18.980 [2024-12-05 19:09:36.518610] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:19:19.241 [2024-12-05 19:09:36.537714] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:19.241 [2024-12-05 19:09:36.537874] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:19:19.241 [2024-12-05 19:09:36.537945] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.978 ms 00:19:19.241 [2024-12-05 19:09:36.537968] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:19.241 [2024-12-05 19:09:36.538074] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:19.241 [2024-12-05 19:09:36.538102] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:19:19.241 [2024-12-05 19:09:36.538130] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:19:19.241 [2024-12-05 19:09:36.538152] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:19.241 [2024-12-05 19:09:36.538224] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:19.241 [2024-12-05 19:09:36.538354] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:19:19.241 [2024-12-05 19:09:36.538377] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:19:19.241 [2024-12-05 19:09:36.538397] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:19.241 [2024-12-05 19:09:36.538436] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:19.241 [2024-12-05 19:09:36.538461] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:19:19.241 [2024-12-05 19:09:36.538550] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:19:19.241 [2024-12-05 19:09:36.538576] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:19.241 [2024-12-05 19:09:36.538634] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:19:19.241 [2024-12-05 19:09:36.538659] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:19.241 [2024-12-05 19:09:36.538731] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:19:19.241 [2024-12-05 19:09:36.538756] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:19:19.241 [2024-12-05 19:09:36.538776] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:19.241 [2024-12-05 19:09:36.544642] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:19.241 [2024-12-05 19:09:36.544794] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:19:19.241 [2024-12-05 19:09:36.544811] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.827 ms 00:19:19.241 [2024-12-05 19:09:36.544820] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:19.241 [2024-12-05 19:09:36.544911] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:19.241 [2024-12-05 19:09:36.544922] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:19:19.241 [2024-12-05 19:09:36.544932] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.045 ms 00:19:19.241 [2024-12-05 19:09:36.544940] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:19.241 
[2024-12-05 19:09:36.546512] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:19.241 [2024-12-05 19:09:36.547867] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 152.127 ms, result 0 00:19:19.241 [2024-12-05 19:09:36.549151] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:19.241 [2024-12-05 19:09:36.556559] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:20.186  [2024-12-05T19:09:38.687Z] Copying: 17/256 [MB] (17 MBps) [2024-12-05T19:09:39.630Z] Copying: 38/256 [MB] (20 MBps) [2024-12-05T19:09:40.573Z] Copying: 55/256 [MB] (16 MBps) [2024-12-05T19:09:41.954Z] Copying: 75/256 [MB] (20 MBps) [2024-12-05T19:09:42.894Z] Copying: 95/256 [MB] (19 MBps) [2024-12-05T19:09:43.836Z] Copying: 108/256 [MB] (13 MBps) [2024-12-05T19:09:44.779Z] Copying: 123/256 [MB] (14 MBps) [2024-12-05T19:09:45.722Z] Copying: 137/256 [MB] (14 MBps) [2024-12-05T19:09:46.665Z] Copying: 151/256 [MB] (13 MBps) [2024-12-05T19:09:47.607Z] Copying: 169/256 [MB] (18 MBps) [2024-12-05T19:09:48.997Z] Copying: 183984/262144 [kB] (10068 kBps) [2024-12-05T19:09:49.567Z] Copying: 194012/262144 [kB] (10028 kBps) [2024-12-05T19:09:50.955Z] Copying: 204112/262144 [kB] (10100 kBps) [2024-12-05T19:09:51.901Z] Copying: 214192/262144 [kB] (10080 kBps) [2024-12-05T19:09:52.847Z] Copying: 224192/262144 [kB] (10000 kBps) [2024-12-05T19:09:53.790Z] Copying: 234296/262144 [kB] (10104 kBps) [2024-12-05T19:09:54.849Z] Copying: 244320/262144 [kB] (10024 kBps) [2024-12-05T19:09:55.460Z] Copying: 254272/262144 [kB] (9952 kBps) [2024-12-05T19:09:55.460Z] Copying: 256/256 [MB] (average 13 MBps)[2024-12-05 19:09:55.332149] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:37.901 [2024-12-05 19:09:55.334060] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:37.901 [2024-12-05 19:09:55.334281] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:19:37.901 [2024-12-05 19:09:55.334306] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:19:37.901 [2024-12-05 19:09:55.334316] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.901 [2024-12-05 19:09:55.334349] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:19:37.901 [2024-12-05 19:09:55.335021] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:37.901 [2024-12-05 19:09:55.335056] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:19:37.901 [2024-12-05 19:09:55.335067] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.656 ms 00:19:37.901 [2024-12-05 19:09:55.335076] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.901 [2024-12-05 19:09:55.338048] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:37.901 [2024-12-05 19:09:55.338207] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:19:37.901 [2024-12-05 19:09:55.338227] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.944 ms 00:19:37.901 [2024-12-05 19:09:55.338243] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.901 [2024-12-05 19:09:55.346822] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:19:37.901 [2024-12-05 19:09:55.346992] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:19:37.902 [2024-12-05 19:09:55.347628] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.502 ms 00:19:37.902 [2024-12-05 19:09:55.347683] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.902 [2024-12-05 19:09:55.354764] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:37.902 [2024-12-05 19:09:55.354921] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:19:37.902 [2024-12-05 19:09:55.354988] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.931 ms 00:19:37.902 [2024-12-05 19:09:55.355011] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.902 [2024-12-05 19:09:55.357899] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:37.902 [2024-12-05 19:09:55.358061] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:19:37.902 [2024-12-05 19:09:55.358122] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.806 ms 00:19:37.902 [2024-12-05 19:09:55.358144] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.902 [2024-12-05 19:09:55.362738] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:37.902 [2024-12-05 19:09:55.362908] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:19:37.902 [2024-12-05 19:09:55.362985] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.527 ms 00:19:37.902 [2024-12-05 19:09:55.363009] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.902 [2024-12-05 19:09:55.363184] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:37.902 [2024-12-05 19:09:55.363337] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:19:37.902 [2024-12-05 19:09:55.363369] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.094 ms 00:19:37.902 [2024-12-05 19:09:55.363393] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.902 [2024-12-05 19:09:55.366682] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:37.902 [2024-12-05 19:09:55.366836] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:19:37.902 [2024-12-05 19:09:55.366853] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.257 ms 00:19:37.902 [2024-12-05 19:09:55.366861] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.902 [2024-12-05 19:09:55.369658] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:37.902 [2024-12-05 19:09:55.369708] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:19:37.902 [2024-12-05 19:09:55.369717] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.758 ms 00:19:37.902 [2024-12-05 19:09:55.369724] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.902 [2024-12-05 19:09:55.371960] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:37.902 [2024-12-05 19:09:55.372006] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:19:37.902 [2024-12-05 19:09:55.372017] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.189 ms 00:19:37.902 [2024-12-05 19:09:55.372024] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.902 [2024-12-05 
19:09:55.374109] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:37.902 [2024-12-05 19:09:55.374156] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:19:37.902 [2024-12-05 19:09:55.374166] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.011 ms 00:19:37.902 [2024-12-05 19:09:55.374173] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.902 [2024-12-05 19:09:55.374216] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:19:37.902 [2024-12-05 19:09:55.374232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:19:37.902 [2024-12-05 19:09:55.374242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:19:37.902 [2024-12-05 19:09:55.374270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:19:37.902 [2024-12-05 19:09:55.374278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:19:37.902 [2024-12-05 19:09:55.374287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:19:37.902 [2024-12-05 19:09:55.374294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:19:37.902 [2024-12-05 19:09:55.374302] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:19:37.902 [2024-12-05 19:09:55.374310] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:19:37.902 [2024-12-05 19:09:55.374318] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:19:37.902 [2024-12-05 19:09:55.374325] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:19:37.902 [2024-12-05 19:09:55.374333] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:19:37.902 [2024-12-05 19:09:55.374341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:19:37.902 [2024-12-05 19:09:55.374348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:19:37.902 [2024-12-05 19:09:55.374356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:19:37.902 [2024-12-05 19:09:55.374363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:19:37.902 [2024-12-05 19:09:55.374371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:19:37.902 [2024-12-05 19:09:55.374378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:19:37.902 [2024-12-05 19:09:55.374385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:19:37.902 [2024-12-05 19:09:55.374392] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:19:37.902 [2024-12-05 19:09:55.374400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:19:37.902 [2024-12-05 19:09:55.374407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:19:37.902 
[2024-12-05 19:09:55.374414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:19:37.902 [2024-12-05 19:09:55.374423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:19:37.902 [2024-12-05 19:09:55.374430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:19:37.902 [2024-12-05 19:09:55.374437] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:19:37.902 [2024-12-05 19:09:55.374445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:19:37.902 [2024-12-05 19:09:55.374456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:19:37.902 [2024-12-05 19:09:55.374463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:19:37.902 [2024-12-05 19:09:55.374470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:19:37.902 [2024-12-05 19:09:55.374478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:19:37.902 [2024-12-05 19:09:55.374501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:19:37.902 [2024-12-05 19:09:55.374509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:19:37.902 [2024-12-05 19:09:55.374517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:19:37.902 [2024-12-05 19:09:55.374525] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:19:37.902 [2024-12-05 19:09:55.374534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:19:37.902 [2024-12-05 19:09:55.374542] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:19:37.902 [2024-12-05 19:09:55.374549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:19:37.902 [2024-12-05 19:09:55.374557] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:19:37.902 [2024-12-05 19:09:55.374565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:19:37.902 [2024-12-05 19:09:55.374572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:19:37.902 [2024-12-05 19:09:55.374581] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:19:37.902 [2024-12-05 19:09:55.374589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:19:37.902 [2024-12-05 19:09:55.374596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:19:37.902 [2024-12-05 19:09:55.374604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:19:37.902 [2024-12-05 19:09:55.374611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:19:37.902 [2024-12-05 19:09:55.374619] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 
state: free 00:19:37.902 [2024-12-05 19:09:55.374627] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:19:37.902 [2024-12-05 19:09:55.374634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:19:37.902 [2024-12-05 19:09:55.374641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:19:37.902 [2024-12-05 19:09:55.374649] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:19:37.902 [2024-12-05 19:09:55.374656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:19:37.902 [2024-12-05 19:09:55.374664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:19:37.902 [2024-12-05 19:09:55.374672] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:19:37.902 [2024-12-05 19:09:55.374679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:19:37.902 [2024-12-05 19:09:55.374686] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:19:37.902 [2024-12-05 19:09:55.374694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:19:37.902 [2024-12-05 19:09:55.374701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:19:37.902 [2024-12-05 19:09:55.374708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:19:37.902 [2024-12-05 19:09:55.374717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:19:37.902 [2024-12-05 19:09:55.374725] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:19:37.902 [2024-12-05 19:09:55.374733] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:19:37.902 [2024-12-05 19:09:55.374740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:19:37.902 [2024-12-05 19:09:55.374747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:19:37.902 [2024-12-05 19:09:55.374756] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:19:37.902 [2024-12-05 19:09:55.374763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:19:37.902 [2024-12-05 19:09:55.374771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:19:37.902 [2024-12-05 19:09:55.374779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:19:37.902 [2024-12-05 19:09:55.374786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:19:37.902 [2024-12-05 19:09:55.374794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:19:37.902 [2024-12-05 19:09:55.374801] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:19:37.902 [2024-12-05 19:09:55.374808] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 
0 / 261120 wr_cnt: 0 state: free 00:19:37.902 [2024-12-05 19:09:55.374816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:19:37.902 [2024-12-05 19:09:55.374824] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:19:37.902 [2024-12-05 19:09:55.374832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:19:37.902 [2024-12-05 19:09:55.374839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:19:37.902 [2024-12-05 19:09:55.374847] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:19:37.902 [2024-12-05 19:09:55.374854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:19:37.902 [2024-12-05 19:09:55.374861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:19:37.902 [2024-12-05 19:09:55.374869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:19:37.902 [2024-12-05 19:09:55.374877] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:19:37.902 [2024-12-05 19:09:55.374884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:19:37.902 [2024-12-05 19:09:55.374891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:19:37.902 [2024-12-05 19:09:55.374898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:19:37.902 [2024-12-05 19:09:55.374906] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:19:37.902 [2024-12-05 19:09:55.374914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:19:37.902 [2024-12-05 19:09:55.374921] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:19:37.902 [2024-12-05 19:09:55.374929] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:19:37.902 [2024-12-05 19:09:55.374936] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:19:37.902 [2024-12-05 19:09:55.374944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:19:37.902 [2024-12-05 19:09:55.374951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:19:37.902 [2024-12-05 19:09:55.374959] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:19:37.902 [2024-12-05 19:09:55.374968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:19:37.902 [2024-12-05 19:09:55.374975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:19:37.902 [2024-12-05 19:09:55.374983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:19:37.902 [2024-12-05 19:09:55.374990] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:19:37.902 [2024-12-05 19:09:55.375002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:19:37.902 [2024-12-05 19:09:55.375009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:19:37.902 [2024-12-05 19:09:55.375018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:19:37.902 [2024-12-05 19:09:55.375025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:19:37.902 [2024-12-05 19:09:55.375033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:19:37.902 [2024-12-05 19:09:55.375050] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:19:37.902 [2024-12-05 19:09:55.375058] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 42d50fa2-6cab-4623-bd3d-509e916d40ae 00:19:37.902 [2024-12-05 19:09:55.375068] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:19:37.902 [2024-12-05 19:09:55.375080] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:19:37.902 [2024-12-05 19:09:55.375091] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:19:37.902 [2024-12-05 19:09:55.375099] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:19:37.902 [2024-12-05 19:09:55.375106] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:19:37.902 [2024-12-05 19:09:55.375115] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:19:37.902 [2024-12-05 19:09:55.375122] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:19:37.902 [2024-12-05 19:09:55.375128] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:19:37.902 [2024-12-05 19:09:55.375135] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:19:37.902 [2024-12-05 19:09:55.375142] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:37.902 [2024-12-05 19:09:55.375154] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:19:37.902 [2024-12-05 19:09:55.375163] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.927 ms 00:19:37.902 [2024-12-05 19:09:55.375171] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.902 [2024-12-05 19:09:55.377582] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:37.902 [2024-12-05 19:09:55.377614] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:19:37.902 [2024-12-05 19:09:55.377624] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.393 ms 00:19:37.902 [2024-12-05 19:09:55.377633] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.902 [2024-12-05 19:09:55.377750] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:37.902 [2024-12-05 19:09:55.377759] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:19:37.902 [2024-12-05 19:09:55.377768] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.091 ms 00:19:37.902 [2024-12-05 19:09:55.377780] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.902 [2024-12-05 19:09:55.385663] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:37.902 [2024-12-05 19:09:55.385710] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:37.902 [2024-12-05 19:09:55.385721] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.000 ms 00:19:37.902 [2024-12-05 19:09:55.385738] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.902 [2024-12-05 19:09:55.385828] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:37.902 [2024-12-05 19:09:55.385839] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:37.902 [2024-12-05 19:09:55.385847] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:37.902 [2024-12-05 19:09:55.385855] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.902 [2024-12-05 19:09:55.385904] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:37.902 [2024-12-05 19:09:55.385913] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:37.902 [2024-12-05 19:09:55.385921] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:37.902 [2024-12-05 19:09:55.385934] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.902 [2024-12-05 19:09:55.385953] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:37.902 [2024-12-05 19:09:55.385962] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:37.902 [2024-12-05 19:09:55.385971] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:37.903 [2024-12-05 19:09:55.385978] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.903 [2024-12-05 19:09:55.400349] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:37.903 [2024-12-05 19:09:55.400407] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:37.903 [2024-12-05 19:09:55.400419] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:37.903 [2024-12-05 19:09:55.400428] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.903 [2024-12-05 19:09:55.411941] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:37.903 [2024-12-05 19:09:55.411997] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:37.903 [2024-12-05 19:09:55.412009] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:37.903 [2024-12-05 19:09:55.412017] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.903 [2024-12-05 19:09:55.412069] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:37.903 [2024-12-05 19:09:55.412078] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:37.903 [2024-12-05 19:09:55.412087] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:37.903 [2024-12-05 19:09:55.412096] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.903 [2024-12-05 19:09:55.412128] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:37.903 [2024-12-05 19:09:55.412138] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:37.903 [2024-12-05 19:09:55.412155] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:37.903 [2024-12-05 19:09:55.412164] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.903 [2024-12-05 19:09:55.412244] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:37.903 [2024-12-05 19:09:55.412282] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 
00:19:37.903 [2024-12-05 19:09:55.412292] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:37.903 [2024-12-05 19:09:55.412308] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.903 [2024-12-05 19:09:55.412354] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:37.903 [2024-12-05 19:09:55.412365] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:19:37.903 [2024-12-05 19:09:55.412376] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:37.903 [2024-12-05 19:09:55.412388] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.903 [2024-12-05 19:09:55.412436] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:37.903 [2024-12-05 19:09:55.412447] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:37.903 [2024-12-05 19:09:55.412456] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:37.903 [2024-12-05 19:09:55.412464] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.903 [2024-12-05 19:09:55.412515] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:37.903 [2024-12-05 19:09:55.412526] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:37.903 [2024-12-05 19:09:55.412540] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:37.903 [2024-12-05 19:09:55.412549] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.903 [2024-12-05 19:09:55.412709] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 78.615 ms, result 0 00:19:38.475 00:19:38.475 00:19:38.475 19:09:55 ftl.ftl_trim -- ftl/trim.sh@72 -- # svcpid=87568 00:19:38.475 19:09:55 ftl.ftl_trim -- ftl/trim.sh@73 -- # waitforlisten 87568 00:19:38.475 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:38.475 19:09:55 ftl.ftl_trim -- common/autotest_common.sh@835 -- # '[' -z 87568 ']' 00:19:38.475 19:09:55 ftl.ftl_trim -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:38.475 19:09:55 ftl.ftl_trim -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:38.475 19:09:55 ftl.ftl_trim -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:38.475 19:09:55 ftl.ftl_trim -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:38.475 19:09:55 ftl.ftl_trim -- ftl/trim.sh@71 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init 00:19:38.475 19:09:55 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:19:38.475 [2024-12-05 19:09:56.002708] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 22.11.4 initialization... 
00:19:38.475 [2024-12-05 19:09:56.003392] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87568 ] 00:19:38.736 [2024-12-05 19:09:56.151964] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:38.736 [2024-12-05 19:09:56.181313] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:39.310 19:09:56 ftl.ftl_trim -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:39.310 19:09:56 ftl.ftl_trim -- common/autotest_common.sh@868 -- # return 0 00:19:39.310 19:09:56 ftl.ftl_trim -- ftl/trim.sh@75 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:19:39.572 [2024-12-05 19:09:57.080497] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:39.572 [2024-12-05 19:09:57.080789] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:39.835 [2024-12-05 19:09:57.258812] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:39.835 [2024-12-05 19:09:57.259044] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:19:39.835 [2024-12-05 19:09:57.259069] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:19:39.835 [2024-12-05 19:09:57.259080] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:39.835 [2024-12-05 19:09:57.261706] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:39.835 [2024-12-05 19:09:57.261758] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:39.835 [2024-12-05 19:09:57.261769] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.599 ms 00:19:39.835 [2024-12-05 19:09:57.261778] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:39.835 [2024-12-05 19:09:57.261910] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:19:39.835 [2024-12-05 19:09:57.262206] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:19:39.835 [2024-12-05 19:09:57.262223] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:39.835 [2024-12-05 19:09:57.262237] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:39.835 [2024-12-05 19:09:57.262247] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.322 ms 00:19:39.835 [2024-12-05 19:09:57.262438] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:39.835 [2024-12-05 19:09:57.264164] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:19:39.835 [2024-12-05 19:09:57.268060] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:39.835 [2024-12-05 19:09:57.268225] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:19:39.835 [2024-12-05 19:09:57.268312] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.892 ms 00:19:39.835 [2024-12-05 19:09:57.268338] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:39.835 [2024-12-05 19:09:57.268425] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:39.835 [2024-12-05 19:09:57.268453] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:19:39.835 [2024-12-05 19:09:57.268479] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:19:39.835 [2024-12-05 19:09:57.268498] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:39.835 [2024-12-05 19:09:57.276725] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:39.835 [2024-12-05 19:09:57.276880] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:39.835 [2024-12-05 19:09:57.276905] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.157 ms 00:19:39.835 [2024-12-05 19:09:57.276914] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:39.835 [2024-12-05 19:09:57.277056] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:39.835 [2024-12-05 19:09:57.277071] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:39.835 [2024-12-05 19:09:57.277084] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.075 ms 00:19:39.835 [2024-12-05 19:09:57.277095] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:39.835 [2024-12-05 19:09:57.277127] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:39.835 [2024-12-05 19:09:57.277136] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:19:39.835 [2024-12-05 19:09:57.277149] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:19:39.835 [2024-12-05 19:09:57.277156] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:39.835 [2024-12-05 19:09:57.277182] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:19:39.835 [2024-12-05 19:09:57.279274] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:39.835 [2024-12-05 19:09:57.279318] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:39.835 [2024-12-05 19:09:57.279332] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.099 ms 00:19:39.835 [2024-12-05 19:09:57.279342] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:39.835 [2024-12-05 19:09:57.279386] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:39.835 [2024-12-05 19:09:57.279397] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:19:39.835 [2024-12-05 19:09:57.279406] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:19:39.835 [2024-12-05 19:09:57.279415] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:39.835 [2024-12-05 19:09:57.279437] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:19:39.835 [2024-12-05 19:09:57.279467] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:19:39.835 [2024-12-05 19:09:57.279504] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:19:39.835 [2024-12-05 19:09:57.279524] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:19:39.835 [2024-12-05 19:09:57.279630] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:19:39.835 [2024-12-05 19:09:57.279644] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:19:39.835 [2024-12-05 19:09:57.279656] upgrade/ftl_sb_v5.c: 
109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:19:39.835 [2024-12-05 19:09:57.279670] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:19:39.836 [2024-12-05 19:09:57.279679] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:19:39.836 [2024-12-05 19:09:57.279692] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:19:39.836 [2024-12-05 19:09:57.279703] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:19:39.836 [2024-12-05 19:09:57.279716] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:19:39.836 [2024-12-05 19:09:57.279726] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:19:39.836 [2024-12-05 19:09:57.279736] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:39.836 [2024-12-05 19:09:57.279744] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:19:39.836 [2024-12-05 19:09:57.279753] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.301 ms 00:19:39.836 [2024-12-05 19:09:57.279760] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:39.836 [2024-12-05 19:09:57.279849] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:39.836 [2024-12-05 19:09:57.279859] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:19:39.836 [2024-12-05 19:09:57.279869] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:19:39.836 [2024-12-05 19:09:57.279876] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:39.836 [2024-12-05 19:09:57.279980] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:19:39.836 [2024-12-05 19:09:57.279995] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:19:39.836 [2024-12-05 19:09:57.280007] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:39.836 [2024-12-05 19:09:57.280016] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:39.836 [2024-12-05 19:09:57.280030] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:19:39.836 [2024-12-05 19:09:57.280038] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:19:39.836 [2024-12-05 19:09:57.280048] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:19:39.836 [2024-12-05 19:09:57.280058] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:19:39.836 [2024-12-05 19:09:57.280068] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:19:39.836 [2024-12-05 19:09:57.280076] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:39.836 [2024-12-05 19:09:57.280086] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:19:39.836 [2024-12-05 19:09:57.280093] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:19:39.836 [2024-12-05 19:09:57.280103] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:39.836 [2024-12-05 19:09:57.280111] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:19:39.836 [2024-12-05 19:09:57.280122] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:19:39.836 [2024-12-05 19:09:57.280129] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:39.836 
[2024-12-05 19:09:57.280141] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:19:39.836 [2024-12-05 19:09:57.280149] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:19:39.836 [2024-12-05 19:09:57.280159] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:39.836 [2024-12-05 19:09:57.280168] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:19:39.836 [2024-12-05 19:09:57.280180] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:19:39.836 [2024-12-05 19:09:57.280188] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:39.836 [2024-12-05 19:09:57.280197] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:19:39.836 [2024-12-05 19:09:57.280205] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:19:39.836 [2024-12-05 19:09:57.280214] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:39.836 [2024-12-05 19:09:57.280222] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:19:39.836 [2024-12-05 19:09:57.280232] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:19:39.836 [2024-12-05 19:09:57.280239] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:39.836 [2024-12-05 19:09:57.280272] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:19:39.836 [2024-12-05 19:09:57.280280] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:19:39.836 [2024-12-05 19:09:57.280291] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:39.836 [2024-12-05 19:09:57.280298] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:19:39.836 [2024-12-05 19:09:57.280308] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:19:39.836 [2024-12-05 19:09:57.280314] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:39.836 [2024-12-05 19:09:57.280323] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:19:39.836 [2024-12-05 19:09:57.280329] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:19:39.836 [2024-12-05 19:09:57.280340] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:39.836 [2024-12-05 19:09:57.280347] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:19:39.836 [2024-12-05 19:09:57.280355] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:19:39.836 [2024-12-05 19:09:57.280362] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:39.836 [2024-12-05 19:09:57.280371] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:19:39.836 [2024-12-05 19:09:57.280378] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:19:39.836 [2024-12-05 19:09:57.280388] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:39.836 [2024-12-05 19:09:57.280395] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:19:39.836 [2024-12-05 19:09:57.280405] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:19:39.836 [2024-12-05 19:09:57.280413] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:39.836 [2024-12-05 19:09:57.280424] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:39.836 [2024-12-05 19:09:57.280431] ftl_layout.c: 130:dump_region: 
*NOTICE*: [FTL][ftl0] Region vmap 00:19:39.836 [2024-12-05 19:09:57.280441] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:19:39.836 [2024-12-05 19:09:57.280456] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:19:39.836 [2024-12-05 19:09:57.280466] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:19:39.836 [2024-12-05 19:09:57.280472] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:19:39.836 [2024-12-05 19:09:57.280483] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:19:39.836 [2024-12-05 19:09:57.280492] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:19:39.836 [2024-12-05 19:09:57.280503] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:39.836 [2024-12-05 19:09:57.280513] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:19:39.836 [2024-12-05 19:09:57.280524] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:19:39.836 [2024-12-05 19:09:57.280531] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:19:39.836 [2024-12-05 19:09:57.280540] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:19:39.836 [2024-12-05 19:09:57.280548] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:19:39.836 [2024-12-05 19:09:57.280557] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:19:39.836 [2024-12-05 19:09:57.280566] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:19:39.836 [2024-12-05 19:09:57.280575] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:19:39.836 [2024-12-05 19:09:57.280582] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:19:39.836 [2024-12-05 19:09:57.280592] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:19:39.836 [2024-12-05 19:09:57.280599] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:19:39.836 [2024-12-05 19:09:57.280617] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:19:39.836 [2024-12-05 19:09:57.280624] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:19:39.836 [2024-12-05 19:09:57.280636] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:19:39.836 [2024-12-05 19:09:57.280643] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:19:39.836 [2024-12-05 
19:09:57.280655] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:39.836 [2024-12-05 19:09:57.280664] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:19:39.837 [2024-12-05 19:09:57.280673] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:19:39.837 [2024-12-05 19:09:57.280680] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:19:39.837 [2024-12-05 19:09:57.280689] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:19:39.837 [2024-12-05 19:09:57.280697] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:39.837 [2024-12-05 19:09:57.280707] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:19:39.837 [2024-12-05 19:09:57.280718] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.788 ms 00:19:39.837 [2024-12-05 19:09:57.280728] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:39.837 [2024-12-05 19:09:57.295090] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:39.837 [2024-12-05 19:09:57.295143] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:39.837 [2024-12-05 19:09:57.295156] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.301 ms 00:19:39.837 [2024-12-05 19:09:57.295167] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:39.837 [2024-12-05 19:09:57.295329] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:39.837 [2024-12-05 19:09:57.295347] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:19:39.837 [2024-12-05 19:09:57.295356] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.093 ms 00:19:39.837 [2024-12-05 19:09:57.295365] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:39.837 [2024-12-05 19:09:57.308189] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:39.837 [2024-12-05 19:09:57.308240] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:39.837 [2024-12-05 19:09:57.308271] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.802 ms 00:19:39.837 [2024-12-05 19:09:57.308290] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:39.837 [2024-12-05 19:09:57.308357] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:39.837 [2024-12-05 19:09:57.308371] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:39.837 [2024-12-05 19:09:57.308379] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:19:39.837 [2024-12-05 19:09:57.308390] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:39.837 [2024-12-05 19:09:57.308898] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:39.837 [2024-12-05 19:09:57.308945] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:39.837 [2024-12-05 19:09:57.308958] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.485 ms 00:19:39.837 [2024-12-05 19:09:57.308969] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:19:39.837 [2024-12-05 19:09:57.309130] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:39.837 [2024-12-05 19:09:57.309145] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:39.837 [2024-12-05 19:09:57.309155] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.129 ms 00:19:39.837 [2024-12-05 19:09:57.309166] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:39.837 [2024-12-05 19:09:57.317752] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:39.837 [2024-12-05 19:09:57.317803] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:39.837 [2024-12-05 19:09:57.317814] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.563 ms 00:19:39.837 [2024-12-05 19:09:57.317824] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:39.837 [2024-12-05 19:09:57.332210] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:19:39.837 [2024-12-05 19:09:57.332294] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:19:39.837 [2024-12-05 19:09:57.332310] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:39.837 [2024-12-05 19:09:57.332322] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:19:39.837 [2024-12-05 19:09:57.332333] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.382 ms 00:19:39.837 [2024-12-05 19:09:57.332343] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:39.837 [2024-12-05 19:09:57.351422] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:39.837 [2024-12-05 19:09:57.351481] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:19:39.837 [2024-12-05 19:09:57.351495] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.016 ms 00:19:39.837 [2024-12-05 19:09:57.351509] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:39.837 [2024-12-05 19:09:57.354436] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:39.837 [2024-12-05 19:09:57.354489] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:19:39.837 [2024-12-05 19:09:57.354499] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.835 ms 00:19:39.837 [2024-12-05 19:09:57.354508] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:39.837 [2024-12-05 19:09:57.357129] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:39.837 [2024-12-05 19:09:57.357337] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:19:39.837 [2024-12-05 19:09:57.357356] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.567 ms 00:19:39.837 [2024-12-05 19:09:57.357365] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:39.837 [2024-12-05 19:09:57.357866] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:39.837 [2024-12-05 19:09:57.357907] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:19:39.837 [2024-12-05 19:09:57.357921] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.270 ms 00:19:39.837 [2024-12-05 19:09:57.357932] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:39.837 [2024-12-05 
19:09:57.381765] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:39.837 [2024-12-05 19:09:57.381832] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:19:39.837 [2024-12-05 19:09:57.381845] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.809 ms 00:19:39.837 [2024-12-05 19:09:57.381859] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:39.837 [2024-12-05 19:09:57.390025] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:19:40.099 [2024-12-05 19:09:57.409374] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:40.099 [2024-12-05 19:09:57.409423] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:19:40.099 [2024-12-05 19:09:57.409438] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.414 ms 00:19:40.099 [2024-12-05 19:09:57.409447] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.099 [2024-12-05 19:09:57.409556] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:40.099 [2024-12-05 19:09:57.409576] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:19:40.099 [2024-12-05 19:09:57.409588] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:19:40.099 [2024-12-05 19:09:57.409597] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.099 [2024-12-05 19:09:57.409658] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:40.099 [2024-12-05 19:09:57.409672] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:19:40.099 [2024-12-05 19:09:57.409683] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:19:40.099 [2024-12-05 19:09:57.409691] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.099 [2024-12-05 19:09:57.409718] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:40.099 [2024-12-05 19:09:57.409727] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:19:40.099 [2024-12-05 19:09:57.409745] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:19:40.099 [2024-12-05 19:09:57.409752] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.099 [2024-12-05 19:09:57.409790] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:19:40.100 [2024-12-05 19:09:57.409800] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:40.100 [2024-12-05 19:09:57.409810] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:19:40.100 [2024-12-05 19:09:57.409818] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:19:40.100 [2024-12-05 19:09:57.409828] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.100 [2024-12-05 19:09:57.415993] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:40.100 [2024-12-05 19:09:57.416052] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:19:40.100 [2024-12-05 19:09:57.416064] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.143 ms 00:19:40.100 [2024-12-05 19:09:57.416077] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.100 [2024-12-05 19:09:57.416168] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:40.100 [2024-12-05 19:09:57.416181] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:19:40.100 [2024-12-05 19:09:57.416190] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:19:40.100 [2024-12-05 19:09:57.416201] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.100 [2024-12-05 19:09:57.417347] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:40.100 [2024-12-05 19:09:57.418764] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 158.177 ms, result 0 00:19:40.100 [2024-12-05 19:09:57.420957] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:40.100 Some configs were skipped because the RPC state that can call them passed over. 00:19:40.100 19:09:57 ftl.ftl_trim -- ftl/trim.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024 00:19:40.100 [2024-12-05 19:09:57.650535] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:40.100 [2024-12-05 19:09:57.650731] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:19:40.100 [2024-12-05 19:09:57.650807] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.136 ms 00:19:40.100 [2024-12-05 19:09:57.650833] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.100 [2024-12-05 19:09:57.650893] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 3.504 ms, result 0 00:19:40.100 true 00:19:40.362 19:09:57 ftl.ftl_trim -- ftl/trim.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024 00:19:40.362 [2024-12-05 19:09:57.866329] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:40.362 [2024-12-05 19:09:57.866512] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:19:40.362 [2024-12-05 19:09:57.866575] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.760 ms 00:19:40.362 [2024-12-05 19:09:57.866602] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.362 [2024-12-05 19:09:57.866659] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 3.091 ms, result 0 00:19:40.362 true 00:19:40.362 19:09:57 ftl.ftl_trim -- ftl/trim.sh@81 -- # killprocess 87568 00:19:40.362 19:09:57 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 87568 ']' 00:19:40.362 19:09:57 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 87568 00:19:40.362 19:09:57 ftl.ftl_trim -- common/autotest_common.sh@959 -- # uname 00:19:40.362 19:09:57 ftl.ftl_trim -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:40.362 19:09:57 ftl.ftl_trim -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 87568 00:19:40.362 killing process with pid 87568 00:19:40.362 19:09:57 ftl.ftl_trim -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:40.362 19:09:57 ftl.ftl_trim -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:40.362 19:09:57 ftl.ftl_trim -- common/autotest_common.sh@972 -- # echo 'killing process with pid 87568' 00:19:40.362 19:09:57 ftl.ftl_trim -- common/autotest_common.sh@973 -- # kill 87568 00:19:40.362 19:09:57 ftl.ftl_trim -- common/autotest_common.sh@978 -- # wait 87568 00:19:40.627 [2024-12-05 19:09:58.048700] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:40.627 [2024-12-05 19:09:58.048771] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:19:40.627 [2024-12-05 19:09:58.048792] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:19:40.627 [2024-12-05 19:09:58.048804] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.627 [2024-12-05 19:09:58.048832] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:19:40.627 [2024-12-05 19:09:58.049547] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:40.627 [2024-12-05 19:09:58.049587] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:19:40.627 [2024-12-05 19:09:58.049602] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.700 ms 00:19:40.627 [2024-12-05 19:09:58.049614] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.627 [2024-12-05 19:09:58.049909] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:40.627 [2024-12-05 19:09:58.049929] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:19:40.627 [2024-12-05 19:09:58.049939] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.266 ms 00:19:40.627 [2024-12-05 19:09:58.049949] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.627 [2024-12-05 19:09:58.054554] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:40.627 [2024-12-05 19:09:58.054599] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:19:40.627 [2024-12-05 19:09:58.054610] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.584 ms 00:19:40.627 [2024-12-05 19:09:58.054625] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.627 [2024-12-05 19:09:58.061698] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:40.627 [2024-12-05 19:09:58.061745] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:19:40.627 [2024-12-05 19:09:58.061756] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.031 ms 00:19:40.627 [2024-12-05 19:09:58.061769] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.627 [2024-12-05 19:09:58.064721] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:40.627 [2024-12-05 19:09:58.064779] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:19:40.627 [2024-12-05 19:09:58.064790] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.880 ms 00:19:40.627 [2024-12-05 19:09:58.064799] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.627 [2024-12-05 19:09:58.069243] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:40.627 [2024-12-05 19:09:58.069311] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:19:40.627 [2024-12-05 19:09:58.069322] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.393 ms 00:19:40.627 [2024-12-05 19:09:58.069336] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.627 [2024-12-05 19:09:58.069460] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:40.627 [2024-12-05 19:09:58.069472] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:19:40.627 [2024-12-05 19:09:58.069481] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.095 ms 00:19:40.627 [2024-12-05 19:09:58.069491] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.627 [2024-12-05 19:09:58.072740] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:40.627 [2024-12-05 19:09:58.072798] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:19:40.627 [2024-12-05 19:09:58.072809] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.229 ms 00:19:40.627 [2024-12-05 19:09:58.072821] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.627 [2024-12-05 19:09:58.075611] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:40.627 [2024-12-05 19:09:58.075801] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:19:40.627 [2024-12-05 19:09:58.075819] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.743 ms 00:19:40.627 [2024-12-05 19:09:58.075829] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.627 [2024-12-05 19:09:58.077821] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:40.627 [2024-12-05 19:09:58.077877] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:19:40.627 [2024-12-05 19:09:58.077887] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.948 ms 00:19:40.627 [2024-12-05 19:09:58.077897] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.627 [2024-12-05 19:09:58.080069] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:40.627 [2024-12-05 19:09:58.080123] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:19:40.627 [2024-12-05 19:09:58.080134] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.080 ms 00:19:40.627 [2024-12-05 19:09:58.080142] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.627 [2024-12-05 19:09:58.080187] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:19:40.627 [2024-12-05 19:09:58.080204] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:19:40.627 [2024-12-05 19:09:58.080215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:19:40.627 [2024-12-05 19:09:58.080228] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:19:40.627 [2024-12-05 19:09:58.080236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:19:40.627 [2024-12-05 19:09:58.080247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:19:40.627 [2024-12-05 19:09:58.080272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:19:40.627 [2024-12-05 19:09:58.080283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:19:40.627 [2024-12-05 19:09:58.080291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:19:40.627 [2024-12-05 19:09:58.080303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:19:40.628 [2024-12-05 19:09:58.080311] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:19:40.628 [2024-12-05 19:09:58.080321] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:19:40.628 [2024-12-05 19:09:58.080329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:19:40.628 [2024-12-05 19:09:58.080338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:19:40.628 [2024-12-05 19:09:58.080346] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:19:40.628 [2024-12-05 19:09:58.080364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:19:40.628 [2024-12-05 19:09:58.080372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:19:40.628 [2024-12-05 19:09:58.080382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:19:40.628 [2024-12-05 19:09:58.080389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:19:40.628 [2024-12-05 19:09:58.080400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:19:40.628 [2024-12-05 19:09:58.080409] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:19:40.628 [2024-12-05 19:09:58.080418] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:19:40.628 [2024-12-05 19:09:58.080426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:19:40.628 [2024-12-05 19:09:58.080435] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:19:40.628 [2024-12-05 19:09:58.080443] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:19:40.628 [2024-12-05 19:09:58.080452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:19:40.628 [2024-12-05 19:09:58.080460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:19:40.628 [2024-12-05 19:09:58.080471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:19:40.628 [2024-12-05 19:09:58.080478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:19:40.628 [2024-12-05 19:09:58.080488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:19:40.628 [2024-12-05 19:09:58.080496] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:19:40.628 [2024-12-05 19:09:58.080506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:19:40.628 [2024-12-05 19:09:58.080515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:19:40.628 [2024-12-05 19:09:58.080526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:19:40.628 [2024-12-05 19:09:58.080534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:19:40.628 [2024-12-05 19:09:58.080547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:19:40.628 
[2024-12-05 19:09:58.080554] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:19:40.628 [2024-12-05 19:09:58.080563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:19:40.628 [2024-12-05 19:09:58.080571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:19:40.628 [2024-12-05 19:09:58.080581] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:19:40.628 [2024-12-05 19:09:58.080589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:19:40.628 [2024-12-05 19:09:58.080599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:19:40.628 [2024-12-05 19:09:58.080606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:19:40.628 [2024-12-05 19:09:58.080615] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:19:40.628 [2024-12-05 19:09:58.080624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:19:40.628 [2024-12-05 19:09:58.080633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:19:40.628 [2024-12-05 19:09:58.080640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:19:40.628 [2024-12-05 19:09:58.080650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:19:40.628 [2024-12-05 19:09:58.080657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:19:40.628 [2024-12-05 19:09:58.080667] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:19:40.628 [2024-12-05 19:09:58.080674] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:19:40.628 [2024-12-05 19:09:58.080685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:19:40.628 [2024-12-05 19:09:58.080693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:19:40.628 [2024-12-05 19:09:58.080703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:19:40.628 [2024-12-05 19:09:58.080711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:19:40.628 [2024-12-05 19:09:58.080720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:19:40.628 [2024-12-05 19:09:58.080728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:19:40.628 [2024-12-05 19:09:58.080737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:19:40.628 [2024-12-05 19:09:58.080745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:19:40.628 [2024-12-05 19:09:58.080754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:19:40.628 [2024-12-05 19:09:58.080762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 
state: free 00:19:40.628 [2024-12-05 19:09:58.080773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:19:40.628 [2024-12-05 19:09:58.080780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:19:40.628 [2024-12-05 19:09:58.080790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:19:40.628 [2024-12-05 19:09:58.080798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:19:40.628 [2024-12-05 19:09:58.080808] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:19:40.628 [2024-12-05 19:09:58.080816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:19:40.628 [2024-12-05 19:09:58.080828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:19:40.628 [2024-12-05 19:09:58.080836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:19:40.628 [2024-12-05 19:09:58.080846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:19:40.628 [2024-12-05 19:09:58.080853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:19:40.628 [2024-12-05 19:09:58.080864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:19:40.628 [2024-12-05 19:09:58.080871] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:19:40.628 [2024-12-05 19:09:58.080881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:19:40.628 [2024-12-05 19:09:58.080888] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:19:40.628 [2024-12-05 19:09:58.080898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:19:40.628 [2024-12-05 19:09:58.080905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:19:40.628 [2024-12-05 19:09:58.080915] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:19:40.628 [2024-12-05 19:09:58.080922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:19:40.628 [2024-12-05 19:09:58.080932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:19:40.628 [2024-12-05 19:09:58.080939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:19:40.628 [2024-12-05 19:09:58.080949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:19:40.628 [2024-12-05 19:09:58.080956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:19:40.628 [2024-12-05 19:09:58.080967] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:19:40.628 [2024-12-05 19:09:58.080975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:19:40.628 [2024-12-05 19:09:58.080985] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 
0 / 261120 wr_cnt: 0 state: free 00:19:40.628 [2024-12-05 19:09:58.080992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:19:40.628 [2024-12-05 19:09:58.081003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:19:40.628 [2024-12-05 19:09:58.081011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:19:40.628 [2024-12-05 19:09:58.081020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:19:40.628 [2024-12-05 19:09:58.081028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:19:40.628 [2024-12-05 19:09:58.081038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:19:40.628 [2024-12-05 19:09:58.081046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:19:40.628 [2024-12-05 19:09:58.081056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:19:40.628 [2024-12-05 19:09:58.081063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:19:40.629 [2024-12-05 19:09:58.081075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:19:40.629 [2024-12-05 19:09:58.081083] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:19:40.629 [2024-12-05 19:09:58.081094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:19:40.629 [2024-12-05 19:09:58.081102] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:19:40.629 [2024-12-05 19:09:58.081115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:19:40.629 [2024-12-05 19:09:58.081123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:19:40.629 [2024-12-05 19:09:58.081142] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:19:40.629 [2024-12-05 19:09:58.081151] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 42d50fa2-6cab-4623-bd3d-509e916d40ae 00:19:40.629 [2024-12-05 19:09:58.081162] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:19:40.629 [2024-12-05 19:09:58.081172] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:19:40.629 [2024-12-05 19:09:58.081181] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:19:40.629 [2024-12-05 19:09:58.081190] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:19:40.629 [2024-12-05 19:09:58.081199] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:19:40.629 [2024-12-05 19:09:58.081211] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:19:40.629 [2024-12-05 19:09:58.081221] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:19:40.629 [2024-12-05 19:09:58.081227] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:19:40.629 [2024-12-05 19:09:58.081237] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:19:40.629 [2024-12-05 19:09:58.081244] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:19:40.629 [2024-12-05 19:09:58.081265] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:19:40.629 [2024-12-05 19:09:58.081274] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.058 ms 00:19:40.629 [2024-12-05 19:09:58.081287] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.629 [2024-12-05 19:09:58.083605] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:40.629 [2024-12-05 19:09:58.083643] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:19:40.629 [2024-12-05 19:09:58.083653] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.297 ms 00:19:40.629 [2024-12-05 19:09:58.083663] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.629 [2024-12-05 19:09:58.083792] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:40.629 [2024-12-05 19:09:58.083803] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:19:40.629 [2024-12-05 19:09:58.083814] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.092 ms 00:19:40.629 [2024-12-05 19:09:58.083823] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.629 [2024-12-05 19:09:58.091916] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:40.629 [2024-12-05 19:09:58.091970] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:40.629 [2024-12-05 19:09:58.091980] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:40.629 [2024-12-05 19:09:58.091991] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.629 [2024-12-05 19:09:58.092074] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:40.629 [2024-12-05 19:09:58.092086] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:40.629 [2024-12-05 19:09:58.092095] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:40.629 [2024-12-05 19:09:58.092112] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.629 [2024-12-05 19:09:58.092159] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:40.629 [2024-12-05 19:09:58.092171] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:40.629 [2024-12-05 19:09:58.092182] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:40.629 [2024-12-05 19:09:58.092191] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.629 [2024-12-05 19:09:58.092210] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:40.629 [2024-12-05 19:09:58.092221] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:40.629 [2024-12-05 19:09:58.092228] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:40.629 [2024-12-05 19:09:58.092239] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.629 [2024-12-05 19:09:58.107160] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:40.629 [2024-12-05 19:09:58.107444] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:40.629 [2024-12-05 19:09:58.107465] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:40.629 [2024-12-05 19:09:58.107485] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.629 [2024-12-05 
19:09:58.119040] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:40.629 [2024-12-05 19:09:58.119264] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:40.629 [2024-12-05 19:09:58.119284] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:40.629 [2024-12-05 19:09:58.119299] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.629 [2024-12-05 19:09:58.119373] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:40.629 [2024-12-05 19:09:58.119391] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:40.629 [2024-12-05 19:09:58.119400] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:40.629 [2024-12-05 19:09:58.119411] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.629 [2024-12-05 19:09:58.119455] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:40.629 [2024-12-05 19:09:58.119467] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:40.629 [2024-12-05 19:09:58.119475] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:40.629 [2024-12-05 19:09:58.119486] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.629 [2024-12-05 19:09:58.119572] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:40.629 [2024-12-05 19:09:58.119587] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:40.629 [2024-12-05 19:09:58.119596] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:40.629 [2024-12-05 19:09:58.119607] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.629 [2024-12-05 19:09:58.119641] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:40.629 [2024-12-05 19:09:58.119653] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:19:40.629 [2024-12-05 19:09:58.119662] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:40.629 [2024-12-05 19:09:58.119674] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.629 [2024-12-05 19:09:58.119723] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:40.629 [2024-12-05 19:09:58.119736] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:40.629 [2024-12-05 19:09:58.119747] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:40.629 [2024-12-05 19:09:58.119757] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.629 [2024-12-05 19:09:58.119810] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:40.629 [2024-12-05 19:09:58.119824] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:40.629 [2024-12-05 19:09:58.119833] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:40.629 [2024-12-05 19:09:58.119845] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.629 [2024-12-05 19:09:58.120001] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 71.270 ms, result 0 00:19:40.892 19:09:58 ftl.ftl_trim -- ftl/trim.sh@84 -- # file=/home/vagrant/spdk_repo/spdk/test/ftl/data 00:19:40.892 19:09:58 ftl.ftl_trim -- ftl/trim.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 
--of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:19:40.892 [2024-12-05 19:09:58.426832] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 22.11.4 initialization... 00:19:40.892 [2024-12-05 19:09:58.427414] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87604 ] 00:19:41.154 [2024-12-05 19:09:58.574462] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:41.154 [2024-12-05 19:09:58.605304] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:41.418 [2024-12-05 19:09:58.726743] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:41.418 [2024-12-05 19:09:58.726832] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:41.418 [2024-12-05 19:09:58.888217] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:41.418 [2024-12-05 19:09:58.888299] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:19:41.418 [2024-12-05 19:09:58.888315] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:19:41.418 [2024-12-05 19:09:58.888325] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:41.418 [2024-12-05 19:09:58.891065] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:41.418 [2024-12-05 19:09:58.891126] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:41.418 [2024-12-05 19:09:58.891138] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.714 ms 00:19:41.418 [2024-12-05 19:09:58.891146] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:41.418 [2024-12-05 19:09:58.891283] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:19:41.418 [2024-12-05 19:09:58.891555] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:19:41.418 [2024-12-05 19:09:58.891586] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:41.418 [2024-12-05 19:09:58.891596] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:41.418 [2024-12-05 19:09:58.891606] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.343 ms 00:19:41.418 [2024-12-05 19:09:58.891614] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:41.418 [2024-12-05 19:09:58.893370] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:19:41.418 [2024-12-05 19:09:58.897176] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:41.418 [2024-12-05 19:09:58.897228] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:19:41.418 [2024-12-05 19:09:58.897245] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.809 ms 00:19:41.418 [2024-12-05 19:09:58.897274] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:41.418 [2024-12-05 19:09:58.897362] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:41.418 [2024-12-05 19:09:58.897377] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:19:41.418 [2024-12-05 19:09:58.897390] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.027 ms 00:19:41.418 [2024-12-05 19:09:58.897397] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:41.418 [2024-12-05 19:09:58.905623] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:41.418 [2024-12-05 19:09:58.905664] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:41.418 [2024-12-05 19:09:58.905679] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.178 ms 00:19:41.418 [2024-12-05 19:09:58.905687] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:41.418 [2024-12-05 19:09:58.905827] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:41.418 [2024-12-05 19:09:58.905840] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:41.418 [2024-12-05 19:09:58.905849] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.071 ms 00:19:41.418 [2024-12-05 19:09:58.905860] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:41.418 [2024-12-05 19:09:58.905887] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:41.418 [2024-12-05 19:09:58.905896] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:19:41.418 [2024-12-05 19:09:58.905904] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:19:41.418 [2024-12-05 19:09:58.905912] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:41.418 [2024-12-05 19:09:58.905934] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:19:41.418 [2024-12-05 19:09:58.908056] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:41.418 [2024-12-05 19:09:58.908096] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:41.418 [2024-12-05 19:09:58.908107] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.127 ms 00:19:41.418 [2024-12-05 19:09:58.908124] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:41.418 [2024-12-05 19:09:58.908170] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:41.418 [2024-12-05 19:09:58.908181] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:19:41.418 [2024-12-05 19:09:58.908190] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:19:41.418 [2024-12-05 19:09:58.908198] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:41.418 [2024-12-05 19:09:58.908217] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:19:41.418 [2024-12-05 19:09:58.908239] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:19:41.418 [2024-12-05 19:09:58.908303] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:19:41.418 [2024-12-05 19:09:58.908325] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:19:41.418 [2024-12-05 19:09:58.908432] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:19:41.418 [2024-12-05 19:09:58.908444] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:19:41.418 [2024-12-05 19:09:58.908455] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: 
*NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:19:41.418 [2024-12-05 19:09:58.908467] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:19:41.418 [2024-12-05 19:09:58.908476] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:19:41.418 [2024-12-05 19:09:58.908485] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:19:41.418 [2024-12-05 19:09:58.908497] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:19:41.418 [2024-12-05 19:09:58.908505] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:19:41.418 [2024-12-05 19:09:58.908512] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:19:41.418 [2024-12-05 19:09:58.908529] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:41.418 [2024-12-05 19:09:58.908537] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:19:41.418 [2024-12-05 19:09:58.908545] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.312 ms 00:19:41.418 [2024-12-05 19:09:58.908553] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:41.418 [2024-12-05 19:09:58.908645] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:41.418 [2024-12-05 19:09:58.908655] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:19:41.418 [2024-12-05 19:09:58.908662] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:19:41.418 [2024-12-05 19:09:58.908669] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:41.418 [2024-12-05 19:09:58.908770] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:19:41.419 [2024-12-05 19:09:58.908787] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:19:41.419 [2024-12-05 19:09:58.908796] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:41.419 [2024-12-05 19:09:58.908806] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:41.419 [2024-12-05 19:09:58.908820] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:19:41.419 [2024-12-05 19:09:58.908827] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:19:41.419 [2024-12-05 19:09:58.908836] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:19:41.419 [2024-12-05 19:09:58.908847] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:19:41.419 [2024-12-05 19:09:58.908855] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:19:41.419 [2024-12-05 19:09:58.908862] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:41.419 [2024-12-05 19:09:58.908870] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:19:41.419 [2024-12-05 19:09:58.908879] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:19:41.419 [2024-12-05 19:09:58.908887] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:41.419 [2024-12-05 19:09:58.908894] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:19:41.419 [2024-12-05 19:09:58.908903] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:19:41.419 [2024-12-05 19:09:58.908911] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:41.419 [2024-12-05 19:09:58.908918] ftl_layout.c: 
130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:19:41.419 [2024-12-05 19:09:58.908926] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:19:41.419 [2024-12-05 19:09:58.908934] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:41.419 [2024-12-05 19:09:58.908942] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:19:41.419 [2024-12-05 19:09:58.908950] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:19:41.419 [2024-12-05 19:09:58.908958] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:41.419 [2024-12-05 19:09:58.908966] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:19:41.419 [2024-12-05 19:09:58.908977] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:19:41.419 [2024-12-05 19:09:58.908985] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:41.419 [2024-12-05 19:09:58.908993] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:19:41.419 [2024-12-05 19:09:58.908999] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:19:41.419 [2024-12-05 19:09:58.909006] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:41.419 [2024-12-05 19:09:58.909014] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:19:41.419 [2024-12-05 19:09:58.909022] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:19:41.419 [2024-12-05 19:09:58.909028] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:41.419 [2024-12-05 19:09:58.909035] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:19:41.419 [2024-12-05 19:09:58.909043] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:19:41.419 [2024-12-05 19:09:58.909050] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:41.419 [2024-12-05 19:09:58.909057] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:19:41.419 [2024-12-05 19:09:58.909063] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:19:41.419 [2024-12-05 19:09:58.909070] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:41.419 [2024-12-05 19:09:58.909078] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:19:41.419 [2024-12-05 19:09:58.909085] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:19:41.419 [2024-12-05 19:09:58.909094] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:41.419 [2024-12-05 19:09:58.909101] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:19:41.419 [2024-12-05 19:09:58.909108] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:19:41.419 [2024-12-05 19:09:58.909115] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:41.419 [2024-12-05 19:09:58.909122] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:19:41.419 [2024-12-05 19:09:58.909129] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:19:41.419 [2024-12-05 19:09:58.909137] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:41.419 [2024-12-05 19:09:58.909145] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:41.419 [2024-12-05 19:09:58.909152] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:19:41.419 
[2024-12-05 19:09:58.909159] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:19:41.419 [2024-12-05 19:09:58.909166] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:19:41.419 [2024-12-05 19:09:58.909172] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:19:41.419 [2024-12-05 19:09:58.909179] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:19:41.419 [2024-12-05 19:09:58.909185] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:19:41.419 [2024-12-05 19:09:58.909193] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:19:41.419 [2024-12-05 19:09:58.909203] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:41.419 [2024-12-05 19:09:58.909213] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:19:41.419 [2024-12-05 19:09:58.909221] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:19:41.419 [2024-12-05 19:09:58.909228] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:19:41.419 [2024-12-05 19:09:58.909235] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:19:41.419 [2024-12-05 19:09:58.909241] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:19:41.419 [2024-12-05 19:09:58.909284] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:19:41.419 [2024-12-05 19:09:58.909293] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:19:41.419 [2024-12-05 19:09:58.909307] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:19:41.419 [2024-12-05 19:09:58.909314] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:19:41.419 [2024-12-05 19:09:58.909322] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:19:41.419 [2024-12-05 19:09:58.909330] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:19:41.419 [2024-12-05 19:09:58.909338] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:19:41.419 [2024-12-05 19:09:58.909345] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:19:41.419 [2024-12-05 19:09:58.909353] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:19:41.419 [2024-12-05 19:09:58.909360] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:19:41.419 [2024-12-05 19:09:58.909376] upgrade/ftl_sb_v5.c: 
430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:41.420 [2024-12-05 19:09:58.909388] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:19:41.420 [2024-12-05 19:09:58.909397] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:19:41.420 [2024-12-05 19:09:58.909405] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:19:41.420 [2024-12-05 19:09:58.909412] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:19:41.420 [2024-12-05 19:09:58.909420] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:41.420 [2024-12-05 19:09:58.909437] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:19:41.420 [2024-12-05 19:09:58.909445] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.719 ms 00:19:41.420 [2024-12-05 19:09:58.909453] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:41.420 [2024-12-05 19:09:58.923634] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:41.420 [2024-12-05 19:09:58.923685] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:41.420 [2024-12-05 19:09:58.923697] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.127 ms 00:19:41.420 [2024-12-05 19:09:58.923706] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:41.420 [2024-12-05 19:09:58.923842] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:41.420 [2024-12-05 19:09:58.923859] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:19:41.420 [2024-12-05 19:09:58.923873] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:19:41.420 [2024-12-05 19:09:58.923881] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:41.420 [2024-12-05 19:09:58.946066] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:41.420 [2024-12-05 19:09:58.946354] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:41.420 [2024-12-05 19:09:58.946384] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.160 ms 00:19:41.420 [2024-12-05 19:09:58.946397] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:41.420 [2024-12-05 19:09:58.946538] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:41.420 [2024-12-05 19:09:58.946556] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:41.420 [2024-12-05 19:09:58.946570] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:19:41.420 [2024-12-05 19:09:58.946582] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:41.420 [2024-12-05 19:09:58.947201] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:41.420 [2024-12-05 19:09:58.947228] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:41.420 [2024-12-05 19:09:58.947245] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.585 ms 00:19:41.420 [2024-12-05 19:09:58.947285] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:41.420 [2024-12-05 
19:09:58.947501] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:41.420 [2024-12-05 19:09:58.947519] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:41.420 [2024-12-05 19:09:58.947533] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.175 ms 00:19:41.420 [2024-12-05 19:09:58.947544] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:41.420 [2024-12-05 19:09:58.956726] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:41.420 [2024-12-05 19:09:58.956772] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:41.420 [2024-12-05 19:09:58.956783] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.150 ms 00:19:41.420 [2024-12-05 19:09:58.956802] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:41.420 [2024-12-05 19:09:58.960721] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:19:41.420 [2024-12-05 19:09:58.960772] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:19:41.420 [2024-12-05 19:09:58.960785] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:41.420 [2024-12-05 19:09:58.960793] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:19:41.420 [2024-12-05 19:09:58.960803] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.856 ms 00:19:41.420 [2024-12-05 19:09:58.960810] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:41.682 [2024-12-05 19:09:58.976747] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:41.682 [2024-12-05 19:09:58.976926] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:19:41.682 [2024-12-05 19:09:58.976947] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.857 ms 00:19:41.682 [2024-12-05 19:09:58.976957] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:41.682 [2024-12-05 19:09:58.979878] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:41.682 [2024-12-05 19:09:58.979927] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:19:41.682 [2024-12-05 19:09:58.979938] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.833 ms 00:19:41.682 [2024-12-05 19:09:58.979946] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:41.682 [2024-12-05 19:09:58.982597] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:41.682 [2024-12-05 19:09:58.982757] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:19:41.682 [2024-12-05 19:09:58.982815] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.536 ms 00:19:41.682 [2024-12-05 19:09:58.982838] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:41.682 [2024-12-05 19:09:58.983301] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:41.682 [2024-12-05 19:09:58.983471] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:19:41.682 [2024-12-05 19:09:58.983491] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.293 ms 00:19:41.682 [2024-12-05 19:09:58.983499] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:41.682 [2024-12-05 19:09:59.008460] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:19:41.682 [2024-12-05 19:09:59.008531] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:19:41.682 [2024-12-05 19:09:59.008548] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.929 ms 00:19:41.682 [2024-12-05 19:09:59.008557] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:41.682 [2024-12-05 19:09:59.016565] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:19:41.682 [2024-12-05 19:09:59.036047] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:41.682 [2024-12-05 19:09:59.036111] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:19:41.682 [2024-12-05 19:09:59.036124] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.389 ms 00:19:41.682 [2024-12-05 19:09:59.036133] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:41.682 [2024-12-05 19:09:59.036229] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:41.682 [2024-12-05 19:09:59.036240] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:19:41.682 [2024-12-05 19:09:59.036279] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:19:41.682 [2024-12-05 19:09:59.036289] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:41.682 [2024-12-05 19:09:59.036354] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:41.682 [2024-12-05 19:09:59.036364] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:19:41.682 [2024-12-05 19:09:59.036374] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.043 ms 00:19:41.682 [2024-12-05 19:09:59.036382] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:41.682 [2024-12-05 19:09:59.036407] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:41.682 [2024-12-05 19:09:59.036417] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:19:41.682 [2024-12-05 19:09:59.036425] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:19:41.682 [2024-12-05 19:09:59.036437] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:41.682 [2024-12-05 19:09:59.036475] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:19:41.682 [2024-12-05 19:09:59.036487] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:41.682 [2024-12-05 19:09:59.036495] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:19:41.682 [2024-12-05 19:09:59.036504] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:19:41.682 [2024-12-05 19:09:59.036511] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:41.682 [2024-12-05 19:09:59.042706] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:41.682 [2024-12-05 19:09:59.042883] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:19:41.682 [2024-12-05 19:09:59.042904] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.173 ms 00:19:41.682 [2024-12-05 19:09:59.042913] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:41.682 [2024-12-05 19:09:59.043008] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:41.683 [2024-12-05 19:09:59.043025] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize 
initialization 00:19:41.683 [2024-12-05 19:09:59.043035] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:19:41.683 [2024-12-05 19:09:59.043043] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:41.683 [2024-12-05 19:09:59.044280] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:41.683 [2024-12-05 19:09:59.045690] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 155.723 ms, result 0 00:19:41.683 [2024-12-05 19:09:59.046997] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:41.683 [2024-12-05 19:09:59.054389] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:42.625  [2024-12-05T19:10:01.128Z] Copying: 13/256 [MB] (13 MBps) [2024-12-05T19:10:02.071Z] Copying: 24/256 [MB] (10 MBps) [2024-12-05T19:10:03.461Z] Copying: 38/256 [MB] (13 MBps) [2024-12-05T19:10:04.407Z] Copying: 48/256 [MB] (10 MBps) [2024-12-05T19:10:05.350Z] Copying: 60/256 [MB] (11 MBps) [2024-12-05T19:10:06.292Z] Copying: 79/256 [MB] (19 MBps) [2024-12-05T19:10:07.232Z] Copying: 96/256 [MB] (17 MBps) [2024-12-05T19:10:08.215Z] Copying: 113/256 [MB] (17 MBps) [2024-12-05T19:10:09.159Z] Copying: 133/256 [MB] (19 MBps) [2024-12-05T19:10:10.105Z] Copying: 153/256 [MB] (20 MBps) [2024-12-05T19:10:11.494Z] Copying: 166/256 [MB] (12 MBps) [2024-12-05T19:10:12.067Z] Copying: 183/256 [MB] (16 MBps) [2024-12-05T19:10:13.455Z] Copying: 204/256 [MB] (21 MBps) [2024-12-05T19:10:14.402Z] Copying: 224/256 [MB] (19 MBps) [2024-12-05T19:10:14.978Z] Copying: 239/256 [MB] (15 MBps) [2024-12-05T19:10:14.978Z] Copying: 256/256 [MB] (average 16 MBps)[2024-12-05 19:10:14.824662] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:57.419 [2024-12-05 19:10:14.826565] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:57.419 [2024-12-05 19:10:14.826616] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:19:57.420 [2024-12-05 19:10:14.826630] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:19:57.420 [2024-12-05 19:10:14.826639] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.420 [2024-12-05 19:10:14.826662] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:19:57.420 [2024-12-05 19:10:14.827340] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:57.420 [2024-12-05 19:10:14.827362] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:19:57.420 [2024-12-05 19:10:14.827374] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.663 ms 00:19:57.420 [2024-12-05 19:10:14.827383] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.420 [2024-12-05 19:10:14.827650] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:57.420 [2024-12-05 19:10:14.827669] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:19:57.420 [2024-12-05 19:10:14.827683] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.244 ms 00:19:57.420 [2024-12-05 19:10:14.827692] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.420 [2024-12-05 19:10:14.831418] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:19:57.420 [2024-12-05 19:10:14.831441] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:19:57.420 [2024-12-05 19:10:14.831451] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.710 ms 00:19:57.420 [2024-12-05 19:10:14.831460] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.420 [2024-12-05 19:10:14.838434] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:57.420 [2024-12-05 19:10:14.838471] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:19:57.420 [2024-12-05 19:10:14.838482] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.942 ms 00:19:57.420 [2024-12-05 19:10:14.838497] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.420 [2024-12-05 19:10:14.841133] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:57.420 [2024-12-05 19:10:14.841181] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:19:57.420 [2024-12-05 19:10:14.841192] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.582 ms 00:19:57.420 [2024-12-05 19:10:14.841199] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.420 [2024-12-05 19:10:14.845898] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:57.420 [2024-12-05 19:10:14.845949] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:19:57.420 [2024-12-05 19:10:14.845960] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.655 ms 00:19:57.420 [2024-12-05 19:10:14.845968] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.420 [2024-12-05 19:10:14.846108] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:57.420 [2024-12-05 19:10:14.846118] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:19:57.420 [2024-12-05 19:10:14.846131] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.093 ms 00:19:57.420 [2024-12-05 19:10:14.846145] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.420 [2024-12-05 19:10:14.849414] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:57.420 [2024-12-05 19:10:14.849458] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:19:57.420 [2024-12-05 19:10:14.849468] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.251 ms 00:19:57.420 [2024-12-05 19:10:14.849475] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.420 [2024-12-05 19:10:14.851930] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:57.420 [2024-12-05 19:10:14.851976] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:19:57.420 [2024-12-05 19:10:14.851986] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.412 ms 00:19:57.420 [2024-12-05 19:10:14.851992] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.420 [2024-12-05 19:10:14.854146] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:57.420 [2024-12-05 19:10:14.854190] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:19:57.420 [2024-12-05 19:10:14.854200] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.111 ms 00:19:57.420 [2024-12-05 19:10:14.854208] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.420 [2024-12-05 
19:10:14.856262] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:57.420 [2024-12-05 19:10:14.856301] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:19:57.420 [2024-12-05 19:10:14.856310] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.956 ms 00:19:57.420 [2024-12-05 19:10:14.856317] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.420 [2024-12-05 19:10:14.856356] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:19:57.420 [2024-12-05 19:10:14.856371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:19:57.420 [2024-12-05 19:10:14.856381] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:19:57.420 [2024-12-05 19:10:14.856389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:19:57.420 [2024-12-05 19:10:14.856397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:19:57.420 [2024-12-05 19:10:14.856404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:19:57.420 [2024-12-05 19:10:14.856412] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:19:57.420 [2024-12-05 19:10:14.856420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:19:57.420 [2024-12-05 19:10:14.856428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:19:57.420 [2024-12-05 19:10:14.856436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:19:57.420 [2024-12-05 19:10:14.856443] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:19:57.420 [2024-12-05 19:10:14.856451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:19:57.420 [2024-12-05 19:10:14.856458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:19:57.420 [2024-12-05 19:10:14.856465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:19:57.420 [2024-12-05 19:10:14.856473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:19:57.420 [2024-12-05 19:10:14.856480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:19:57.420 [2024-12-05 19:10:14.856487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:19:57.420 [2024-12-05 19:10:14.856494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:19:57.420 [2024-12-05 19:10:14.856501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:19:57.420 [2024-12-05 19:10:14.856509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:19:57.420 [2024-12-05 19:10:14.856516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:19:57.420 [2024-12-05 19:10:14.856523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:19:57.420 
[2024-12-05 19:10:14.856531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:19:57.420 [2024-12-05 19:10:14.856538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:19:57.420 [2024-12-05 19:10:14.856546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:19:57.420 [2024-12-05 19:10:14.856553] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:19:57.420 [2024-12-05 19:10:14.856560] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:19:57.420 [2024-12-05 19:10:14.856570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:19:57.420 [2024-12-05 19:10:14.856577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:19:57.420 [2024-12-05 19:10:14.856584] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:19:57.420 [2024-12-05 19:10:14.856602] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:19:57.420 [2024-12-05 19:10:14.856610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:19:57.420 [2024-12-05 19:10:14.856619] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:19:57.420 [2024-12-05 19:10:14.856627] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:19:57.420 [2024-12-05 19:10:14.856635] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:19:57.420 [2024-12-05 19:10:14.856643] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:19:57.420 [2024-12-05 19:10:14.856651] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:19:57.420 [2024-12-05 19:10:14.856659] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:19:57.420 [2024-12-05 19:10:14.856667] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:19:57.420 [2024-12-05 19:10:14.856674] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:19:57.420 [2024-12-05 19:10:14.856682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:19:57.420 [2024-12-05 19:10:14.856690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:19:57.420 [2024-12-05 19:10:14.856697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:19:57.420 [2024-12-05 19:10:14.856705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:19:57.420 [2024-12-05 19:10:14.856713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:19:57.420 [2024-12-05 19:10:14.856720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:19:57.420 [2024-12-05 19:10:14.856728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 
state: free 00:19:57.421 [2024-12-05 19:10:14.856735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:19:57.421 [2024-12-05 19:10:14.856742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:19:57.421 [2024-12-05 19:10:14.856749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:19:57.421 [2024-12-05 19:10:14.856757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:19:57.421 [2024-12-05 19:10:14.856765] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:19:57.421 [2024-12-05 19:10:14.856772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:19:57.421 [2024-12-05 19:10:14.856780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:19:57.421 [2024-12-05 19:10:14.856787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:19:57.421 [2024-12-05 19:10:14.856794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:19:57.421 [2024-12-05 19:10:14.856802] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:19:57.421 [2024-12-05 19:10:14.856809] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:19:57.421 [2024-12-05 19:10:14.856817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:19:57.421 [2024-12-05 19:10:14.856825] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:19:57.421 [2024-12-05 19:10:14.856832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:19:57.421 [2024-12-05 19:10:14.856839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:19:57.421 [2024-12-05 19:10:14.856846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:19:57.421 [2024-12-05 19:10:14.856854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:19:57.421 [2024-12-05 19:10:14.856864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:19:57.421 [2024-12-05 19:10:14.856872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:19:57.421 [2024-12-05 19:10:14.856879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:19:57.421 [2024-12-05 19:10:14.856887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:19:57.421 [2024-12-05 19:10:14.856895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:19:57.421 [2024-12-05 19:10:14.856903] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:19:57.421 [2024-12-05 19:10:14.856910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:19:57.421 [2024-12-05 19:10:14.856917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 
0 / 261120 wr_cnt: 0 state: free 00:19:57.421 [2024-12-05 19:10:14.856925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:19:57.421 [2024-12-05 19:10:14.856933] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:19:57.421 [2024-12-05 19:10:14.856940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:19:57.421 [2024-12-05 19:10:14.856948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:19:57.421 [2024-12-05 19:10:14.856956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:19:57.421 [2024-12-05 19:10:14.856963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:19:57.421 [2024-12-05 19:10:14.856970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:19:57.421 [2024-12-05 19:10:14.856977] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:19:57.421 [2024-12-05 19:10:14.856985] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:19:57.421 [2024-12-05 19:10:14.856992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:19:57.421 [2024-12-05 19:10:14.857000] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:19:57.421 [2024-12-05 19:10:14.857008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:19:57.421 [2024-12-05 19:10:14.857015] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:19:57.421 [2024-12-05 19:10:14.857023] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:19:57.421 [2024-12-05 19:10:14.857031] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:19:57.421 [2024-12-05 19:10:14.857039] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:19:57.421 [2024-12-05 19:10:14.857046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:19:57.421 [2024-12-05 19:10:14.857054] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:19:57.421 [2024-12-05 19:10:14.857061] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:19:57.421 [2024-12-05 19:10:14.857068] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:19:57.421 [2024-12-05 19:10:14.857075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:19:57.421 [2024-12-05 19:10:14.857082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:19:57.421 [2024-12-05 19:10:14.857090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:19:57.421 [2024-12-05 19:10:14.857098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:19:57.421 [2024-12-05 19:10:14.857106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:19:57.421 [2024-12-05 19:10:14.857114] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:19:57.421 [2024-12-05 19:10:14.857121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:19:57.421 [2024-12-05 19:10:14.857130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:19:57.421 [2024-12-05 19:10:14.857137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:19:57.421 [2024-12-05 19:10:14.857153] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:19:57.421 [2024-12-05 19:10:14.857162] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 42d50fa2-6cab-4623-bd3d-509e916d40ae 00:19:57.421 [2024-12-05 19:10:14.857175] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:19:57.421 [2024-12-05 19:10:14.857182] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:19:57.421 [2024-12-05 19:10:14.857190] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:19:57.421 [2024-12-05 19:10:14.857203] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:19:57.421 [2024-12-05 19:10:14.857212] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:19:57.421 [2024-12-05 19:10:14.857223] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:19:57.421 [2024-12-05 19:10:14.857232] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:19:57.421 [2024-12-05 19:10:14.857238] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:19:57.421 [2024-12-05 19:10:14.857245] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:19:57.421 [2024-12-05 19:10:14.857272] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:57.421 [2024-12-05 19:10:14.857281] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:19:57.421 [2024-12-05 19:10:14.857290] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.918 ms 00:19:57.421 [2024-12-05 19:10:14.857298] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.421 [2024-12-05 19:10:14.859473] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:57.421 [2024-12-05 19:10:14.859644] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:19:57.421 [2024-12-05 19:10:14.859663] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.157 ms 00:19:57.421 [2024-12-05 19:10:14.859680] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.421 [2024-12-05 19:10:14.859796] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:57.421 [2024-12-05 19:10:14.859806] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:19:57.421 [2024-12-05 19:10:14.859814] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.089 ms 00:19:57.421 [2024-12-05 19:10:14.859822] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.421 [2024-12-05 19:10:14.867349] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:57.421 [2024-12-05 19:10:14.867392] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:57.421 [2024-12-05 19:10:14.867402] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.000 ms 00:19:57.421 [2024-12-05 19:10:14.867415] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.421 [2024-12-05 19:10:14.867495] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:57.421 [2024-12-05 19:10:14.867504] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:57.421 [2024-12-05 19:10:14.867512] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:57.421 [2024-12-05 19:10:14.867524] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.421 [2024-12-05 19:10:14.867568] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:57.421 [2024-12-05 19:10:14.867577] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:57.421 [2024-12-05 19:10:14.867585] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:57.421 [2024-12-05 19:10:14.867592] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.421 [2024-12-05 19:10:14.867612] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:57.421 [2024-12-05 19:10:14.867620] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:57.421 [2024-12-05 19:10:14.867628] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:57.421 [2024-12-05 19:10:14.867636] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.422 [2024-12-05 19:10:14.881398] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:57.422 [2024-12-05 19:10:14.881453] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:57.422 [2024-12-05 19:10:14.881464] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:57.422 [2024-12-05 19:10:14.881479] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.422 [2024-12-05 19:10:14.891770] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:57.422 [2024-12-05 19:10:14.891821] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:57.422 [2024-12-05 19:10:14.891832] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:57.422 [2024-12-05 19:10:14.891841] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.422 [2024-12-05 19:10:14.891893] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:57.422 [2024-12-05 19:10:14.891903] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:57.422 [2024-12-05 19:10:14.891911] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:57.422 [2024-12-05 19:10:14.891919] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.422 [2024-12-05 19:10:14.891952] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:57.422 [2024-12-05 19:10:14.891969] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:57.422 [2024-12-05 19:10:14.891977] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:57.422 [2024-12-05 19:10:14.891986] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.422 [2024-12-05 19:10:14.892058] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:57.422 [2024-12-05 19:10:14.892069] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 
00:19:57.422 [2024-12-05 19:10:14.892085] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:57.422 [2024-12-05 19:10:14.892093] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.422 [2024-12-05 19:10:14.892127] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:57.422 [2024-12-05 19:10:14.892140] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:19:57.422 [2024-12-05 19:10:14.892153] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:57.422 [2024-12-05 19:10:14.892160] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.422 [2024-12-05 19:10:14.892205] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:57.422 [2024-12-05 19:10:14.892215] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:57.422 [2024-12-05 19:10:14.892223] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:57.422 [2024-12-05 19:10:14.892231] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.422 [2024-12-05 19:10:14.892306] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:57.422 [2024-12-05 19:10:14.892322] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:57.422 [2024-12-05 19:10:14.892332] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:57.422 [2024-12-05 19:10:14.892339] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.422 [2024-12-05 19:10:14.892493] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 65.896 ms, result 0 00:19:57.684 00:19:57.684 00:19:57.684 19:10:15 ftl.ftl_trim -- ftl/trim.sh@86 -- # cmp --bytes=4194304 /home/vagrant/spdk_repo/spdk/test/ftl/data /dev/zero 00:19:57.684 19:10:15 ftl.ftl_trim -- ftl/trim.sh@87 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/data 00:19:58.259 19:10:15 ftl.ftl_trim -- ftl/trim.sh@90 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --count=1024 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:19:58.259 [2024-12-05 19:10:15.748577] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 22.11.4 initialization... 
00:19:58.259 [2024-12-05 19:10:15.748733] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87791 ] 00:19:58.520 [2024-12-05 19:10:15.896073] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:58.520 [2024-12-05 19:10:15.924270] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:58.520 [2024-12-05 19:10:16.039669] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:58.520 [2024-12-05 19:10:16.040015] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:58.783 [2024-12-05 19:10:16.199883] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:58.783 [2024-12-05 19:10:16.199941] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:19:58.783 [2024-12-05 19:10:16.199956] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:19:58.783 [2024-12-05 19:10:16.199966] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:58.783 [2024-12-05 19:10:16.202571] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:58.783 [2024-12-05 19:10:16.202761] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:58.783 [2024-12-05 19:10:16.202788] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.584 ms 00:19:58.783 [2024-12-05 19:10:16.202798] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:58.783 [2024-12-05 19:10:16.203247] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:19:58.783 [2024-12-05 19:10:16.204070] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:19:58.783 [2024-12-05 19:10:16.204130] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:58.783 [2024-12-05 19:10:16.204141] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:58.783 [2024-12-05 19:10:16.204152] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.915 ms 00:19:58.783 [2024-12-05 19:10:16.204160] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:58.783 [2024-12-05 19:10:16.206059] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:19:58.783 [2024-12-05 19:10:16.210013] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:58.783 [2024-12-05 19:10:16.210061] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:19:58.783 [2024-12-05 19:10:16.210078] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.956 ms 00:19:58.783 [2024-12-05 19:10:16.210087] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:58.783 [2024-12-05 19:10:16.210170] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:58.783 [2024-12-05 19:10:16.210180] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:19:58.783 [2024-12-05 19:10:16.210195] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:19:58.783 [2024-12-05 19:10:16.210202] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:58.783 [2024-12-05 19:10:16.218849] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:19:58.783 [2024-12-05 19:10:16.218894] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:58.783 [2024-12-05 19:10:16.218912] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.601 ms 00:19:58.783 [2024-12-05 19:10:16.218921] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:58.783 [2024-12-05 19:10:16.219062] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:58.783 [2024-12-05 19:10:16.219077] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:58.783 [2024-12-05 19:10:16.219086] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.076 ms 00:19:58.783 [2024-12-05 19:10:16.219097] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:58.783 [2024-12-05 19:10:16.219124] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:58.783 [2024-12-05 19:10:16.219133] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:19:58.783 [2024-12-05 19:10:16.219141] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:19:58.783 [2024-12-05 19:10:16.219149] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:58.783 [2024-12-05 19:10:16.219171] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:19:58.783 [2024-12-05 19:10:16.221214] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:58.783 [2024-12-05 19:10:16.221270] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:58.783 [2024-12-05 19:10:16.221285] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.049 ms 00:19:58.783 [2024-12-05 19:10:16.221292] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:58.783 [2024-12-05 19:10:16.221336] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:58.783 [2024-12-05 19:10:16.221348] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:19:58.783 [2024-12-05 19:10:16.221357] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:19:58.783 [2024-12-05 19:10:16.221365] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:58.783 [2024-12-05 19:10:16.221385] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:19:58.783 [2024-12-05 19:10:16.221407] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:19:58.783 [2024-12-05 19:10:16.221449] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:19:58.783 [2024-12-05 19:10:16.221468] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:19:58.783 [2024-12-05 19:10:16.221585] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:19:58.783 [2024-12-05 19:10:16.221598] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:19:58.783 [2024-12-05 19:10:16.221617] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:19:58.783 [2024-12-05 19:10:16.221628] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:19:58.783 [2024-12-05 19:10:16.221640] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:19:58.783 [2024-12-05 19:10:16.221649] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:19:58.783 [2024-12-05 19:10:16.221657] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:19:58.783 [2024-12-05 19:10:16.221665] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:19:58.783 [2024-12-05 19:10:16.221678] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:19:58.783 [2024-12-05 19:10:16.221689] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:58.783 [2024-12-05 19:10:16.221696] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:19:58.783 [2024-12-05 19:10:16.221704] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.306 ms 00:19:58.783 [2024-12-05 19:10:16.221711] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:58.783 [2024-12-05 19:10:16.221803] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:58.783 [2024-12-05 19:10:16.221813] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:19:58.783 [2024-12-05 19:10:16.221820] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:19:58.784 [2024-12-05 19:10:16.221829] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:58.784 [2024-12-05 19:10:16.221930] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:19:58.784 [2024-12-05 19:10:16.221942] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:19:58.784 [2024-12-05 19:10:16.221951] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:58.784 [2024-12-05 19:10:16.221961] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:58.784 [2024-12-05 19:10:16.221977] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:19:58.784 [2024-12-05 19:10:16.221985] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:19:58.784 [2024-12-05 19:10:16.221994] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:19:58.784 [2024-12-05 19:10:16.222004] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:19:58.784 [2024-12-05 19:10:16.222012] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:19:58.784 [2024-12-05 19:10:16.222019] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:58.784 [2024-12-05 19:10:16.222027] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:19:58.784 [2024-12-05 19:10:16.222035] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:19:58.784 [2024-12-05 19:10:16.222043] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:58.784 [2024-12-05 19:10:16.222050] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:19:58.784 [2024-12-05 19:10:16.222060] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:19:58.784 [2024-12-05 19:10:16.222068] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:58.784 [2024-12-05 19:10:16.222075] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:19:58.784 [2024-12-05 19:10:16.222084] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:19:58.784 [2024-12-05 19:10:16.222092] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:58.784 [2024-12-05 19:10:16.222102] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:19:58.784 [2024-12-05 19:10:16.222110] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:19:58.784 [2024-12-05 19:10:16.222118] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:58.784 [2024-12-05 19:10:16.222126] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:19:58.784 [2024-12-05 19:10:16.222138] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:19:58.784 [2024-12-05 19:10:16.222147] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:58.784 [2024-12-05 19:10:16.222155] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:19:58.784 [2024-12-05 19:10:16.222162] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:19:58.784 [2024-12-05 19:10:16.222170] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:58.784 [2024-12-05 19:10:16.222177] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:19:58.784 [2024-12-05 19:10:16.222185] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:19:58.784 [2024-12-05 19:10:16.222193] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:58.784 [2024-12-05 19:10:16.222201] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:19:58.784 [2024-12-05 19:10:16.222209] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:19:58.784 [2024-12-05 19:10:16.222215] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:58.784 [2024-12-05 19:10:16.222223] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:19:58.784 [2024-12-05 19:10:16.222230] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:19:58.784 [2024-12-05 19:10:16.222241] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:58.784 [2024-12-05 19:10:16.222266] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:19:58.784 [2024-12-05 19:10:16.222274] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:19:58.784 [2024-12-05 19:10:16.222284] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:58.784 [2024-12-05 19:10:16.222291] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:19:58.784 [2024-12-05 19:10:16.222300] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:19:58.784 [2024-12-05 19:10:16.222308] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:58.784 [2024-12-05 19:10:16.222316] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:19:58.784 [2024-12-05 19:10:16.222325] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:19:58.784 [2024-12-05 19:10:16.222333] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:58.784 [2024-12-05 19:10:16.222340] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:58.784 [2024-12-05 19:10:16.222347] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:19:58.784 [2024-12-05 19:10:16.222354] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:19:58.784 [2024-12-05 19:10:16.222361] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:19:58.784 
[2024-12-05 19:10:16.222368] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:19:58.784 [2024-12-05 19:10:16.222376] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:19:58.784 [2024-12-05 19:10:16.222383] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:19:58.784 [2024-12-05 19:10:16.222391] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:19:58.784 [2024-12-05 19:10:16.222401] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:58.784 [2024-12-05 19:10:16.222415] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:19:58.784 [2024-12-05 19:10:16.222423] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:19:58.784 [2024-12-05 19:10:16.222430] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:19:58.784 [2024-12-05 19:10:16.222439] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:19:58.784 [2024-12-05 19:10:16.222446] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:19:58.784 [2024-12-05 19:10:16.222454] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:19:58.784 [2024-12-05 19:10:16.222462] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:19:58.784 [2024-12-05 19:10:16.222476] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:19:58.784 [2024-12-05 19:10:16.222484] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:19:58.784 [2024-12-05 19:10:16.222491] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:19:58.784 [2024-12-05 19:10:16.222498] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:19:58.784 [2024-12-05 19:10:16.222505] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:19:58.784 [2024-12-05 19:10:16.222514] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:19:58.784 [2024-12-05 19:10:16.222521] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:19:58.784 [2024-12-05 19:10:16.222528] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:19:58.784 [2024-12-05 19:10:16.222537] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:58.784 [2024-12-05 19:10:16.222548] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:19:58.784 [2024-12-05 19:10:16.222556] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:19:58.784 [2024-12-05 19:10:16.222565] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:19:58.784 [2024-12-05 19:10:16.222573] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:19:58.784 [2024-12-05 19:10:16.222583] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:58.784 [2024-12-05 19:10:16.222590] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:19:58.784 [2024-12-05 19:10:16.222599] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.722 ms 00:19:58.784 [2024-12-05 19:10:16.222605] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:58.784 [2024-12-05 19:10:16.236244] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:58.784 [2024-12-05 19:10:16.236302] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:58.784 [2024-12-05 19:10:16.236313] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.587 ms 00:19:58.784 [2024-12-05 19:10:16.236322] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:58.784 [2024-12-05 19:10:16.236462] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:58.784 [2024-12-05 19:10:16.236473] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:19:58.784 [2024-12-05 19:10:16.236482] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:19:58.784 [2024-12-05 19:10:16.236490] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:58.784 [2024-12-05 19:10:16.259198] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:58.784 [2024-12-05 19:10:16.259316] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:58.784 [2024-12-05 19:10:16.259343] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.684 ms 00:19:58.784 [2024-12-05 19:10:16.259359] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:58.784 [2024-12-05 19:10:16.259475] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:58.784 [2024-12-05 19:10:16.259498] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:58.784 [2024-12-05 19:10:16.259510] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:19:58.784 [2024-12-05 19:10:16.259521] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:58.784 [2024-12-05 19:10:16.260087] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:58.784 [2024-12-05 19:10:16.260138] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:58.784 [2024-12-05 19:10:16.260154] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.535 ms 00:19:58.784 [2024-12-05 19:10:16.260166] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:58.785 [2024-12-05 19:10:16.260382] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:58.785 [2024-12-05 19:10:16.260396] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:58.785 [2024-12-05 19:10:16.260414] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.172 ms 00:19:58.785 [2024-12-05 19:10:16.260425] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:58.785 [2024-12-05 19:10:16.268830] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:58.785 [2024-12-05 19:10:16.268883] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:58.785 [2024-12-05 19:10:16.268895] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.372 ms 00:19:58.785 [2024-12-05 19:10:16.268906] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:58.785 [2024-12-05 19:10:16.272661] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:19:58.785 [2024-12-05 19:10:16.272709] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:19:58.785 [2024-12-05 19:10:16.272727] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:58.785 [2024-12-05 19:10:16.272735] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:19:58.785 [2024-12-05 19:10:16.272744] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.707 ms 00:19:58.785 [2024-12-05 19:10:16.272752] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:58.785 [2024-12-05 19:10:16.288604] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:58.785 [2024-12-05 19:10:16.288659] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:19:58.785 [2024-12-05 19:10:16.288671] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.792 ms 00:19:58.785 [2024-12-05 19:10:16.288679] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:58.785 [2024-12-05 19:10:16.291462] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:58.785 [2024-12-05 19:10:16.291503] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:19:58.785 [2024-12-05 19:10:16.291513] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.713 ms 00:19:58.785 [2024-12-05 19:10:16.291520] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:58.785 [2024-12-05 19:10:16.294070] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:58.785 [2024-12-05 19:10:16.294122] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:19:58.785 [2024-12-05 19:10:16.294132] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.463 ms 00:19:58.785 [2024-12-05 19:10:16.294139] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:58.785 [2024-12-05 19:10:16.294515] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:58.785 [2024-12-05 19:10:16.294529] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:19:58.785 [2024-12-05 19:10:16.294539] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.300 ms 00:19:58.785 [2024-12-05 19:10:16.294546] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:58.785 [2024-12-05 19:10:16.317465] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:58.785 [2024-12-05 19:10:16.317702] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:19:58.785 [2024-12-05 19:10:16.317722] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
22.889 ms 00:19:58.785 [2024-12-05 19:10:16.317733] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:58.785 [2024-12-05 19:10:16.325750] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:19:59.047 [2024-12-05 19:10:16.344685] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:59.047 [2024-12-05 19:10:16.344879] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:19:59.047 [2024-12-05 19:10:16.344898] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.863 ms 00:19:59.047 [2024-12-05 19:10:16.344907] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:59.047 [2024-12-05 19:10:16.345005] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:59.047 [2024-12-05 19:10:16.345020] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:19:59.047 [2024-12-05 19:10:16.345030] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:19:59.047 [2024-12-05 19:10:16.345039] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:59.047 [2024-12-05 19:10:16.345097] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:59.047 [2024-12-05 19:10:16.345107] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:19:59.047 [2024-12-05 19:10:16.345116] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:19:59.047 [2024-12-05 19:10:16.345130] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:59.047 [2024-12-05 19:10:16.345154] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:59.047 [2024-12-05 19:10:16.345163] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:19:59.047 [2024-12-05 19:10:16.345175] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:19:59.047 [2024-12-05 19:10:16.345183] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:59.047 [2024-12-05 19:10:16.345221] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:19:59.047 [2024-12-05 19:10:16.345232] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:59.047 [2024-12-05 19:10:16.345248] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:19:59.047 [2024-12-05 19:10:16.345289] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:19:59.047 [2024-12-05 19:10:16.345297] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:59.047 [2024-12-05 19:10:16.351275] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:59.047 [2024-12-05 19:10:16.351320] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:19:59.047 [2024-12-05 19:10:16.351340] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.951 ms 00:19:59.047 [2024-12-05 19:10:16.351352] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:59.047 [2024-12-05 19:10:16.351446] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:59.047 [2024-12-05 19:10:16.351457] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:19:59.047 [2024-12-05 19:10:16.351466] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:19:59.047 [2024-12-05 19:10:16.351475] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:59.047 
[2024-12-05 19:10:16.352512] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:59.047 [2024-12-05 19:10:16.353831] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 152.293 ms, result 0 00:19:59.047 [2024-12-05 19:10:16.355150] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:59.047 [2024-12-05 19:10:16.362534] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:59.309  [2024-12-05T19:10:16.868Z] Copying: 4096/4096 [kB] (average 13 MBps)[2024-12-05 19:10:16.656544] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:59.309 [2024-12-05 19:10:16.657575] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:59.309 [2024-12-05 19:10:16.657746] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:19:59.309 [2024-12-05 19:10:16.657774] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:19:59.309 [2024-12-05 19:10:16.657782] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:59.309 [2024-12-05 19:10:16.657807] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:19:59.309 [2024-12-05 19:10:16.658414] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:59.309 [2024-12-05 19:10:16.658442] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:19:59.309 [2024-12-05 19:10:16.658452] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.593 ms 00:19:59.309 [2024-12-05 19:10:16.658460] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:59.309 [2024-12-05 19:10:16.660048] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:59.309 [2024-12-05 19:10:16.660090] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:19:59.309 [2024-12-05 19:10:16.660099] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.565 ms 00:19:59.309 [2024-12-05 19:10:16.660108] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:59.309 [2024-12-05 19:10:16.663895] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:59.309 [2024-12-05 19:10:16.663922] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:19:59.309 [2024-12-05 19:10:16.663931] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.772 ms 00:19:59.309 [2024-12-05 19:10:16.663938] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:59.309 [2024-12-05 19:10:16.670990] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:59.309 [2024-12-05 19:10:16.671032] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:19:59.309 [2024-12-05 19:10:16.671045] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.026 ms 00:19:59.309 [2024-12-05 19:10:16.671056] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:59.309 [2024-12-05 19:10:16.672537] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:59.309 [2024-12-05 19:10:16.672681] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:19:59.309 [2024-12-05 19:10:16.672698] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 1.423 ms 00:19:59.309 [2024-12-05 19:10:16.672705] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:59.309 [2024-12-05 19:10:16.676487] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:59.309 [2024-12-05 19:10:16.676525] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:19:59.309 [2024-12-05 19:10:16.676542] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.747 ms 00:19:59.310 [2024-12-05 19:10:16.676550] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:59.310 [2024-12-05 19:10:16.676660] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:59.310 [2024-12-05 19:10:16.676672] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:19:59.310 [2024-12-05 19:10:16.676681] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.086 ms 00:19:59.310 [2024-12-05 19:10:16.676689] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:59.310 [2024-12-05 19:10:16.678797] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:59.310 [2024-12-05 19:10:16.678835] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:19:59.310 [2024-12-05 19:10:16.678844] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.092 ms 00:19:59.310 [2024-12-05 19:10:16.678851] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:59.310 [2024-12-05 19:10:16.680493] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:59.310 [2024-12-05 19:10:16.680529] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:19:59.310 [2024-12-05 19:10:16.680537] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.608 ms 00:19:59.310 [2024-12-05 19:10:16.680544] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:59.310 [2024-12-05 19:10:16.681855] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:59.310 [2024-12-05 19:10:16.681890] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:19:59.310 [2024-12-05 19:10:16.681899] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.278 ms 00:19:59.310 [2024-12-05 19:10:16.681906] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:59.310 [2024-12-05 19:10:16.682996] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:59.310 [2024-12-05 19:10:16.683031] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:19:59.310 [2024-12-05 19:10:16.683040] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.029 ms 00:19:59.310 [2024-12-05 19:10:16.683047] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:59.310 [2024-12-05 19:10:16.683079] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:19:59.310 [2024-12-05 19:10:16.683092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:19:59.310 [2024-12-05 19:10:16.683101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:19:59.310 [2024-12-05 19:10:16.683109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:19:59.310 [2024-12-05 19:10:16.683117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:19:59.310 [2024-12-05 
19:10:16.683125] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:19:59.310 [2024-12-05 19:10:16.683132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:19:59.310 [2024-12-05 19:10:16.683139] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:19:59.310 [2024-12-05 19:10:16.683146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:19:59.310 [2024-12-05 19:10:16.683154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:19:59.310 [2024-12-05 19:10:16.683161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:19:59.310 [2024-12-05 19:10:16.683169] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:19:59.310 [2024-12-05 19:10:16.683176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:19:59.310 [2024-12-05 19:10:16.683183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:19:59.310 [2024-12-05 19:10:16.683191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:19:59.310 [2024-12-05 19:10:16.683198] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:19:59.310 [2024-12-05 19:10:16.683205] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:19:59.310 [2024-12-05 19:10:16.683212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:19:59.310 [2024-12-05 19:10:16.683219] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:19:59.310 [2024-12-05 19:10:16.683226] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:19:59.310 [2024-12-05 19:10:16.683233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:19:59.310 [2024-12-05 19:10:16.683241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:19:59.310 [2024-12-05 19:10:16.683248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:19:59.310 [2024-12-05 19:10:16.683268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:19:59.310 [2024-12-05 19:10:16.683275] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:19:59.310 [2024-12-05 19:10:16.683283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:19:59.310 [2024-12-05 19:10:16.683290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:19:59.310 [2024-12-05 19:10:16.683299] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:19:59.310 [2024-12-05 19:10:16.683306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:19:59.310 [2024-12-05 19:10:16.683314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 
00:19:59.310 [2024-12-05 19:10:16.683329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:19:59.310 [2024-12-05 19:10:16.683336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:19:59.310 [2024-12-05 19:10:16.683343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:19:59.310 [2024-12-05 19:10:16.683351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:19:59.310 [2024-12-05 19:10:16.683360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:19:59.310 [2024-12-05 19:10:16.683368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:19:59.310 [2024-12-05 19:10:16.683375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:19:59.310 [2024-12-05 19:10:16.683382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:19:59.310 [2024-12-05 19:10:16.683390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:19:59.310 [2024-12-05 19:10:16.683397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:19:59.310 [2024-12-05 19:10:16.683404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:19:59.310 [2024-12-05 19:10:16.683412] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:19:59.310 [2024-12-05 19:10:16.683419] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:19:59.310 [2024-12-05 19:10:16.683426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:19:59.310 [2024-12-05 19:10:16.683433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:19:59.310 [2024-12-05 19:10:16.683441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:19:59.310 [2024-12-05 19:10:16.683448] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:19:59.310 [2024-12-05 19:10:16.683456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:19:59.310 [2024-12-05 19:10:16.683463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:19:59.310 [2024-12-05 19:10:16.683470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:19:59.310 [2024-12-05 19:10:16.683477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:19:59.310 [2024-12-05 19:10:16.683485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:19:59.310 [2024-12-05 19:10:16.683492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:19:59.310 [2024-12-05 19:10:16.683500] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:19:59.310 [2024-12-05 19:10:16.683514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 
wr_cnt: 0 state: free 00:19:59.310 [2024-12-05 19:10:16.683522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:19:59.310 [2024-12-05 19:10:16.683528] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:19:59.310 [2024-12-05 19:10:16.683535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:19:59.310 [2024-12-05 19:10:16.683543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:19:59.310 [2024-12-05 19:10:16.683550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:19:59.310 [2024-12-05 19:10:16.683556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:19:59.310 [2024-12-05 19:10:16.683563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:19:59.310 [2024-12-05 19:10:16.683570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:19:59.310 [2024-12-05 19:10:16.683577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:19:59.310 [2024-12-05 19:10:16.683584] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:19:59.310 [2024-12-05 19:10:16.683592] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:19:59.310 [2024-12-05 19:10:16.683600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:19:59.311 [2024-12-05 19:10:16.683608] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:19:59.311 [2024-12-05 19:10:16.683616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:19:59.311 [2024-12-05 19:10:16.683623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:19:59.311 [2024-12-05 19:10:16.683630] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:19:59.311 [2024-12-05 19:10:16.683637] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:19:59.311 [2024-12-05 19:10:16.683644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:19:59.311 [2024-12-05 19:10:16.683652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:19:59.311 [2024-12-05 19:10:16.683659] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:19:59.311 [2024-12-05 19:10:16.683666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:19:59.311 [2024-12-05 19:10:16.683674] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:19:59.311 [2024-12-05 19:10:16.683681] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:19:59.311 [2024-12-05 19:10:16.683688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:19:59.311 [2024-12-05 19:10:16.683696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 79: 0 / 261120 wr_cnt: 0 state: free 00:19:59.311 [2024-12-05 19:10:16.683703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:19:59.311 [2024-12-05 19:10:16.683709] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:19:59.311 [2024-12-05 19:10:16.683716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:19:59.311 [2024-12-05 19:10:16.683724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:19:59.311 [2024-12-05 19:10:16.683731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:19:59.311 [2024-12-05 19:10:16.683738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:19:59.311 [2024-12-05 19:10:16.683745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:19:59.311 [2024-12-05 19:10:16.683752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:19:59.311 [2024-12-05 19:10:16.683759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:19:59.311 [2024-12-05 19:10:16.683767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:19:59.311 [2024-12-05 19:10:16.683773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:19:59.311 [2024-12-05 19:10:16.683780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:19:59.311 [2024-12-05 19:10:16.683787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:19:59.311 [2024-12-05 19:10:16.683794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:19:59.311 [2024-12-05 19:10:16.683802] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:19:59.311 [2024-12-05 19:10:16.683809] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:19:59.311 [2024-12-05 19:10:16.683815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:19:59.311 [2024-12-05 19:10:16.683823] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:19:59.311 [2024-12-05 19:10:16.683832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:19:59.311 [2024-12-05 19:10:16.683839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:19:59.311 [2024-12-05 19:10:16.683847] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:19:59.311 [2024-12-05 19:10:16.683861] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:19:59.311 [2024-12-05 19:10:16.683869] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 42d50fa2-6cab-4623-bd3d-509e916d40ae 00:19:59.311 [2024-12-05 19:10:16.683877] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:19:59.311 [2024-12-05 19:10:16.683884] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:19:59.311 
[2024-12-05 19:10:16.683892] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:19:59.311 [2024-12-05 19:10:16.683899] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:19:59.311 [2024-12-05 19:10:16.683909] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:19:59.311 [2024-12-05 19:10:16.683916] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:19:59.311 [2024-12-05 19:10:16.683924] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:19:59.311 [2024-12-05 19:10:16.683930] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:19:59.311 [2024-12-05 19:10:16.683936] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:19:59.311 [2024-12-05 19:10:16.683943] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:59.311 [2024-12-05 19:10:16.683951] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:19:59.311 [2024-12-05 19:10:16.683959] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.865 ms 00:19:59.311 [2024-12-05 19:10:16.683966] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:59.311 [2024-12-05 19:10:16.685578] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:59.311 [2024-12-05 19:10:16.685601] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:19:59.311 [2024-12-05 19:10:16.685615] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.595 ms 00:19:59.311 [2024-12-05 19:10:16.685623] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:59.311 [2024-12-05 19:10:16.685717] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:59.311 [2024-12-05 19:10:16.685726] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:19:59.311 [2024-12-05 19:10:16.685735] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.075 ms 00:19:59.311 [2024-12-05 19:10:16.685746] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:59.311 [2024-12-05 19:10:16.691623] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:59.311 [2024-12-05 19:10:16.691765] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:59.311 [2024-12-05 19:10:16.691787] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:59.311 [2024-12-05 19:10:16.691795] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:59.311 [2024-12-05 19:10:16.691889] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:59.311 [2024-12-05 19:10:16.691902] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:59.311 [2024-12-05 19:10:16.691910] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:59.311 [2024-12-05 19:10:16.691918] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:59.311 [2024-12-05 19:10:16.691964] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:59.311 [2024-12-05 19:10:16.691973] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:59.311 [2024-12-05 19:10:16.691981] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:59.311 [2024-12-05 19:10:16.691994] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:59.311 [2024-12-05 19:10:16.692013] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl0] Rollback 00:19:59.311 [2024-12-05 19:10:16.692020] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:59.311 [2024-12-05 19:10:16.692032] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:59.311 [2024-12-05 19:10:16.692039] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:59.311 [2024-12-05 19:10:16.702415] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:59.311 [2024-12-05 19:10:16.702556] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:59.311 [2024-12-05 19:10:16.702579] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:59.311 [2024-12-05 19:10:16.702587] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:59.311 [2024-12-05 19:10:16.710422] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:59.311 [2024-12-05 19:10:16.710462] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:59.311 [2024-12-05 19:10:16.710473] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:59.311 [2024-12-05 19:10:16.710481] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:59.311 [2024-12-05 19:10:16.710508] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:59.311 [2024-12-05 19:10:16.710517] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:59.311 [2024-12-05 19:10:16.710525] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:59.311 [2024-12-05 19:10:16.710533] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:59.311 [2024-12-05 19:10:16.710571] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:59.311 [2024-12-05 19:10:16.710581] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:59.311 [2024-12-05 19:10:16.710589] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:59.311 [2024-12-05 19:10:16.710596] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:59.311 [2024-12-05 19:10:16.710667] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:59.311 [2024-12-05 19:10:16.710677] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:59.311 [2024-12-05 19:10:16.710691] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:59.311 [2024-12-05 19:10:16.710698] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:59.311 [2024-12-05 19:10:16.710731] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:59.311 [2024-12-05 19:10:16.710740] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:19:59.311 [2024-12-05 19:10:16.710748] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:59.311 [2024-12-05 19:10:16.710755] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:59.311 [2024-12-05 19:10:16.710792] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:59.312 [2024-12-05 19:10:16.710801] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:59.312 [2024-12-05 19:10:16.710809] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:59.312 [2024-12-05 19:10:16.710816] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:19:59.312 [2024-12-05 19:10:16.710861] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:59.312 [2024-12-05 19:10:16.710871] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:59.312 [2024-12-05 19:10:16.710879] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:59.312 [2024-12-05 19:10:16.710886] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:59.312 [2024-12-05 19:10:16.711023] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 53.419 ms, result 0 00:19:59.573 00:19:59.573 00:19:59.573 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:59.573 19:10:16 ftl.ftl_trim -- ftl/trim.sh@93 -- # svcpid=87805 00:19:59.573 19:10:16 ftl.ftl_trim -- ftl/trim.sh@94 -- # waitforlisten 87805 00:19:59.573 19:10:16 ftl.ftl_trim -- ftl/trim.sh@92 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init 00:19:59.573 19:10:16 ftl.ftl_trim -- common/autotest_common.sh@835 -- # '[' -z 87805 ']' 00:19:59.573 19:10:16 ftl.ftl_trim -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:59.573 19:10:16 ftl.ftl_trim -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:59.573 19:10:16 ftl.ftl_trim -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:59.574 19:10:16 ftl.ftl_trim -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:59.574 19:10:16 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:19:59.574 [2024-12-05 19:10:17.000270] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 22.11.4 initialization... 00:19:59.574 [2024-12-05 19:10:17.000427] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87805 ] 00:19:59.836 [2024-12-05 19:10:17.146173] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:59.836 [2024-12-05 19:10:17.174169] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:00.409 19:10:17 ftl.ftl_trim -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:00.409 19:10:17 ftl.ftl_trim -- common/autotest_common.sh@868 -- # return 0 00:20:00.409 19:10:17 ftl.ftl_trim -- ftl/trim.sh@96 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:20:00.672 [2024-12-05 19:10:18.076589] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:00.672 [2024-12-05 19:10:18.076680] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:00.934 [2024-12-05 19:10:18.254110] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:00.934 [2024-12-05 19:10:18.254175] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:20:00.934 [2024-12-05 19:10:18.254191] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:20:00.934 [2024-12-05 19:10:18.254202] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.934 [2024-12-05 19:10:18.256782] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:00.934 [2024-12-05 19:10:18.256979] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:00.934 [2024-12-05 19:10:18.256999] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.559 ms 00:20:00.934 [2024-12-05 19:10:18.257010] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.934 [2024-12-05 19:10:18.257161] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:20:00.934 [2024-12-05 19:10:18.257462] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:20:00.934 [2024-12-05 19:10:18.257480] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:00.934 [2024-12-05 19:10:18.257491] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:00.934 [2024-12-05 19:10:18.257505] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.330 ms 00:20:00.934 [2024-12-05 19:10:18.257517] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.934 [2024-12-05 19:10:18.259265] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:20:00.934 [2024-12-05 19:10:18.263042] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:00.934 [2024-12-05 19:10:18.263093] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:20:00.934 [2024-12-05 19:10:18.263107] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.790 ms 00:20:00.934 [2024-12-05 19:10:18.263115] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.934 [2024-12-05 19:10:18.263196] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:00.934 [2024-12-05 19:10:18.263207] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:20:00.934 [2024-12-05 19:10:18.263220] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:20:00.934 [2024-12-05 19:10:18.263228] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.934 [2024-12-05 19:10:18.271705] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:00.934 [2024-12-05 19:10:18.271749] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:00.934 [2024-12-05 19:10:18.271762] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.400 ms 00:20:00.934 [2024-12-05 19:10:18.271770] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.934 [2024-12-05 19:10:18.271902] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:00.934 [2024-12-05 19:10:18.271913] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:00.934 [2024-12-05 19:10:18.271924] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.073 ms 00:20:00.934 [2024-12-05 19:10:18.271936] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.934 [2024-12-05 19:10:18.271966] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:00.934 [2024-12-05 19:10:18.271977] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:20:00.934 [2024-12-05 19:10:18.271990] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:20:00.934 [2024-12-05 19:10:18.271997] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.934 [2024-12-05 19:10:18.272022] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:20:00.934 [2024-12-05 19:10:18.274105] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:20:00.934 [2024-12-05 19:10:18.274148] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:00.934 [2024-12-05 19:10:18.274161] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.090 ms 00:20:00.934 [2024-12-05 19:10:18.274171] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.934 [2024-12-05 19:10:18.274213] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:00.934 [2024-12-05 19:10:18.274224] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:20:00.934 [2024-12-05 19:10:18.274233] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:20:00.934 [2024-12-05 19:10:18.274267] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.934 [2024-12-05 19:10:18.274289] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:20:00.934 [2024-12-05 19:10:18.274313] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:20:00.934 [2024-12-05 19:10:18.274364] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:20:00.934 [2024-12-05 19:10:18.274385] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:20:00.934 [2024-12-05 19:10:18.274491] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:20:00.934 [2024-12-05 19:10:18.274504] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:20:00.934 [2024-12-05 19:10:18.274521] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:20:00.934 [2024-12-05 19:10:18.274533] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:20:00.934 [2024-12-05 19:10:18.274542] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:20:00.934 [2024-12-05 19:10:18.274556] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:20:00.934 [2024-12-05 19:10:18.274563] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:20:00.934 [2024-12-05 19:10:18.274573] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:20:00.934 [2024-12-05 19:10:18.274583] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:20:00.934 [2024-12-05 19:10:18.274593] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:00.934 [2024-12-05 19:10:18.274601] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:20:00.934 [2024-12-05 19:10:18.274610] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.305 ms 00:20:00.934 [2024-12-05 19:10:18.274618] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.934 [2024-12-05 19:10:18.274711] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:00.934 [2024-12-05 19:10:18.274724] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:20:00.935 [2024-12-05 19:10:18.274733] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:20:00.935 [2024-12-05 19:10:18.274740] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.935 [2024-12-05 19:10:18.274845] 
ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:20:00.935 [2024-12-05 19:10:18.274856] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:20:00.935 [2024-12-05 19:10:18.274871] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:00.935 [2024-12-05 19:10:18.274880] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:00.935 [2024-12-05 19:10:18.274896] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:20:00.935 [2024-12-05 19:10:18.274904] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:20:00.935 [2024-12-05 19:10:18.274914] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:20:00.935 [2024-12-05 19:10:18.274922] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:20:00.935 [2024-12-05 19:10:18.274932] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:20:00.935 [2024-12-05 19:10:18.274940] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:00.935 [2024-12-05 19:10:18.274949] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:20:00.935 [2024-12-05 19:10:18.274958] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:20:00.935 [2024-12-05 19:10:18.274968] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:00.935 [2024-12-05 19:10:18.274975] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:20:00.935 [2024-12-05 19:10:18.274985] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:20:00.935 [2024-12-05 19:10:18.274992] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:00.935 [2024-12-05 19:10:18.275002] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:20:00.935 [2024-12-05 19:10:18.275010] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:20:00.935 [2024-12-05 19:10:18.275022] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:00.935 [2024-12-05 19:10:18.275030] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:20:00.935 [2024-12-05 19:10:18.275044] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:20:00.935 [2024-12-05 19:10:18.275052] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:00.935 [2024-12-05 19:10:18.275062] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:20:00.935 [2024-12-05 19:10:18.275070] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:20:00.935 [2024-12-05 19:10:18.275079] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:00.935 [2024-12-05 19:10:18.275088] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:20:00.935 [2024-12-05 19:10:18.275097] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:20:00.935 [2024-12-05 19:10:18.275105] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:00.935 [2024-12-05 19:10:18.275116] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:20:00.935 [2024-12-05 19:10:18.275124] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:20:00.935 [2024-12-05 19:10:18.275136] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:00.935 [2024-12-05 19:10:18.275144] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:20:00.935 [2024-12-05 
19:10:18.275155] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:20:00.935 [2024-12-05 19:10:18.275162] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:00.935 [2024-12-05 19:10:18.275172] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:20:00.935 [2024-12-05 19:10:18.275180] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:20:00.935 [2024-12-05 19:10:18.275191] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:00.935 [2024-12-05 19:10:18.275199] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:20:00.935 [2024-12-05 19:10:18.275209] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:20:00.935 [2024-12-05 19:10:18.275216] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:00.935 [2024-12-05 19:10:18.275226] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:20:00.935 [2024-12-05 19:10:18.275234] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:20:00.935 [2024-12-05 19:10:18.275244] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:00.935 [2024-12-05 19:10:18.275283] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:20:00.935 [2024-12-05 19:10:18.275295] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:20:00.935 [2024-12-05 19:10:18.275304] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:00.935 [2024-12-05 19:10:18.275314] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:00.935 [2024-12-05 19:10:18.275323] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:20:00.935 [2024-12-05 19:10:18.275332] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:20:00.935 [2024-12-05 19:10:18.275340] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:20:00.935 [2024-12-05 19:10:18.275353] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:20:00.935 [2024-12-05 19:10:18.275361] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:20:00.935 [2024-12-05 19:10:18.275373] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:20:00.935 [2024-12-05 19:10:18.275383] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:20:00.935 [2024-12-05 19:10:18.275396] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:00.935 [2024-12-05 19:10:18.275406] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:20:00.935 [2024-12-05 19:10:18.275417] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:20:00.935 [2024-12-05 19:10:18.275425] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:20:00.935 [2024-12-05 19:10:18.275436] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:20:00.935 [2024-12-05 19:10:18.275444] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:20:00.935 
[2024-12-05 19:10:18.275454] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:20:00.935 [2024-12-05 19:10:18.275462] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:20:00.935 [2024-12-05 19:10:18.275472] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:20:00.935 [2024-12-05 19:10:18.275481] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:20:00.935 [2024-12-05 19:10:18.275491] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:20:00.935 [2024-12-05 19:10:18.275499] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:20:00.935 [2024-12-05 19:10:18.275514] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:20:00.935 [2024-12-05 19:10:18.275523] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:20:00.935 [2024-12-05 19:10:18.275536] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:20:00.935 [2024-12-05 19:10:18.275544] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:20:00.935 [2024-12-05 19:10:18.275557] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:00.935 [2024-12-05 19:10:18.275566] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:20:00.935 [2024-12-05 19:10:18.275585] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:20:00.935 [2024-12-05 19:10:18.275594] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:20:00.935 [2024-12-05 19:10:18.275604] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:20:00.935 [2024-12-05 19:10:18.275613] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:00.935 [2024-12-05 19:10:18.275625] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:20:00.935 [2024-12-05 19:10:18.275633] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.839 ms 00:20:00.935 [2024-12-05 19:10:18.275645] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.935 [2024-12-05 19:10:18.289218] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:00.935 [2024-12-05 19:10:18.289284] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:00.935 [2024-12-05 19:10:18.289296] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.511 ms 00:20:00.935 [2024-12-05 19:10:18.289306] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.935 [2024-12-05 19:10:18.289438] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:00.935 [2024-12-05 19:10:18.289467] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:20:00.935 [2024-12-05 19:10:18.289476] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:20:00.935 [2024-12-05 19:10:18.289485] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.935 [2024-12-05 19:10:18.301820] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:00.935 [2024-12-05 19:10:18.301868] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:00.935 [2024-12-05 19:10:18.301879] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.311 ms 00:20:00.935 [2024-12-05 19:10:18.301893] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.935 [2024-12-05 19:10:18.301961] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:00.935 [2024-12-05 19:10:18.301974] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:00.935 [2024-12-05 19:10:18.301982] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:20:00.935 [2024-12-05 19:10:18.301992] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.935 [2024-12-05 19:10:18.302509] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:00.935 [2024-12-05 19:10:18.302543] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:00.935 [2024-12-05 19:10:18.302554] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.492 ms 00:20:00.935 [2024-12-05 19:10:18.302566] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.935 [2024-12-05 19:10:18.302718] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:00.935 [2024-12-05 19:10:18.302740] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:00.935 [2024-12-05 19:10:18.302749] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.122 ms 00:20:00.935 [2024-12-05 19:10:18.302760] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.935 [2024-12-05 19:10:18.310903] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:00.935 [2024-12-05 19:10:18.310952] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:00.935 [2024-12-05 19:10:18.310962] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.115 ms 00:20:00.935 [2024-12-05 19:10:18.310972] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.935 [2024-12-05 19:10:18.322164] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:20:00.935 [2024-12-05 19:10:18.322234] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:20:00.935 [2024-12-05 19:10:18.322273] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:00.935 [2024-12-05 19:10:18.322289] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:20:00.935 [2024-12-05 19:10:18.322301] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.193 ms 00:20:00.935 [2024-12-05 19:10:18.322314] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.935 [2024-12-05 19:10:18.340056] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:00.935 [2024-12-05 
19:10:18.340111] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:20:00.935 [2024-12-05 19:10:18.340124] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.674 ms 00:20:00.935 [2024-12-05 19:10:18.340137] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.935 [2024-12-05 19:10:18.342980] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:00.935 [2024-12-05 19:10:18.343199] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:20:00.935 [2024-12-05 19:10:18.343217] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.753 ms 00:20:00.935 [2024-12-05 19:10:18.343228] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.935 [2024-12-05 19:10:18.345780] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:00.935 [2024-12-05 19:10:18.345831] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:20:00.935 [2024-12-05 19:10:18.345840] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.413 ms 00:20:00.935 [2024-12-05 19:10:18.345849] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.935 [2024-12-05 19:10:18.346221] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:00.935 [2024-12-05 19:10:18.346235] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:20:00.935 [2024-12-05 19:10:18.346246] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.263 ms 00:20:00.935 [2024-12-05 19:10:18.346501] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.935 [2024-12-05 19:10:18.369243] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:00.935 [2024-12-05 19:10:18.369467] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:20:00.935 [2024-12-05 19:10:18.369552] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.695 ms 00:20:00.935 [2024-12-05 19:10:18.369586] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.935 [2024-12-05 19:10:18.377543] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:20:00.935 [2024-12-05 19:10:18.395859] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:00.935 [2024-12-05 19:10:18.396021] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:20:00.935 [2024-12-05 19:10:18.396046] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.168 ms 00:20:00.935 [2024-12-05 19:10:18.396055] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.935 [2024-12-05 19:10:18.396146] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:00.935 [2024-12-05 19:10:18.396161] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:20:00.935 [2024-12-05 19:10:18.396172] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:20:00.935 [2024-12-05 19:10:18.396181] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.936 [2024-12-05 19:10:18.396246] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:00.936 [2024-12-05 19:10:18.396291] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:20:00.936 [2024-12-05 19:10:18.396302] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:20:00.936 [2024-12-05 
19:10:18.396310] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.936 [2024-12-05 19:10:18.396338] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:00.936 [2024-12-05 19:10:18.396347] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:20:00.936 [2024-12-05 19:10:18.396366] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:20:00.936 [2024-12-05 19:10:18.396374] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.936 [2024-12-05 19:10:18.396412] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:20:00.936 [2024-12-05 19:10:18.396422] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:00.936 [2024-12-05 19:10:18.396432] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:20:00.936 [2024-12-05 19:10:18.396440] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:20:00.936 [2024-12-05 19:10:18.396470] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.936 [2024-12-05 19:10:18.402205] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:00.936 [2024-12-05 19:10:18.402281] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:20:00.936 [2024-12-05 19:10:18.402300] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.711 ms 00:20:00.936 [2024-12-05 19:10:18.402316] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.936 [2024-12-05 19:10:18.402406] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:00.936 [2024-12-05 19:10:18.402418] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:20:00.936 [2024-12-05 19:10:18.402428] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:20:00.936 [2024-12-05 19:10:18.402439] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.936 [2024-12-05 19:10:18.403629] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:00.936 [2024-12-05 19:10:18.405082] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 149.177 ms, result 0 00:20:00.936 [2024-12-05 19:10:18.406765] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:00.936 Some configs were skipped because the RPC state that can call them passed over. 
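The two 'FTL trim' management traces that follow are produced by the bdev_ftl_unmap RPCs visible in the xtrace output below: one unmap at the very start of the LBA space and one at its tail (the startup trace above reports 23592960 L2P entries, and 23591936 + 1024 = 23592960, so the second call trims exactly the last 1024 blocks). A minimal sketch of that step, repeating only the commands shown in this log (bdev name, paths, and arguments are taken from the trace; nothing else is assumed):

  # Trim 1024 blocks at the start of the FTL device ftl0.
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024

  # Trim the last 1024 blocks of the address space
  # (23591936 + 1024 matches the 23592960 L2P entries reported at startup).
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024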
00:20:00.936 19:10:18 ftl.ftl_trim -- ftl/trim.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024 00:20:01.197 [2024-12-05 19:10:18.644843] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:01.197 [2024-12-05 19:10:18.645036] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:20:01.197 [2024-12-05 19:10:18.645107] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.138 ms 00:20:01.197 [2024-12-05 19:10:18.645133] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:01.197 [2024-12-05 19:10:18.645192] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 3.492 ms, result 0 00:20:01.197 true 00:20:01.197 19:10:18 ftl.ftl_trim -- ftl/trim.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024 00:20:01.458 [2024-12-05 19:10:18.867308] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:01.458 [2024-12-05 19:10:18.867503] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:20:01.458 [2024-12-05 19:10:18.867574] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.572 ms 00:20:01.458 [2024-12-05 19:10:18.867602] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:01.458 [2024-12-05 19:10:18.867681] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 1.940 ms, result 0 00:20:01.458 true 00:20:01.458 19:10:18 ftl.ftl_trim -- ftl/trim.sh@102 -- # killprocess 87805 00:20:01.458 19:10:18 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 87805 ']' 00:20:01.458 19:10:18 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 87805 00:20:01.458 19:10:18 ftl.ftl_trim -- common/autotest_common.sh@959 -- # uname 00:20:01.458 19:10:18 ftl.ftl_trim -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:01.458 19:10:18 ftl.ftl_trim -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 87805 00:20:01.458 killing process with pid 87805 00:20:01.459 19:10:18 ftl.ftl_trim -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:01.459 19:10:18 ftl.ftl_trim -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:01.459 19:10:18 ftl.ftl_trim -- common/autotest_common.sh@972 -- # echo 'killing process with pid 87805' 00:20:01.459 19:10:18 ftl.ftl_trim -- common/autotest_common.sh@973 -- # kill 87805 00:20:01.459 19:10:18 ftl.ftl_trim -- common/autotest_common.sh@978 -- # wait 87805 00:20:01.721 [2024-12-05 19:10:19.043714] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:01.721 [2024-12-05 19:10:19.043781] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:20:01.721 [2024-12-05 19:10:19.043801] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:20:01.721 [2024-12-05 19:10:19.043811] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:01.721 [2024-12-05 19:10:19.043840] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:20:01.721 [2024-12-05 19:10:19.044514] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:01.721 [2024-12-05 19:10:19.044556] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:20:01.721 [2024-12-05 19:10:19.044568] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.659 ms 00:20:01.721 [2024-12-05 19:10:19.044579] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:01.721 [2024-12-05 19:10:19.044876] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:01.721 [2024-12-05 19:10:19.044892] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:20:01.721 [2024-12-05 19:10:19.044901] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.271 ms 00:20:01.721 [2024-12-05 19:10:19.044911] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:01.721 [2024-12-05 19:10:19.049511] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:01.721 [2024-12-05 19:10:19.049563] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:20:01.721 [2024-12-05 19:10:19.049573] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.575 ms 00:20:01.721 [2024-12-05 19:10:19.049585] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:01.721 [2024-12-05 19:10:19.056660] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:01.721 [2024-12-05 19:10:19.056705] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:20:01.721 [2024-12-05 19:10:19.056715] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.034 ms 00:20:01.721 [2024-12-05 19:10:19.056727] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:01.721 [2024-12-05 19:10:19.059288] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:01.721 [2024-12-05 19:10:19.059446] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:20:01.721 [2024-12-05 19:10:19.059463] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.469 ms 00:20:01.721 [2024-12-05 19:10:19.059472] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:01.721 [2024-12-05 19:10:19.063739] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:01.721 [2024-12-05 19:10:19.063791] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:20:01.721 [2024-12-05 19:10:19.063801] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.225 ms 00:20:01.721 [2024-12-05 19:10:19.063819] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:01.721 [2024-12-05 19:10:19.063960] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:01.721 [2024-12-05 19:10:19.063973] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:20:01.721 [2024-12-05 19:10:19.063982] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.097 ms 00:20:01.721 [2024-12-05 19:10:19.063992] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:01.721 [2024-12-05 19:10:19.066893] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:01.721 [2024-12-05 19:10:19.067056] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:20:01.721 [2024-12-05 19:10:19.067074] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.878 ms 00:20:01.721 [2024-12-05 19:10:19.067087] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:01.721 [2024-12-05 19:10:19.069637] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:01.721 [2024-12-05 19:10:19.069682] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:20:01.721 [2024-12-05 
19:10:19.069692] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.490 ms 00:20:01.721 [2024-12-05 19:10:19.069702] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:01.721 [2024-12-05 19:10:19.071879] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:01.721 [2024-12-05 19:10:19.071931] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:20:01.721 [2024-12-05 19:10:19.071941] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.131 ms 00:20:01.721 [2024-12-05 19:10:19.071950] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:01.721 [2024-12-05 19:10:19.073834] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:01.721 [2024-12-05 19:10:19.073986] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:20:01.721 [2024-12-05 19:10:19.074002] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.811 ms 00:20:01.721 [2024-12-05 19:10:19.074014] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:01.721 [2024-12-05 19:10:19.074053] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:20:01.721 [2024-12-05 19:10:19.074070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:20:01.721 [2024-12-05 19:10:19.074080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:20:01.721 [2024-12-05 19:10:19.074092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:20:01.721 [2024-12-05 19:10:19.074100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:20:01.721 [2024-12-05 19:10:19.074110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:20:01.721 [2024-12-05 19:10:19.074118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:20:01.721 [2024-12-05 19:10:19.074128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:20:01.722 [2024-12-05 19:10:19.074136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:20:01.722 [2024-12-05 19:10:19.074146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:20:01.722 [2024-12-05 19:10:19.074154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:20:01.722 [2024-12-05 19:10:19.074163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:20:01.722 [2024-12-05 19:10:19.074171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:20:01.722 [2024-12-05 19:10:19.074180] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:20:01.722 [2024-12-05 19:10:19.074187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:20:01.722 [2024-12-05 19:10:19.074204] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:20:01.722 [2024-12-05 19:10:19.074211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:20:01.722 [2024-12-05 19:10:19.074220] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:20:01.722 [2024-12-05 19:10:19.074228] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:20:01.722 [2024-12-05 19:10:19.074239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:20:01.722 [2024-12-05 19:10:19.074246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:20:01.722 [2024-12-05 19:10:19.074277] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:20:01.722 [2024-12-05 19:10:19.074285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:20:01.722 [2024-12-05 19:10:19.074297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:20:01.722 [2024-12-05 19:10:19.074305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:20:01.722 [2024-12-05 19:10:19.074314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:20:01.722 [2024-12-05 19:10:19.074322] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:20:01.722 [2024-12-05 19:10:19.074332] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:20:01.722 [2024-12-05 19:10:19.074341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:20:01.722 [2024-12-05 19:10:19.074351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:20:01.722 [2024-12-05 19:10:19.074359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:20:01.722 [2024-12-05 19:10:19.074370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:20:01.722 [2024-12-05 19:10:19.074378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:20:01.722 [2024-12-05 19:10:19.074388] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:20:01.722 [2024-12-05 19:10:19.074397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:20:01.722 [2024-12-05 19:10:19.074410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:20:01.722 [2024-12-05 19:10:19.074418] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:20:01.722 [2024-12-05 19:10:19.074428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:20:01.722 [2024-12-05 19:10:19.074436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:20:01.722 [2024-12-05 19:10:19.074445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:20:01.722 [2024-12-05 19:10:19.074452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:20:01.722 [2024-12-05 19:10:19.074462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:20:01.722 [2024-12-05 
19:10:19.074470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:20:01.722 [2024-12-05 19:10:19.074479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:20:01.722 [2024-12-05 19:10:19.074487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:20:01.722 [2024-12-05 19:10:19.074496] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:20:01.722 [2024-12-05 19:10:19.074504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:20:01.722 [2024-12-05 19:10:19.074513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:20:01.722 [2024-12-05 19:10:19.074520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:20:01.722 [2024-12-05 19:10:19.074529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:20:01.722 [2024-12-05 19:10:19.074537] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:20:01.722 [2024-12-05 19:10:19.074550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:20:01.722 [2024-12-05 19:10:19.074558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:20:01.722 [2024-12-05 19:10:19.074566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:20:01.722 [2024-12-05 19:10:19.074575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:20:01.722 [2024-12-05 19:10:19.074585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:20:01.722 [2024-12-05 19:10:19.074592] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:20:01.722 [2024-12-05 19:10:19.074602] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:20:01.722 [2024-12-05 19:10:19.074609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:20:01.722 [2024-12-05 19:10:19.074619] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:20:01.722 [2024-12-05 19:10:19.074627] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:20:01.722 [2024-12-05 19:10:19.074636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:20:01.722 [2024-12-05 19:10:19.074644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:20:01.722 [2024-12-05 19:10:19.074653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:20:01.722 [2024-12-05 19:10:19.074660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:20:01.722 [2024-12-05 19:10:19.074671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:20:01.722 [2024-12-05 19:10:19.074681] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 
00:20:01.722 [2024-12-05 19:10:19.074693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:20:01.722 [2024-12-05 19:10:19.074700] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:20:01.722 [2024-12-05 19:10:19.074710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:20:01.722 [2024-12-05 19:10:19.074717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:20:01.722 [2024-12-05 19:10:19.074726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:20:01.722 [2024-12-05 19:10:19.074734] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:20:01.722 [2024-12-05 19:10:19.074743] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:20:01.722 [2024-12-05 19:10:19.074751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:20:01.722 [2024-12-05 19:10:19.074761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:20:01.722 [2024-12-05 19:10:19.074769] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:20:01.722 [2024-12-05 19:10:19.074780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:20:01.722 [2024-12-05 19:10:19.074788] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:20:01.722 [2024-12-05 19:10:19.074797] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:20:01.722 [2024-12-05 19:10:19.074805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:20:01.722 [2024-12-05 19:10:19.074815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:20:01.722 [2024-12-05 19:10:19.074822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:20:01.722 [2024-12-05 19:10:19.074834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:20:01.722 [2024-12-05 19:10:19.074842] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:20:01.722 [2024-12-05 19:10:19.074852] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:20:01.722 [2024-12-05 19:10:19.074860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:20:01.722 [2024-12-05 19:10:19.074869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:20:01.722 [2024-12-05 19:10:19.074877] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:20:01.722 [2024-12-05 19:10:19.074886] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:20:01.722 [2024-12-05 19:10:19.074894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:20:01.723 [2024-12-05 19:10:19.074903] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 
wr_cnt: 0 state: free 00:20:01.723 [2024-12-05 19:10:19.074910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:20:01.723 [2024-12-05 19:10:19.074919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:20:01.723 [2024-12-05 19:10:19.074927] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:20:01.723 [2024-12-05 19:10:19.074936] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:20:01.723 [2024-12-05 19:10:19.074943] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:20:01.723 [2024-12-05 19:10:19.074954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:20:01.723 [2024-12-05 19:10:19.074962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:20:01.723 [2024-12-05 19:10:19.074973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:20:01.723 [2024-12-05 19:10:19.074981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:20:01.723 [2024-12-05 19:10:19.074999] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:20:01.723 [2024-12-05 19:10:19.075007] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 42d50fa2-6cab-4623-bd3d-509e916d40ae 00:20:01.723 [2024-12-05 19:10:19.075018] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:20:01.723 [2024-12-05 19:10:19.075029] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:20:01.723 [2024-12-05 19:10:19.075038] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:20:01.723 [2024-12-05 19:10:19.075046] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:20:01.723 [2024-12-05 19:10:19.075055] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:20:01.723 [2024-12-05 19:10:19.075067] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:20:01.723 [2024-12-05 19:10:19.075077] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:20:01.723 [2024-12-05 19:10:19.075083] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:20:01.723 [2024-12-05 19:10:19.075091] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:20:01.723 [2024-12-05 19:10:19.075099] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:01.723 [2024-12-05 19:10:19.075108] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:20:01.723 [2024-12-05 19:10:19.075116] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.047 ms 00:20:01.723 [2024-12-05 19:10:19.075127] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:01.723 [2024-12-05 19:10:19.077384] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:01.723 [2024-12-05 19:10:19.077420] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:20:01.723 [2024-12-05 19:10:19.077431] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.237 ms 00:20:01.723 [2024-12-05 19:10:19.077442] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:01.723 [2024-12-05 19:10:19.077622] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:20:01.723 [2024-12-05 19:10:19.077637] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:20:01.723 [2024-12-05 19:10:19.077648] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.129 ms 00:20:01.723 [2024-12-05 19:10:19.077658] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:01.723 [2024-12-05 19:10:19.085471] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:01.723 [2024-12-05 19:10:19.085533] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:01.723 [2024-12-05 19:10:19.085546] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:01.723 [2024-12-05 19:10:19.085556] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:01.723 [2024-12-05 19:10:19.085635] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:01.723 [2024-12-05 19:10:19.085647] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:01.723 [2024-12-05 19:10:19.085659] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:01.723 [2024-12-05 19:10:19.085672] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:01.723 [2024-12-05 19:10:19.085726] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:01.723 [2024-12-05 19:10:19.085737] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:01.723 [2024-12-05 19:10:19.085746] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:01.723 [2024-12-05 19:10:19.085756] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:01.723 [2024-12-05 19:10:19.085774] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:01.723 [2024-12-05 19:10:19.085785] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:01.723 [2024-12-05 19:10:19.085792] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:01.723 [2024-12-05 19:10:19.085802] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:01.723 [2024-12-05 19:10:19.099842] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:01.723 [2024-12-05 19:10:19.099897] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:01.723 [2024-12-05 19:10:19.099912] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:01.723 [2024-12-05 19:10:19.099930] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:01.723 [2024-12-05 19:10:19.110103] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:01.723 [2024-12-05 19:10:19.110160] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:01.723 [2024-12-05 19:10:19.110171] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:01.723 [2024-12-05 19:10:19.110184] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:01.723 [2024-12-05 19:10:19.110274] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:01.723 [2024-12-05 19:10:19.110296] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:01.723 [2024-12-05 19:10:19.110305] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:01.723 [2024-12-05 19:10:19.110316] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:20:01.723 [2024-12-05 19:10:19.110351] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:01.723 [2024-12-05 19:10:19.110362] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:01.723 [2024-12-05 19:10:19.110371] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:01.723 [2024-12-05 19:10:19.110384] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:01.723 [2024-12-05 19:10:19.110457] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:01.723 [2024-12-05 19:10:19.110472] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:01.723 [2024-12-05 19:10:19.110481] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:01.723 [2024-12-05 19:10:19.110491] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:01.723 [2024-12-05 19:10:19.110526] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:01.723 [2024-12-05 19:10:19.110538] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:20:01.723 [2024-12-05 19:10:19.110545] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:01.723 [2024-12-05 19:10:19.110558] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:01.723 [2024-12-05 19:10:19.110600] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:01.723 [2024-12-05 19:10:19.110612] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:01.723 [2024-12-05 19:10:19.110623] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:01.723 [2024-12-05 19:10:19.110633] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:01.723 [2024-12-05 19:10:19.110681] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:01.723 [2024-12-05 19:10:19.110694] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:01.723 [2024-12-05 19:10:19.110702] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:01.723 [2024-12-05 19:10:19.110714] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:01.723 [2024-12-05 19:10:19.110865] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 67.124 ms, result 0 00:20:01.984 19:10:19 ftl.ftl_trim -- ftl/trim.sh@105 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:20:01.984 [2024-12-05 19:10:19.413486] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 22.11.4 initialization... 
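At this point the spdk_tgt process has been killed and the FTL shutdown has completed, so trim.sh@105 re-creates ftl0 inside a standalone spdk_dd process (hence the fresh 'Starting SPDK ... initialization' banner and the second FTL startup sequence that follows) and reads 65536 blocks out of the device into a flat file. A minimal sketch of that step, repeating only the command shown in the trace above; the read-back data is presumably verified against the trimmed ranges later in the test, which falls outside this excerpt:

  # Bring up SPDK from the saved JSON config (which re-creates ftl0)
  # and copy 65536 blocks from the FTL bdev into a plain file.
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 \
      --of=/home/vagrant/spdk_repo/spdk/test/ftl/data \
      --count=65536 \
      --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json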
00:20:01.985 [2024-12-05 19:10:19.413889] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87848 ] 00:20:02.247 [2024-12-05 19:10:19.560580] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:02.247 [2024-12-05 19:10:19.588728] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:02.247 [2024-12-05 19:10:19.704922] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:02.247 [2024-12-05 19:10:19.705020] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:02.510 [2024-12-05 19:10:19.865242] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:02.510 [2024-12-05 19:10:19.865478] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:20:02.510 [2024-12-05 19:10:19.865502] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:20:02.510 [2024-12-05 19:10:19.865512] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:02.510 [2024-12-05 19:10:19.868064] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:02.510 [2024-12-05 19:10:19.868117] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:02.510 [2024-12-05 19:10:19.868128] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.497 ms 00:20:02.510 [2024-12-05 19:10:19.868136] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:02.510 [2024-12-05 19:10:19.868271] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:20:02.510 [2024-12-05 19:10:19.868541] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:20:02.510 [2024-12-05 19:10:19.868561] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:02.510 [2024-12-05 19:10:19.868570] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:02.510 [2024-12-05 19:10:19.868580] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.329 ms 00:20:02.510 [2024-12-05 19:10:19.868588] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:02.510 [2024-12-05 19:10:19.870812] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:20:02.510 [2024-12-05 19:10:19.874449] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:02.510 [2024-12-05 19:10:19.874506] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:20:02.510 [2024-12-05 19:10:19.874522] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.641 ms 00:20:02.510 [2024-12-05 19:10:19.874531] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:02.510 [2024-12-05 19:10:19.874630] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:02.510 [2024-12-05 19:10:19.874646] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:20:02.510 [2024-12-05 19:10:19.874656] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:20:02.510 [2024-12-05 19:10:19.874664] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:02.510 [2024-12-05 19:10:19.882517] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:20:02.510 [2024-12-05 19:10:19.882557] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:02.510 [2024-12-05 19:10:19.882567] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.806 ms 00:20:02.510 [2024-12-05 19:10:19.882576] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:02.510 [2024-12-05 19:10:19.882714] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:02.510 [2024-12-05 19:10:19.882733] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:02.510 [2024-12-05 19:10:19.882742] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.074 ms 00:20:02.510 [2024-12-05 19:10:19.882754] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:02.510 [2024-12-05 19:10:19.882785] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:02.510 [2024-12-05 19:10:19.882794] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:20:02.510 [2024-12-05 19:10:19.882806] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:20:02.510 [2024-12-05 19:10:19.882813] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:02.510 [2024-12-05 19:10:19.882837] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:20:02.510 [2024-12-05 19:10:19.884829] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:02.510 [2024-12-05 19:10:19.884999] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:02.510 [2024-12-05 19:10:19.885017] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.000 ms 00:20:02.510 [2024-12-05 19:10:19.885031] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:02.510 [2024-12-05 19:10:19.885079] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:02.510 [2024-12-05 19:10:19.885092] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:20:02.510 [2024-12-05 19:10:19.885101] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:20:02.510 [2024-12-05 19:10:19.885108] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:02.510 [2024-12-05 19:10:19.885126] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:20:02.510 [2024-12-05 19:10:19.885148] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:20:02.510 [2024-12-05 19:10:19.885190] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:20:02.510 [2024-12-05 19:10:19.885208] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:20:02.511 [2024-12-05 19:10:19.885334] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:20:02.511 [2024-12-05 19:10:19.885347] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:20:02.511 [2024-12-05 19:10:19.885358] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:20:02.511 [2024-12-05 19:10:19.885369] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:20:02.511 [2024-12-05 19:10:19.885378] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:20:02.511 [2024-12-05 19:10:19.885386] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:20:02.511 [2024-12-05 19:10:19.885394] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:20:02.511 [2024-12-05 19:10:19.885406] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:20:02.511 [2024-12-05 19:10:19.885413] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:20:02.511 [2024-12-05 19:10:19.885427] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:02.511 [2024-12-05 19:10:19.885441] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:20:02.511 [2024-12-05 19:10:19.885449] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.301 ms 00:20:02.511 [2024-12-05 19:10:19.885456] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:02.511 [2024-12-05 19:10:19.885558] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:02.511 [2024-12-05 19:10:19.885569] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:20:02.511 [2024-12-05 19:10:19.885578] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.082 ms 00:20:02.511 [2024-12-05 19:10:19.885591] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:02.511 [2024-12-05 19:10:19.885693] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:20:02.511 [2024-12-05 19:10:19.885707] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:20:02.511 [2024-12-05 19:10:19.885716] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:02.511 [2024-12-05 19:10:19.885725] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:02.511 [2024-12-05 19:10:19.885740] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:20:02.511 [2024-12-05 19:10:19.885748] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:20:02.511 [2024-12-05 19:10:19.885757] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:20:02.511 [2024-12-05 19:10:19.885768] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:20:02.511 [2024-12-05 19:10:19.885777] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:20:02.511 [2024-12-05 19:10:19.885785] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:02.511 [2024-12-05 19:10:19.885793] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:20:02.511 [2024-12-05 19:10:19.885802] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:20:02.511 [2024-12-05 19:10:19.885809] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:02.511 [2024-12-05 19:10:19.885817] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:20:02.511 [2024-12-05 19:10:19.885825] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:20:02.511 [2024-12-05 19:10:19.885835] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:02.511 [2024-12-05 19:10:19.885843] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:20:02.511 [2024-12-05 19:10:19.885851] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:20:02.511 [2024-12-05 19:10:19.885859] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:02.511 [2024-12-05 19:10:19.885866] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:20:02.511 [2024-12-05 19:10:19.885874] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:20:02.511 [2024-12-05 19:10:19.885882] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:02.511 [2024-12-05 19:10:19.885889] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:20:02.511 [2024-12-05 19:10:19.885902] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:20:02.511 [2024-12-05 19:10:19.885910] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:02.511 [2024-12-05 19:10:19.885917] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:20:02.511 [2024-12-05 19:10:19.885924] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:20:02.511 [2024-12-05 19:10:19.885932] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:02.511 [2024-12-05 19:10:19.885940] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:20:02.511 [2024-12-05 19:10:19.885947] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:20:02.511 [2024-12-05 19:10:19.885954] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:02.511 [2024-12-05 19:10:19.885960] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:20:02.511 [2024-12-05 19:10:19.885967] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:20:02.511 [2024-12-05 19:10:19.885973] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:02.511 [2024-12-05 19:10:19.885980] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:20:02.511 [2024-12-05 19:10:19.885986] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:20:02.511 [2024-12-05 19:10:19.885993] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:02.511 [2024-12-05 19:10:19.885999] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:20:02.511 [2024-12-05 19:10:19.886005] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:20:02.511 [2024-12-05 19:10:19.886015] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:02.511 [2024-12-05 19:10:19.886022] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:20:02.511 [2024-12-05 19:10:19.886028] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:20:02.511 [2024-12-05 19:10:19.886035] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:02.511 [2024-12-05 19:10:19.886041] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:20:02.511 [2024-12-05 19:10:19.886049] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:20:02.511 [2024-12-05 19:10:19.886057] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:02.511 [2024-12-05 19:10:19.886064] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:02.511 [2024-12-05 19:10:19.886073] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:20:02.511 [2024-12-05 19:10:19.886080] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:20:02.511 [2024-12-05 19:10:19.886087] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:20:02.511 
[2024-12-05 19:10:19.886093] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:20:02.511 [2024-12-05 19:10:19.886099] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:20:02.511 [2024-12-05 19:10:19.886107] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:20:02.511 [2024-12-05 19:10:19.886115] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:20:02.511 [2024-12-05 19:10:19.886125] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:02.511 [2024-12-05 19:10:19.886135] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:20:02.511 [2024-12-05 19:10:19.886143] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:20:02.511 [2024-12-05 19:10:19.886150] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:20:02.511 [2024-12-05 19:10:19.886157] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:20:02.511 [2024-12-05 19:10:19.886164] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:20:02.511 [2024-12-05 19:10:19.886171] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:20:02.511 [2024-12-05 19:10:19.886178] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:20:02.511 [2024-12-05 19:10:19.886191] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:20:02.511 [2024-12-05 19:10:19.886199] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:20:02.511 [2024-12-05 19:10:19.886206] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:20:02.511 [2024-12-05 19:10:19.886213] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:20:02.511 [2024-12-05 19:10:19.886220] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:20:02.511 [2024-12-05 19:10:19.886227] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:20:02.511 [2024-12-05 19:10:19.886234] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:20:02.511 [2024-12-05 19:10:19.886241] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:20:02.511 [2024-12-05 19:10:19.886267] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:02.511 [2024-12-05 19:10:19.886278] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:20:02.511 [2024-12-05 19:10:19.886285] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:20:02.511 [2024-12-05 19:10:19.886293] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:20:02.511 [2024-12-05 19:10:19.886301] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:20:02.511 [2024-12-05 19:10:19.886309] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:02.512 [2024-12-05 19:10:19.886317] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:20:02.512 [2024-12-05 19:10:19.886328] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.686 ms 00:20:02.512 [2024-12-05 19:10:19.886335] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:02.512 [2024-12-05 19:10:19.900163] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:02.512 [2024-12-05 19:10:19.900342] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:02.512 [2024-12-05 19:10:19.900508] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.773 ms 00:20:02.512 [2024-12-05 19:10:19.900549] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:02.512 [2024-12-05 19:10:19.900702] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:02.512 [2024-12-05 19:10:19.900744] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:20:02.512 [2024-12-05 19:10:19.900826] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:20:02.512 [2024-12-05 19:10:19.900851] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:02.512 [2024-12-05 19:10:19.924542] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:02.512 [2024-12-05 19:10:19.924753] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:02.512 [2024-12-05 19:10:19.924831] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.651 ms 00:20:02.512 [2024-12-05 19:10:19.924862] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:02.512 [2024-12-05 19:10:19.925001] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:02.512 [2024-12-05 19:10:19.925110] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:02.512 [2024-12-05 19:10:19.925143] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:20:02.512 [2024-12-05 19:10:19.925168] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:02.512 [2024-12-05 19:10:19.925798] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:02.512 [2024-12-05 19:10:19.926061] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:02.512 [2024-12-05 19:10:19.926145] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.549 ms 00:20:02.512 [2024-12-05 19:10:19.926179] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:02.512 [2024-12-05 19:10:19.926769] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:02.512 [2024-12-05 19:10:19.926940] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:02.512 [2024-12-05 19:10:19.927030] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.157 ms 00:20:02.512 [2024-12-05 19:10:19.927063] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:02.512 [2024-12-05 19:10:19.936059] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:02.512 [2024-12-05 19:10:19.936226] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:02.512 [2024-12-05 19:10:19.936316] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.569 ms 00:20:02.512 [2024-12-05 19:10:19.936348] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:02.512 [2024-12-05 19:10:19.940118] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:20:02.512 [2024-12-05 19:10:19.940303] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:20:02.512 [2024-12-05 19:10:19.940369] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:02.512 [2024-12-05 19:10:19.940390] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:20:02.512 [2024-12-05 19:10:19.940410] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.866 ms 00:20:02.512 [2024-12-05 19:10:19.940430] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:02.512 [2024-12-05 19:10:19.955985] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:02.512 [2024-12-05 19:10:19.956143] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:20:02.512 [2024-12-05 19:10:19.956201] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.479 ms 00:20:02.512 [2024-12-05 19:10:19.956223] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:02.512 [2024-12-05 19:10:19.959467] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:02.512 [2024-12-05 19:10:19.959641] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:20:02.512 [2024-12-05 19:10:19.959704] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.810 ms 00:20:02.512 [2024-12-05 19:10:19.959729] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:02.512 [2024-12-05 19:10:19.962396] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:02.512 [2024-12-05 19:10:19.962560] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:20:02.512 [2024-12-05 19:10:19.962620] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.535 ms 00:20:02.512 [2024-12-05 19:10:19.962641] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:02.512 [2024-12-05 19:10:19.963082] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:02.512 [2024-12-05 19:10:19.963322] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:20:02.512 [2024-12-05 19:10:19.963343] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.275 ms 00:20:02.512 [2024-12-05 19:10:19.963351] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:02.512 [2024-12-05 19:10:19.986140] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:02.512 [2024-12-05 19:10:19.986197] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:20:02.512 [2024-12-05 19:10:19.986211] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
22.748 ms 00:20:02.512 [2024-12-05 19:10:19.986219] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:02.512 [2024-12-05 19:10:19.994557] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:20:02.512 [2024-12-05 19:10:20.014992] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:02.512 [2024-12-05 19:10:20.015054] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:20:02.512 [2024-12-05 19:10:20.015068] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.650 ms 00:20:02.512 [2024-12-05 19:10:20.015077] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:02.512 [2024-12-05 19:10:20.015173] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:02.512 [2024-12-05 19:10:20.015185] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:20:02.512 [2024-12-05 19:10:20.015202] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:20:02.512 [2024-12-05 19:10:20.015210] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:02.512 [2024-12-05 19:10:20.015294] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:02.512 [2024-12-05 19:10:20.015306] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:20:02.512 [2024-12-05 19:10:20.015315] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 00:20:02.512 [2024-12-05 19:10:20.015323] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:02.512 [2024-12-05 19:10:20.015351] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:02.512 [2024-12-05 19:10:20.015361] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:20:02.512 [2024-12-05 19:10:20.015369] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:20:02.512 [2024-12-05 19:10:20.015384] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:02.512 [2024-12-05 19:10:20.015420] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:20:02.512 [2024-12-05 19:10:20.015434] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:02.512 [2024-12-05 19:10:20.015443] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:20:02.512 [2024-12-05 19:10:20.015452] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:20:02.512 [2024-12-05 19:10:20.015460] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:02.512 [2024-12-05 19:10:20.021114] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:02.512 [2024-12-05 19:10:20.021303] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:20:02.512 [2024-12-05 19:10:20.021322] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.630 ms 00:20:02.512 [2024-12-05 19:10:20.021331] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:02.512 [2024-12-05 19:10:20.021424] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:02.512 [2024-12-05 19:10:20.021434] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:20:02.512 [2024-12-05 19:10:20.021444] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:20:02.512 [2024-12-05 19:10:20.021452] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:02.512 
[2024-12-05 19:10:20.022522] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:02.512 [2024-12-05 19:10:20.023828] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 156.926 ms, result 0 00:20:02.512 [2024-12-05 19:10:20.025180] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:02.512 [2024-12-05 19:10:20.032487] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:03.895  [2024-12-05T19:10:22.399Z] Copying: 14/256 [MB] (14 MBps) [2024-12-05T19:10:23.344Z] Copying: 24/256 [MB] (10 MBps) [2024-12-05T19:10:24.343Z] Copying: 35/256 [MB] (11 MBps) [2024-12-05T19:10:25.287Z] Copying: 46/256 [MB] (10 MBps) [2024-12-05T19:10:26.228Z] Copying: 56/256 [MB] (10 MBps) [2024-12-05T19:10:27.171Z] Copying: 76/256 [MB] (20 MBps) [2024-12-05T19:10:28.113Z] Copying: 93/256 [MB] (16 MBps) [2024-12-05T19:10:29.499Z] Copying: 116/256 [MB] (23 MBps) [2024-12-05T19:10:30.441Z] Copying: 131/256 [MB] (14 MBps) [2024-12-05T19:10:31.382Z] Copying: 152/256 [MB] (21 MBps) [2024-12-05T19:10:32.322Z] Copying: 164/256 [MB] (11 MBps) [2024-12-05T19:10:33.264Z] Copying: 174/256 [MB] (10 MBps) [2024-12-05T19:10:34.201Z] Copying: 192/256 [MB] (17 MBps) [2024-12-05T19:10:35.140Z] Copying: 210/256 [MB] (18 MBps) [2024-12-05T19:10:36.520Z] Copying: 230/256 [MB] (20 MBps) [2024-12-05T19:10:36.781Z] Copying: 244/256 [MB] (13 MBps) [2024-12-05T19:10:37.044Z] Copying: 256/256 [MB] (average 15 MBps)[2024-12-05 19:10:36.926729] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:19.485 [2024-12-05 19:10:36.928719] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.485 [2024-12-05 19:10:36.928773] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:20:19.485 [2024-12-05 19:10:36.928790] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:20:19.485 [2024-12-05 19:10:36.928799] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.485 [2024-12-05 19:10:36.928833] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:20:19.485 [2024-12-05 19:10:36.929565] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.485 [2024-12-05 19:10:36.929599] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:20:19.485 [2024-12-05 19:10:36.929612] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.717 ms 00:20:19.485 [2024-12-05 19:10:36.929622] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.485 [2024-12-05 19:10:36.929922] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.485 [2024-12-05 19:10:36.929951] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:20:19.485 [2024-12-05 19:10:36.929966] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.269 ms 00:20:19.485 [2024-12-05 19:10:36.929975] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.485 [2024-12-05 19:10:36.933704] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.485 [2024-12-05 19:10:36.933726] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:20:19.485 [2024-12-05 19:10:36.933742] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.712 ms 00:20:19.485 [2024-12-05 19:10:36.933750] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.485 [2024-12-05 19:10:36.940808] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.485 [2024-12-05 19:10:36.940847] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:20:19.485 [2024-12-05 19:10:36.940858] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.037 ms 00:20:19.485 [2024-12-05 19:10:36.940878] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.485 [2024-12-05 19:10:36.943772] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.485 [2024-12-05 19:10:36.943975] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:20:19.485 [2024-12-05 19:10:36.943994] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.810 ms 00:20:19.485 [2024-12-05 19:10:36.944003] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.485 [2024-12-05 19:10:36.949084] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.485 [2024-12-05 19:10:36.949138] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:20:19.485 [2024-12-05 19:10:36.949149] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.035 ms 00:20:19.485 [2024-12-05 19:10:36.949157] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.485 [2024-12-05 19:10:36.949322] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.485 [2024-12-05 19:10:36.949335] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:20:19.485 [2024-12-05 19:10:36.949349] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.118 ms 00:20:19.485 [2024-12-05 19:10:36.949357] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.485 [2024-12-05 19:10:36.953065] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.485 [2024-12-05 19:10:36.953128] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:20:19.485 [2024-12-05 19:10:36.953141] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.686 ms 00:20:19.485 [2024-12-05 19:10:36.953150] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.485 [2024-12-05 19:10:36.955927] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.485 [2024-12-05 19:10:36.955975] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:20:19.485 [2024-12-05 19:10:36.955985] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.727 ms 00:20:19.486 [2024-12-05 19:10:36.955992] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.486 [2024-12-05 19:10:36.958444] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.486 [2024-12-05 19:10:36.958611] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:20:19.486 [2024-12-05 19:10:36.958629] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.404 ms 00:20:19.486 [2024-12-05 19:10:36.958637] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.486 [2024-12-05 19:10:36.961066] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.486 [2024-12-05 19:10:36.961116] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean 
state 00:20:19.486 [2024-12-05 19:10:36.961127] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.275 ms 00:20:19.486 [2024-12-05 19:10:36.961134] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.486 [2024-12-05 19:10:36.961192] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:20:19.486 [2024-12-05 19:10:36.961208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:20:19.486 [2024-12-05 19:10:36.961218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:20:19.486 [2024-12-05 19:10:36.961227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:20:19.486 [2024-12-05 19:10:36.961235] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:20:19.486 [2024-12-05 19:10:36.961243] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:20:19.486 [2024-12-05 19:10:36.961270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:20:19.486 [2024-12-05 19:10:36.961279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:20:19.486 [2024-12-05 19:10:36.961288] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:20:19.486 [2024-12-05 19:10:36.961296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:20:19.486 [2024-12-05 19:10:36.961304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:20:19.486 [2024-12-05 19:10:36.961312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:20:19.486 [2024-12-05 19:10:36.961321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:20:19.486 [2024-12-05 19:10:36.961329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:20:19.486 [2024-12-05 19:10:36.961337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:20:19.486 [2024-12-05 19:10:36.961345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:20:19.486 [2024-12-05 19:10:36.961353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:20:19.486 [2024-12-05 19:10:36.961361] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:20:19.486 [2024-12-05 19:10:36.961369] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:20:19.486 [2024-12-05 19:10:36.961377] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:20:19.486 [2024-12-05 19:10:36.961385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:20:19.486 [2024-12-05 19:10:36.961393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:20:19.486 [2024-12-05 19:10:36.961400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:20:19.486 [2024-12-05 19:10:36.961408] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:20:19.486 [2024-12-05 19:10:36.961416] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:20:19.486 [2024-12-05 19:10:36.961424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:20:19.486 [2024-12-05 19:10:36.961431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:20:19.486 [2024-12-05 19:10:36.961458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:20:19.486 [2024-12-05 19:10:36.961466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:20:19.486 [2024-12-05 19:10:36.961474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:20:19.486 [2024-12-05 19:10:36.961493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:20:19.486 [2024-12-05 19:10:36.961501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:20:19.486 [2024-12-05 19:10:36.961511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:20:19.486 [2024-12-05 19:10:36.961520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:20:19.486 [2024-12-05 19:10:36.961543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:20:19.486 [2024-12-05 19:10:36.961552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:20:19.486 [2024-12-05 19:10:36.961560] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:20:19.486 [2024-12-05 19:10:36.961568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:20:19.486 [2024-12-05 19:10:36.961576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:20:19.486 [2024-12-05 19:10:36.961585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:20:19.486 [2024-12-05 19:10:36.961594] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:20:19.486 [2024-12-05 19:10:36.961602] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:20:19.486 [2024-12-05 19:10:36.961610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:20:19.486 [2024-12-05 19:10:36.961618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:20:19.486 [2024-12-05 19:10:36.961625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:20:19.486 [2024-12-05 19:10:36.961634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:20:19.486 [2024-12-05 19:10:36.961642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:20:19.486 [2024-12-05 19:10:36.961650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:20:19.486 [2024-12-05 
19:10:36.961658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:20:19.486 [2024-12-05 19:10:36.961665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:20:19.486 [2024-12-05 19:10:36.961673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:20:19.486 [2024-12-05 19:10:36.961681] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:20:19.486 [2024-12-05 19:10:36.961689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:20:19.486 [2024-12-05 19:10:36.961697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:20:19.486 [2024-12-05 19:10:36.961704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:20:19.486 [2024-12-05 19:10:36.961713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:20:19.486 [2024-12-05 19:10:36.961722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:20:19.486 [2024-12-05 19:10:36.961729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:20:19.486 [2024-12-05 19:10:36.961737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:20:19.486 [2024-12-05 19:10:36.961744] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:20:19.486 [2024-12-05 19:10:36.961751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:20:19.486 [2024-12-05 19:10:36.961759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:20:19.486 [2024-12-05 19:10:36.961767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:20:19.486 [2024-12-05 19:10:36.961774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:20:19.486 [2024-12-05 19:10:36.961784] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:20:19.486 [2024-12-05 19:10:36.961791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:20:19.486 [2024-12-05 19:10:36.961800] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:20:19.486 [2024-12-05 19:10:36.961808] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:20:19.486 [2024-12-05 19:10:36.961815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:20:19.487 [2024-12-05 19:10:36.961823] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:20:19.487 [2024-12-05 19:10:36.961830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:20:19.487 [2024-12-05 19:10:36.961838] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:20:19.487 [2024-12-05 19:10:36.961846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 
00:20:19.487 [2024-12-05 19:10:36.961854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:20:19.487 [2024-12-05 19:10:36.961861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:20:19.487 [2024-12-05 19:10:36.961868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:20:19.487 [2024-12-05 19:10:36.961876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:20:19.487 [2024-12-05 19:10:36.961883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:20:19.487 [2024-12-05 19:10:36.961891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:20:19.487 [2024-12-05 19:10:36.961898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:20:19.487 [2024-12-05 19:10:36.961906] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:20:19.487 [2024-12-05 19:10:36.961914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:20:19.487 [2024-12-05 19:10:36.961921] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:20:19.487 [2024-12-05 19:10:36.961928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:20:19.487 [2024-12-05 19:10:36.961935] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:20:19.487 [2024-12-05 19:10:36.961944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:20:19.487 [2024-12-05 19:10:36.961951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:20:19.487 [2024-12-05 19:10:36.961959] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:20:19.487 [2024-12-05 19:10:36.961967] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:20:19.487 [2024-12-05 19:10:36.961974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:20:19.487 [2024-12-05 19:10:36.961981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:20:19.487 [2024-12-05 19:10:36.961988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:20:19.487 [2024-12-05 19:10:36.961996] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:20:19.487 [2024-12-05 19:10:36.962004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:20:19.487 [2024-12-05 19:10:36.962011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:20:19.487 [2024-12-05 19:10:36.962018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:20:19.487 [2024-12-05 19:10:36.962027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:20:19.487 [2024-12-05 19:10:36.962034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 
wr_cnt: 0 state: free 00:20:19.487 [2024-12-05 19:10:36.962042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:20:19.487 [2024-12-05 19:10:36.962050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:20:19.487 [2024-12-05 19:10:36.962058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:20:19.487 [2024-12-05 19:10:36.962075] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:20:19.487 [2024-12-05 19:10:36.962083] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 42d50fa2-6cab-4623-bd3d-509e916d40ae 00:20:19.487 [2024-12-05 19:10:36.962091] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:20:19.487 [2024-12-05 19:10:36.962099] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:20:19.487 [2024-12-05 19:10:36.962106] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:20:19.487 [2024-12-05 19:10:36.962114] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:20:19.487 [2024-12-05 19:10:36.962122] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:20:19.487 [2024-12-05 19:10:36.962136] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:20:19.487 [2024-12-05 19:10:36.962148] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:20:19.487 [2024-12-05 19:10:36.962155] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:20:19.487 [2024-12-05 19:10:36.962162] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:20:19.487 [2024-12-05 19:10:36.962169] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.487 [2024-12-05 19:10:36.962178] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:20:19.487 [2024-12-05 19:10:36.962188] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.979 ms 00:20:19.487 [2024-12-05 19:10:36.962196] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.487 [2024-12-05 19:10:36.964708] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.487 [2024-12-05 19:10:36.964850] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:20:19.487 [2024-12-05 19:10:36.964912] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.490 ms 00:20:19.487 [2024-12-05 19:10:36.964946] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.487 [2024-12-05 19:10:36.965083] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.487 [2024-12-05 19:10:36.965303] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:20:19.487 [2024-12-05 19:10:36.965333] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.096 ms 00:20:19.487 [2024-12-05 19:10:36.965354] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.487 [2024-12-05 19:10:36.973137] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:19.487 [2024-12-05 19:10:36.973310] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:19.487 [2024-12-05 19:10:36.973367] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:19.487 [2024-12-05 19:10:36.973398] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.487 [2024-12-05 
19:10:36.973500] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:19.487 [2024-12-05 19:10:36.973537] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:19.487 [2024-12-05 19:10:36.973565] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:19.487 [2024-12-05 19:10:36.973584] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.487 [2024-12-05 19:10:36.973707] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:19.487 [2024-12-05 19:10:36.973736] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:19.487 [2024-12-05 19:10:36.973758] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:19.487 [2024-12-05 19:10:36.973778] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.487 [2024-12-05 19:10:36.973815] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:19.487 [2024-12-05 19:10:36.973838] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:19.487 [2024-12-05 19:10:36.973858] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:19.487 [2024-12-05 19:10:36.973921] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.487 [2024-12-05 19:10:36.987995] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:19.487 [2024-12-05 19:10:36.988153] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:19.487 [2024-12-05 19:10:36.988208] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:19.487 [2024-12-05 19:10:36.988239] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.487 [2024-12-05 19:10:36.998855] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:19.487 [2024-12-05 19:10:36.999012] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:19.487 [2024-12-05 19:10:36.999067] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:19.487 [2024-12-05 19:10:36.999091] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.487 [2024-12-05 19:10:36.999156] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:19.487 [2024-12-05 19:10:36.999178] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:19.487 [2024-12-05 19:10:36.999199] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:19.487 [2024-12-05 19:10:36.999219] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.487 [2024-12-05 19:10:36.999280] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:19.487 [2024-12-05 19:10:36.999310] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:19.487 [2024-12-05 19:10:36.999331] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:19.488 [2024-12-05 19:10:36.999393] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.488 [2024-12-05 19:10:36.999499] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:19.488 [2024-12-05 19:10:36.999525] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:19.488 [2024-12-05 19:10:36.999598] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:19.488 [2024-12-05 19:10:36.999621] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.488 [2024-12-05 19:10:36.999675] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:19.488 [2024-12-05 19:10:36.999703] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:20:19.488 [2024-12-05 19:10:36.999759] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:19.488 [2024-12-05 19:10:36.999782] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.488 [2024-12-05 19:10:36.999840] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:19.488 [2024-12-05 19:10:36.999862] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:19.488 [2024-12-05 19:10:36.999889] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:19.488 [2024-12-05 19:10:36.999914] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.488 [2024-12-05 19:10:36.999975] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:19.488 [2024-12-05 19:10:37.000004] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:19.488 [2024-12-05 19:10:37.000025] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:19.488 [2024-12-05 19:10:37.000044] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.488 [2024-12-05 19:10:37.000304] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 71.533 ms, result 0 00:20:19.749 00:20:19.749 00:20:19.749 19:10:37 ftl.ftl_trim -- ftl/trim.sh@106 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:20:20.320 /home/vagrant/spdk_repo/spdk/test/ftl/data: OK 00:20:20.320 19:10:37 ftl.ftl_trim -- ftl/trim.sh@108 -- # trap - SIGINT SIGTERM EXIT 00:20:20.320 19:10:37 ftl.ftl_trim -- ftl/trim.sh@109 -- # fio_kill 00:20:20.320 19:10:37 ftl.ftl_trim -- ftl/trim.sh@15 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:20:20.320 19:10:37 ftl.ftl_trim -- ftl/trim.sh@16 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:20:20.320 19:10:37 ftl.ftl_trim -- ftl/trim.sh@17 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/random_pattern 00:20:20.320 19:10:37 ftl.ftl_trim -- ftl/trim.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/data 00:20:20.320 Process with pid 87805 is not found 00:20:20.320 19:10:37 ftl.ftl_trim -- ftl/trim.sh@20 -- # killprocess 87805 00:20:20.320 19:10:37 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 87805 ']' 00:20:20.320 19:10:37 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 87805 00:20:20.320 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (87805) - No such process 00:20:20.320 19:10:37 ftl.ftl_trim -- common/autotest_common.sh@981 -- # echo 'Process with pid 87805 is not found' 00:20:20.320 00:20:20.320 real 1m15.424s 00:20:20.320 user 1m38.840s 00:20:20.320 sys 0m5.211s 00:20:20.320 19:10:37 ftl.ftl_trim -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:20.320 19:10:37 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:20:20.320 ************************************ 00:20:20.320 END TEST ftl_trim 00:20:20.320 ************************************ 00:20:20.583 19:10:37 ftl -- ftl/ftl.sh@76 -- # run_test ftl_restore /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0 00:20:20.583 19:10:37 ftl -- common/autotest_common.sh@1105 -- # '[' 5 
-le 1 ']' 00:20:20.583 19:10:37 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:20.583 19:10:37 ftl -- common/autotest_common.sh@10 -- # set +x 00:20:20.583 ************************************ 00:20:20.583 START TEST ftl_restore 00:20:20.583 ************************************ 00:20:20.583 19:10:37 ftl.ftl_restore -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0 00:20:20.583 * Looking for test storage... 00:20:20.583 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:20:20.583 19:10:38 ftl.ftl_restore -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:20:20.583 19:10:38 ftl.ftl_restore -- common/autotest_common.sh@1711 -- # lcov --version 00:20:20.583 19:10:38 ftl.ftl_restore -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:20:20.583 19:10:38 ftl.ftl_restore -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:20:20.583 19:10:38 ftl.ftl_restore -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:20.583 19:10:38 ftl.ftl_restore -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:20.583 19:10:38 ftl.ftl_restore -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:20.583 19:10:38 ftl.ftl_restore -- scripts/common.sh@336 -- # IFS=.-: 00:20:20.583 19:10:38 ftl.ftl_restore -- scripts/common.sh@336 -- # read -ra ver1 00:20:20.583 19:10:38 ftl.ftl_restore -- scripts/common.sh@337 -- # IFS=.-: 00:20:20.583 19:10:38 ftl.ftl_restore -- scripts/common.sh@337 -- # read -ra ver2 00:20:20.583 19:10:38 ftl.ftl_restore -- scripts/common.sh@338 -- # local 'op=<' 00:20:20.583 19:10:38 ftl.ftl_restore -- scripts/common.sh@340 -- # ver1_l=2 00:20:20.583 19:10:38 ftl.ftl_restore -- scripts/common.sh@341 -- # ver2_l=1 00:20:20.583 19:10:38 ftl.ftl_restore -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:20.583 19:10:38 ftl.ftl_restore -- scripts/common.sh@344 -- # case "$op" in 00:20:20.583 19:10:38 ftl.ftl_restore -- scripts/common.sh@345 -- # : 1 00:20:20.583 19:10:38 ftl.ftl_restore -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:20.583 19:10:38 ftl.ftl_restore -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:20.583 19:10:38 ftl.ftl_restore -- scripts/common.sh@365 -- # decimal 1 00:20:20.583 19:10:38 ftl.ftl_restore -- scripts/common.sh@353 -- # local d=1 00:20:20.583 19:10:38 ftl.ftl_restore -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:20.583 19:10:38 ftl.ftl_restore -- scripts/common.sh@355 -- # echo 1 00:20:20.583 19:10:38 ftl.ftl_restore -- scripts/common.sh@365 -- # ver1[v]=1 00:20:20.583 19:10:38 ftl.ftl_restore -- scripts/common.sh@366 -- # decimal 2 00:20:20.583 19:10:38 ftl.ftl_restore -- scripts/common.sh@353 -- # local d=2 00:20:20.583 19:10:38 ftl.ftl_restore -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:20.583 19:10:38 ftl.ftl_restore -- scripts/common.sh@355 -- # echo 2 00:20:20.583 19:10:38 ftl.ftl_restore -- scripts/common.sh@366 -- # ver2[v]=2 00:20:20.583 19:10:38 ftl.ftl_restore -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:20.583 19:10:38 ftl.ftl_restore -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:20.583 19:10:38 ftl.ftl_restore -- scripts/common.sh@368 -- # return 0 00:20:20.583 19:10:38 ftl.ftl_restore -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:20.583 19:10:38 ftl.ftl_restore -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:20:20.583 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:20.583 --rc genhtml_branch_coverage=1 00:20:20.583 --rc genhtml_function_coverage=1 00:20:20.583 --rc genhtml_legend=1 00:20:20.583 --rc geninfo_all_blocks=1 00:20:20.583 --rc geninfo_unexecuted_blocks=1 00:20:20.583 00:20:20.583 ' 00:20:20.583 19:10:38 ftl.ftl_restore -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:20:20.583 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:20.583 --rc genhtml_branch_coverage=1 00:20:20.583 --rc genhtml_function_coverage=1 00:20:20.583 --rc genhtml_legend=1 00:20:20.583 --rc geninfo_all_blocks=1 00:20:20.583 --rc geninfo_unexecuted_blocks=1 00:20:20.583 00:20:20.583 ' 00:20:20.583 19:10:38 ftl.ftl_restore -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:20:20.583 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:20.583 --rc genhtml_branch_coverage=1 00:20:20.583 --rc genhtml_function_coverage=1 00:20:20.583 --rc genhtml_legend=1 00:20:20.583 --rc geninfo_all_blocks=1 00:20:20.583 --rc geninfo_unexecuted_blocks=1 00:20:20.583 00:20:20.583 ' 00:20:20.583 19:10:38 ftl.ftl_restore -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:20:20.583 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:20.583 --rc genhtml_branch_coverage=1 00:20:20.583 --rc genhtml_function_coverage=1 00:20:20.583 --rc genhtml_legend=1 00:20:20.583 --rc geninfo_all_blocks=1 00:20:20.583 --rc geninfo_unexecuted_blocks=1 00:20:20.583 00:20:20.583 ' 00:20:20.583 19:10:38 ftl.ftl_restore -- ftl/restore.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:20:20.583 19:10:38 ftl.ftl_restore -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh 00:20:20.583 19:10:38 ftl.ftl_restore -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:20:20.583 19:10:38 ftl.ftl_restore -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:20:20.583 19:10:38 ftl.ftl_restore -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
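The xtrace above shows scripts/common.sh deciding whether the installed lcov predates 2.x before choosing coverage flags: both version strings are split on '.', '-' and ':' into arrays and compared component by component. A minimal sketch of that comparison pattern, as a simplified stand-in for the cmp_versions/decimal helpers rather than the exact code:

    # simplified sketch of the component-wise version compare in scripts/common.sh
    version_lt() {
        local IFS=.-:                 # split on '.', '-' and ':' like the real helper
        local -a a=($1) b=($2)
        local i x y
        for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
            x=${a[i]:-0}; y=${b[i]:-0}
            ((x < y)) && return 0     # first differing component decides
            ((x > y)) && return 1
        done
        return 1                      # equal versions are not "less than"
    }
    version_lt 1.15 2 && echo "lcov is older than 2.x"   # mirrors the 'lt 1.15 2' check above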
00:20:20.583 19:10:38 ftl.ftl_restore -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:20:20.584 19:10:38 ftl.ftl_restore -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:20.584 19:10:38 ftl.ftl_restore -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:20:20.584 19:10:38 ftl.ftl_restore -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:20:20.584 19:10:38 ftl.ftl_restore -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:20.584 19:10:38 ftl.ftl_restore -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:20.584 19:10:38 ftl.ftl_restore -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:20:20.584 19:10:38 ftl.ftl_restore -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:20:20.584 19:10:38 ftl.ftl_restore -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:20:20.584 19:10:38 ftl.ftl_restore -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:20:20.584 19:10:38 ftl.ftl_restore -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:20:20.584 19:10:38 ftl.ftl_restore -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:20:20.584 19:10:38 ftl.ftl_restore -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:20.584 19:10:38 ftl.ftl_restore -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:20.584 19:10:38 ftl.ftl_restore -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:20:20.584 19:10:38 ftl.ftl_restore -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:20:20.584 19:10:38 ftl.ftl_restore -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:20:20.584 19:10:38 ftl.ftl_restore -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:20:20.584 19:10:38 ftl.ftl_restore -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:20:20.584 19:10:38 ftl.ftl_restore -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:20:20.584 19:10:38 ftl.ftl_restore -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:20:20.584 19:10:38 ftl.ftl_restore -- ftl/common.sh@23 -- # spdk_ini_pid= 00:20:20.584 19:10:38 ftl.ftl_restore -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:20:20.584 19:10:38 ftl.ftl_restore -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:20:20.584 19:10:38 ftl.ftl_restore -- ftl/restore.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:20.584 19:10:38 ftl.ftl_restore -- ftl/restore.sh@13 -- # mktemp -d 00:20:20.584 19:10:38 ftl.ftl_restore -- ftl/restore.sh@13 -- # mount_dir=/tmp/tmp.qMoYqpROIF 00:20:20.584 19:10:38 ftl.ftl_restore -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:20:20.584 19:10:38 ftl.ftl_restore -- ftl/restore.sh@16 -- # case $opt in 00:20:20.584 19:10:38 ftl.ftl_restore -- ftl/restore.sh@18 -- # nv_cache=0000:00:10.0 00:20:20.584 19:10:38 ftl.ftl_restore -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:20:20.584 19:10:38 ftl.ftl_restore -- ftl/restore.sh@23 -- # shift 2 00:20:20.584 19:10:38 ftl.ftl_restore -- ftl/restore.sh@24 -- # device=0000:00:11.0 00:20:20.584 19:10:38 ftl.ftl_restore -- ftl/restore.sh@25 -- # timeout=240 00:20:20.584 19:10:38 ftl.ftl_restore -- ftl/restore.sh@36 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:20:20.584 
19:10:38 ftl.ftl_restore -- ftl/restore.sh@39 -- # svcpid=88109 00:20:20.584 19:10:38 ftl.ftl_restore -- ftl/restore.sh@41 -- # waitforlisten 88109 00:20:20.584 19:10:38 ftl.ftl_restore -- ftl/restore.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:20.584 19:10:38 ftl.ftl_restore -- common/autotest_common.sh@835 -- # '[' -z 88109 ']' 00:20:20.584 19:10:38 ftl.ftl_restore -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:20.584 19:10:38 ftl.ftl_restore -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:20.584 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:20.584 19:10:38 ftl.ftl_restore -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:20.584 19:10:38 ftl.ftl_restore -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:20.584 19:10:38 ftl.ftl_restore -- common/autotest_common.sh@10 -- # set +x 00:20:20.845 [2024-12-05 19:10:38.202160] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 22.11.4 initialization... 00:20:20.845 [2024-12-05 19:10:38.202490] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88109 ] 00:20:20.845 [2024-12-05 19:10:38.346747] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:20.845 [2024-12-05 19:10:38.375366] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:21.789 19:10:39 ftl.ftl_restore -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:21.789 19:10:39 ftl.ftl_restore -- common/autotest_common.sh@868 -- # return 0 00:20:21.789 19:10:39 ftl.ftl_restore -- ftl/restore.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:20:21.789 19:10:39 ftl.ftl_restore -- ftl/common.sh@54 -- # local name=nvme0 00:20:21.789 19:10:39 ftl.ftl_restore -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:20:21.789 19:10:39 ftl.ftl_restore -- ftl/common.sh@56 -- # local size=103424 00:20:21.789 19:10:39 ftl.ftl_restore -- ftl/common.sh@59 -- # local base_bdev 00:20:21.789 19:10:39 ftl.ftl_restore -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:20:22.050 19:10:39 ftl.ftl_restore -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:20:22.050 19:10:39 ftl.ftl_restore -- ftl/common.sh@62 -- # local base_size 00:20:22.050 19:10:39 ftl.ftl_restore -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:20:22.050 19:10:39 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:20:22.050 19:10:39 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:20:22.050 19:10:39 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:20:22.050 19:10:39 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:20:22.050 19:10:39 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:20:22.050 19:10:39 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:20:22.050 { 00:20:22.050 "name": "nvme0n1", 00:20:22.050 "aliases": [ 00:20:22.050 "a4937617-8641-4feb-9105-14580198e810" 00:20:22.050 ], 00:20:22.050 "product_name": "NVMe disk", 00:20:22.050 "block_size": 4096, 00:20:22.050 "num_blocks": 1310720, 00:20:22.050 "uuid": 
"a4937617-8641-4feb-9105-14580198e810", 00:20:22.050 "numa_id": -1, 00:20:22.050 "assigned_rate_limits": { 00:20:22.050 "rw_ios_per_sec": 0, 00:20:22.050 "rw_mbytes_per_sec": 0, 00:20:22.050 "r_mbytes_per_sec": 0, 00:20:22.050 "w_mbytes_per_sec": 0 00:20:22.050 }, 00:20:22.050 "claimed": true, 00:20:22.050 "claim_type": "read_many_write_one", 00:20:22.050 "zoned": false, 00:20:22.050 "supported_io_types": { 00:20:22.050 "read": true, 00:20:22.050 "write": true, 00:20:22.050 "unmap": true, 00:20:22.050 "flush": true, 00:20:22.051 "reset": true, 00:20:22.051 "nvme_admin": true, 00:20:22.051 "nvme_io": true, 00:20:22.051 "nvme_io_md": false, 00:20:22.051 "write_zeroes": true, 00:20:22.051 "zcopy": false, 00:20:22.051 "get_zone_info": false, 00:20:22.051 "zone_management": false, 00:20:22.051 "zone_append": false, 00:20:22.051 "compare": true, 00:20:22.051 "compare_and_write": false, 00:20:22.051 "abort": true, 00:20:22.051 "seek_hole": false, 00:20:22.051 "seek_data": false, 00:20:22.051 "copy": true, 00:20:22.051 "nvme_iov_md": false 00:20:22.051 }, 00:20:22.051 "driver_specific": { 00:20:22.051 "nvme": [ 00:20:22.051 { 00:20:22.051 "pci_address": "0000:00:11.0", 00:20:22.051 "trid": { 00:20:22.051 "trtype": "PCIe", 00:20:22.051 "traddr": "0000:00:11.0" 00:20:22.051 }, 00:20:22.051 "ctrlr_data": { 00:20:22.051 "cntlid": 0, 00:20:22.051 "vendor_id": "0x1b36", 00:20:22.051 "model_number": "QEMU NVMe Ctrl", 00:20:22.051 "serial_number": "12341", 00:20:22.051 "firmware_revision": "8.0.0", 00:20:22.051 "subnqn": "nqn.2019-08.org.qemu:12341", 00:20:22.051 "oacs": { 00:20:22.051 "security": 0, 00:20:22.051 "format": 1, 00:20:22.051 "firmware": 0, 00:20:22.051 "ns_manage": 1 00:20:22.051 }, 00:20:22.051 "multi_ctrlr": false, 00:20:22.051 "ana_reporting": false 00:20:22.051 }, 00:20:22.051 "vs": { 00:20:22.051 "nvme_version": "1.4" 00:20:22.051 }, 00:20:22.051 "ns_data": { 00:20:22.051 "id": 1, 00:20:22.051 "can_share": false 00:20:22.051 } 00:20:22.051 } 00:20:22.051 ], 00:20:22.051 "mp_policy": "active_passive" 00:20:22.051 } 00:20:22.051 } 00:20:22.051 ]' 00:20:22.051 19:10:39 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:20:22.312 19:10:39 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:20:22.312 19:10:39 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:20:22.312 19:10:39 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # nb=1310720 00:20:22.312 19:10:39 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:20:22.312 19:10:39 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 5120 00:20:22.312 19:10:39 ftl.ftl_restore -- ftl/common.sh@63 -- # base_size=5120 00:20:22.312 19:10:39 ftl.ftl_restore -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:20:22.312 19:10:39 ftl.ftl_restore -- ftl/common.sh@67 -- # clear_lvols 00:20:22.312 19:10:39 ftl.ftl_restore -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:20:22.312 19:10:39 ftl.ftl_restore -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:20:22.312 19:10:39 ftl.ftl_restore -- ftl/common.sh@28 -- # stores=ba7c6a1d-3b74-4163-8b11-286544eeb94f 00:20:22.312 19:10:39 ftl.ftl_restore -- ftl/common.sh@29 -- # for lvs in $stores 00:20:22.312 19:10:39 ftl.ftl_restore -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u ba7c6a1d-3b74-4163-8b11-286544eeb94f 00:20:22.573 19:10:40 ftl.ftl_restore -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_lvol_create_lvstore nvme0n1 lvs 00:20:22.834 19:10:40 ftl.ftl_restore -- ftl/common.sh@68 -- # lvs=97a3e74e-df4f-4eed-b69d-eb4e6c249cb6 00:20:22.834 19:10:40 ftl.ftl_restore -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 97a3e74e-df4f-4eed-b69d-eb4e6c249cb6 00:20:23.095 19:10:40 ftl.ftl_restore -- ftl/restore.sh@43 -- # split_bdev=41f2cd3c-2cb4-4e9c-a519-13ea8a09b0f4 00:20:23.095 19:10:40 ftl.ftl_restore -- ftl/restore.sh@44 -- # '[' -n 0000:00:10.0 ']' 00:20:23.095 19:10:40 ftl.ftl_restore -- ftl/restore.sh@45 -- # create_nv_cache_bdev nvc0 0000:00:10.0 41f2cd3c-2cb4-4e9c-a519-13ea8a09b0f4 00:20:23.095 19:10:40 ftl.ftl_restore -- ftl/common.sh@35 -- # local name=nvc0 00:20:23.095 19:10:40 ftl.ftl_restore -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:20:23.095 19:10:40 ftl.ftl_restore -- ftl/common.sh@37 -- # local base_bdev=41f2cd3c-2cb4-4e9c-a519-13ea8a09b0f4 00:20:23.095 19:10:40 ftl.ftl_restore -- ftl/common.sh@38 -- # local cache_size= 00:20:23.095 19:10:40 ftl.ftl_restore -- ftl/common.sh@41 -- # get_bdev_size 41f2cd3c-2cb4-4e9c-a519-13ea8a09b0f4 00:20:23.095 19:10:40 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=41f2cd3c-2cb4-4e9c-a519-13ea8a09b0f4 00:20:23.095 19:10:40 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:20:23.095 19:10:40 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:20:23.095 19:10:40 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:20:23.095 19:10:40 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 41f2cd3c-2cb4-4e9c-a519-13ea8a09b0f4 00:20:23.355 19:10:40 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:20:23.355 { 00:20:23.355 "name": "41f2cd3c-2cb4-4e9c-a519-13ea8a09b0f4", 00:20:23.355 "aliases": [ 00:20:23.355 "lvs/nvme0n1p0" 00:20:23.355 ], 00:20:23.355 "product_name": "Logical Volume", 00:20:23.355 "block_size": 4096, 00:20:23.355 "num_blocks": 26476544, 00:20:23.355 "uuid": "41f2cd3c-2cb4-4e9c-a519-13ea8a09b0f4", 00:20:23.355 "assigned_rate_limits": { 00:20:23.355 "rw_ios_per_sec": 0, 00:20:23.355 "rw_mbytes_per_sec": 0, 00:20:23.355 "r_mbytes_per_sec": 0, 00:20:23.355 "w_mbytes_per_sec": 0 00:20:23.355 }, 00:20:23.355 "claimed": false, 00:20:23.355 "zoned": false, 00:20:23.355 "supported_io_types": { 00:20:23.355 "read": true, 00:20:23.355 "write": true, 00:20:23.355 "unmap": true, 00:20:23.355 "flush": false, 00:20:23.355 "reset": true, 00:20:23.355 "nvme_admin": false, 00:20:23.355 "nvme_io": false, 00:20:23.355 "nvme_io_md": false, 00:20:23.355 "write_zeroes": true, 00:20:23.355 "zcopy": false, 00:20:23.355 "get_zone_info": false, 00:20:23.355 "zone_management": false, 00:20:23.355 "zone_append": false, 00:20:23.355 "compare": false, 00:20:23.355 "compare_and_write": false, 00:20:23.355 "abort": false, 00:20:23.355 "seek_hole": true, 00:20:23.355 "seek_data": true, 00:20:23.355 "copy": false, 00:20:23.355 "nvme_iov_md": false 00:20:23.355 }, 00:20:23.355 "driver_specific": { 00:20:23.355 "lvol": { 00:20:23.355 "lvol_store_uuid": "97a3e74e-df4f-4eed-b69d-eb4e6c249cb6", 00:20:23.355 "base_bdev": "nvme0n1", 00:20:23.355 "thin_provision": true, 00:20:23.355 "num_allocated_clusters": 0, 00:20:23.355 "snapshot": false, 00:20:23.355 "clone": false, 00:20:23.355 "esnap_clone": false 00:20:23.355 } 00:20:23.355 } 00:20:23.355 } 00:20:23.355 ]' 00:20:23.355 19:10:40 ftl.ftl_restore -- 
common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:20:23.355 19:10:40 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:20:23.355 19:10:40 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:20:23.355 19:10:40 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # nb=26476544 00:20:23.355 19:10:40 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:20:23.355 19:10:40 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 103424 00:20:23.355 19:10:40 ftl.ftl_restore -- ftl/common.sh@41 -- # local base_size=5171 00:20:23.355 19:10:40 ftl.ftl_restore -- ftl/common.sh@44 -- # local nvc_bdev 00:20:23.355 19:10:40 ftl.ftl_restore -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:20:23.616 19:10:41 ftl.ftl_restore -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:20:23.616 19:10:41 ftl.ftl_restore -- ftl/common.sh@47 -- # [[ -z '' ]] 00:20:23.616 19:10:41 ftl.ftl_restore -- ftl/common.sh@48 -- # get_bdev_size 41f2cd3c-2cb4-4e9c-a519-13ea8a09b0f4 00:20:23.616 19:10:41 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=41f2cd3c-2cb4-4e9c-a519-13ea8a09b0f4 00:20:23.616 19:10:41 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:20:23.616 19:10:41 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:20:23.616 19:10:41 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:20:23.616 19:10:41 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 41f2cd3c-2cb4-4e9c-a519-13ea8a09b0f4 00:20:23.877 19:10:41 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:20:23.877 { 00:20:23.877 "name": "41f2cd3c-2cb4-4e9c-a519-13ea8a09b0f4", 00:20:23.877 "aliases": [ 00:20:23.877 "lvs/nvme0n1p0" 00:20:23.877 ], 00:20:23.877 "product_name": "Logical Volume", 00:20:23.877 "block_size": 4096, 00:20:23.877 "num_blocks": 26476544, 00:20:23.877 "uuid": "41f2cd3c-2cb4-4e9c-a519-13ea8a09b0f4", 00:20:23.877 "assigned_rate_limits": { 00:20:23.877 "rw_ios_per_sec": 0, 00:20:23.877 "rw_mbytes_per_sec": 0, 00:20:23.877 "r_mbytes_per_sec": 0, 00:20:23.877 "w_mbytes_per_sec": 0 00:20:23.877 }, 00:20:23.877 "claimed": false, 00:20:23.877 "zoned": false, 00:20:23.877 "supported_io_types": { 00:20:23.877 "read": true, 00:20:23.877 "write": true, 00:20:23.877 "unmap": true, 00:20:23.877 "flush": false, 00:20:23.877 "reset": true, 00:20:23.877 "nvme_admin": false, 00:20:23.877 "nvme_io": false, 00:20:23.877 "nvme_io_md": false, 00:20:23.877 "write_zeroes": true, 00:20:23.877 "zcopy": false, 00:20:23.877 "get_zone_info": false, 00:20:23.877 "zone_management": false, 00:20:23.877 "zone_append": false, 00:20:23.877 "compare": false, 00:20:23.877 "compare_and_write": false, 00:20:23.877 "abort": false, 00:20:23.877 "seek_hole": true, 00:20:23.877 "seek_data": true, 00:20:23.877 "copy": false, 00:20:23.877 "nvme_iov_md": false 00:20:23.877 }, 00:20:23.877 "driver_specific": { 00:20:23.877 "lvol": { 00:20:23.877 "lvol_store_uuid": "97a3e74e-df4f-4eed-b69d-eb4e6c249cb6", 00:20:23.877 "base_bdev": "nvme0n1", 00:20:23.877 "thin_provision": true, 00:20:23.877 "num_allocated_clusters": 0, 00:20:23.877 "snapshot": false, 00:20:23.877 "clone": false, 00:20:23.877 "esnap_clone": false 00:20:23.877 } 00:20:23.877 } 00:20:23.877 } 00:20:23.877 ]' 00:20:23.877 19:10:41 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 
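get_bdev_size, exercised twice above, derives a bdev's size in MiB by fetching its JSON description over the RPC socket and extracting block_size and num_blocks with jq. A minimal standalone sketch of the same calculation, assuming a running SPDK target and the repository's rpc.py:

    # sketch: compute a bdev's size in MiB the way get_bdev_size does
    rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    get_bdev_size() {
        local bdev_name=$1 bdev_info bs nb
        bdev_info=$($rpc_py bdev_get_bdevs -b "$bdev_name")
        bs=$(jq '.[] .block_size' <<< "$bdev_info")
        nb=$(jq '.[] .num_blocks' <<< "$bdev_info")
        echo $((bs * nb / 1024 / 1024))    # size in MiB
    }
    # e.g. get_bdev_size nvme0n1 -> 5120 for the 1310720-block, 4096-byte QEMU namespace above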
00:20:23.877 19:10:41 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:20:23.877 19:10:41 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:20:23.877 19:10:41 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # nb=26476544 00:20:23.877 19:10:41 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:20:23.877 19:10:41 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 103424 00:20:23.877 19:10:41 ftl.ftl_restore -- ftl/common.sh@48 -- # cache_size=5171 00:20:23.877 19:10:41 ftl.ftl_restore -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:20:24.138 19:10:41 ftl.ftl_restore -- ftl/restore.sh@45 -- # nvc_bdev=nvc0n1p0 00:20:24.138 19:10:41 ftl.ftl_restore -- ftl/restore.sh@48 -- # get_bdev_size 41f2cd3c-2cb4-4e9c-a519-13ea8a09b0f4 00:20:24.138 19:10:41 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=41f2cd3c-2cb4-4e9c-a519-13ea8a09b0f4 00:20:24.138 19:10:41 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:20:24.138 19:10:41 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:20:24.138 19:10:41 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:20:24.138 19:10:41 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 41f2cd3c-2cb4-4e9c-a519-13ea8a09b0f4 00:20:24.398 19:10:41 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:20:24.398 { 00:20:24.398 "name": "41f2cd3c-2cb4-4e9c-a519-13ea8a09b0f4", 00:20:24.398 "aliases": [ 00:20:24.398 "lvs/nvme0n1p0" 00:20:24.398 ], 00:20:24.398 "product_name": "Logical Volume", 00:20:24.398 "block_size": 4096, 00:20:24.398 "num_blocks": 26476544, 00:20:24.398 "uuid": "41f2cd3c-2cb4-4e9c-a519-13ea8a09b0f4", 00:20:24.398 "assigned_rate_limits": { 00:20:24.398 "rw_ios_per_sec": 0, 00:20:24.398 "rw_mbytes_per_sec": 0, 00:20:24.398 "r_mbytes_per_sec": 0, 00:20:24.398 "w_mbytes_per_sec": 0 00:20:24.398 }, 00:20:24.398 "claimed": false, 00:20:24.398 "zoned": false, 00:20:24.398 "supported_io_types": { 00:20:24.398 "read": true, 00:20:24.398 "write": true, 00:20:24.398 "unmap": true, 00:20:24.398 "flush": false, 00:20:24.398 "reset": true, 00:20:24.398 "nvme_admin": false, 00:20:24.398 "nvme_io": false, 00:20:24.398 "nvme_io_md": false, 00:20:24.398 "write_zeroes": true, 00:20:24.398 "zcopy": false, 00:20:24.398 "get_zone_info": false, 00:20:24.398 "zone_management": false, 00:20:24.398 "zone_append": false, 00:20:24.398 "compare": false, 00:20:24.398 "compare_and_write": false, 00:20:24.398 "abort": false, 00:20:24.398 "seek_hole": true, 00:20:24.398 "seek_data": true, 00:20:24.398 "copy": false, 00:20:24.398 "nvme_iov_md": false 00:20:24.398 }, 00:20:24.398 "driver_specific": { 00:20:24.398 "lvol": { 00:20:24.398 "lvol_store_uuid": "97a3e74e-df4f-4eed-b69d-eb4e6c249cb6", 00:20:24.398 "base_bdev": "nvme0n1", 00:20:24.398 "thin_provision": true, 00:20:24.398 "num_allocated_clusters": 0, 00:20:24.398 "snapshot": false, 00:20:24.398 "clone": false, 00:20:24.398 "esnap_clone": false 00:20:24.398 } 00:20:24.398 } 00:20:24.398 } 00:20:24.398 ]' 00:20:24.398 19:10:41 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:20:24.398 19:10:41 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:20:24.398 19:10:41 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:20:24.398 19:10:41 ftl.ftl_restore -- 
common/autotest_common.sh@1388 -- # nb=26476544 00:20:24.398 19:10:41 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:20:24.398 19:10:41 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 103424 00:20:24.398 19:10:41 ftl.ftl_restore -- ftl/restore.sh@48 -- # l2p_dram_size_mb=10 00:20:24.398 19:10:41 ftl.ftl_restore -- ftl/restore.sh@49 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d 41f2cd3c-2cb4-4e9c-a519-13ea8a09b0f4 --l2p_dram_limit 10' 00:20:24.398 19:10:41 ftl.ftl_restore -- ftl/restore.sh@51 -- # '[' -n '' ']' 00:20:24.398 19:10:41 ftl.ftl_restore -- ftl/restore.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:20:24.398 19:10:41 ftl.ftl_restore -- ftl/restore.sh@52 -- # ftl_construct_args+=' -c nvc0n1p0' 00:20:24.398 19:10:41 ftl.ftl_restore -- ftl/restore.sh@54 -- # '[' '' -eq 1 ']' 00:20:24.398 /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh: line 54: [: : integer expression expected 00:20:24.399 19:10:41 ftl.ftl_restore -- ftl/restore.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 41f2cd3c-2cb4-4e9c-a519-13ea8a09b0f4 --l2p_dram_limit 10 -c nvc0n1p0 00:20:24.659 [2024-12-05 19:10:42.070024] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:24.659 [2024-12-05 19:10:42.070160] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:20:24.659 [2024-12-05 19:10:42.070176] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:20:24.659 [2024-12-05 19:10:42.070185] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:24.659 [2024-12-05 19:10:42.070238] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:24.659 [2024-12-05 19:10:42.070248] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:24.659 [2024-12-05 19:10:42.070272] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:20:24.659 [2024-12-05 19:10:42.070282] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:24.659 [2024-12-05 19:10:42.070298] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:20:24.659 [2024-12-05 19:10:42.070525] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:20:24.659 [2024-12-05 19:10:42.070537] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:24.659 [2024-12-05 19:10:42.070545] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:24.659 [2024-12-05 19:10:42.070552] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.244 ms 00:20:24.659 [2024-12-05 19:10:42.070563] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:24.659 [2024-12-05 19:10:42.070613] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 1f0d4c67-c2eb-4ccd-bb1a-4ce91bbc32a5 00:20:24.659 [2024-12-05 19:10:42.071628] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:24.659 [2024-12-05 19:10:42.071647] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:20:24.659 [2024-12-05 19:10:42.071656] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:20:24.659 [2024-12-05 19:10:42.071663] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:24.659 [2024-12-05 19:10:42.076652] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:24.659 [2024-12-05 
19:10:42.076766] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:24.659 [2024-12-05 19:10:42.076781] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.955 ms 00:20:24.659 [2024-12-05 19:10:42.076787] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:24.659 [2024-12-05 19:10:42.076852] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:24.659 [2024-12-05 19:10:42.076860] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:24.659 [2024-12-05 19:10:42.076867] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.044 ms 00:20:24.659 [2024-12-05 19:10:42.076873] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:24.659 [2024-12-05 19:10:42.076914] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:24.659 [2024-12-05 19:10:42.076922] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:20:24.659 [2024-12-05 19:10:42.076929] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:20:24.659 [2024-12-05 19:10:42.076935] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:24.659 [2024-12-05 19:10:42.076957] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:24.659 [2024-12-05 19:10:42.078297] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:24.659 [2024-12-05 19:10:42.078326] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:24.660 [2024-12-05 19:10:42.078334] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.348 ms 00:20:24.660 [2024-12-05 19:10:42.078341] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:24.660 [2024-12-05 19:10:42.078371] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:24.660 [2024-12-05 19:10:42.078380] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:20:24.660 [2024-12-05 19:10:42.078386] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:20:24.660 [2024-12-05 19:10:42.078395] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:24.660 [2024-12-05 19:10:42.078408] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:20:24.660 [2024-12-05 19:10:42.078525] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:20:24.660 [2024-12-05 19:10:42.078534] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:20:24.660 [2024-12-05 19:10:42.078545] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:20:24.660 [2024-12-05 19:10:42.078553] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:20:24.660 [2024-12-05 19:10:42.078563] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:20:24.660 [2024-12-05 19:10:42.078569] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:20:24.660 [2024-12-05 19:10:42.078578] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:20:24.660 [2024-12-05 19:10:42.078583] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:20:24.660 [2024-12-05 19:10:42.078590] 
ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:20:24.660 [2024-12-05 19:10:42.078596] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:24.660 [2024-12-05 19:10:42.078603] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:20:24.660 [2024-12-05 19:10:42.078608] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.189 ms 00:20:24.660 [2024-12-05 19:10:42.078615] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:24.660 [2024-12-05 19:10:42.078679] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:24.660 [2024-12-05 19:10:42.078688] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:20:24.660 [2024-12-05 19:10:42.078694] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:20:24.660 [2024-12-05 19:10:42.078702] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:24.660 [2024-12-05 19:10:42.078779] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:20:24.660 [2024-12-05 19:10:42.078788] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:20:24.660 [2024-12-05 19:10:42.078794] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:24.660 [2024-12-05 19:10:42.078802] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:24.660 [2024-12-05 19:10:42.078808] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:20:24.660 [2024-12-05 19:10:42.078815] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:20:24.660 [2024-12-05 19:10:42.078820] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:20:24.660 [2024-12-05 19:10:42.078827] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:20:24.660 [2024-12-05 19:10:42.078832] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:20:24.660 [2024-12-05 19:10:42.078839] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:24.660 [2024-12-05 19:10:42.078844] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:20:24.660 [2024-12-05 19:10:42.078851] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:20:24.660 [2024-12-05 19:10:42.078856] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:24.660 [2024-12-05 19:10:42.078865] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:20:24.660 [2024-12-05 19:10:42.078870] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:20:24.660 [2024-12-05 19:10:42.078876] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:24.660 [2024-12-05 19:10:42.078882] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:20:24.660 [2024-12-05 19:10:42.078889] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:20:24.660 [2024-12-05 19:10:42.078894] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:24.660 [2024-12-05 19:10:42.078901] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:20:24.660 [2024-12-05 19:10:42.078906] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:20:24.660 [2024-12-05 19:10:42.078913] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:24.660 [2024-12-05 19:10:42.078918] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:20:24.660 
[2024-12-05 19:10:42.078924] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:20:24.660 [2024-12-05 19:10:42.078929] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:24.660 [2024-12-05 19:10:42.078935] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:20:24.660 [2024-12-05 19:10:42.078941] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:20:24.660 [2024-12-05 19:10:42.078947] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:24.660 [2024-12-05 19:10:42.078953] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:20:24.660 [2024-12-05 19:10:42.078963] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:20:24.660 [2024-12-05 19:10:42.078969] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:24.660 [2024-12-05 19:10:42.078976] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:20:24.660 [2024-12-05 19:10:42.078982] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:20:24.660 [2024-12-05 19:10:42.078989] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:24.660 [2024-12-05 19:10:42.078995] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:20:24.660 [2024-12-05 19:10:42.079002] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:20:24.660 [2024-12-05 19:10:42.079008] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:24.660 [2024-12-05 19:10:42.079015] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:20:24.660 [2024-12-05 19:10:42.079021] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:20:24.660 [2024-12-05 19:10:42.079028] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:24.660 [2024-12-05 19:10:42.079034] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:20:24.660 [2024-12-05 19:10:42.079041] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:20:24.660 [2024-12-05 19:10:42.079046] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:24.660 [2024-12-05 19:10:42.079054] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:20:24.660 [2024-12-05 19:10:42.079064] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:20:24.660 [2024-12-05 19:10:42.079073] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:24.660 [2024-12-05 19:10:42.079079] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:24.660 [2024-12-05 19:10:42.079088] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:20:24.660 [2024-12-05 19:10:42.079095] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:20:24.660 [2024-12-05 19:10:42.079102] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:20:24.660 [2024-12-05 19:10:42.079108] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:20:24.660 [2024-12-05 19:10:42.079115] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:20:24.660 [2024-12-05 19:10:42.079121] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:20:24.660 [2024-12-05 19:10:42.079129] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:20:24.660 [2024-12-05 
19:10:42.079140] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:24.660 [2024-12-05 19:10:42.079149] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:20:24.660 [2024-12-05 19:10:42.079156] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:20:24.660 [2024-12-05 19:10:42.079163] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:20:24.660 [2024-12-05 19:10:42.079170] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:20:24.660 [2024-12-05 19:10:42.079177] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:20:24.660 [2024-12-05 19:10:42.079183] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:20:24.660 [2024-12-05 19:10:42.079192] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:20:24.660 [2024-12-05 19:10:42.079199] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:20:24.660 [2024-12-05 19:10:42.079206] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:20:24.660 [2024-12-05 19:10:42.079213] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:20:24.660 [2024-12-05 19:10:42.079221] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:20:24.660 [2024-12-05 19:10:42.079227] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:20:24.660 [2024-12-05 19:10:42.079234] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:20:24.660 [2024-12-05 19:10:42.079241] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:20:24.660 [2024-12-05 19:10:42.079258] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:20:24.660 [2024-12-05 19:10:42.079265] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:24.660 [2024-12-05 19:10:42.079273] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:20:24.660 [2024-12-05 19:10:42.079279] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:20:24.660 [2024-12-05 19:10:42.079287] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:20:24.660 [2024-12-05 19:10:42.079295] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: 
[FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:20:24.660 [2024-12-05 19:10:42.079315] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:24.661 [2024-12-05 19:10:42.079321] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:20:24.661 [2024-12-05 19:10:42.079331] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.573 ms 00:20:24.661 [2024-12-05 19:10:42.079336] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:24.661 [2024-12-05 19:10:42.079367] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:20:24.661 [2024-12-05 19:10:42.079375] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:20:28.867 [2024-12-05 19:10:45.898533] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:28.867 [2024-12-05 19:10:45.898630] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:20:28.867 [2024-12-05 19:10:45.898651] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3819.138 ms 00:20:28.867 [2024-12-05 19:10:45.898661] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.867 [2024-12-05 19:10:45.912858] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:28.867 [2024-12-05 19:10:45.913086] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:28.867 [2024-12-05 19:10:45.913114] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.069 ms 00:20:28.867 [2024-12-05 19:10:45.913131] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.867 [2024-12-05 19:10:45.913317] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:28.867 [2024-12-05 19:10:45.913330] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:20:28.867 [2024-12-05 19:10:45.913343] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.112 ms 00:20:28.867 [2024-12-05 19:10:45.913351] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.867 [2024-12-05 19:10:45.926378] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:28.867 [2024-12-05 19:10:45.926428] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:28.867 [2024-12-05 19:10:45.926442] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.951 ms 00:20:28.867 [2024-12-05 19:10:45.926455] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.867 [2024-12-05 19:10:45.926494] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:28.867 [2024-12-05 19:10:45.926503] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:28.867 [2024-12-05 19:10:45.926514] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:20:28.867 [2024-12-05 19:10:45.926522] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.867 [2024-12-05 19:10:45.927119] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:28.867 [2024-12-05 19:10:45.927143] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:28.867 [2024-12-05 19:10:45.927157] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.541 ms 00:20:28.867 [2024-12-05 19:10:45.927166] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.867 
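Stepping back, the startup sequence traced above runs on the bdev stack that restore.sh assembled over the RPC interface: the 0000:00:11.0 namespace carries a thin-provisioned logical volume for user data, a 5171 MiB split of 0000:00:10.0 serves as the write-buffer/NV cache, and both are handed to bdev_ftl_create. A condensed sketch of those calls, taken from the trace (sizes in MiB; UUIDs are run-specific):

    # condensed from the rpc.py calls traced above
    rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc_py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0      # base namespace -> nvme0n1
    $rpc_py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0       # cache namespace -> nvc0n1
    lvs_uuid=$($rpc_py bdev_lvol_create_lvstore nvme0n1 lvs)                  # lvstore on the base device
    lvol_uuid=$($rpc_py bdev_lvol_create nvme0n1p0 103424 -t -u "$lvs_uuid")  # thin 103424 MiB data volume
    $rpc_py bdev_split_create nvc0n1 -s 5171 1                                # 5171 MiB slice -> nvc0n1p0
    $rpc_py -t 240 bdev_ftl_create -b ftl0 -d "$lvol_uuid" \
            --l2p_dram_limit 10 -c nvc0n1p0                                   # FTL bdev over lvol + cache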
[2024-12-05 19:10:45.927336] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:28.867 [2024-12-05 19:10:45.927348] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:28.867 [2024-12-05 19:10:45.927360] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.136 ms 00:20:28.867 [2024-12-05 19:10:45.927369] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.867 [2024-12-05 19:10:45.936229] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:28.867 [2024-12-05 19:10:45.936293] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:28.867 [2024-12-05 19:10:45.936308] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.832 ms 00:20:28.867 [2024-12-05 19:10:45.936316] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.867 [2024-12-05 19:10:45.958665] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:20:28.867 [2024-12-05 19:10:45.963147] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:28.867 [2024-12-05 19:10:45.963211] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:20:28.867 [2024-12-05 19:10:45.963230] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.745 ms 00:20:28.867 [2024-12-05 19:10:45.963246] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.867 [2024-12-05 19:10:46.048895] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:28.867 [2024-12-05 19:10:46.049132] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:20:28.867 [2024-12-05 19:10:46.049161] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 85.567 ms 00:20:28.867 [2024-12-05 19:10:46.049176] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.867 [2024-12-05 19:10:46.049416] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:28.867 [2024-12-05 19:10:46.049432] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:20:28.867 [2024-12-05 19:10:46.049443] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.182 ms 00:20:28.867 [2024-12-05 19:10:46.049454] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.867 [2024-12-05 19:10:46.055612] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:28.867 [2024-12-05 19:10:46.055794] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:20:28.867 [2024-12-05 19:10:46.055819] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.115 ms 00:20:28.867 [2024-12-05 19:10:46.055830] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.867 [2024-12-05 19:10:46.060984] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:28.867 [2024-12-05 19:10:46.061043] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:20:28.867 [2024-12-05 19:10:46.061056] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.111 ms 00:20:28.868 [2024-12-05 19:10:46.061065] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.868 [2024-12-05 19:10:46.061511] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:28.868 [2024-12-05 19:10:46.061540] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:20:28.868 
[2024-12-05 19:10:46.061551] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.337 ms 00:20:28.868 [2024-12-05 19:10:46.061564] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.868 [2024-12-05 19:10:46.101769] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:28.868 [2024-12-05 19:10:46.101831] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:20:28.868 [2024-12-05 19:10:46.101848] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.180 ms 00:20:28.868 [2024-12-05 19:10:46.101859] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.868 [2024-12-05 19:10:46.109063] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:28.868 [2024-12-05 19:10:46.109123] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:20:28.868 [2024-12-05 19:10:46.109141] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.126 ms 00:20:28.868 [2024-12-05 19:10:46.109152] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.868 [2024-12-05 19:10:46.114961] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:28.868 [2024-12-05 19:10:46.115018] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:20:28.868 [2024-12-05 19:10:46.115030] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.758 ms 00:20:28.868 [2024-12-05 19:10:46.115039] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.868 [2024-12-05 19:10:46.121353] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:28.868 [2024-12-05 19:10:46.121409] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:20:28.868 [2024-12-05 19:10:46.121421] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.266 ms 00:20:28.868 [2024-12-05 19:10:46.121434] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.868 [2024-12-05 19:10:46.121488] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:28.868 [2024-12-05 19:10:46.121500] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:20:28.868 [2024-12-05 19:10:46.121509] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:20:28.868 [2024-12-05 19:10:46.121520] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.868 [2024-12-05 19:10:46.121618] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:28.868 [2024-12-05 19:10:46.121631] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:20:28.868 [2024-12-05 19:10:46.121640] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:20:28.868 [2024-12-05 19:10:46.121654] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.868 [2024-12-05 19:10:46.122901] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 4052.359 ms, result 0 00:20:28.868 { 00:20:28.868 "name": "ftl0", 00:20:28.868 "uuid": "1f0d4c67-c2eb-4ccd-bb1a-4ce91bbc32a5" 00:20:28.868 } 00:20:28.868 19:10:46 ftl.ftl_restore -- ftl/restore.sh@61 -- # echo '{"subsystems": [' 00:20:28.868 19:10:46 ftl.ftl_restore -- ftl/restore.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:20:28.868 19:10:46 ftl.ftl_restore -- ftl/restore.sh@63 -- # echo ']}' 00:20:28.868 19:10:46 ftl.ftl_restore -- 
ftl/restore.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:20:29.131 [2024-12-05 19:10:46.584731] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.131 [2024-12-05 19:10:46.584959] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:20:29.131 [2024-12-05 19:10:46.584991] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:20:29.131 [2024-12-05 19:10:46.585001] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.131 [2024-12-05 19:10:46.585037] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:29.131 [2024-12-05 19:10:46.585860] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.131 [2024-12-05 19:10:46.585906] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:20:29.131 [2024-12-05 19:10:46.585919] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.805 ms 00:20:29.131 [2024-12-05 19:10:46.585941] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.131 [2024-12-05 19:10:46.586225] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.131 [2024-12-05 19:10:46.586248] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:20:29.131 [2024-12-05 19:10:46.586274] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.256 ms 00:20:29.131 [2024-12-05 19:10:46.586288] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.131 [2024-12-05 19:10:46.589682] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.131 [2024-12-05 19:10:46.589708] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:20:29.131 [2024-12-05 19:10:46.589718] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.377 ms 00:20:29.131 [2024-12-05 19:10:46.589729] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.131 [2024-12-05 19:10:46.596032] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.131 [2024-12-05 19:10:46.596215] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:20:29.131 [2024-12-05 19:10:46.596235] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.285 ms 00:20:29.131 [2024-12-05 19:10:46.596249] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.131 [2024-12-05 19:10:46.599111] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.131 [2024-12-05 19:10:46.599172] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:20:29.131 [2024-12-05 19:10:46.599183] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.751 ms 00:20:29.131 [2024-12-05 19:10:46.599193] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.131 [2024-12-05 19:10:46.605200] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.131 [2024-12-05 19:10:46.605389] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:20:29.131 [2024-12-05 19:10:46.605410] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.959 ms 00:20:29.131 [2024-12-05 19:10:46.605420] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.131 [2024-12-05 19:10:46.605570] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.131 [2024-12-05 19:10:46.605585] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:20:29.131 [2024-12-05 19:10:46.605597] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.106 ms 00:20:29.131 [2024-12-05 19:10:46.605607] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.131 [2024-12-05 19:10:46.608667] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.131 [2024-12-05 19:10:46.608725] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:20:29.131 [2024-12-05 19:10:46.608737] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.041 ms 00:20:29.131 [2024-12-05 19:10:46.608748] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.131 [2024-12-05 19:10:46.611493] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.131 [2024-12-05 19:10:46.611550] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:20:29.131 [2024-12-05 19:10:46.611560] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.698 ms 00:20:29.131 [2024-12-05 19:10:46.611570] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.131 [2024-12-05 19:10:46.613801] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.131 [2024-12-05 19:10:46.613854] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:20:29.131 [2024-12-05 19:10:46.613865] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.176 ms 00:20:29.131 [2024-12-05 19:10:46.613874] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.131 [2024-12-05 19:10:46.616107] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.131 [2024-12-05 19:10:46.616160] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:20:29.131 [2024-12-05 19:10:46.616170] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.155 ms 00:20:29.131 [2024-12-05 19:10:46.616180] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.131 [2024-12-05 19:10:46.616222] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:20:29.131 [2024-12-05 19:10:46.616240] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:20:29.131 [2024-12-05 19:10:46.616272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:20:29.131 [2024-12-05 19:10:46.616284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:20:29.131 [2024-12-05 19:10:46.616292] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:20:29.131 [2024-12-05 19:10:46.616306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:20:29.131 [2024-12-05 19:10:46.616314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:20:29.131 [2024-12-05 19:10:46.616324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:20:29.131 [2024-12-05 19:10:46.616331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:20:29.131 [2024-12-05 19:10:46.616341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:20:29.131 [2024-12-05 19:10:46.616348] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:20:29.131 [2024-12-05 19:10:46.616358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:20:29.132 [2024-12-05 19:10:46.616366] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:20:29.132 [2024-12-05 19:10:46.616376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:20:29.132 [2024-12-05 19:10:46.616384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:20:29.132 [2024-12-05 19:10:46.616393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:20:29.132 [2024-12-05 19:10:46.616401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:20:29.132 [2024-12-05 19:10:46.616410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:20:29.132 [2024-12-05 19:10:46.616417] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:20:29.132 [2024-12-05 19:10:46.616427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:20:29.132 [2024-12-05 19:10:46.616435] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:20:29.132 [2024-12-05 19:10:46.616449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:20:29.132 [2024-12-05 19:10:46.616456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:20:29.132 [2024-12-05 19:10:46.616466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:20:29.132 [2024-12-05 19:10:46.616473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:20:29.132 [2024-12-05 19:10:46.616483] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:20:29.132 [2024-12-05 19:10:46.616490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:20:29.132 [2024-12-05 19:10:46.616502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:20:29.132 [2024-12-05 19:10:46.616509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:20:29.132 [2024-12-05 19:10:46.616519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:20:29.132 [2024-12-05 19:10:46.616526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:20:29.132 [2024-12-05 19:10:46.616536] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:20:29.132 [2024-12-05 19:10:46.616545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:20:29.132 [2024-12-05 19:10:46.616555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:20:29.132 [2024-12-05 19:10:46.616563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:20:29.132 
[2024-12-05 19:10:46.616572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:20:29.132 [2024-12-05 19:10:46.616580] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:20:29.132 [2024-12-05 19:10:46.616592] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:20:29.132 [2024-12-05 19:10:46.616600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:20:29.132 [2024-12-05 19:10:46.616609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:20:29.132 [2024-12-05 19:10:46.616617] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:20:29.132 [2024-12-05 19:10:46.616627] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:20:29.132 [2024-12-05 19:10:46.616635] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:20:29.132 [2024-12-05 19:10:46.616644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:20:29.132 [2024-12-05 19:10:46.616652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:20:29.132 [2024-12-05 19:10:46.616661] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:20:29.132 [2024-12-05 19:10:46.616668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:20:29.132 [2024-12-05 19:10:46.616678] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:20:29.132 [2024-12-05 19:10:46.616685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:20:29.132 [2024-12-05 19:10:46.616696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:20:29.132 [2024-12-05 19:10:46.616704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:20:29.132 [2024-12-05 19:10:46.616714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:20:29.132 [2024-12-05 19:10:46.616721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:20:29.132 [2024-12-05 19:10:46.616734] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:20:29.132 [2024-12-05 19:10:46.616742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:20:29.132 [2024-12-05 19:10:46.616751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:20:29.132 [2024-12-05 19:10:46.616759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:20:29.132 [2024-12-05 19:10:46.616768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:20:29.132 [2024-12-05 19:10:46.616776] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:20:29.132 [2024-12-05 19:10:46.616786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 
state: free 00:20:29.132 [2024-12-05 19:10:46.616794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:20:29.132 [2024-12-05 19:10:46.616803] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:20:29.132 [2024-12-05 19:10:46.616812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:20:29.132 [2024-12-05 19:10:46.616822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:20:29.132 [2024-12-05 19:10:46.616832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:20:29.132 [2024-12-05 19:10:46.616842] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:20:29.132 [2024-12-05 19:10:46.616850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:20:29.132 [2024-12-05 19:10:46.616859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:20:29.132 [2024-12-05 19:10:46.616867] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:20:29.132 [2024-12-05 19:10:46.616879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:20:29.132 [2024-12-05 19:10:46.616886] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:20:29.132 [2024-12-05 19:10:46.616896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:20:29.132 [2024-12-05 19:10:46.616903] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:20:29.132 [2024-12-05 19:10:46.616913] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:20:29.132 [2024-12-05 19:10:46.616921] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:20:29.132 [2024-12-05 19:10:46.616932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:20:29.132 [2024-12-05 19:10:46.616939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:20:29.132 [2024-12-05 19:10:46.616949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:20:29.133 [2024-12-05 19:10:46.616956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:20:29.133 [2024-12-05 19:10:46.616966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:20:29.133 [2024-12-05 19:10:46.616973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:20:29.133 [2024-12-05 19:10:46.616983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:20:29.133 [2024-12-05 19:10:46.616990] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:20:29.133 [2024-12-05 19:10:46.616999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:20:29.133 [2024-12-05 19:10:46.617006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 
0 / 261120 wr_cnt: 0 state: free 00:20:29.133 [2024-12-05 19:10:46.617018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:20:29.133 [2024-12-05 19:10:46.617026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:20:29.133 [2024-12-05 19:10:46.617035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:20:29.133 [2024-12-05 19:10:46.617043] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:20:29.133 [2024-12-05 19:10:46.617052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:20:29.133 [2024-12-05 19:10:46.617060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:20:29.133 [2024-12-05 19:10:46.617069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:20:29.133 [2024-12-05 19:10:46.617077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:20:29.133 [2024-12-05 19:10:46.617087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:20:29.133 [2024-12-05 19:10:46.617094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:20:29.133 [2024-12-05 19:10:46.617104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:20:29.133 [2024-12-05 19:10:46.617112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:20:29.133 [2024-12-05 19:10:46.617122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:20:29.133 [2024-12-05 19:10:46.617130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:20:29.133 [2024-12-05 19:10:46.617139] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:20:29.133 [2024-12-05 19:10:46.617147] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:20:29.133 [2024-12-05 19:10:46.617169] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:20:29.133 [2024-12-05 19:10:46.617177] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 1f0d4c67-c2eb-4ccd-bb1a-4ce91bbc32a5 00:20:29.133 [2024-12-05 19:10:46.617187] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:20:29.133 [2024-12-05 19:10:46.617194] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:20:29.133 [2024-12-05 19:10:46.617204] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:20:29.133 [2024-12-05 19:10:46.617212] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:20:29.133 [2024-12-05 19:10:46.617222] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:20:29.133 [2024-12-05 19:10:46.617232] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:20:29.133 [2024-12-05 19:10:46.617241] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:20:29.133 [2024-12-05 19:10:46.617261] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:20:29.133 [2024-12-05 19:10:46.617270] ftl_debug.c: 220:ftl_dev_dump_stats: 
*NOTICE*: [FTL][ftl0] start: 0 00:20:29.133 [2024-12-05 19:10:46.617278] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.133 [2024-12-05 19:10:46.617288] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:20:29.133 [2024-12-05 19:10:46.617296] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.057 ms 00:20:29.133 [2024-12-05 19:10:46.617306] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.133 [2024-12-05 19:10:46.619706] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.133 [2024-12-05 19:10:46.619749] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:20:29.133 [2024-12-05 19:10:46.619760] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.379 ms 00:20:29.133 [2024-12-05 19:10:46.619774] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.133 [2024-12-05 19:10:46.619937] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.133 [2024-12-05 19:10:46.619951] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:20:29.133 [2024-12-05 19:10:46.619961] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.100 ms 00:20:29.133 [2024-12-05 19:10:46.619970] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.133 [2024-12-05 19:10:46.628195] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:29.133 [2024-12-05 19:10:46.628272] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:29.133 [2024-12-05 19:10:46.628286] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:29.133 [2024-12-05 19:10:46.628296] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.133 [2024-12-05 19:10:46.628364] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:29.133 [2024-12-05 19:10:46.628376] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:29.133 [2024-12-05 19:10:46.628385] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:29.133 [2024-12-05 19:10:46.628395] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.133 [2024-12-05 19:10:46.628478] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:29.133 [2024-12-05 19:10:46.628495] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:29.133 [2024-12-05 19:10:46.628503] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:29.133 [2024-12-05 19:10:46.628521] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.133 [2024-12-05 19:10:46.628539] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:29.133 [2024-12-05 19:10:46.628550] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:29.133 [2024-12-05 19:10:46.628557] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:29.133 [2024-12-05 19:10:46.628567] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.133 [2024-12-05 19:10:46.643121] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:29.133 [2024-12-05 19:10:46.643184] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:29.133 [2024-12-05 19:10:46.643196] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:29.133 
[2024-12-05 19:10:46.643209] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.133 [2024-12-05 19:10:46.654547] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:29.133 [2024-12-05 19:10:46.654611] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:29.133 [2024-12-05 19:10:46.654622] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:29.133 [2024-12-05 19:10:46.654632] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.133 [2024-12-05 19:10:46.654710] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:29.133 [2024-12-05 19:10:46.654727] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:29.133 [2024-12-05 19:10:46.654735] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:29.133 [2024-12-05 19:10:46.654746] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.133 [2024-12-05 19:10:46.654803] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:29.133 [2024-12-05 19:10:46.654815] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:29.133 [2024-12-05 19:10:46.654823] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:29.134 [2024-12-05 19:10:46.654833] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.134 [2024-12-05 19:10:46.654916] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:29.134 [2024-12-05 19:10:46.654928] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:29.134 [2024-12-05 19:10:46.654937] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:29.134 [2024-12-05 19:10:46.654946] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.134 [2024-12-05 19:10:46.654982] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:29.134 [2024-12-05 19:10:46.654996] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:20:29.134 [2024-12-05 19:10:46.655004] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:29.134 [2024-12-05 19:10:46.655014] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.134 [2024-12-05 19:10:46.655057] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:29.134 [2024-12-05 19:10:46.655070] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:29.134 [2024-12-05 19:10:46.655078] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:29.134 [2024-12-05 19:10:46.655088] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.134 [2024-12-05 19:10:46.655139] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:29.134 [2024-12-05 19:10:46.655152] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:29.134 [2024-12-05 19:10:46.655161] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:29.134 [2024-12-05 19:10:46.655172] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.134 [2024-12-05 19:10:46.655349] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 70.577 ms, result 0 00:20:29.134 true 00:20:29.134 19:10:46 ftl.ftl_restore -- ftl/restore.sh@66 -- # killprocess 88109 00:20:29.134 
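The 'FTL shutdown' management process above completed in 70.577 ms with result 0, and the harness then tears down the SPDK application with the killprocess helper; its traced execution follows below. As a rough sketch only, reconstructed from those xtrace lines (the real helper lives in common/autotest_common.sh and may differ in detail, e.g. on non-Linux hosts), the flow is approximately:

    # Sketch of a killprocess-style helper, inferred from the trace below;
    # not the verbatim SPDK source.
    killprocess() {
        local pid=$1
        [ -z "$pid" ] && return 1                # require a pid argument
        kill -0 "$pid" || return 1               # bail out if the process is already gone
        if [ "$(uname)" = Linux ]; then
            # resolve the process name, e.g. reactor_0 for an SPDK app
            process_name=$(ps --no-headers -o comm= "$pid")
        fi
        [ "$process_name" = sudo ] && return 1   # never kill a sudo wrapper directly
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" || true                      # reap the child; tolerate a nonzero exit
    }

Each step of the sketch corresponds to a traced line that follows: the '[ -z ... ]' argument check, 'kill -0', the 'uname' / 'Linux = Linux' branch, the 'ps --no-headers -o comm=' lookup yielding reactor_0, the sudo guard, and the final kill/wait pair.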
19:10:46 ftl.ftl_restore -- common/autotest_common.sh@954 -- # '[' -z 88109 ']' 00:20:29.134 19:10:46 ftl.ftl_restore -- common/autotest_common.sh@958 -- # kill -0 88109 00:20:29.134 19:10:46 ftl.ftl_restore -- common/autotest_common.sh@959 -- # uname 00:20:29.395 19:10:46 ftl.ftl_restore -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:29.395 19:10:46 ftl.ftl_restore -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 88109 00:20:29.395 killing process with pid 88109 00:20:29.395 19:10:46 ftl.ftl_restore -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:29.395 19:10:46 ftl.ftl_restore -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:29.395 19:10:46 ftl.ftl_restore -- common/autotest_common.sh@972 -- # echo 'killing process with pid 88109' 00:20:29.395 19:10:46 ftl.ftl_restore -- common/autotest_common.sh@973 -- # kill 88109 00:20:29.395 19:10:46 ftl.ftl_restore -- common/autotest_common.sh@978 -- # wait 88109 00:20:35.983 19:10:53 ftl.ftl_restore -- ftl/restore.sh@69 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile bs=4K count=256K 00:20:40.213 262144+0 records in 00:20:40.213 262144+0 records out 00:20:40.213 1073741824 bytes (1.1 GB, 1.0 GiB) copied, 4.28222 s, 251 MB/s 00:20:40.213 19:10:57 ftl.ftl_restore -- ftl/restore.sh@70 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:20:42.123 19:10:59 ftl.ftl_restore -- ftl/restore.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:20:42.123 [2024-12-05 19:10:59.608894] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 22.11.4 initialization... 00:20:42.123 [2024-12-05 19:10:59.609007] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88325 ] 00:20:42.392 [2024-12-05 19:10:59.756470] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:42.392 [2024-12-05 19:10:59.788289] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:42.392 [2024-12-05 19:10:59.902521] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:42.392 [2024-12-05 19:10:59.902599] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:42.654 [2024-12-05 19:11:00.063212] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:42.654 [2024-12-05 19:11:00.063296] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:20:42.654 [2024-12-05 19:11:00.063313] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:20:42.654 [2024-12-05 19:11:00.063322] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.654 [2024-12-05 19:11:00.063385] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:42.654 [2024-12-05 19:11:00.063395] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:42.654 [2024-12-05 19:11:00.063404] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.042 ms 00:20:42.654 [2024-12-05 19:11:00.063419] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.654 [2024-12-05 19:11:00.063475] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 
as write buffer cache 00:20:42.654 [2024-12-05 19:11:00.063751] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:20:42.654 [2024-12-05 19:11:00.063773] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:42.654 [2024-12-05 19:11:00.063785] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:42.654 [2024-12-05 19:11:00.063797] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.309 ms 00:20:42.654 [2024-12-05 19:11:00.063809] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.654 [2024-12-05 19:11:00.065623] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:20:42.654 [2024-12-05 19:11:00.069375] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:42.654 [2024-12-05 19:11:00.069437] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:20:42.654 [2024-12-05 19:11:00.069450] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.755 ms 00:20:42.654 [2024-12-05 19:11:00.069464] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.654 [2024-12-05 19:11:00.069571] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:42.654 [2024-12-05 19:11:00.069590] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:20:42.654 [2024-12-05 19:11:00.069608] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:20:42.654 [2024-12-05 19:11:00.069626] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.654 [2024-12-05 19:11:00.078726] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:42.654 [2024-12-05 19:11:00.078953] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:42.654 [2024-12-05 19:11:00.078981] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.035 ms 00:20:42.654 [2024-12-05 19:11:00.078989] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.654 [2024-12-05 19:11:00.079095] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:42.654 [2024-12-05 19:11:00.079108] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:42.654 [2024-12-05 19:11:00.079118] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.078 ms 00:20:42.654 [2024-12-05 19:11:00.079125] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.654 [2024-12-05 19:11:00.079203] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:42.654 [2024-12-05 19:11:00.079214] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:20:42.654 [2024-12-05 19:11:00.079226] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:20:42.654 [2024-12-05 19:11:00.079235] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.654 [2024-12-05 19:11:00.079296] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:42.654 [2024-12-05 19:11:00.081280] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:42.654 [2024-12-05 19:11:00.081316] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:42.654 [2024-12-05 19:11:00.081326] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.990 ms 00:20:42.654 [2024-12-05 19:11:00.081334] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.654 [2024-12-05 19:11:00.081379] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:42.654 [2024-12-05 19:11:00.081389] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:20:42.654 [2024-12-05 19:11:00.081398] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:20:42.654 [2024-12-05 19:11:00.081409] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.654 [2024-12-05 19:11:00.081430] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:20:42.654 [2024-12-05 19:11:00.081455] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:20:42.654 [2024-12-05 19:11:00.081496] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:20:42.654 [2024-12-05 19:11:00.081512] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:20:42.654 [2024-12-05 19:11:00.081669] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:20:42.654 [2024-12-05 19:11:00.081687] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:20:42.654 [2024-12-05 19:11:00.081707] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:20:42.654 [2024-12-05 19:11:00.081723] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:20:42.654 [2024-12-05 19:11:00.081737] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:20:42.654 [2024-12-05 19:11:00.081750] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:20:42.654 [2024-12-05 19:11:00.081761] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:20:42.654 [2024-12-05 19:11:00.081773] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:20:42.654 [2024-12-05 19:11:00.081786] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:20:42.654 [2024-12-05 19:11:00.081799] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:42.654 [2024-12-05 19:11:00.081811] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:20:42.654 [2024-12-05 19:11:00.081823] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.374 ms 00:20:42.654 [2024-12-05 19:11:00.081837] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.654 [2024-12-05 19:11:00.081936] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:42.654 [2024-12-05 19:11:00.081948] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:20:42.654 [2024-12-05 19:11:00.081957] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.073 ms 00:20:42.654 [2024-12-05 19:11:00.081970] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.654 [2024-12-05 19:11:00.082074] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:20:42.654 [2024-12-05 19:11:00.082087] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:20:42.654 [2024-12-05 19:11:00.082097] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:42.654 
[2024-12-05 19:11:00.082106] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:42.654 [2024-12-05 19:11:00.082116] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:20:42.654 [2024-12-05 19:11:00.082124] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:20:42.654 [2024-12-05 19:11:00.082132] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:20:42.654 [2024-12-05 19:11:00.082141] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:20:42.654 [2024-12-05 19:11:00.082150] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:20:42.654 [2024-12-05 19:11:00.082158] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:42.654 [2024-12-05 19:11:00.082166] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:20:42.654 [2024-12-05 19:11:00.082177] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:20:42.654 [2024-12-05 19:11:00.082184] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:42.654 [2024-12-05 19:11:00.082193] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:20:42.654 [2024-12-05 19:11:00.082201] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:20:42.654 [2024-12-05 19:11:00.082211] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:42.654 [2024-12-05 19:11:00.082219] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:20:42.654 [2024-12-05 19:11:00.082227] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:20:42.654 [2024-12-05 19:11:00.082234] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:42.654 [2024-12-05 19:11:00.082242] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:20:42.654 [2024-12-05 19:11:00.082471] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:20:42.654 [2024-12-05 19:11:00.082510] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:42.654 [2024-12-05 19:11:00.082531] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:20:42.654 [2024-12-05 19:11:00.082551] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:20:42.654 [2024-12-05 19:11:00.082570] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:42.654 [2024-12-05 19:11:00.082589] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:20:42.654 [2024-12-05 19:11:00.082607] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:20:42.654 [2024-12-05 19:11:00.082636] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:42.654 [2024-12-05 19:11:00.082654] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:20:42.654 [2024-12-05 19:11:00.082672] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:20:42.654 [2024-12-05 19:11:00.082691] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:42.654 [2024-12-05 19:11:00.082709] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:20:42.654 [2024-12-05 19:11:00.082728] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:20:42.655 [2024-12-05 19:11:00.082812] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:42.655 [2024-12-05 19:11:00.082834] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region 
trim_md_mirror 00:20:42.655 [2024-12-05 19:11:00.082854] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:20:42.655 [2024-12-05 19:11:00.082872] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:42.655 [2024-12-05 19:11:00.082890] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:20:42.655 [2024-12-05 19:11:00.082908] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:20:42.655 [2024-12-05 19:11:00.082926] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:42.655 [2024-12-05 19:11:00.082945] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:20:42.655 [2024-12-05 19:11:00.082962] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:20:42.655 [2024-12-05 19:11:00.083023] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:42.655 [2024-12-05 19:11:00.083050] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:20:42.655 [2024-12-05 19:11:00.083073] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:20:42.655 [2024-12-05 19:11:00.083093] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:42.655 [2024-12-05 19:11:00.083112] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:42.655 [2024-12-05 19:11:00.083133] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:20:42.655 [2024-12-05 19:11:00.083152] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:20:42.655 [2024-12-05 19:11:00.083211] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:20:42.655 [2024-12-05 19:11:00.083234] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:20:42.655 [2024-12-05 19:11:00.083268] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:20:42.655 [2024-12-05 19:11:00.083289] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:20:42.655 [2024-12-05 19:11:00.083310] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:20:42.655 [2024-12-05 19:11:00.083342] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:42.655 [2024-12-05 19:11:00.083484] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:20:42.655 [2024-12-05 19:11:00.083513] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:20:42.655 [2024-12-05 19:11:00.083542] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:20:42.655 [2024-12-05 19:11:00.083610] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:20:42.655 [2024-12-05 19:11:00.083644] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:20:42.655 [2024-12-05 19:11:00.083674] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:20:42.655 [2024-12-05 19:11:00.083813] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd 
ver:2 blk_offs:0x6920 blk_sz:0x800 00:20:42.655 [2024-12-05 19:11:00.083823] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:20:42.655 [2024-12-05 19:11:00.083831] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:20:42.655 [2024-12-05 19:11:00.083848] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:20:42.655 [2024-12-05 19:11:00.083855] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:20:42.655 [2024-12-05 19:11:00.083863] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:20:42.655 [2024-12-05 19:11:00.083871] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:20:42.655 [2024-12-05 19:11:00.083878] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:20:42.655 [2024-12-05 19:11:00.083886] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:20:42.655 [2024-12-05 19:11:00.083895] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:42.655 [2024-12-05 19:11:00.083907] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:20:42.655 [2024-12-05 19:11:00.083915] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:20:42.655 [2024-12-05 19:11:00.083923] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:20:42.655 [2024-12-05 19:11:00.083930] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:20:42.655 [2024-12-05 19:11:00.083942] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:42.655 [2024-12-05 19:11:00.083954] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:20:42.655 [2024-12-05 19:11:00.083966] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.933 ms 00:20:42.655 [2024-12-05 19:11:00.083976] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.655 [2024-12-05 19:11:00.097844] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:42.655 [2024-12-05 19:11:00.097906] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:42.655 [2024-12-05 19:11:00.097918] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.785 ms 00:20:42.655 [2024-12-05 19:11:00.097927] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.655 [2024-12-05 19:11:00.098018] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:42.655 [2024-12-05 19:11:00.098026] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:20:42.655 [2024-12-05 19:11:00.098035] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 
00:20:42.655 [2024-12-05 19:11:00.098049] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.655 [2024-12-05 19:11:00.117358] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:42.655 [2024-12-05 19:11:00.117403] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:42.655 [2024-12-05 19:11:00.117416] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.246 ms 00:20:42.655 [2024-12-05 19:11:00.117426] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.655 [2024-12-05 19:11:00.117470] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:42.655 [2024-12-05 19:11:00.117481] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:42.655 [2024-12-05 19:11:00.117491] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:20:42.655 [2024-12-05 19:11:00.117500] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.655 [2024-12-05 19:11:00.117911] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:42.655 [2024-12-05 19:11:00.117930] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:42.655 [2024-12-05 19:11:00.117941] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.323 ms 00:20:42.655 [2024-12-05 19:11:00.117957] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.655 [2024-12-05 19:11:00.118102] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:42.655 [2024-12-05 19:11:00.118114] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:42.655 [2024-12-05 19:11:00.118124] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.121 ms 00:20:42.655 [2024-12-05 19:11:00.118134] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.655 [2024-12-05 19:11:00.123590] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:42.655 [2024-12-05 19:11:00.123748] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:42.655 [2024-12-05 19:11:00.123765] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.434 ms 00:20:42.655 [2024-12-05 19:11:00.123773] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.655 [2024-12-05 19:11:00.126280] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:20:42.655 [2024-12-05 19:11:00.126315] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:20:42.655 [2024-12-05 19:11:00.126327] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:42.655 [2024-12-05 19:11:00.126334] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:20:42.655 [2024-12-05 19:11:00.126342] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.470 ms 00:20:42.655 [2024-12-05 19:11:00.126350] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.655 [2024-12-05 19:11:00.141111] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:42.655 [2024-12-05 19:11:00.141162] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:20:42.655 [2024-12-05 19:11:00.141176] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.722 ms 00:20:42.655 [2024-12-05 19:11:00.141183] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:20:42.655 [2024-12-05 19:11:00.143421] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:42.655 [2024-12-05 19:11:00.143455] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:20:42.655 [2024-12-05 19:11:00.143464] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.170 ms 00:20:42.655 [2024-12-05 19:11:00.143472] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.655 [2024-12-05 19:11:00.145201] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:42.655 [2024-12-05 19:11:00.145336] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:20:42.655 [2024-12-05 19:11:00.145365] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.695 ms 00:20:42.655 [2024-12-05 19:11:00.145373] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.655 [2024-12-05 19:11:00.145784] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:42.655 [2024-12-05 19:11:00.145809] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:20:42.655 [2024-12-05 19:11:00.145818] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.339 ms 00:20:42.655 [2024-12-05 19:11:00.145830] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.655 [2024-12-05 19:11:00.162019] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:42.655 [2024-12-05 19:11:00.162066] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:20:42.655 [2024-12-05 19:11:00.162078] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.170 ms 00:20:42.655 [2024-12-05 19:11:00.162086] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.655 [2024-12-05 19:11:00.169795] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:20:42.655 [2024-12-05 19:11:00.172081] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:42.655 [2024-12-05 19:11:00.172118] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:20:42.655 [2024-12-05 19:11:00.172129] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.957 ms 00:20:42.655 [2024-12-05 19:11:00.172140] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.655 [2024-12-05 19:11:00.172191] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:42.655 [2024-12-05 19:11:00.172202] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:20:42.655 [2024-12-05 19:11:00.172211] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:20:42.655 [2024-12-05 19:11:00.172225] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.655 [2024-12-05 19:11:00.172311] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:42.655 [2024-12-05 19:11:00.172327] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:20:42.655 [2024-12-05 19:11:00.172339] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:20:42.655 [2024-12-05 19:11:00.172350] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.655 [2024-12-05 19:11:00.172372] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:42.655 [2024-12-05 19:11:00.172381] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start 
core poller 00:20:42.655 [2024-12-05 19:11:00.172389] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:20:42.655 [2024-12-05 19:11:00.172396] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.655 [2024-12-05 19:11:00.172423] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:20:42.655 [2024-12-05 19:11:00.172434] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:42.655 [2024-12-05 19:11:00.172444] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:20:42.655 [2024-12-05 19:11:00.172452] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:20:42.655 [2024-12-05 19:11:00.172465] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.655 [2024-12-05 19:11:00.176211] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:42.655 [2024-12-05 19:11:00.176248] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:20:42.655 [2024-12-05 19:11:00.176272] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.726 ms 00:20:42.655 [2024-12-05 19:11:00.176280] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.655 [2024-12-05 19:11:00.176352] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:42.655 [2024-12-05 19:11:00.176371] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:20:42.655 [2024-12-05 19:11:00.176379] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:20:42.655 [2024-12-05 19:11:00.176387] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.656 [2024-12-05 19:11:00.177444] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 113.834 ms, result 0 00:20:43.683  [2024-12-05T19:11:02.196Z] Copying: 18/1024 [MB] (18 MBps) [2024-12-05T19:11:03.588Z] Copying: 35/1024 [MB] (16 MBps) [2024-12-05T19:11:04.533Z] Copying: 57/1024 [MB] (21 MBps) [2024-12-05T19:11:05.474Z] Copying: 73/1024 [MB] (16 MBps) [2024-12-05T19:11:06.417Z] Copying: 92/1024 [MB] (18 MBps) [2024-12-05T19:11:07.360Z] Copying: 106/1024 [MB] (13 MBps) [2024-12-05T19:11:08.304Z] Copying: 119352/1048576 [kB] (10144 kBps) [2024-12-05T19:11:09.240Z] Copying: 128/1024 [MB] (11 MBps) [2024-12-05T19:11:10.630Z] Copying: 155/1024 [MB] (26 MBps) [2024-12-05T19:11:11.204Z] Copying: 169/1024 [MB] (14 MBps) [2024-12-05T19:11:12.592Z] Copying: 185/1024 [MB] (16 MBps) [2024-12-05T19:11:13.538Z] Copying: 195/1024 [MB] (10 MBps) [2024-12-05T19:11:14.487Z] Copying: 214/1024 [MB] (19 MBps) [2024-12-05T19:11:15.426Z] Copying: 226/1024 [MB] (11 MBps) [2024-12-05T19:11:16.357Z] Copying: 241856/1048576 [kB] (10192 kBps) [2024-12-05T19:11:17.289Z] Copying: 258/1024 [MB] (22 MBps) [2024-12-05T19:11:18.220Z] Copying: 285/1024 [MB] (26 MBps) [2024-12-05T19:11:19.593Z] Copying: 314/1024 [MB] (29 MBps) [2024-12-05T19:11:20.526Z] Copying: 344/1024 [MB] (29 MBps) [2024-12-05T19:11:21.471Z] Copying: 395/1024 [MB] (51 MBps) [2024-12-05T19:11:22.413Z] Copying: 427/1024 [MB] (31 MBps) [2024-12-05T19:11:23.349Z] Copying: 442/1024 [MB] (14 MBps) [2024-12-05T19:11:24.283Z] Copying: 485/1024 [MB] (43 MBps) [2024-12-05T19:11:25.224Z] Copying: 537/1024 [MB] (51 MBps) [2024-12-05T19:11:26.589Z] Copying: 588/1024 [MB] (51 MBps) [2024-12-05T19:11:27.529Z] Copying: 640/1024 [MB] (51 MBps) [2024-12-05T19:11:28.475Z] Copying: 672/1024 [MB] (32 MBps) 
[2024-12-05T19:11:29.419Z] Copying: 686/1024 [MB] (13 MBps) [2024-12-05T19:11:30.358Z] Copying: 703/1024 [MB] (17 MBps) [2024-12-05T19:11:31.299Z] Copying: 736/1024 [MB] (33 MBps) [2024-12-05T19:11:32.255Z] Copying: 754/1024 [MB] (18 MBps) [2024-12-05T19:11:33.258Z] Copying: 771/1024 [MB] (16 MBps) [2024-12-05T19:11:34.200Z] Copying: 784/1024 [MB] (13 MBps) [2024-12-05T19:11:35.588Z] Copying: 801/1024 [MB] (17 MBps) [2024-12-05T19:11:36.531Z] Copying: 821/1024 [MB] (19 MBps) [2024-12-05T19:11:37.473Z] Copying: 838/1024 [MB] (16 MBps) [2024-12-05T19:11:38.403Z] Copying: 849/1024 [MB] (11 MBps) [2024-12-05T19:11:39.333Z] Copying: 885/1024 [MB] (35 MBps) [2024-12-05T19:11:40.263Z] Copying: 937/1024 [MB] (52 MBps) [2024-12-05T19:11:41.198Z] Copying: 990/1024 [MB] (52 MBps) [2024-12-05T19:11:41.198Z] Copying: 1024/1024 [MB] (average 25 MBps)[2024-12-05 19:11:40.833831] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:23.639 [2024-12-05 19:11:40.833868] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:21:23.639 [2024-12-05 19:11:40.833879] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:21:23.639 [2024-12-05 19:11:40.833894] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:23.639 [2024-12-05 19:11:40.833915] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:21:23.639 [2024-12-05 19:11:40.834339] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:23.639 [2024-12-05 19:11:40.834353] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:21:23.639 [2024-12-05 19:11:40.834362] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.409 ms 00:21:23.639 [2024-12-05 19:11:40.834368] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:23.639 [2024-12-05 19:11:40.835662] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:23.639 [2024-12-05 19:11:40.835691] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:21:23.639 [2024-12-05 19:11:40.835699] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.280 ms 00:21:23.639 [2024-12-05 19:11:40.835705] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:23.639 [2024-12-05 19:11:40.846376] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:23.639 [2024-12-05 19:11:40.846497] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:21:23.639 [2024-12-05 19:11:40.846511] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.656 ms 00:21:23.639 [2024-12-05 19:11:40.846517] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:23.639 [2024-12-05 19:11:40.851396] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:23.639 [2024-12-05 19:11:40.851421] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:21:23.639 [2024-12-05 19:11:40.851429] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.848 ms 00:21:23.639 [2024-12-05 19:11:40.851435] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:23.639 [2024-12-05 19:11:40.852392] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:23.639 [2024-12-05 19:11:40.852419] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:21:23.639 [2024-12-05 19:11:40.852426] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.926 ms 00:21:23.639 [2024-12-05 19:11:40.852432] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:23.639 [2024-12-05 19:11:40.855882] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:23.639 [2024-12-05 19:11:40.855911] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:21:23.639 [2024-12-05 19:11:40.855924] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.425 ms 00:21:23.639 [2024-12-05 19:11:40.855930] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:23.639 [2024-12-05 19:11:40.856017] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:23.639 [2024-12-05 19:11:40.856024] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:21:23.639 [2024-12-05 19:11:40.856030] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:21:23.639 [2024-12-05 19:11:40.856036] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:23.639 [2024-12-05 19:11:40.857615] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:23.639 [2024-12-05 19:11:40.857721] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:21:23.639 [2024-12-05 19:11:40.857733] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.566 ms 00:21:23.639 [2024-12-05 19:11:40.857738] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:23.639 [2024-12-05 19:11:40.859014] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:23.639 [2024-12-05 19:11:40.859043] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:21:23.639 [2024-12-05 19:11:40.859050] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.253 ms 00:21:23.639 [2024-12-05 19:11:40.859055] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:23.639 [2024-12-05 19:11:40.860002] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:23.639 [2024-12-05 19:11:40.860029] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:21:23.639 [2024-12-05 19:11:40.860036] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.924 ms 00:21:23.639 [2024-12-05 19:11:40.860041] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:23.639 [2024-12-05 19:11:40.861005] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:23.639 [2024-12-05 19:11:40.861100] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:21:23.639 [2024-12-05 19:11:40.861111] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.923 ms 00:21:23.639 [2024-12-05 19:11:40.861116] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:23.639 [2024-12-05 19:11:40.861138] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:21:23.639 [2024-12-05 19:11:40.861154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:21:23.639 [2024-12-05 19:11:40.861161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:21:23.639 [2024-12-05 19:11:40.861167] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:21:23.639 [2024-12-05 19:11:40.861173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:21:23.639 
[2024-12-05 19:11:40.861179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:21:23.639 [2024-12-05 19:11:40.861185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:21:23.639 [2024-12-05 19:11:40.861190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:21:23.639 [2024-12-05 19:11:40.861196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:21:23.639 [2024-12-05 19:11:40.861202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:21:23.639 [2024-12-05 19:11:40.861207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:21:23.639 [2024-12-05 19:11:40.861213] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:21:23.639 [2024-12-05 19:11:40.861219] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:21:23.639 [2024-12-05 19:11:40.861224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:21:23.639 [2024-12-05 19:11:40.861229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:21:23.639 [2024-12-05 19:11:40.861235] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:21:23.639 [2024-12-05 19:11:40.861244] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:21:23.639 [2024-12-05 19:11:40.861263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:21:23.639 [2024-12-05 19:11:40.861269] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:21:23.639 [2024-12-05 19:11:40.861275] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:21:23.639 [2024-12-05 19:11:40.861280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:21:23.639 [2024-12-05 19:11:40.861286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:21:23.639 [2024-12-05 19:11:40.861292] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:21:23.639 [2024-12-05 19:11:40.861298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:21:23.639 [2024-12-05 19:11:40.861303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:21:23.639 [2024-12-05 19:11:40.861309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:21:23.639 [2024-12-05 19:11:40.861315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:21:23.639 [2024-12-05 19:11:40.861321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:21:23.639 [2024-12-05 19:11:40.861328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:21:23.639 [2024-12-05 19:11:40.861333] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: 
free 00:21:23.640 [2024-12-05 19:11:40.861339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:21:23.640 [2024-12-05 19:11:40.861345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:21:23.640 [2024-12-05 19:11:40.861350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:21:23.640 [2024-12-05 19:11:40.861356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:21:23.640 [2024-12-05 19:11:40.861362] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:21:23.640 [2024-12-05 19:11:40.861368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:21:23.640 [2024-12-05 19:11:40.861373] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:21:23.640 [2024-12-05 19:11:40.861379] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:21:23.640 [2024-12-05 19:11:40.861384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:21:23.640 [2024-12-05 19:11:40.861390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:21:23.640 [2024-12-05 19:11:40.861395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:21:23.640 [2024-12-05 19:11:40.861401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:21:23.640 [2024-12-05 19:11:40.861407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:21:23.640 [2024-12-05 19:11:40.861412] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:21:23.640 [2024-12-05 19:11:40.861423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:21:23.640 [2024-12-05 19:11:40.861429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:21:23.640 [2024-12-05 19:11:40.861434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:21:23.640 [2024-12-05 19:11:40.861440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:21:23.640 [2024-12-05 19:11:40.861445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:21:23.640 [2024-12-05 19:11:40.861451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:21:23.640 [2024-12-05 19:11:40.861457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:21:23.640 [2024-12-05 19:11:40.861462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:21:23.640 [2024-12-05 19:11:40.861468] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:21:23.640 [2024-12-05 19:11:40.861474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:21:23.640 [2024-12-05 19:11:40.861480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 
261120 wr_cnt: 0 state: free 00:21:23.640 [2024-12-05 19:11:40.861486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:21:23.640 [2024-12-05 19:11:40.861491] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:21:23.640 [2024-12-05 19:11:40.861498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:21:23.640 [2024-12-05 19:11:40.861504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:21:23.640 [2024-12-05 19:11:40.861509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:21:23.640 [2024-12-05 19:11:40.861515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:21:23.640 [2024-12-05 19:11:40.861520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:21:23.640 [2024-12-05 19:11:40.861526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:21:23.640 [2024-12-05 19:11:40.861540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:21:23.640 [2024-12-05 19:11:40.861546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:21:23.640 [2024-12-05 19:11:40.861553] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:21:23.640 [2024-12-05 19:11:40.861559] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:21:23.640 [2024-12-05 19:11:40.861564] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:21:23.640 [2024-12-05 19:11:40.861570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:21:23.640 [2024-12-05 19:11:40.861576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:21:23.640 [2024-12-05 19:11:40.861581] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:21:23.640 [2024-12-05 19:11:40.861587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:21:23.640 [2024-12-05 19:11:40.861592] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:21:23.640 [2024-12-05 19:11:40.861599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:21:23.640 [2024-12-05 19:11:40.861604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:21:23.640 [2024-12-05 19:11:40.861610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:21:23.640 [2024-12-05 19:11:40.861615] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:21:23.640 [2024-12-05 19:11:40.861621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:21:23.640 [2024-12-05 19:11:40.861626] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:21:23.640 [2024-12-05 19:11:40.861632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:21:23.640 [2024-12-05 19:11:40.861638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:21:23.640 [2024-12-05 19:11:40.861643] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:21:23.640 [2024-12-05 19:11:40.861649] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:21:23.640 [2024-12-05 19:11:40.861655] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:21:23.640 [2024-12-05 19:11:40.861660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:21:23.640 [2024-12-05 19:11:40.861666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:21:23.640 [2024-12-05 19:11:40.861672] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:21:23.640 [2024-12-05 19:11:40.861678] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:21:23.640 [2024-12-05 19:11:40.861684] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:21:23.640 [2024-12-05 19:11:40.861689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:21:23.640 [2024-12-05 19:11:40.861695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:21:23.640 [2024-12-05 19:11:40.861701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:21:23.640 [2024-12-05 19:11:40.861706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:21:23.640 [2024-12-05 19:11:40.861712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:21:23.640 [2024-12-05 19:11:40.861717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:21:23.640 [2024-12-05 19:11:40.861723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:21:23.640 [2024-12-05 19:11:40.861728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:21:23.640 [2024-12-05 19:11:40.861734] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:21:23.640 [2024-12-05 19:11:40.861740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:21:23.640 [2024-12-05 19:11:40.861745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:21:23.640 [2024-12-05 19:11:40.861751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:21:23.640 [2024-12-05 19:11:40.861763] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:21:23.640 [2024-12-05 19:11:40.861768] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 1f0d4c67-c2eb-4ccd-bb1a-4ce91bbc32a5 00:21:23.640 [2024-12-05 19:11:40.861774] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:21:23.640 [2024-12-05 19:11:40.861780] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 
00:21:23.640 [2024-12-05 19:11:40.861785] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:21:23.640 [2024-12-05 19:11:40.861790] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:21:23.640 [2024-12-05 19:11:40.861796] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:21:23.640 [2024-12-05 19:11:40.861801] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:21:23.640 [2024-12-05 19:11:40.861807] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:21:23.640 [2024-12-05 19:11:40.861812] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:21:23.640 [2024-12-05 19:11:40.861817] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:21:23.640 [2024-12-05 19:11:40.861822] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:23.640 [2024-12-05 19:11:40.861831] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:21:23.640 [2024-12-05 19:11:40.861840] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.685 ms 00:21:23.640 [2024-12-05 19:11:40.861845] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:23.640 [2024-12-05 19:11:40.863102] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:23.640 [2024-12-05 19:11:40.863119] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:21:23.640 [2024-12-05 19:11:40.863127] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.245 ms 00:21:23.640 [2024-12-05 19:11:40.863138] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:23.641 [2024-12-05 19:11:40.863205] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:23.641 [2024-12-05 19:11:40.863212] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:21:23.641 [2024-12-05 19:11:40.863218] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:21:23.641 [2024-12-05 19:11:40.863223] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:23.641 [2024-12-05 19:11:40.867563] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:23.641 [2024-12-05 19:11:40.867665] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:23.641 [2024-12-05 19:11:40.867714] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:23.641 [2024-12-05 19:11:40.867731] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:23.641 [2024-12-05 19:11:40.867791] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:23.641 [2024-12-05 19:11:40.867856] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:23.641 [2024-12-05 19:11:40.867875] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:23.641 [2024-12-05 19:11:40.867889] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:23.641 [2024-12-05 19:11:40.867933] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:23.641 [2024-12-05 19:11:40.867990] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:23.641 [2024-12-05 19:11:40.868008] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:23.641 [2024-12-05 19:11:40.868022] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:23.641 [2024-12-05 19:11:40.868043] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:23.641 [2024-12-05 19:11:40.868075] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:23.641 [2024-12-05 19:11:40.868109] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:23.641 [2024-12-05 19:11:40.868126] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:23.641 [2024-12-05 19:11:40.875791] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:23.641 [2024-12-05 19:11:40.875911] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:23.641 [2024-12-05 19:11:40.875953] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:23.641 [2024-12-05 19:11:40.875970] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:23.641 [2024-12-05 19:11:40.882078] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:23.641 [2024-12-05 19:11:40.882194] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:23.641 [2024-12-05 19:11:40.882236] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:23.641 [2024-12-05 19:11:40.882277] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:23.641 [2024-12-05 19:11:40.882321] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:23.641 [2024-12-05 19:11:40.882408] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:23.641 [2024-12-05 19:11:40.882427] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:23.641 [2024-12-05 19:11:40.882442] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:23.641 [2024-12-05 19:11:40.882475] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:23.641 [2024-12-05 19:11:40.882491] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:23.641 [2024-12-05 19:11:40.882508] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:23.641 [2024-12-05 19:11:40.882522] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:23.641 [2024-12-05 19:11:40.882584] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:23.641 [2024-12-05 19:11:40.882692] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:23.641 [2024-12-05 19:11:40.882710] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:23.641 [2024-12-05 19:11:40.882725] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:23.641 [2024-12-05 19:11:40.882764] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:23.641 [2024-12-05 19:11:40.882833] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:21:23.641 [2024-12-05 19:11:40.882851] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:23.641 [2024-12-05 19:11:40.882869] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:23.641 [2024-12-05 19:11:40.882906] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:23.641 [2024-12-05 19:11:40.882951] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:23.641 [2024-12-05 19:11:40.882968] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:23.641 [2024-12-05 19:11:40.882982] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:21:23.641 [2024-12-05 19:11:40.883023] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:23.641 [2024-12-05 19:11:40.883041] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:23.641 [2024-12-05 19:11:40.883081] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:23.641 [2024-12-05 19:11:40.883097] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:23.641 [2024-12-05 19:11:40.883206] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 49.350 ms, result 0 00:21:23.900 00:21:23.900 00:21:23.900 19:11:41 ftl.ftl_restore -- ftl/restore.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --count=262144 00:21:23.900 [2024-12-05 19:11:41.305855] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 22.11.4 initialization... 00:21:23.900 [2024-12-05 19:11:41.306323] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88759 ] 00:21:23.900 [2024-12-05 19:11:41.447456] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:24.159 [2024-12-05 19:11:41.470740] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:24.159 [2024-12-05 19:11:41.557694] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:21:24.159 [2024-12-05 19:11:41.557901] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:21:24.159 [2024-12-05 19:11:41.699737] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:24.159 [2024-12-05 19:11:41.699777] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:21:24.159 [2024-12-05 19:11:41.699788] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:21:24.159 [2024-12-05 19:11:41.699794] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:24.159 [2024-12-05 19:11:41.699827] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:24.159 [2024-12-05 19:11:41.699835] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:24.159 [2024-12-05 19:11:41.699846] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:21:24.159 [2024-12-05 19:11:41.699855] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:24.159 [2024-12-05 19:11:41.699871] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:21:24.159 [2024-12-05 19:11:41.700042] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:21:24.159 [2024-12-05 19:11:41.700053] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:24.159 [2024-12-05 19:11:41.700060] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:24.159 [2024-12-05 19:11:41.700068] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.187 ms 00:21:24.159 [2024-12-05 19:11:41.700074] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:24.159 [2024-12-05 19:11:41.701022] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: 
clean 0, shm_clean 0 00:21:24.159 [2024-12-05 19:11:41.703021] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:24.159 [2024-12-05 19:11:41.703051] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:21:24.159 [2024-12-05 19:11:41.703064] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.001 ms 00:21:24.159 [2024-12-05 19:11:41.703073] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:24.159 [2024-12-05 19:11:41.703118] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:24.159 [2024-12-05 19:11:41.703125] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:21:24.159 [2024-12-05 19:11:41.703132] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:21:24.159 [2024-12-05 19:11:41.703139] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:24.159 [2024-12-05 19:11:41.707550] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:24.159 [2024-12-05 19:11:41.707660] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:24.159 [2024-12-05 19:11:41.707680] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.383 ms 00:21:24.159 [2024-12-05 19:11:41.707685] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:24.159 [2024-12-05 19:11:41.707747] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:24.159 [2024-12-05 19:11:41.707755] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:24.159 [2024-12-05 19:11:41.707761] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.048 ms 00:21:24.159 [2024-12-05 19:11:41.707767] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:24.159 [2024-12-05 19:11:41.707811] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:24.159 [2024-12-05 19:11:41.707819] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:21:24.159 [2024-12-05 19:11:41.707825] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:21:24.159 [2024-12-05 19:11:41.707833] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:24.159 [2024-12-05 19:11:41.707850] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:21:24.159 [2024-12-05 19:11:41.709014] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:24.159 [2024-12-05 19:11:41.709040] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:24.160 [2024-12-05 19:11:41.709047] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.167 ms 00:21:24.160 [2024-12-05 19:11:41.709053] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:24.160 [2024-12-05 19:11:41.709079] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:24.160 [2024-12-05 19:11:41.709086] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:21:24.160 [2024-12-05 19:11:41.709092] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:21:24.160 [2024-12-05 19:11:41.709099] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:24.160 [2024-12-05 19:11:41.709115] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:21:24.160 [2024-12-05 19:11:41.709129] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: 
[FTL][ftl0] nvc layout blob load 0x150 bytes 00:21:24.160 [2024-12-05 19:11:41.709160] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:21:24.160 [2024-12-05 19:11:41.709172] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:21:24.160 [2024-12-05 19:11:41.709266] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:21:24.160 [2024-12-05 19:11:41.709276] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:21:24.160 [2024-12-05 19:11:41.709286] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:21:24.160 [2024-12-05 19:11:41.709294] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:21:24.160 [2024-12-05 19:11:41.709300] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:21:24.160 [2024-12-05 19:11:41.709307] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:21:24.160 [2024-12-05 19:11:41.709313] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:21:24.160 [2024-12-05 19:11:41.709319] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:21:24.160 [2024-12-05 19:11:41.709324] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:21:24.160 [2024-12-05 19:11:41.709330] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:24.160 [2024-12-05 19:11:41.709338] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:21:24.160 [2024-12-05 19:11:41.709344] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.218 ms 00:21:24.160 [2024-12-05 19:11:41.709349] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:24.160 [2024-12-05 19:11:41.709415] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:24.160 [2024-12-05 19:11:41.709421] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:21:24.160 [2024-12-05 19:11:41.709427] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:21:24.160 [2024-12-05 19:11:41.709433] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:24.160 [2024-12-05 19:11:41.709509] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:21:24.160 [2024-12-05 19:11:41.709516] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:21:24.160 [2024-12-05 19:11:41.709522] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:24.160 [2024-12-05 19:11:41.709528] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:24.160 [2024-12-05 19:11:41.709549] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:21:24.160 [2024-12-05 19:11:41.709555] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:21:24.160 [2024-12-05 19:11:41.709560] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:21:24.160 [2024-12-05 19:11:41.709565] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:21:24.160 [2024-12-05 19:11:41.709570] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:21:24.160 [2024-12-05 19:11:41.709575] ftl_layout.c: 133:dump_region: 
*NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:24.160 [2024-12-05 19:11:41.709581] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:21:24.160 [2024-12-05 19:11:41.709586] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:21:24.160 [2024-12-05 19:11:41.709594] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:24.160 [2024-12-05 19:11:41.709599] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:21:24.160 [2024-12-05 19:11:41.709605] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:21:24.160 [2024-12-05 19:11:41.709610] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:24.160 [2024-12-05 19:11:41.709615] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:21:24.160 [2024-12-05 19:11:41.709620] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:21:24.160 [2024-12-05 19:11:41.709626] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:24.160 [2024-12-05 19:11:41.709631] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:21:24.160 [2024-12-05 19:11:41.709636] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:21:24.160 [2024-12-05 19:11:41.709641] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:24.160 [2024-12-05 19:11:41.709646] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:21:24.160 [2024-12-05 19:11:41.709651] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:21:24.160 [2024-12-05 19:11:41.709656] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:24.160 [2024-12-05 19:11:41.709662] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:21:24.160 [2024-12-05 19:11:41.709667] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:21:24.160 [2024-12-05 19:11:41.709673] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:24.160 [2024-12-05 19:11:41.709681] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:21:24.160 [2024-12-05 19:11:41.709687] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:21:24.160 [2024-12-05 19:11:41.709693] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:24.160 [2024-12-05 19:11:41.709699] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:21:24.160 [2024-12-05 19:11:41.709704] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:21:24.160 [2024-12-05 19:11:41.709709] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:24.160 [2024-12-05 19:11:41.709715] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:21:24.160 [2024-12-05 19:11:41.709720] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:21:24.160 [2024-12-05 19:11:41.709726] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:24.160 [2024-12-05 19:11:41.709731] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:21:24.160 [2024-12-05 19:11:41.709737] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:21:24.160 [2024-12-05 19:11:41.709742] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:24.160 [2024-12-05 19:11:41.709748] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:21:24.160 [2024-12-05 
19:11:41.709753] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:21:24.160 [2024-12-05 19:11:41.709760] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:24.160 [2024-12-05 19:11:41.709765] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:21:24.160 [2024-12-05 19:11:41.709777] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:21:24.160 [2024-12-05 19:11:41.709784] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:24.160 [2024-12-05 19:11:41.709790] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:24.160 [2024-12-05 19:11:41.709796] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:21:24.160 [2024-12-05 19:11:41.709802] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:21:24.160 [2024-12-05 19:11:41.709808] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:21:24.160 [2024-12-05 19:11:41.709813] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:21:24.160 [2024-12-05 19:11:41.709818] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:21:24.160 [2024-12-05 19:11:41.709823] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:21:24.160 [2024-12-05 19:11:41.709829] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:21:24.160 [2024-12-05 19:11:41.709836] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:24.160 [2024-12-05 19:11:41.709842] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:21:24.160 [2024-12-05 19:11:41.709848] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:21:24.160 [2024-12-05 19:11:41.709853] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:21:24.160 [2024-12-05 19:11:41.709858] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:21:24.160 [2024-12-05 19:11:41.709863] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:21:24.160 [2024-12-05 19:11:41.709869] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:21:24.160 [2024-12-05 19:11:41.709874] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:21:24.160 [2024-12-05 19:11:41.709880] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:21:24.160 [2024-12-05 19:11:41.709885] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:21:24.160 [2024-12-05 19:11:41.709893] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:21:24.160 [2024-12-05 19:11:41.709898] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 
blk_sz:0x20 00:21:24.160 [2024-12-05 19:11:41.709903] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:21:24.160 [2024-12-05 19:11:41.709908] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:21:24.161 [2024-12-05 19:11:41.709913] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:21:24.161 [2024-12-05 19:11:41.709918] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:21:24.161 [2024-12-05 19:11:41.709923] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:24.161 [2024-12-05 19:11:41.709929] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:21:24.161 [2024-12-05 19:11:41.709934] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:21:24.161 [2024-12-05 19:11:41.709939] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:21:24.161 [2024-12-05 19:11:41.709946] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:21:24.161 [2024-12-05 19:11:41.709952] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:24.161 [2024-12-05 19:11:41.709959] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:21:24.161 [2024-12-05 19:11:41.709964] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.494 ms 00:21:24.161 [2024-12-05 19:11:41.709971] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:24.421 [2024-12-05 19:11:41.717959] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:24.421 [2024-12-05 19:11:41.717988] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:24.421 [2024-12-05 19:11:41.717997] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.957 ms 00:21:24.421 [2024-12-05 19:11:41.718003] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:24.421 [2024-12-05 19:11:41.718064] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:24.421 [2024-12-05 19:11:41.718071] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:21:24.421 [2024-12-05 19:11:41.718080] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.046 ms 00:21:24.421 [2024-12-05 19:11:41.718086] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:24.421 [2024-12-05 19:11:41.740799] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:24.421 [2024-12-05 19:11:41.740892] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:24.421 [2024-12-05 19:11:41.740924] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.668 ms 00:21:24.421 [2024-12-05 19:11:41.740954] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:24.421 [2024-12-05 19:11:41.741052] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:24.421 [2024-12-05 19:11:41.741077] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:24.421 [2024-12-05 19:11:41.741100] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:21:24.421 [2024-12-05 19:11:41.741129] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:24.421 [2024-12-05 19:11:41.741803] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:24.421 [2024-12-05 19:11:41.741866] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:24.421 [2024-12-05 19:11:41.741891] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.494 ms 00:21:24.421 [2024-12-05 19:11:41.741910] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:24.421 [2024-12-05 19:11:41.742049] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:24.421 [2024-12-05 19:11:41.742062] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:24.421 [2024-12-05 19:11:41.742069] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.089 ms 00:21:24.421 [2024-12-05 19:11:41.742074] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:24.421 [2024-12-05 19:11:41.746609] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:24.421 [2024-12-05 19:11:41.746719] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:24.421 [2024-12-05 19:11:41.746731] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.520 ms 00:21:24.421 [2024-12-05 19:11:41.746737] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:24.421 [2024-12-05 19:11:41.748819] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:21:24.421 [2024-12-05 19:11:41.748848] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:21:24.421 [2024-12-05 19:11:41.748862] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:24.421 [2024-12-05 19:11:41.748868] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:21:24.421 [2024-12-05 19:11:41.748874] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.057 ms 00:21:24.421 [2024-12-05 19:11:41.748880] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:24.421 [2024-12-05 19:11:41.760171] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:24.421 [2024-12-05 19:11:41.760206] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:21:24.421 [2024-12-05 19:11:41.760221] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.260 ms 00:21:24.421 [2024-12-05 19:11:41.760229] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:24.421 [2024-12-05 19:11:41.761739] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:24.421 [2024-12-05 19:11:41.761839] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:21:24.421 [2024-12-05 19:11:41.761851] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.467 ms 00:21:24.421 [2024-12-05 19:11:41.761857] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:24.421 [2024-12-05 19:11:41.763178] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:24.421 [2024-12-05 19:11:41.763202] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 
00:21:24.421 [2024-12-05 19:11:41.763209] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.297 ms 00:21:24.421 [2024-12-05 19:11:41.763216] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:24.421 [2024-12-05 19:11:41.763459] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:24.421 [2024-12-05 19:11:41.763471] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:21:24.421 [2024-12-05 19:11:41.763478] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.185 ms 00:21:24.421 [2024-12-05 19:11:41.763486] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:24.421 [2024-12-05 19:11:41.777379] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:24.421 [2024-12-05 19:11:41.777414] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:21:24.421 [2024-12-05 19:11:41.777423] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.877 ms 00:21:24.421 [2024-12-05 19:11:41.777434] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:24.421 [2024-12-05 19:11:41.783583] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:21:24.421 [2024-12-05 19:11:41.785443] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:24.421 [2024-12-05 19:11:41.785470] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:21:24.422 [2024-12-05 19:11:41.785478] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.979 ms 00:21:24.422 [2024-12-05 19:11:41.785485] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:24.422 [2024-12-05 19:11:41.785526] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:24.422 [2024-12-05 19:11:41.785542] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:21:24.422 [2024-12-05 19:11:41.785555] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:21:24.422 [2024-12-05 19:11:41.785562] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:24.422 [2024-12-05 19:11:41.785612] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:24.422 [2024-12-05 19:11:41.785620] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:21:24.422 [2024-12-05 19:11:41.785628] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:21:24.422 [2024-12-05 19:11:41.785635] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:24.422 [2024-12-05 19:11:41.785650] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:24.422 [2024-12-05 19:11:41.785657] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:21:24.422 [2024-12-05 19:11:41.785663] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:21:24.422 [2024-12-05 19:11:41.785669] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:24.422 [2024-12-05 19:11:41.785696] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:21:24.422 [2024-12-05 19:11:41.785705] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:24.422 [2024-12-05 19:11:41.785711] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:21:24.422 [2024-12-05 19:11:41.785718] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 
00:21:24.422 [2024-12-05 19:11:41.785727] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:24.422 [2024-12-05 19:11:41.788877] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:24.422 [2024-12-05 19:11:41.788995] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:21:24.422 [2024-12-05 19:11:41.789008] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.136 ms 00:21:24.422 [2024-12-05 19:11:41.789022] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:24.422 [2024-12-05 19:11:41.789072] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:24.422 [2024-12-05 19:11:41.789081] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:21:24.422 [2024-12-05 19:11:41.789087] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:21:24.422 [2024-12-05 19:11:41.789093] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:24.422 [2024-12-05 19:11:41.789869] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 89.814 ms, result 0 00:21:25.369  [2024-12-05T19:11:44.317Z] Copying: 15/1024 [MB] (15 MBps) [2024-12-05T19:11:45.304Z] Copying: 35/1024 [MB] (19 MBps) [2024-12-05T19:11:46.244Z] Copying: 48/1024 [MB] (13 MBps) [2024-12-05T19:11:47.182Z] Copying: 63/1024 [MB] (15 MBps) [2024-12-05T19:11:48.124Z] Copying: 83/1024 [MB] (19 MBps) [2024-12-05T19:11:49.069Z] Copying: 105/1024 [MB] (22 MBps) [2024-12-05T19:11:50.011Z] Copying: 126/1024 [MB] (20 MBps) [2024-12-05T19:11:50.952Z] Copying: 146/1024 [MB] (20 MBps) [2024-12-05T19:11:52.363Z] Copying: 169/1024 [MB] (23 MBps) [2024-12-05T19:11:52.936Z] Copying: 189/1024 [MB] (19 MBps) [2024-12-05T19:11:54.325Z] Copying: 201/1024 [MB] (11 MBps) [2024-12-05T19:11:55.272Z] Copying: 224/1024 [MB] (22 MBps) [2024-12-05T19:11:56.213Z] Copying: 240/1024 [MB] (15 MBps) [2024-12-05T19:11:57.153Z] Copying: 262/1024 [MB] (21 MBps) [2024-12-05T19:11:58.095Z] Copying: 281/1024 [MB] (19 MBps) [2024-12-05T19:11:59.042Z] Copying: 301/1024 [MB] (20 MBps) [2024-12-05T19:11:59.988Z] Copying: 318/1024 [MB] (17 MBps) [2024-12-05T19:12:00.929Z] Copying: 337/1024 [MB] (18 MBps) [2024-12-05T19:12:02.310Z] Copying: 360/1024 [MB] (23 MBps) [2024-12-05T19:12:03.251Z] Copying: 378/1024 [MB] (17 MBps) [2024-12-05T19:12:04.283Z] Copying: 391/1024 [MB] (13 MBps) [2024-12-05T19:12:05.225Z] Copying: 409/1024 [MB] (17 MBps) [2024-12-05T19:12:06.165Z] Copying: 421/1024 [MB] (12 MBps) [2024-12-05T19:12:07.110Z] Copying: 449/1024 [MB] (28 MBps) [2024-12-05T19:12:08.054Z] Copying: 462/1024 [MB] (13 MBps) [2024-12-05T19:12:08.995Z] Copying: 472/1024 [MB] (10 MBps) [2024-12-05T19:12:09.940Z] Copying: 490/1024 [MB] (17 MBps) [2024-12-05T19:12:11.321Z] Copying: 508/1024 [MB] (17 MBps) [2024-12-05T19:12:12.264Z] Copying: 527/1024 [MB] (19 MBps) [2024-12-05T19:12:13.207Z] Copying: 541/1024 [MB] (13 MBps) [2024-12-05T19:12:14.152Z] Copying: 554/1024 [MB] (13 MBps) [2024-12-05T19:12:15.097Z] Copying: 567/1024 [MB] (12 MBps) [2024-12-05T19:12:16.042Z] Copying: 583/1024 [MB] (16 MBps) [2024-12-05T19:12:16.987Z] Copying: 596/1024 [MB] (13 MBps) [2024-12-05T19:12:17.948Z] Copying: 618/1024 [MB] (21 MBps) [2024-12-05T19:12:19.337Z] Copying: 640/1024 [MB] (21 MBps) [2024-12-05T19:12:20.278Z] Copying: 656/1024 [MB] (15 MBps) [2024-12-05T19:12:21.224Z] Copying: 673/1024 [MB] (17 MBps) [2024-12-05T19:12:22.164Z] Copying: 695/1024 [MB] (21 MBps) 
[2024-12-05T19:12:23.105Z] Copying: 717/1024 [MB] (21 MBps) [2024-12-05T19:12:24.053Z] Copying: 733/1024 [MB] (16 MBps) [2024-12-05T19:12:24.997Z] Copying: 750/1024 [MB] (17 MBps) [2024-12-05T19:12:25.944Z] Copying: 765/1024 [MB] (15 MBps) [2024-12-05T19:12:27.329Z] Copying: 779/1024 [MB] (13 MBps) [2024-12-05T19:12:28.297Z] Copying: 799/1024 [MB] (20 MBps) [2024-12-05T19:12:29.239Z] Copying: 810/1024 [MB] (10 MBps) [2024-12-05T19:12:30.178Z] Copying: 821/1024 [MB] (10 MBps) [2024-12-05T19:12:31.121Z] Copying: 831/1024 [MB] (10 MBps) [2024-12-05T19:12:32.135Z] Copying: 842/1024 [MB] (10 MBps) [2024-12-05T19:12:33.076Z] Copying: 852/1024 [MB] (10 MBps) [2024-12-05T19:12:34.020Z] Copying: 865/1024 [MB] (12 MBps) [2024-12-05T19:12:34.968Z] Copying: 883/1024 [MB] (18 MBps) [2024-12-05T19:12:36.354Z] Copying: 897/1024 [MB] (13 MBps) [2024-12-05T19:12:37.296Z] Copying: 917/1024 [MB] (20 MBps) [2024-12-05T19:12:38.244Z] Copying: 933/1024 [MB] (16 MBps) [2024-12-05T19:12:39.189Z] Copying: 953/1024 [MB] (19 MBps) [2024-12-05T19:12:40.132Z] Copying: 972/1024 [MB] (19 MBps) [2024-12-05T19:12:41.078Z] Copying: 985/1024 [MB] (12 MBps) [2024-12-05T19:12:42.023Z] Copying: 1000/1024 [MB] (15 MBps) [2024-12-05T19:12:42.284Z] Copying: 1021/1024 [MB] (20 MBps) [2024-12-05T19:12:42.547Z] Copying: 1024/1024 [MB] (average 17 MBps)[2024-12-05 19:12:42.530661] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:24.988 [2024-12-05 19:12:42.531013] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:22:24.988 [2024-12-05 19:12:42.531064] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:22:24.988 [2024-12-05 19:12:42.531074] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:24.988 [2024-12-05 19:12:42.531193] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:22:24.988 [2024-12-05 19:12:42.531987] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:24.988 [2024-12-05 19:12:42.532016] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:22:24.988 [2024-12-05 19:12:42.532043] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.774 ms 00:22:24.988 [2024-12-05 19:12:42.532054] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:24.988 [2024-12-05 19:12:42.532315] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:24.988 [2024-12-05 19:12:42.532328] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:22:24.988 [2024-12-05 19:12:42.532338] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.234 ms 00:22:24.988 [2024-12-05 19:12:42.532351] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:24.988 [2024-12-05 19:12:42.535825] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:24.988 [2024-12-05 19:12:42.535856] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:22:24.988 [2024-12-05 19:12:42.535866] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.459 ms 00:22:24.988 [2024-12-05 19:12:42.535875] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:24.988 [2024-12-05 19:12:42.542082] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:24.988 [2024-12-05 19:12:42.542266] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:22:24.988 [2024-12-05 19:12:42.542286] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.189 ms 00:22:24.988 [2024-12-05 19:12:42.542295] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:24.988 [2024-12-05 19:12:42.545305] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:25.252 [2024-12-05 19:12:42.545471] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:22:25.252 [2024-12-05 19:12:42.545490] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.931 ms 00:22:25.252 [2024-12-05 19:12:42.545499] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:25.252 [2024-12-05 19:12:42.550597] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:25.252 [2024-12-05 19:12:42.550651] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:22:25.252 [2024-12-05 19:12:42.550663] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.968 ms 00:22:25.252 [2024-12-05 19:12:42.550672] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:25.252 [2024-12-05 19:12:42.550813] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:25.252 [2024-12-05 19:12:42.550825] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:22:25.252 [2024-12-05 19:12:42.550834] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.084 ms 00:22:25.252 [2024-12-05 19:12:42.550847] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:25.252 [2024-12-05 19:12:42.554294] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:25.252 [2024-12-05 19:12:42.554338] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:22:25.252 [2024-12-05 19:12:42.554348] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.430 ms 00:22:25.252 [2024-12-05 19:12:42.554355] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:25.252 [2024-12-05 19:12:42.556962] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:25.252 [2024-12-05 19:12:42.557129] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:22:25.252 [2024-12-05 19:12:42.557147] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.565 ms 00:22:25.252 [2024-12-05 19:12:42.557155] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:25.252 [2024-12-05 19:12:42.559694] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:25.252 [2024-12-05 19:12:42.559739] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:22:25.252 [2024-12-05 19:12:42.559749] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.393 ms 00:22:25.252 [2024-12-05 19:12:42.559757] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:25.252 [2024-12-05 19:12:42.561872] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:25.252 [2024-12-05 19:12:42.561922] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:22:25.252 [2024-12-05 19:12:42.561932] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.044 ms 00:22:25.252 [2024-12-05 19:12:42.561939] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:25.252 [2024-12-05 19:12:42.561981] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:22:25.252 [2024-12-05 19:12:42.561999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 1: 0 / 261120 wr_cnt: 0 state: free 00:22:25.252 [2024-12-05 19:12:42.562010] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:22:25.252 [2024-12-05 19:12:42.562018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:22:25.252 [2024-12-05 19:12:42.562026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:22:25.252 [2024-12-05 19:12:42.562035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:22:25.252 [2024-12-05 19:12:42.562043] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:22:25.252 [2024-12-05 19:12:42.562050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:22:25.252 [2024-12-05 19:12:42.562059] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:22:25.252 [2024-12-05 19:12:42.562066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:22:25.252 [2024-12-05 19:12:42.562073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:22:25.252 [2024-12-05 19:12:42.562081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:22:25.252 [2024-12-05 19:12:42.562090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:22:25.252 [2024-12-05 19:12:42.562097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:22:25.252 [2024-12-05 19:12:42.562105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:22:25.252 [2024-12-05 19:12:42.562113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:22:25.252 [2024-12-05 19:12:42.562120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:22:25.252 [2024-12-05 19:12:42.562128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:22:25.252 [2024-12-05 19:12:42.562136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:22:25.252 [2024-12-05 19:12:42.562143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:22:25.252 [2024-12-05 19:12:42.562151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:22:25.252 [2024-12-05 19:12:42.562158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:22:25.252 [2024-12-05 19:12:42.562165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:22:25.252 [2024-12-05 19:12:42.562173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:22:25.253 [2024-12-05 19:12:42.562180] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:22:25.253 [2024-12-05 19:12:42.562187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:22:25.253 [2024-12-05 19:12:42.562194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:22:25.253 [2024-12-05 19:12:42.562207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:22:25.253 [2024-12-05 19:12:42.562216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:22:25.253 [2024-12-05 19:12:42.562223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:22:25.253 [2024-12-05 19:12:42.562231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:22:25.253 [2024-12-05 19:12:42.562239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:22:25.253 [2024-12-05 19:12:42.562247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:22:25.253 [2024-12-05 19:12:42.562278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:22:25.253 [2024-12-05 19:12:42.562286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:22:25.253 [2024-12-05 19:12:42.562294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:22:25.253 [2024-12-05 19:12:42.562303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:22:25.253 [2024-12-05 19:12:42.562311] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:22:25.253 [2024-12-05 19:12:42.562319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:22:25.253 [2024-12-05 19:12:42.562327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:22:25.253 [2024-12-05 19:12:42.562335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:22:25.253 [2024-12-05 19:12:42.562343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:22:25.253 [2024-12-05 19:12:42.562350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:22:25.253 [2024-12-05 19:12:42.562373] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:22:25.253 [2024-12-05 19:12:42.562381] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:22:25.253 [2024-12-05 19:12:42.562389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:22:25.253 [2024-12-05 19:12:42.562397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:22:25.253 [2024-12-05 19:12:42.562405] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:22:25.253 [2024-12-05 19:12:42.562413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:22:25.253 [2024-12-05 19:12:42.562421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:22:25.253 [2024-12-05 19:12:42.562429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:22:25.253 [2024-12-05 19:12:42.562437] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:22:25.253 [2024-12-05 19:12:42.562445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:22:25.253 [2024-12-05 19:12:42.562453] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:22:25.253 [2024-12-05 19:12:42.562461] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:22:25.253 [2024-12-05 19:12:42.562469] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:22:25.253 [2024-12-05 19:12:42.562478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:22:25.253 [2024-12-05 19:12:42.562486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:22:25.253 [2024-12-05 19:12:42.562494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:22:25.253 [2024-12-05 19:12:42.562502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:22:25.253 [2024-12-05 19:12:42.562510] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:22:25.253 [2024-12-05 19:12:42.562518] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:22:25.253 [2024-12-05 19:12:42.562527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:22:25.253 [2024-12-05 19:12:42.562536] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:22:25.253 [2024-12-05 19:12:42.562544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:22:25.253 [2024-12-05 19:12:42.562551] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:22:25.253 [2024-12-05 19:12:42.562559] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:22:25.253 [2024-12-05 19:12:42.562566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:22:25.253 [2024-12-05 19:12:42.562575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:22:25.253 [2024-12-05 19:12:42.562582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:22:25.253 [2024-12-05 19:12:42.562589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:22:25.253 [2024-12-05 19:12:42.562597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:22:25.253 [2024-12-05 19:12:42.562604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:22:25.253 [2024-12-05 19:12:42.562611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:22:25.253 [2024-12-05 19:12:42.562620] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:22:25.253 [2024-12-05 19:12:42.562628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:22:25.253 [2024-12-05 
19:12:42.562635] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:22:25.253 [2024-12-05 19:12:42.562642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:22:25.253 [2024-12-05 19:12:42.562650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:22:25.253 [2024-12-05 19:12:42.562658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:22:25.253 [2024-12-05 19:12:42.562665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:22:25.253 [2024-12-05 19:12:42.562673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:22:25.253 [2024-12-05 19:12:42.562680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:22:25.253 [2024-12-05 19:12:42.562688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:22:25.253 [2024-12-05 19:12:42.562695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:22:25.253 [2024-12-05 19:12:42.562703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:22:25.253 [2024-12-05 19:12:42.562710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:22:25.253 [2024-12-05 19:12:42.562718] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:22:25.253 [2024-12-05 19:12:42.562725] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:22:25.253 [2024-12-05 19:12:42.562733] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:22:25.253 [2024-12-05 19:12:42.562740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:22:25.253 [2024-12-05 19:12:42.562748] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:22:25.253 [2024-12-05 19:12:42.562755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:22:25.253 [2024-12-05 19:12:42.562763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:22:25.253 [2024-12-05 19:12:42.562770] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:22:25.253 [2024-12-05 19:12:42.562779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:22:25.253 [2024-12-05 19:12:42.562788] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:22:25.253 [2024-12-05 19:12:42.562796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:22:25.253 [2024-12-05 19:12:42.562804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:22:25.253 [2024-12-05 19:12:42.562812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:22:25.253 [2024-12-05 19:12:42.562819] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 
00:22:25.253 [2024-12-05 19:12:42.562837] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:22:25.253 [2024-12-05 19:12:42.562846] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 1f0d4c67-c2eb-4ccd-bb1a-4ce91bbc32a5 00:22:25.253 [2024-12-05 19:12:42.562854] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:22:25.253 [2024-12-05 19:12:42.562863] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:22:25.253 [2024-12-05 19:12:42.563569] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:22:25.253 [2024-12-05 19:12:42.563580] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:22:25.253 [2024-12-05 19:12:42.563589] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:22:25.253 [2024-12-05 19:12:42.563598] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:22:25.253 [2024-12-05 19:12:42.563611] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:22:25.253 [2024-12-05 19:12:42.563619] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:22:25.254 [2024-12-05 19:12:42.563627] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:22:25.254 [2024-12-05 19:12:42.563637] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:25.254 [2024-12-05 19:12:42.563650] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:22:25.254 [2024-12-05 19:12:42.563660] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.657 ms 00:22:25.254 [2024-12-05 19:12:42.563669] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:25.254 [2024-12-05 19:12:42.566482] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:25.254 [2024-12-05 19:12:42.566543] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:22:25.254 [2024-12-05 19:12:42.566568] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.791 ms 00:22:25.254 [2024-12-05 19:12:42.566599] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:25.254 [2024-12-05 19:12:42.566753] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:25.254 [2024-12-05 19:12:42.566779] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:22:25.254 [2024-12-05 19:12:42.566802] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.099 ms 00:22:25.254 [2024-12-05 19:12:42.566821] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:25.254 [2024-12-05 19:12:42.574201] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:25.254 [2024-12-05 19:12:42.574389] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:25.254 [2024-12-05 19:12:42.574450] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:25.254 [2024-12-05 19:12:42.574473] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:25.254 [2024-12-05 19:12:42.574554] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:25.254 [2024-12-05 19:12:42.574577] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:25.254 [2024-12-05 19:12:42.574597] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:25.254 [2024-12-05 19:12:42.574616] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:25.254 [2024-12-05 
19:12:42.574695] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:25.254 [2024-12-05 19:12:42.574786] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:25.254 [2024-12-05 19:12:42.574807] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:25.254 [2024-12-05 19:12:42.574826] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:25.254 [2024-12-05 19:12:42.574859] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:25.254 [2024-12-05 19:12:42.574881] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:25.254 [2024-12-05 19:12:42.574900] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:25.254 [2024-12-05 19:12:42.574956] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:25.254 [2024-12-05 19:12:42.588167] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:25.254 [2024-12-05 19:12:42.588378] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:25.254 [2024-12-05 19:12:42.588437] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:25.254 [2024-12-05 19:12:42.588460] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:25.254 [2024-12-05 19:12:42.598597] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:25.254 [2024-12-05 19:12:42.598755] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:25.254 [2024-12-05 19:12:42.598807] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:25.254 [2024-12-05 19:12:42.598830] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:25.254 [2024-12-05 19:12:42.598889] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:25.254 [2024-12-05 19:12:42.598911] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:25.254 [2024-12-05 19:12:42.598931] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:25.254 [2024-12-05 19:12:42.598950] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:25.254 [2024-12-05 19:12:42.598997] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:25.254 [2024-12-05 19:12:42.599069] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:25.254 [2024-12-05 19:12:42.599092] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:25.254 [2024-12-05 19:12:42.599111] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:25.254 [2024-12-05 19:12:42.599202] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:25.254 [2024-12-05 19:12:42.599228] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:25.254 [2024-12-05 19:12:42.599264] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:25.254 [2024-12-05 19:12:42.599285] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:25.254 [2024-12-05 19:12:42.599383] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:25.254 [2024-12-05 19:12:42.599410] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:22:25.254 [2024-12-05 19:12:42.599435] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:25.254 [2024-12-05 19:12:42.599455] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:25.254 [2024-12-05 19:12:42.599508] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:25.254 [2024-12-05 19:12:42.599573] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:25.254 [2024-12-05 19:12:42.599596] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:25.254 [2024-12-05 19:12:42.599615] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:25.254 [2024-12-05 19:12:42.599673] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:25.254 [2024-12-05 19:12:42.599762] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:25.254 [2024-12-05 19:12:42.599784] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:25.254 [2024-12-05 19:12:42.599812] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:25.254 [2024-12-05 19:12:42.600002] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 69.304 ms, result 0 00:22:25.254 00:22:25.254 00:22:25.516 19:12:42 ftl.ftl_restore -- ftl/restore.sh@76 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:22:28.063 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:22:28.063 19:12:45 ftl.ftl_restore -- ftl/restore.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --seek=131072 00:22:28.063 [2024-12-05 19:12:45.095642] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 22.11.4 initialization... 00:22:28.063 [2024-12-05 19:12:45.095973] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89418 ] 00:22:28.063 [2024-12-05 19:12:45.243736] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:28.063 [2024-12-05 19:12:45.284126] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:28.063 [2024-12-05 19:12:45.400177] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:22:28.063 [2024-12-05 19:12:45.400297] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:22:28.063 [2024-12-05 19:12:45.560701] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:28.063 [2024-12-05 19:12:45.560757] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:22:28.063 [2024-12-05 19:12:45.560772] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:22:28.063 [2024-12-05 19:12:45.560780] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:28.063 [2024-12-05 19:12:45.560835] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:28.063 [2024-12-05 19:12:45.560846] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:28.063 [2024-12-05 19:12:45.560856] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:22:28.063 [2024-12-05 19:12:45.560872] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:28.063 [2024-12-05 19:12:45.560902] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:22:28.063 [2024-12-05 
19:12:45.561171] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:22:28.063 [2024-12-05 19:12:45.561187] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:28.063 [2024-12-05 19:12:45.561198] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:28.063 [2024-12-05 19:12:45.561210] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.297 ms 00:22:28.063 [2024-12-05 19:12:45.561222] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:28.063 [2024-12-05 19:12:45.562973] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:22:28.063 [2024-12-05 19:12:45.566726] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:28.063 [2024-12-05 19:12:45.566778] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:22:28.063 [2024-12-05 19:12:45.566789] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.755 ms 00:22:28.063 [2024-12-05 19:12:45.566810] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:28.063 [2024-12-05 19:12:45.566884] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:28.063 [2024-12-05 19:12:45.566896] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:22:28.063 [2024-12-05 19:12:45.566909] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:22:28.063 [2024-12-05 19:12:45.566917] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:28.063 [2024-12-05 19:12:45.574861] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:28.063 [2024-12-05 19:12:45.574904] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:28.063 [2024-12-05 19:12:45.574920] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.899 ms 00:22:28.064 [2024-12-05 19:12:45.574928] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:28.064 [2024-12-05 19:12:45.575031] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:28.064 [2024-12-05 19:12:45.575042] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:28.064 [2024-12-05 19:12:45.575051] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.073 ms 00:22:28.064 [2024-12-05 19:12:45.575058] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:28.064 [2024-12-05 19:12:45.575115] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:28.064 [2024-12-05 19:12:45.575125] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:22:28.064 [2024-12-05 19:12:45.575134] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:22:28.064 [2024-12-05 19:12:45.575145] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:28.064 [2024-12-05 19:12:45.575166] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:22:28.064 [2024-12-05 19:12:45.577188] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:28.064 [2024-12-05 19:12:45.577384] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:28.064 [2024-12-05 19:12:45.577402] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.027 ms 00:22:28.064 [2024-12-05 19:12:45.577410] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:22:28.064 [2024-12-05 19:12:45.577455] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:28.064 [2024-12-05 19:12:45.577463] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:22:28.064 [2024-12-05 19:12:45.577472] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:22:28.064 [2024-12-05 19:12:45.577483] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:28.064 [2024-12-05 19:12:45.577507] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:22:28.064 [2024-12-05 19:12:45.577530] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:22:28.064 [2024-12-05 19:12:45.577585] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:22:28.064 [2024-12-05 19:12:45.577606] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:22:28.064 [2024-12-05 19:12:45.577713] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:22:28.064 [2024-12-05 19:12:45.577724] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:22:28.064 [2024-12-05 19:12:45.577738] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:22:28.064 [2024-12-05 19:12:45.577748] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:22:28.064 [2024-12-05 19:12:45.577757] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:22:28.064 [2024-12-05 19:12:45.577766] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:22:28.064 [2024-12-05 19:12:45.577774] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:22:28.064 [2024-12-05 19:12:45.577782] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:22:28.064 [2024-12-05 19:12:45.577790] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:22:28.064 [2024-12-05 19:12:45.577799] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:28.064 [2024-12-05 19:12:45.577806] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:22:28.064 [2024-12-05 19:12:45.577814] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.297 ms 00:22:28.064 [2024-12-05 19:12:45.577823] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:28.064 [2024-12-05 19:12:45.577909] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:28.064 [2024-12-05 19:12:45.577918] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:22:28.064 [2024-12-05 19:12:45.577928] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:22:28.064 [2024-12-05 19:12:45.577935] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:28.064 [2024-12-05 19:12:45.578047] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:22:28.064 [2024-12-05 19:12:45.578058] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:22:28.064 [2024-12-05 19:12:45.578068] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:28.064 [2024-12-05 19:12:45.578083] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:28.064 [2024-12-05 19:12:45.578092] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:22:28.064 [2024-12-05 19:12:45.578100] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:22:28.064 [2024-12-05 19:12:45.578107] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:22:28.064 [2024-12-05 19:12:45.578116] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:22:28.064 [2024-12-05 19:12:45.578125] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:22:28.064 [2024-12-05 19:12:45.578133] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:28.064 [2024-12-05 19:12:45.578141] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:22:28.064 [2024-12-05 19:12:45.578151] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:22:28.064 [2024-12-05 19:12:45.578159] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:28.064 [2024-12-05 19:12:45.578167] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:22:28.064 [2024-12-05 19:12:45.578175] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:22:28.064 [2024-12-05 19:12:45.578186] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:28.064 [2024-12-05 19:12:45.578194] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:22:28.064 [2024-12-05 19:12:45.578202] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:22:28.064 [2024-12-05 19:12:45.578210] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:28.064 [2024-12-05 19:12:45.578218] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:22:28.064 [2024-12-05 19:12:45.578225] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:22:28.064 [2024-12-05 19:12:45.578233] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:28.064 [2024-12-05 19:12:45.578240] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:22:28.064 [2024-12-05 19:12:45.578265] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:22:28.064 [2024-12-05 19:12:45.578274] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:28.064 [2024-12-05 19:12:45.578282] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:22:28.064 [2024-12-05 19:12:45.578289] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:22:28.064 [2024-12-05 19:12:45.578302] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:28.064 [2024-12-05 19:12:45.578309] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:22:28.064 [2024-12-05 19:12:45.578317] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:22:28.064 [2024-12-05 19:12:45.578325] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:28.064 [2024-12-05 19:12:45.578332] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:22:28.064 [2024-12-05 19:12:45.578339] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:22:28.064 [2024-12-05 19:12:45.578346] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:28.064 [2024-12-05 19:12:45.578353] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:22:28.064 [2024-12-05 
19:12:45.578359] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:22:28.064 [2024-12-05 19:12:45.578366] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:28.064 [2024-12-05 19:12:45.578373] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:22:28.064 [2024-12-05 19:12:45.578379] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:22:28.064 [2024-12-05 19:12:45.578387] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:28.064 [2024-12-05 19:12:45.578394] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:22:28.064 [2024-12-05 19:12:45.578400] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:22:28.064 [2024-12-05 19:12:45.578407] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:28.064 [2024-12-05 19:12:45.578416] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:22:28.064 [2024-12-05 19:12:45.578430] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:22:28.064 [2024-12-05 19:12:45.578441] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:28.064 [2024-12-05 19:12:45.578451] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:28.064 [2024-12-05 19:12:45.578459] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:22:28.064 [2024-12-05 19:12:45.578466] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:22:28.064 [2024-12-05 19:12:45.578475] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:22:28.064 [2024-12-05 19:12:45.578482] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:22:28.064 [2024-12-05 19:12:45.578488] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:22:28.064 [2024-12-05 19:12:45.578495] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:22:28.064 [2024-12-05 19:12:45.578504] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:22:28.064 [2024-12-05 19:12:45.578514] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:28.064 [2024-12-05 19:12:45.578522] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:22:28.064 [2024-12-05 19:12:45.578530] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:22:28.064 [2024-12-05 19:12:45.578537] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:22:28.064 [2024-12-05 19:12:45.578544] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:22:28.064 [2024-12-05 19:12:45.578553] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:22:28.064 [2024-12-05 19:12:45.578560] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:22:28.064 [2024-12-05 19:12:45.578568] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 
00:22:28.064 [2024-12-05 19:12:45.578575] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:22:28.064 [2024-12-05 19:12:45.578582] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:22:28.064 [2024-12-05 19:12:45.578596] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:22:28.064 [2024-12-05 19:12:45.578603] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:22:28.064 [2024-12-05 19:12:45.578610] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:22:28.064 [2024-12-05 19:12:45.578617] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:22:28.064 [2024-12-05 19:12:45.578625] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:22:28.064 [2024-12-05 19:12:45.578633] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:22:28.064 [2024-12-05 19:12:45.578641] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:28.064 [2024-12-05 19:12:45.578650] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:22:28.064 [2024-12-05 19:12:45.578657] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:22:28.064 [2024-12-05 19:12:45.578664] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:22:28.064 [2024-12-05 19:12:45.578672] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:22:28.064 [2024-12-05 19:12:45.578682] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:28.064 [2024-12-05 19:12:45.578692] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:22:28.064 [2024-12-05 19:12:45.578700] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.703 ms 00:22:28.064 [2024-12-05 19:12:45.578709] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:28.064 [2024-12-05 19:12:45.592663] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:28.064 [2024-12-05 19:12:45.592832] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:28.064 [2024-12-05 19:12:45.592893] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.903 ms 00:22:28.064 [2024-12-05 19:12:45.592917] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:28.064 [2024-12-05 19:12:45.593023] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:28.064 [2024-12-05 19:12:45.593045] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:22:28.064 [2024-12-05 19:12:45.593066] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:22:28.064 [2024-12-05 19:12:45.593085] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:28.065 [2024-12-05 19:12:45.614791] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:28.065 [2024-12-05 19:12:45.615016] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:28.065 [2024-12-05 19:12:45.615109] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.632 ms 00:22:28.065 [2024-12-05 19:12:45.615147] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:28.065 [2024-12-05 19:12:45.615228] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:28.065 [2024-12-05 19:12:45.615292] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:28.065 [2024-12-05 19:12:45.615325] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:22:28.065 [2024-12-05 19:12:45.615353] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:28.065 [2024-12-05 19:12:45.615986] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:28.065 [2024-12-05 19:12:45.616146] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:28.065 [2024-12-05 19:12:45.616222] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.518 ms 00:22:28.065 [2024-12-05 19:12:45.616287] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:28.065 [2024-12-05 19:12:45.616510] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:28.065 [2024-12-05 19:12:45.616713] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:28.065 [2024-12-05 19:12:45.616816] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.167 ms 00:22:28.065 [2024-12-05 19:12:45.616850] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:28.327 [2024-12-05 19:12:45.625647] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:28.327 [2024-12-05 19:12:45.625815] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:28.327 [2024-12-05 19:12:45.625878] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.310 ms 00:22:28.327 [2024-12-05 19:12:45.625911] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:28.327 [2024-12-05 19:12:45.629763] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:22:28.327 [2024-12-05 19:12:45.629933] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:22:28.327 [2024-12-05 19:12:45.630003] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:28.327 [2024-12-05 19:12:45.630025] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:22:28.327 [2024-12-05 19:12:45.630045] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.957 ms 00:22:28.327 [2024-12-05 19:12:45.630064] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:28.327 [2024-12-05 19:12:45.645879] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:28.327 [2024-12-05 19:12:45.646049] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:22:28.327 [2024-12-05 19:12:45.646108] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.683 ms 00:22:28.327 [2024-12-05 19:12:45.646130] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:28.327 
[2024-12-05 19:12:45.648922] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:28.327 [2024-12-05 19:12:45.649069] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:22:28.327 [2024-12-05 19:12:45.649123] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.736 ms 00:22:28.327 [2024-12-05 19:12:45.649145] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:28.327 [2024-12-05 19:12:45.651863] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:28.327 [2024-12-05 19:12:45.652013] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:22:28.327 [2024-12-05 19:12:45.652070] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.577 ms 00:22:28.327 [2024-12-05 19:12:45.652092] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:28.327 [2024-12-05 19:12:45.652974] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:28.327 [2024-12-05 19:12:45.653133] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:22:28.327 [2024-12-05 19:12:45.653197] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.278 ms 00:22:28.327 [2024-12-05 19:12:45.653229] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:28.327 [2024-12-05 19:12:45.676093] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:28.327 [2024-12-05 19:12:45.676323] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:22:28.327 [2024-12-05 19:12:45.676388] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.803 ms 00:22:28.327 [2024-12-05 19:12:45.676414] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:28.327 [2024-12-05 19:12:45.684745] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:22:28.327 [2024-12-05 19:12:45.687772] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:28.327 [2024-12-05 19:12:45.687917] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:22:28.327 [2024-12-05 19:12:45.687975] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.022 ms 00:22:28.328 [2024-12-05 19:12:45.688004] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:28.328 [2024-12-05 19:12:45.688108] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:28.328 [2024-12-05 19:12:45.688137] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:22:28.328 [2024-12-05 19:12:45.688277] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:22:28.328 [2024-12-05 19:12:45.688307] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:28.328 [2024-12-05 19:12:45.688414] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:28.328 [2024-12-05 19:12:45.688585] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:22:28.328 [2024-12-05 19:12:45.688606] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:22:28.328 [2024-12-05 19:12:45.688614] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:28.328 [2024-12-05 19:12:45.688650] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:28.328 [2024-12-05 19:12:45.688661] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:22:28.328 [2024-12-05 
19:12:45.688670] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:22:28.328 [2024-12-05 19:12:45.688679] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:28.328 [2024-12-05 19:12:45.688722] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:22:28.328 [2024-12-05 19:12:45.688733] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:28.328 [2024-12-05 19:12:45.688741] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:22:28.328 [2024-12-05 19:12:45.688752] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:22:28.328 [2024-12-05 19:12:45.688760] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:28.328 [2024-12-05 19:12:45.694075] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:28.328 [2024-12-05 19:12:45.694127] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:22:28.328 [2024-12-05 19:12:45.694139] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.291 ms 00:22:28.328 [2024-12-05 19:12:45.694147] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:28.328 [2024-12-05 19:12:45.694238] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:28.328 [2024-12-05 19:12:45.694276] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:22:28.328 [2024-12-05 19:12:45.694286] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms 00:22:28.328 [2024-12-05 19:12:45.694304] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:28.328 [2024-12-05 19:12:45.695465] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 134.294 ms, result 0 00:22:29.273  [2024-12-05T19:12:47.776Z] Copying: 12/1024 [MB] (12 MBps) [2024-12-05T19:12:48.720Z] Copying: 32/1024 [MB] (20 MBps) [2024-12-05T19:12:50.096Z] Copying: 53/1024 [MB] (20 MBps) [2024-12-05T19:12:51.031Z] Copying: 92/1024 [MB] (38 MBps) [2024-12-05T19:12:51.974Z] Copying: 145/1024 [MB] (53 MBps) [2024-12-05T19:12:52.917Z] Copying: 165/1024 [MB] (20 MBps) [2024-12-05T19:12:53.858Z] Copying: 184/1024 [MB] (18 MBps) [2024-12-05T19:12:54.792Z] Copying: 195/1024 [MB] (10 MBps) [2024-12-05T19:12:55.735Z] Copying: 238/1024 [MB] (43 MBps) [2024-12-05T19:12:57.119Z] Copying: 258/1024 [MB] (20 MBps) [2024-12-05T19:12:58.062Z] Copying: 276/1024 [MB] (17 MBps) [2024-12-05T19:12:59.007Z] Copying: 288/1024 [MB] (12 MBps) [2024-12-05T19:12:59.954Z] Copying: 305/1024 [MB] (16 MBps) [2024-12-05T19:13:00.961Z] Copying: 315/1024 [MB] (10 MBps) [2024-12-05T19:13:01.906Z] Copying: 331/1024 [MB] (16 MBps) [2024-12-05T19:13:02.849Z] Copying: 346/1024 [MB] (14 MBps) [2024-12-05T19:13:03.793Z] Copying: 365/1024 [MB] (19 MBps) [2024-12-05T19:13:04.736Z] Copying: 382/1024 [MB] (16 MBps) [2024-12-05T19:13:06.120Z] Copying: 403/1024 [MB] (20 MBps) [2024-12-05T19:13:07.062Z] Copying: 421/1024 [MB] (18 MBps) [2024-12-05T19:13:08.006Z] Copying: 434/1024 [MB] (12 MBps) [2024-12-05T19:13:08.949Z] Copying: 451/1024 [MB] (17 MBps) [2024-12-05T19:13:09.894Z] Copying: 469/1024 [MB] (17 MBps) [2024-12-05T19:13:10.838Z] Copying: 487/1024 [MB] (18 MBps) [2024-12-05T19:13:11.782Z] Copying: 506/1024 [MB] (19 MBps) [2024-12-05T19:13:12.729Z] Copying: 518/1024 [MB] (11 MBps) [2024-12-05T19:13:14.115Z] Copying: 528/1024 [MB] (10 MBps) [2024-12-05T19:13:15.058Z] Copying: 565/1024 [MB] (36 MBps) 
[2024-12-05T19:13:16.013Z] Copying: 578/1024 [MB] (13 MBps) [2024-12-05T19:13:16.954Z] Copying: 594/1024 [MB] (15 MBps) [2024-12-05T19:13:17.894Z] Copying: 615/1024 [MB] (21 MBps) [2024-12-05T19:13:18.835Z] Copying: 634/1024 [MB] (18 MBps) [2024-12-05T19:13:19.776Z] Copying: 649/1024 [MB] (15 MBps) [2024-12-05T19:13:20.717Z] Copying: 665/1024 [MB] (15 MBps) [2024-12-05T19:13:22.105Z] Copying: 678/1024 [MB] (13 MBps) [2024-12-05T19:13:23.049Z] Copying: 688/1024 [MB] (10 MBps) [2024-12-05T19:13:23.992Z] Copying: 704/1024 [MB] (15 MBps) [2024-12-05T19:13:24.937Z] Copying: 723/1024 [MB] (18 MBps) [2024-12-05T19:13:25.880Z] Copying: 743/1024 [MB] (19 MBps) [2024-12-05T19:13:26.822Z] Copying: 759/1024 [MB] (16 MBps) [2024-12-05T19:13:27.767Z] Copying: 775/1024 [MB] (15 MBps) [2024-12-05T19:13:28.711Z] Copying: 795/1024 [MB] (19 MBps) [2024-12-05T19:13:30.154Z] Copying: 812/1024 [MB] (17 MBps) [2024-12-05T19:13:30.720Z] Copying: 825/1024 [MB] (12 MBps) [2024-12-05T19:13:32.103Z] Copying: 852/1024 [MB] (27 MBps) [2024-12-05T19:13:33.039Z] Copying: 878/1024 [MB] (25 MBps) [2024-12-05T19:13:33.974Z] Copying: 910/1024 [MB] (32 MBps) [2024-12-05T19:13:34.918Z] Copying: 961/1024 [MB] (51 MBps) [2024-12-05T19:13:35.859Z] Copying: 982/1024 [MB] (20 MBps) [2024-12-05T19:13:36.799Z] Copying: 992/1024 [MB] (10 MBps) [2024-12-05T19:13:37.740Z] Copying: 1011/1024 [MB] (18 MBps) [2024-12-05T19:13:38.002Z] Copying: 1023/1024 [MB] (12 MBps) [2024-12-05T19:13:38.002Z] Copying: 1024/1024 [MB] (average 19 MBps)[2024-12-05 19:13:37.826634] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:20.443 [2024-12-05 19:13:37.826732] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:23:20.443 [2024-12-05 19:13:37.826751] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:23:20.443 [2024-12-05 19:13:37.826760] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:20.443 [2024-12-05 19:13:37.826898] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:23:20.443 [2024-12-05 19:13:37.827702] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:20.443 [2024-12-05 19:13:37.827729] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:23:20.443 [2024-12-05 19:13:37.827750] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.781 ms 00:23:20.443 [2024-12-05 19:13:37.827772] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:20.443 [2024-12-05 19:13:37.839719] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:20.443 [2024-12-05 19:13:37.839780] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:23:20.443 [2024-12-05 19:13:37.839793] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.516 ms 00:23:20.443 [2024-12-05 19:13:37.839802] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:20.443 [2024-12-05 19:13:37.863639] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:20.443 [2024-12-05 19:13:37.863699] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:23:20.443 [2024-12-05 19:13:37.863711] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.817 ms 00:23:20.443 [2024-12-05 19:13:37.863724] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:20.443 [2024-12-05 19:13:37.870123] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:23:20.443 [2024-12-05 19:13:37.870319] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:23:20.443 [2024-12-05 19:13:37.870340] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.360 ms 00:23:20.443 [2024-12-05 19:13:37.870349] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:20.443 [2024-12-05 19:13:37.872936] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:20.443 [2024-12-05 19:13:37.872988] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:23:20.443 [2024-12-05 19:13:37.872999] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.522 ms 00:23:20.444 [2024-12-05 19:13:37.873007] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:20.444 [2024-12-05 19:13:37.877407] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:20.444 [2024-12-05 19:13:37.877455] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:23:20.444 [2024-12-05 19:13:37.877467] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.356 ms 00:23:20.444 [2024-12-05 19:13:37.877483] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:20.708 [2024-12-05 19:13:38.048028] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:20.708 [2024-12-05 19:13:38.048095] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:23:20.708 [2024-12-05 19:13:38.048110] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 170.498 ms 00:23:20.708 [2024-12-05 19:13:38.048119] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:20.708 [2024-12-05 19:13:38.051342] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:20.708 [2024-12-05 19:13:38.051390] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:23:20.708 [2024-12-05 19:13:38.051401] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.207 ms 00:23:20.708 [2024-12-05 19:13:38.051408] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:20.708 [2024-12-05 19:13:38.054362] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:20.708 [2024-12-05 19:13:38.054407] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:23:20.708 [2024-12-05 19:13:38.054418] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.911 ms 00:23:20.708 [2024-12-05 19:13:38.054425] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:20.708 [2024-12-05 19:13:38.056767] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:20.708 [2024-12-05 19:13:38.056814] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:23:20.708 [2024-12-05 19:13:38.056824] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.300 ms 00:23:20.708 [2024-12-05 19:13:38.056832] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:20.708 [2024-12-05 19:13:38.059196] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:20.708 [2024-12-05 19:13:38.059391] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:23:20.708 [2024-12-05 19:13:38.059411] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.298 ms 00:23:20.708 [2024-12-05 19:13:38.059419] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:20.708 [2024-12-05 
19:13:38.059511] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:23:20.708 [2024-12-05 19:13:38.059527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 110336 / 261120 wr_cnt: 1 state: open 00:23:20.708 [2024-12-05 19:13:38.059538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:23:20.708 [2024-12-05 19:13:38.059547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:23:20.708 [2024-12-05 19:13:38.059555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:23:20.708 [2024-12-05 19:13:38.059565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:23:20.708 [2024-12-05 19:13:38.059573] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:23:20.708 [2024-12-05 19:13:38.059582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:23:20.708 [2024-12-05 19:13:38.059589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:23:20.708 [2024-12-05 19:13:38.059597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:23:20.708 [2024-12-05 19:13:38.059604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:23:20.708 [2024-12-05 19:13:38.059612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:23:20.708 [2024-12-05 19:13:38.059620] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:23:20.708 [2024-12-05 19:13:38.059627] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:23:20.708 [2024-12-05 19:13:38.059634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:23:20.708 [2024-12-05 19:13:38.059641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:23:20.708 [2024-12-05 19:13:38.059649] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:23:20.708 [2024-12-05 19:13:38.059657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:23:20.708 [2024-12-05 19:13:38.059664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:23:20.708 [2024-12-05 19:13:38.059672] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:23:20.708 [2024-12-05 19:13:38.059680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:23:20.708 [2024-12-05 19:13:38.059687] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:23:20.708 [2024-12-05 19:13:38.059694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:23:20.708 [2024-12-05 19:13:38.059701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:23:20.708 [2024-12-05 19:13:38.059709] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:23:20.708 [2024-12-05 
19:13:38.059716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:23:20.708 [2024-12-05 19:13:38.059723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:23:20.708 [2024-12-05 19:13:38.059731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:23:20.708 [2024-12-05 19:13:38.059740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:23:20.708 [2024-12-05 19:13:38.059748] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:23:20.708 [2024-12-05 19:13:38.059755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:23:20.708 [2024-12-05 19:13:38.059765] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:23:20.708 [2024-12-05 19:13:38.059773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:23:20.708 [2024-12-05 19:13:38.059781] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:23:20.708 [2024-12-05 19:13:38.059789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:23:20.708 [2024-12-05 19:13:38.059796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:23:20.708 [2024-12-05 19:13:38.059805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:23:20.708 [2024-12-05 19:13:38.059812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:23:20.708 [2024-12-05 19:13:38.059820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:23:20.708 [2024-12-05 19:13:38.059827] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:23:20.708 [2024-12-05 19:13:38.059835] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:23:20.708 [2024-12-05 19:13:38.059842] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:23:20.708 [2024-12-05 19:13:38.059850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:23:20.708 [2024-12-05 19:13:38.059871] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:23:20.708 [2024-12-05 19:13:38.059879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:23:20.708 [2024-12-05 19:13:38.059887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:23:20.708 [2024-12-05 19:13:38.059896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:23:20.708 [2024-12-05 19:13:38.059904] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:23:20.708 [2024-12-05 19:13:38.059912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:23:20.708 [2024-12-05 19:13:38.059920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 
00:23:20.708 [2024-12-05 19:13:38.059928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:23:20.708 [2024-12-05 19:13:38.059936] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:23:20.708 [2024-12-05 19:13:38.059944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:23:20.708 [2024-12-05 19:13:38.059951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:23:20.708 [2024-12-05 19:13:38.059959] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:23:20.708 [2024-12-05 19:13:38.059967] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:23:20.708 [2024-12-05 19:13:38.059975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:23:20.708 [2024-12-05 19:13:38.059984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:23:20.709 [2024-12-05 19:13:38.059992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:23:20.709 [2024-12-05 19:13:38.060000] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:23:20.709 [2024-12-05 19:13:38.060008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:23:20.709 [2024-12-05 19:13:38.060015] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:23:20.709 [2024-12-05 19:13:38.060023] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:23:20.709 [2024-12-05 19:13:38.060035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:23:20.709 [2024-12-05 19:13:38.060043] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:23:20.709 [2024-12-05 19:13:38.060051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:23:20.709 [2024-12-05 19:13:38.060058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:23:20.709 [2024-12-05 19:13:38.060065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:23:20.709 [2024-12-05 19:13:38.060073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:23:20.709 [2024-12-05 19:13:38.060081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:23:20.709 [2024-12-05 19:13:38.060089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:23:20.709 [2024-12-05 19:13:38.060097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:23:20.709 [2024-12-05 19:13:38.060104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:23:20.709 [2024-12-05 19:13:38.060111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:23:20.709 [2024-12-05 19:13:38.060119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 
wr_cnt: 0 state: free 00:23:20.709 [2024-12-05 19:13:38.060126] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:23:20.709 [2024-12-05 19:13:38.060134] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:23:20.709 [2024-12-05 19:13:38.060141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:23:20.709 [2024-12-05 19:13:38.060149] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:23:20.709 [2024-12-05 19:13:38.060157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:23:20.709 [2024-12-05 19:13:38.060165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:23:20.709 [2024-12-05 19:13:38.060173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:23:20.709 [2024-12-05 19:13:38.060180] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:23:20.709 [2024-12-05 19:13:38.060188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:23:20.709 [2024-12-05 19:13:38.060196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:23:20.709 [2024-12-05 19:13:38.060204] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:23:20.709 [2024-12-05 19:13:38.060213] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:23:20.709 [2024-12-05 19:13:38.060221] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:23:20.709 [2024-12-05 19:13:38.060229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:23:20.709 [2024-12-05 19:13:38.060237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:23:20.709 [2024-12-05 19:13:38.060266] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:23:20.709 [2024-12-05 19:13:38.060274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:23:20.709 [2024-12-05 19:13:38.060282] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:23:20.709 [2024-12-05 19:13:38.060289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:23:20.709 [2024-12-05 19:13:38.060298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:23:20.709 [2024-12-05 19:13:38.060307] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:23:20.709 [2024-12-05 19:13:38.060315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:23:20.709 [2024-12-05 19:13:38.060324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:23:20.709 [2024-12-05 19:13:38.060332] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:23:20.709 [2024-12-05 19:13:38.060340] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 99: 0 / 261120 wr_cnt: 0 state: free 00:23:20.709 [2024-12-05 19:13:38.060348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:23:20.709 [2024-12-05 19:13:38.060365] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:23:20.709 [2024-12-05 19:13:38.060375] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 1f0d4c67-c2eb-4ccd-bb1a-4ce91bbc32a5 00:23:20.709 [2024-12-05 19:13:38.060384] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 110336 00:23:20.709 [2024-12-05 19:13:38.060395] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 111296 00:23:20.709 [2024-12-05 19:13:38.060407] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 110336 00:23:20.709 [2024-12-05 19:13:38.060416] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0087 00:23:20.709 [2024-12-05 19:13:38.060425] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:23:20.709 [2024-12-05 19:13:38.060433] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:23:20.709 [2024-12-05 19:13:38.060441] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:23:20.709 [2024-12-05 19:13:38.060448] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:23:20.709 [2024-12-05 19:13:38.060455] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:23:20.709 [2024-12-05 19:13:38.060463] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:20.709 [2024-12-05 19:13:38.060471] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:23:20.709 [2024-12-05 19:13:38.060480] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.953 ms 00:23:20.709 [2024-12-05 19:13:38.060487] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:20.709 [2024-12-05 19:13:38.062886] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:20.709 [2024-12-05 19:13:38.063037] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:23:20.709 [2024-12-05 19:13:38.063063] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.379 ms 00:23:20.709 [2024-12-05 19:13:38.063075] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:20.709 [2024-12-05 19:13:38.063210] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:20.709 [2024-12-05 19:13:38.063222] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:23:20.709 [2024-12-05 19:13:38.063232] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.099 ms 00:23:20.709 [2024-12-05 19:13:38.063242] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:20.709 [2024-12-05 19:13:38.070552] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:20.709 [2024-12-05 19:13:38.070600] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:20.709 [2024-12-05 19:13:38.070612] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:20.709 [2024-12-05 19:13:38.070622] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:20.709 [2024-12-05 19:13:38.070683] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:20.709 [2024-12-05 19:13:38.070692] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:20.709 [2024-12-05 19:13:38.070700] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:20.709 [2024-12-05 19:13:38.070713] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:20.709 [2024-12-05 19:13:38.070774] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:20.709 [2024-12-05 19:13:38.070786] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:20.709 [2024-12-05 19:13:38.070799] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:20.709 [2024-12-05 19:13:38.070807] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:20.709 [2024-12-05 19:13:38.070827] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:20.709 [2024-12-05 19:13:38.070835] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:20.709 [2024-12-05 19:13:38.070844] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:20.709 [2024-12-05 19:13:38.070851] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:20.709 [2024-12-05 19:13:38.083758] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:20.709 [2024-12-05 19:13:38.083806] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:20.709 [2024-12-05 19:13:38.083818] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:20.709 [2024-12-05 19:13:38.083826] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:20.709 [2024-12-05 19:13:38.093634] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:20.709 [2024-12-05 19:13:38.093843] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:20.709 [2024-12-05 19:13:38.093861] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:20.709 [2024-12-05 19:13:38.093870] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:20.709 [2024-12-05 19:13:38.093950] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:20.709 [2024-12-05 19:13:38.093960] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:20.709 [2024-12-05 19:13:38.093969] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:20.709 [2024-12-05 19:13:38.093977] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:20.709 [2024-12-05 19:13:38.094013] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:20.709 [2024-12-05 19:13:38.094023] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:20.709 [2024-12-05 19:13:38.094033] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:20.710 [2024-12-05 19:13:38.094049] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:20.710 [2024-12-05 19:13:38.094127] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:20.710 [2024-12-05 19:13:38.094141] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:20.710 [2024-12-05 19:13:38.094150] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:20.710 [2024-12-05 19:13:38.094158] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:20.710 [2024-12-05 19:13:38.094188] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:20.710 [2024-12-05 19:13:38.094197] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Initialize superblock 00:23:20.710 [2024-12-05 19:13:38.094206] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:20.710 [2024-12-05 19:13:38.094220] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:20.710 [2024-12-05 19:13:38.094306] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:20.710 [2024-12-05 19:13:38.094320] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:20.710 [2024-12-05 19:13:38.094328] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:20.710 [2024-12-05 19:13:38.094341] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:20.710 [2024-12-05 19:13:38.094389] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:20.710 [2024-12-05 19:13:38.094400] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:20.710 [2024-12-05 19:13:38.094412] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:20.710 [2024-12-05 19:13:38.094429] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:20.710 [2024-12-05 19:13:38.094560] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 269.275 ms, result 0 00:23:21.652 00:23:21.652 00:23:21.652 19:13:39 ftl.ftl_restore -- ftl/restore.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --skip=131072 --count=262144 00:23:21.652 [2024-12-05 19:13:39.179990] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 22.11.4 initialization... 00:23:21.652 [2024-12-05 19:13:39.180099] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89976 ] 00:23:21.914 [2024-12-05 19:13:39.324509] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:21.914 [2024-12-05 19:13:39.352944] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:21.914 [2024-12-05 19:13:39.466935] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:23:21.914 [2024-12-05 19:13:39.467279] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:23:22.177 [2024-12-05 19:13:39.628142] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:22.177 [2024-12-05 19:13:39.628201] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:23:22.177 [2024-12-05 19:13:39.628216] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:23:22.178 [2024-12-05 19:13:39.628225] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:22.178 [2024-12-05 19:13:39.628307] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:22.178 [2024-12-05 19:13:39.628320] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:22.178 [2024-12-05 19:13:39.628329] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.063 ms 00:23:22.178 [2024-12-05 19:13:39.628348] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:22.178 [2024-12-05 19:13:39.628377] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write 
buffer cache 00:23:22.178 [2024-12-05 19:13:39.628668] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:23:22.178 [2024-12-05 19:13:39.628690] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:22.178 [2024-12-05 19:13:39.628700] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:22.178 [2024-12-05 19:13:39.628713] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.318 ms 00:23:22.178 [2024-12-05 19:13:39.628721] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:22.178 [2024-12-05 19:13:39.630463] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:23:22.178 [2024-12-05 19:13:39.634325] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:22.178 [2024-12-05 19:13:39.634372] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:23:22.178 [2024-12-05 19:13:39.634384] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.863 ms 00:23:22.178 [2024-12-05 19:13:39.634405] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:22.178 [2024-12-05 19:13:39.634482] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:22.178 [2024-12-05 19:13:39.634495] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:23:22.178 [2024-12-05 19:13:39.634504] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:23:22.178 [2024-12-05 19:13:39.634511] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:22.178 [2024-12-05 19:13:39.642471] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:22.178 [2024-12-05 19:13:39.642511] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:22.178 [2024-12-05 19:13:39.642525] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.917 ms 00:23:22.178 [2024-12-05 19:13:39.642533] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:22.178 [2024-12-05 19:13:39.642633] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:22.178 [2024-12-05 19:13:39.642643] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:22.178 [2024-12-05 19:13:39.642653] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.074 ms 00:23:22.178 [2024-12-05 19:13:39.642664] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:22.178 [2024-12-05 19:13:39.642721] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:22.178 [2024-12-05 19:13:39.642735] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:23:22.178 [2024-12-05 19:13:39.642745] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:23:22.178 [2024-12-05 19:13:39.642756] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:22.178 [2024-12-05 19:13:39.642778] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:23:22.178 [2024-12-05 19:13:39.644903] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:22.178 [2024-12-05 19:13:39.644941] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:22.178 [2024-12-05 19:13:39.644951] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.130 ms 00:23:22.178 [2024-12-05 19:13:39.644959] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:22.178 [2024-12-05 19:13:39.644998] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:22.178 [2024-12-05 19:13:39.645007] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:23:22.178 [2024-12-05 19:13:39.645016] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:23:22.178 [2024-12-05 19:13:39.645027] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:22.178 [2024-12-05 19:13:39.645055] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:23:22.178 [2024-12-05 19:13:39.645077] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:23:22.178 [2024-12-05 19:13:39.645120] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:23:22.178 [2024-12-05 19:13:39.645138] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:23:22.178 [2024-12-05 19:13:39.645247] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:23:22.178 [2024-12-05 19:13:39.645285] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:23:22.178 [2024-12-05 19:13:39.645300] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:23:22.178 [2024-12-05 19:13:39.645317] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:23:22.178 [2024-12-05 19:13:39.645327] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:23:22.178 [2024-12-05 19:13:39.645336] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:23:22.178 [2024-12-05 19:13:39.645344] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:23:22.178 [2024-12-05 19:13:39.645352] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:23:22.178 [2024-12-05 19:13:39.645363] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:23:22.178 [2024-12-05 19:13:39.645372] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:22.178 [2024-12-05 19:13:39.645381] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:23:22.178 [2024-12-05 19:13:39.645390] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.322 ms 00:23:22.178 [2024-12-05 19:13:39.645401] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:22.178 [2024-12-05 19:13:39.645487] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:22.178 [2024-12-05 19:13:39.645497] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:23:22.178 [2024-12-05 19:13:39.645505] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:23:22.178 [2024-12-05 19:13:39.645513] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:22.178 [2024-12-05 19:13:39.645634] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:23:22.178 [2024-12-05 19:13:39.645648] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:23:22.178 [2024-12-05 19:13:39.645661] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:22.178 
[2024-12-05 19:13:39.645669] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:22.178 [2024-12-05 19:13:39.645683] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:23:22.178 [2024-12-05 19:13:39.645690] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:23:22.178 [2024-12-05 19:13:39.645697] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:23:22.178 [2024-12-05 19:13:39.645705] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:23:22.178 [2024-12-05 19:13:39.645712] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:23:22.178 [2024-12-05 19:13:39.645721] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:22.178 [2024-12-05 19:13:39.645731] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:23:22.178 [2024-12-05 19:13:39.645739] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:23:22.178 [2024-12-05 19:13:39.645746] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:22.178 [2024-12-05 19:13:39.645753] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:23:22.179 [2024-12-05 19:13:39.645760] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:23:22.179 [2024-12-05 19:13:39.645766] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:22.179 [2024-12-05 19:13:39.645773] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:23:22.179 [2024-12-05 19:13:39.645779] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:23:22.179 [2024-12-05 19:13:39.645785] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:22.179 [2024-12-05 19:13:39.645792] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:23:22.179 [2024-12-05 19:13:39.645799] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:23:22.179 [2024-12-05 19:13:39.645805] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:22.179 [2024-12-05 19:13:39.645812] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:23:22.179 [2024-12-05 19:13:39.645818] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:23:22.179 [2024-12-05 19:13:39.645824] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:22.179 [2024-12-05 19:13:39.645831] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:23:22.179 [2024-12-05 19:13:39.645839] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:23:22.179 [2024-12-05 19:13:39.645846] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:22.179 [2024-12-05 19:13:39.645852] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:23:22.179 [2024-12-05 19:13:39.645858] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:23:22.179 [2024-12-05 19:13:39.645865] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:22.179 [2024-12-05 19:13:39.645871] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:23:22.179 [2024-12-05 19:13:39.645878] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:23:22.179 [2024-12-05 19:13:39.645884] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:22.179 [2024-12-05 19:13:39.645890] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region 
trim_md_mirror 00:23:22.179 [2024-12-05 19:13:39.645897] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:23:22.179 [2024-12-05 19:13:39.645904] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:22.179 [2024-12-05 19:13:39.645911] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:23:22.179 [2024-12-05 19:13:39.645917] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:23:22.179 [2024-12-05 19:13:39.645923] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:22.179 [2024-12-05 19:13:39.645929] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:23:22.179 [2024-12-05 19:13:39.645935] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:23:22.179 [2024-12-05 19:13:39.645944] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:22.179 [2024-12-05 19:13:39.645952] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:23:22.179 [2024-12-05 19:13:39.645963] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:23:22.179 [2024-12-05 19:13:39.645974] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:22.179 [2024-12-05 19:13:39.645981] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:22.179 [2024-12-05 19:13:39.645994] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:23:22.179 [2024-12-05 19:13:39.646002] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:23:22.179 [2024-12-05 19:13:39.646008] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:23:22.179 [2024-12-05 19:13:39.646015] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:23:22.179 [2024-12-05 19:13:39.646021] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:23:22.179 [2024-12-05 19:13:39.646027] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:23:22.179 [2024-12-05 19:13:39.646036] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:23:22.179 [2024-12-05 19:13:39.646045] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:22.179 [2024-12-05 19:13:39.646058] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:23:22.179 [2024-12-05 19:13:39.646066] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:23:22.179 [2024-12-05 19:13:39.646073] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:23:22.179 [2024-12-05 19:13:39.646082] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:23:22.179 [2024-12-05 19:13:39.646089] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:23:22.179 [2024-12-05 19:13:39.646096] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:23:22.179 [2024-12-05 19:13:39.646103] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd 
ver:2 blk_offs:0x6920 blk_sz:0x800 00:23:22.179 [2024-12-05 19:13:39.646110] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:23:22.179 [2024-12-05 19:13:39.646117] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:23:22.179 [2024-12-05 19:13:39.646130] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:23:22.179 [2024-12-05 19:13:39.646137] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:23:22.179 [2024-12-05 19:13:39.646144] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:23:22.179 [2024-12-05 19:13:39.646151] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:23:22.179 [2024-12-05 19:13:39.646158] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:23:22.179 [2024-12-05 19:13:39.646165] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:23:22.179 [2024-12-05 19:13:39.646173] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:22.179 [2024-12-05 19:13:39.646181] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:23:22.179 [2024-12-05 19:13:39.646189] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:23:22.179 [2024-12-05 19:13:39.646196] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:23:22.179 [2024-12-05 19:13:39.646205] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:23:22.179 [2024-12-05 19:13:39.646220] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:22.179 [2024-12-05 19:13:39.646228] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:23:22.179 [2024-12-05 19:13:39.646241] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.670 ms 00:23:22.179 [2024-12-05 19:13:39.646250] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:22.179 [2024-12-05 19:13:39.660040] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:22.179 [2024-12-05 19:13:39.660091] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:22.179 [2024-12-05 19:13:39.660103] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.722 ms 00:23:22.179 [2024-12-05 19:13:39.660112] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:22.180 [2024-12-05 19:13:39.660201] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:22.180 [2024-12-05 19:13:39.660209] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:23:22.180 [2024-12-05 19:13:39.660217] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 
00:23:22.180 [2024-12-05 19:13:39.660225] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:22.180 [2024-12-05 19:13:39.686146] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:22.180 [2024-12-05 19:13:39.686233] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:22.180 [2024-12-05 19:13:39.686317] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.822 ms 00:23:22.180 [2024-12-05 19:13:39.686336] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:22.180 [2024-12-05 19:13:39.686420] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:22.180 [2024-12-05 19:13:39.686444] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:22.180 [2024-12-05 19:13:39.686462] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:23:22.180 [2024-12-05 19:13:39.686486] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:22.180 [2024-12-05 19:13:39.687168] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:22.180 [2024-12-05 19:13:39.687234] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:22.180 [2024-12-05 19:13:39.687292] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.565 ms 00:23:22.180 [2024-12-05 19:13:39.687310] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:22.180 [2024-12-05 19:13:39.687604] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:22.180 [2024-12-05 19:13:39.687630] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:22.180 [2024-12-05 19:13:39.687650] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.245 ms 00:23:22.180 [2024-12-05 19:13:39.687667] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:22.180 [2024-12-05 19:13:39.696133] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:22.180 [2024-12-05 19:13:39.696179] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:22.180 [2024-12-05 19:13:39.696191] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.426 ms 00:23:22.180 [2024-12-05 19:13:39.696199] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:22.180 [2024-12-05 19:13:39.700040] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:23:22.180 [2024-12-05 19:13:39.700089] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:23:22.180 [2024-12-05 19:13:39.700106] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:22.180 [2024-12-05 19:13:39.700115] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:23:22.180 [2024-12-05 19:13:39.700124] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.787 ms 00:23:22.180 [2024-12-05 19:13:39.700132] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:22.180 [2024-12-05 19:13:39.715950] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:22.180 [2024-12-05 19:13:39.715998] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:23:22.180 [2024-12-05 19:13:39.716013] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.759 ms 00:23:22.180 [2024-12-05 19:13:39.716022] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:23:22.180 [2024-12-05 19:13:39.719136] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:22.180 [2024-12-05 19:13:39.719336] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:23:22.180 [2024-12-05 19:13:39.719356] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.083 ms 00:23:22.180 [2024-12-05 19:13:39.719364] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:22.180 [2024-12-05 19:13:39.722077] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:22.180 [2024-12-05 19:13:39.722125] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:23:22.180 [2024-12-05 19:13:39.722135] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.674 ms 00:23:22.180 [2024-12-05 19:13:39.722143] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:22.180 [2024-12-05 19:13:39.722507] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:22.180 [2024-12-05 19:13:39.722521] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:23:22.180 [2024-12-05 19:13:39.722531] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.285 ms 00:23:22.180 [2024-12-05 19:13:39.722541] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:22.441 [2024-12-05 19:13:39.746552] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:22.441 [2024-12-05 19:13:39.746613] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:23:22.441 [2024-12-05 19:13:39.746626] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.988 ms 00:23:22.441 [2024-12-05 19:13:39.746635] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:22.441 [2024-12-05 19:13:39.754865] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:23:22.441 [2024-12-05 19:13:39.757792] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:22.441 [2024-12-05 19:13:39.757993] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:23:22.441 [2024-12-05 19:13:39.758012] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.106 ms 00:23:22.441 [2024-12-05 19:13:39.758028] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:22.441 [2024-12-05 19:13:39.758109] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:22.441 [2024-12-05 19:13:39.758127] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:23:22.441 [2024-12-05 19:13:39.758136] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:23:22.441 [2024-12-05 19:13:39.758151] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:22.441 [2024-12-05 19:13:39.759929] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:22.441 [2024-12-05 19:13:39.759976] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:23:22.441 [2024-12-05 19:13:39.759987] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.741 ms 00:23:22.441 [2024-12-05 19:13:39.759996] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:22.441 [2024-12-05 19:13:39.760025] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:22.441 [2024-12-05 19:13:39.760034] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start 
core poller 00:23:22.441 [2024-12-05 19:13:39.760043] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:23:22.441 [2024-12-05 19:13:39.760052] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:22.441 [2024-12-05 19:13:39.760090] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:23:22.441 [2024-12-05 19:13:39.760106] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:22.441 [2024-12-05 19:13:39.760121] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:23:22.441 [2024-12-05 19:13:39.760133] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:23:22.441 [2024-12-05 19:13:39.760141] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:22.441 [2024-12-05 19:13:39.765481] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:22.441 [2024-12-05 19:13:39.765528] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:23:22.442 [2024-12-05 19:13:39.765539] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.320 ms 00:23:22.442 [2024-12-05 19:13:39.765569] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:22.442 [2024-12-05 19:13:39.765650] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:22.442 [2024-12-05 19:13:39.765660] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:23:22.442 [2024-12-05 19:13:39.765670] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:23:22.442 [2024-12-05 19:13:39.765682] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:22.442 [2024-12-05 19:13:39.766829] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 138.239 ms, result 0 00:23:23.827  [2024-12-05T19:13:41.960Z] Copying: 10120/1048576 [kB] (10120 kBps) [2024-12-05T19:13:43.347Z] Copying: 28/1024 [MB] (19 MBps) [2024-12-05T19:13:44.290Z] Copying: 45/1024 [MB] (16 MBps) [2024-12-05T19:13:45.234Z] Copying: 61/1024 [MB] (16 MBps) [2024-12-05T19:13:46.173Z] Copying: 82/1024 [MB] (21 MBps) [2024-12-05T19:13:47.116Z] Copying: 104/1024 [MB] (21 MBps) [2024-12-05T19:13:48.059Z] Copying: 122/1024 [MB] (17 MBps) [2024-12-05T19:13:49.001Z] Copying: 143/1024 [MB] (21 MBps) [2024-12-05T19:13:50.391Z] Copying: 158/1024 [MB] (15 MBps) [2024-12-05T19:13:50.962Z] Copying: 175/1024 [MB] (16 MBps) [2024-12-05T19:13:52.349Z] Copying: 199/1024 [MB] (24 MBps) [2024-12-05T19:13:53.289Z] Copying: 209/1024 [MB] (10 MBps) [2024-12-05T19:13:54.234Z] Copying: 222/1024 [MB] (13 MBps) [2024-12-05T19:13:55.177Z] Copying: 239/1024 [MB] (16 MBps) [2024-12-05T19:13:56.120Z] Copying: 257/1024 [MB] (18 MBps) [2024-12-05T19:13:57.061Z] Copying: 275/1024 [MB] (17 MBps) [2024-12-05T19:13:58.098Z] Copying: 291/1024 [MB] (16 MBps) [2024-12-05T19:13:59.050Z] Copying: 305/1024 [MB] (13 MBps) [2024-12-05T19:13:59.992Z] Copying: 319/1024 [MB] (13 MBps) [2024-12-05T19:14:01.377Z] Copying: 333/1024 [MB] (14 MBps) [2024-12-05T19:14:02.319Z] Copying: 343/1024 [MB] (10 MBps) [2024-12-05T19:14:03.263Z] Copying: 358/1024 [MB] (14 MBps) [2024-12-05T19:14:04.209Z] Copying: 377/1024 [MB] (19 MBps) [2024-12-05T19:14:05.153Z] Copying: 389/1024 [MB] (11 MBps) [2024-12-05T19:14:06.095Z] Copying: 408/1024 [MB] (18 MBps) [2024-12-05T19:14:07.038Z] Copying: 428/1024 [MB] (20 MBps) [2024-12-05T19:14:07.981Z] Copying: 450/1024 [MB] (22 MBps) 
[2024-12-05T19:14:09.369Z] Copying: 471/1024 [MB] (20 MBps) [2024-12-05T19:14:10.307Z] Copying: 483/1024 [MB] (11 MBps) [2024-12-05T19:14:11.305Z] Copying: 500/1024 [MB] (17 MBps) [2024-12-05T19:14:12.246Z] Copying: 520/1024 [MB] (19 MBps) [2024-12-05T19:14:13.187Z] Copying: 532/1024 [MB] (12 MBps) [2024-12-05T19:14:14.131Z] Copying: 550/1024 [MB] (17 MBps) [2024-12-05T19:14:15.073Z] Copying: 571/1024 [MB] (21 MBps) [2024-12-05T19:14:16.016Z] Copying: 588/1024 [MB] (16 MBps) [2024-12-05T19:14:16.962Z] Copying: 606/1024 [MB] (18 MBps) [2024-12-05T19:14:18.352Z] Copying: 619/1024 [MB] (12 MBps) [2024-12-05T19:14:19.298Z] Copying: 630/1024 [MB] (11 MBps) [2024-12-05T19:14:20.245Z] Copying: 641/1024 [MB] (10 MBps) [2024-12-05T19:14:21.191Z] Copying: 651/1024 [MB] (10 MBps) [2024-12-05T19:14:22.137Z] Copying: 666/1024 [MB] (14 MBps) [2024-12-05T19:14:23.083Z] Copying: 682/1024 [MB] (15 MBps) [2024-12-05T19:14:24.027Z] Copying: 696/1024 [MB] (14 MBps) [2024-12-05T19:14:24.972Z] Copying: 718/1024 [MB] (21 MBps) [2024-12-05T19:14:26.368Z] Copying: 734/1024 [MB] (15 MBps) [2024-12-05T19:14:26.976Z] Copying: 751/1024 [MB] (16 MBps) [2024-12-05T19:14:28.395Z] Copying: 768/1024 [MB] (17 MBps) [2024-12-05T19:14:28.969Z] Copying: 782/1024 [MB] (14 MBps) [2024-12-05T19:14:30.355Z] Copying: 801/1024 [MB] (18 MBps) [2024-12-05T19:14:31.296Z] Copying: 814/1024 [MB] (12 MBps) [2024-12-05T19:14:32.236Z] Copying: 825/1024 [MB] (10 MBps) [2024-12-05T19:14:33.177Z] Copying: 835/1024 [MB] (10 MBps) [2024-12-05T19:14:34.117Z] Copying: 845/1024 [MB] (10 MBps) [2024-12-05T19:14:35.060Z] Copying: 859/1024 [MB] (14 MBps) [2024-12-05T19:14:36.000Z] Copying: 869/1024 [MB] (10 MBps) [2024-12-05T19:14:37.385Z] Copying: 891/1024 [MB] (21 MBps) [2024-12-05T19:14:37.956Z] Copying: 902/1024 [MB] (10 MBps) [2024-12-05T19:14:39.344Z] Copying: 913/1024 [MB] (11 MBps) [2024-12-05T19:14:40.289Z] Copying: 928/1024 [MB] (14 MBps) [2024-12-05T19:14:41.227Z] Copying: 941/1024 [MB] (12 MBps) [2024-12-05T19:14:42.171Z] Copying: 955/1024 [MB] (14 MBps) [2024-12-05T19:14:43.116Z] Copying: 970/1024 [MB] (15 MBps) [2024-12-05T19:14:44.062Z] Copying: 982/1024 [MB] (11 MBps) [2024-12-05T19:14:45.006Z] Copying: 999/1024 [MB] (17 MBps) [2024-12-05T19:14:45.580Z] Copying: 1013/1024 [MB] (14 MBps) [2024-12-05T19:14:45.839Z] Copying: 1024/1024 [MB] (average 15 MBps)[2024-12-05 19:14:45.773218] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:28.280 [2024-12-05 19:14:45.773342] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:24:28.280 [2024-12-05 19:14:45.773362] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:24:28.280 [2024-12-05 19:14:45.773372] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:28.280 [2024-12-05 19:14:45.773399] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:24:28.280 [2024-12-05 19:14:45.774451] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:28.280 [2024-12-05 19:14:45.774501] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:24:28.280 [2024-12-05 19:14:45.774520] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.032 ms 00:24:28.280 [2024-12-05 19:14:45.774533] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:28.280 [2024-12-05 19:14:45.774781] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:28.280 [2024-12-05 19:14:45.774805] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:24:28.280 [2024-12-05 19:14:45.774817] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.218 ms 00:24:28.280 [2024-12-05 19:14:45.774827] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:28.280 [2024-12-05 19:14:45.780865] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:28.280 [2024-12-05 19:14:45.780924] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:24:28.280 [2024-12-05 19:14:45.780935] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.018 ms 00:24:28.280 [2024-12-05 19:14:45.780944] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:28.280 [2024-12-05 19:14:45.787380] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:28.280 [2024-12-05 19:14:45.787431] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:24:28.280 [2024-12-05 19:14:45.787444] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.377 ms 00:24:28.280 [2024-12-05 19:14:45.787453] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:28.280 [2024-12-05 19:14:45.790713] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:28.280 [2024-12-05 19:14:45.790776] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:24:28.280 [2024-12-05 19:14:45.790789] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.183 ms 00:24:28.280 [2024-12-05 19:14:45.790797] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:28.280 [2024-12-05 19:14:45.796975] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:28.280 [2024-12-05 19:14:45.797033] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:24:28.280 [2024-12-05 19:14:45.797047] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.126 ms 00:24:28.280 [2024-12-05 19:14:45.797067] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:28.542 [2024-12-05 19:14:45.990372] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:28.542 [2024-12-05 19:14:45.990453] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:24:28.542 [2024-12-05 19:14:45.990468] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 193.247 ms 00:24:28.542 [2024-12-05 19:14:45.990476] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:28.542 [2024-12-05 19:14:45.993536] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:28.542 [2024-12-05 19:14:45.993814] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:24:28.542 [2024-12-05 19:14:45.993836] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.041 ms 00:24:28.542 [2024-12-05 19:14:45.993844] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:28.542 [2024-12-05 19:14:45.997021] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:28.542 [2024-12-05 19:14:45.997214] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:24:28.542 [2024-12-05 19:14:45.997234] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.133 ms 00:24:28.542 [2024-12-05 19:14:45.997242] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:28.542 [2024-12-05 19:14:45.999801] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:24:28.543 [2024-12-05 19:14:45.999859] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:24:28.543 [2024-12-05 19:14:45.999871] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.376 ms 00:24:28.543 [2024-12-05 19:14:45.999880] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:28.543 [2024-12-05 19:14:46.002492] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:28.543 [2024-12-05 19:14:46.002666] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:24:28.543 [2024-12-05 19:14:46.002742] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.532 ms 00:24:28.543 [2024-12-05 19:14:46.002766] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:28.543 [2024-12-05 19:14:46.002995] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:24:28.543 [2024-12-05 19:14:46.003039] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 131072 / 261120 wr_cnt: 1 state: open 00:24:28.543 [2024-12-05 19:14:46.003088] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:24:28.543 [2024-12-05 19:14:46.003191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:24:28.543 [2024-12-05 19:14:46.003201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:24:28.543 [2024-12-05 19:14:46.003210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:24:28.543 [2024-12-05 19:14:46.003219] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:24:28.543 [2024-12-05 19:14:46.003227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:24:28.543 [2024-12-05 19:14:46.003238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:24:28.543 [2024-12-05 19:14:46.003247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:24:28.543 [2024-12-05 19:14:46.003280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:24:28.543 [2024-12-05 19:14:46.003290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:24:28.543 [2024-12-05 19:14:46.003299] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:24:28.543 [2024-12-05 19:14:46.003308] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:24:28.543 [2024-12-05 19:14:46.003316] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:24:28.543 [2024-12-05 19:14:46.003327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:24:28.543 [2024-12-05 19:14:46.003336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:24:28.543 [2024-12-05 19:14:46.003344] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:24:28.543 [2024-12-05 19:14:46.003352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:24:28.543 [2024-12-05 19:14:46.003360] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:24:28.543 [2024-12-05 19:14:46.003368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:24:28.543 [2024-12-05 19:14:46.003377] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:24:28.543 [2024-12-05 19:14:46.003387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:24:28.543 [2024-12-05 19:14:46.003395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:24:28.543 [2024-12-05 19:14:46.003403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:24:28.543 [2024-12-05 19:14:46.003411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:24:28.543 [2024-12-05 19:14:46.003419] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:24:28.543 [2024-12-05 19:14:46.003427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:24:28.543 [2024-12-05 19:14:46.003438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:24:28.543 [2024-12-05 19:14:46.003447] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:24:28.543 [2024-12-05 19:14:46.003455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:24:28.543 [2024-12-05 19:14:46.003463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:24:28.543 [2024-12-05 19:14:46.003471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:24:28.543 [2024-12-05 19:14:46.003480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:24:28.543 [2024-12-05 19:14:46.003490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:24:28.543 [2024-12-05 19:14:46.003499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:24:28.543 [2024-12-05 19:14:46.003507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:24:28.543 [2024-12-05 19:14:46.003514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:24:28.543 [2024-12-05 19:14:46.003522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:24:28.543 [2024-12-05 19:14:46.003533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:24:28.543 [2024-12-05 19:14:46.003542] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:24:28.543 [2024-12-05 19:14:46.003549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:24:28.543 [2024-12-05 19:14:46.003557] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:24:28.543 [2024-12-05 19:14:46.003564] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:24:28.543 [2024-12-05 
19:14:46.003586] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:24:28.543 [2024-12-05 19:14:46.003594] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:24:28.543 [2024-12-05 19:14:46.003602] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:24:28.543 [2024-12-05 19:14:46.003609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:24:28.543 [2024-12-05 19:14:46.003617] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:24:28.543 [2024-12-05 19:14:46.003625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:24:28.543 [2024-12-05 19:14:46.003636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:24:28.543 [2024-12-05 19:14:46.003645] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:24:28.543 [2024-12-05 19:14:46.003653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:24:28.543 [2024-12-05 19:14:46.003661] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:24:28.543 [2024-12-05 19:14:46.003668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:24:28.543 [2024-12-05 19:14:46.003676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:24:28.543 [2024-12-05 19:14:46.003685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:24:28.543 [2024-12-05 19:14:46.003695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:24:28.543 [2024-12-05 19:14:46.003703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:24:28.543 [2024-12-05 19:14:46.003711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:24:28.543 [2024-12-05 19:14:46.003719] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:24:28.543 [2024-12-05 19:14:46.003728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:24:28.543 [2024-12-05 19:14:46.003738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:24:28.543 [2024-12-05 19:14:46.003748] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:24:28.543 [2024-12-05 19:14:46.003757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:24:28.543 [2024-12-05 19:14:46.003765] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:24:28.543 [2024-12-05 19:14:46.003772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:24:28.543 [2024-12-05 19:14:46.003780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:24:28.543 [2024-12-05 19:14:46.003789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 
00:24:28.543 [2024-12-05 19:14:46.003797] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:24:28.543 [2024-12-05 19:14:46.003804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:24:28.543 [2024-12-05 19:14:46.003813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:24:28.543 [2024-12-05 19:14:46.003821] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:24:28.543 [2024-12-05 19:14:46.003830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:24:28.543 [2024-12-05 19:14:46.003838] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:24:28.543 [2024-12-05 19:14:46.003846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:24:28.543 [2024-12-05 19:14:46.003854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:24:28.543 [2024-12-05 19:14:46.003861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:24:28.544 [2024-12-05 19:14:46.003869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:24:28.544 [2024-12-05 19:14:46.003878] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:24:28.544 [2024-12-05 19:14:46.003887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:24:28.544 [2024-12-05 19:14:46.003895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:24:28.544 [2024-12-05 19:14:46.003902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:24:28.544 [2024-12-05 19:14:46.003910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:24:28.544 [2024-12-05 19:14:46.003918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:24:28.544 [2024-12-05 19:14:46.003928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:24:28.544 [2024-12-05 19:14:46.003937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:24:28.544 [2024-12-05 19:14:46.003944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:24:28.544 [2024-12-05 19:14:46.003952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:24:28.544 [2024-12-05 19:14:46.003960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:24:28.544 [2024-12-05 19:14:46.003967] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:24:28.544 [2024-12-05 19:14:46.003977] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:24:28.544 [2024-12-05 19:14:46.003987] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:24:28.544 [2024-12-05 19:14:46.003995] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 
wr_cnt: 0 state: free 00:24:28.544 [2024-12-05 19:14:46.004003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:24:28.544 [2024-12-05 19:14:46.004011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:24:28.544 [2024-12-05 19:14:46.004019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:24:28.544 [2024-12-05 19:14:46.004026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:24:28.544 [2024-12-05 19:14:46.004035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:24:28.544 [2024-12-05 19:14:46.004043] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:24:28.544 [2024-12-05 19:14:46.004051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:24:28.544 [2024-12-05 19:14:46.004068] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:24:28.544 [2024-12-05 19:14:46.004078] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 1f0d4c67-c2eb-4ccd-bb1a-4ce91bbc32a5 00:24:28.544 [2024-12-05 19:14:46.004088] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 131072 00:24:28.544 [2024-12-05 19:14:46.004100] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 21696 00:24:28.544 [2024-12-05 19:14:46.004115] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 20736 00:24:28.544 [2024-12-05 19:14:46.004126] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0463 00:24:28.544 [2024-12-05 19:14:46.004135] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:24:28.544 [2024-12-05 19:14:46.004144] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:24:28.544 [2024-12-05 19:14:46.004157] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:24:28.544 [2024-12-05 19:14:46.004164] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:24:28.544 [2024-12-05 19:14:46.004173] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:24:28.544 [2024-12-05 19:14:46.004181] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:28.544 [2024-12-05 19:14:46.004191] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:24:28.544 [2024-12-05 19:14:46.004200] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.188 ms 00:24:28.544 [2024-12-05 19:14:46.004208] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:28.544 [2024-12-05 19:14:46.007728] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:28.544 [2024-12-05 19:14:46.007896] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:24:28.544 [2024-12-05 19:14:46.007956] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.488 ms 00:24:28.544 [2024-12-05 19:14:46.007982] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:28.544 [2024-12-05 19:14:46.008153] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:28.544 [2024-12-05 19:14:46.008181] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:24:28.544 [2024-12-05 19:14:46.008294] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.133 ms 
00:24:28.544 [2024-12-05 19:14:46.008322] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:28.544 [2024-12-05 19:14:46.018799] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:28.544 [2024-12-05 19:14:46.018982] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:28.544 [2024-12-05 19:14:46.019039] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:28.544 [2024-12-05 19:14:46.019065] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:28.544 [2024-12-05 19:14:46.019146] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:28.544 [2024-12-05 19:14:46.019184] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:28.544 [2024-12-05 19:14:46.019205] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:28.544 [2024-12-05 19:14:46.019225] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:28.544 [2024-12-05 19:14:46.019343] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:28.544 [2024-12-05 19:14:46.019375] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:28.544 [2024-12-05 19:14:46.019399] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:28.544 [2024-12-05 19:14:46.019472] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:28.544 [2024-12-05 19:14:46.019509] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:28.544 [2024-12-05 19:14:46.019533] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:28.544 [2024-12-05 19:14:46.019555] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:28.544 [2024-12-05 19:14:46.019577] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:28.544 [2024-12-05 19:14:46.039033] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:28.544 [2024-12-05 19:14:46.039241] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:28.544 [2024-12-05 19:14:46.039535] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:28.544 [2024-12-05 19:14:46.039550] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:28.544 [2024-12-05 19:14:46.054730] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:28.544 [2024-12-05 19:14:46.054791] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:28.544 [2024-12-05 19:14:46.054803] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:28.544 [2024-12-05 19:14:46.054813] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:28.544 [2024-12-05 19:14:46.054886] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:28.544 [2024-12-05 19:14:46.054897] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:28.544 [2024-12-05 19:14:46.054913] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:28.544 [2024-12-05 19:14:46.054922] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:28.544 [2024-12-05 19:14:46.054964] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:28.544 [2024-12-05 19:14:46.054976] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:28.544 [2024-12-05 19:14:46.054986] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:28.544 [2024-12-05 19:14:46.054995] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:28.544 [2024-12-05 19:14:46.055084] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:28.544 [2024-12-05 19:14:46.055098] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:28.544 [2024-12-05 19:14:46.055108] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:28.544 [2024-12-05 19:14:46.055117] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:28.544 [2024-12-05 19:14:46.055150] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:28.544 [2024-12-05 19:14:46.055163] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:24:28.544 [2024-12-05 19:14:46.055173] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:28.544 [2024-12-05 19:14:46.055182] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:28.544 [2024-12-05 19:14:46.055230] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:28.544 [2024-12-05 19:14:46.055244] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:28.544 [2024-12-05 19:14:46.055290] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:28.544 [2024-12-05 19:14:46.055299] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:28.544 [2024-12-05 19:14:46.055354] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:28.544 [2024-12-05 19:14:46.055367] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:28.544 [2024-12-05 19:14:46.055379] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:28.544 [2024-12-05 19:14:46.055389] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:28.544 [2024-12-05 19:14:46.055564] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 282.297 ms, result 0 00:24:28.805 00:24:28.805 00:24:28.805 19:14:46 ftl.ftl_restore -- ftl/restore.sh@82 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:24:31.348 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:24:31.348 19:14:48 ftl.ftl_restore -- ftl/restore.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:24:31.348 19:14:48 ftl.ftl_restore -- ftl/restore.sh@85 -- # restore_kill 00:24:31.348 19:14:48 ftl.ftl_restore -- ftl/restore.sh@28 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:24:31.348 19:14:48 ftl.ftl_restore -- ftl/restore.sh@29 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:24:31.348 19:14:48 ftl.ftl_restore -- ftl/restore.sh@30 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:24:31.348 Process with pid 88109 is not found 00:24:31.348 Remove shared memory files 00:24:31.348 19:14:48 ftl.ftl_restore -- ftl/restore.sh@32 -- # killprocess 88109 00:24:31.348 19:14:48 ftl.ftl_restore -- common/autotest_common.sh@954 -- # '[' -z 88109 ']' 00:24:31.348 19:14:48 ftl.ftl_restore -- common/autotest_common.sh@958 -- # kill -0 88109 00:24:31.348 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (88109) - No such process 00:24:31.348 19:14:48 ftl.ftl_restore -- common/autotest_common.sh@981 -- # echo 'Process with pid 88109 is not found' 00:24:31.348 19:14:48 ftl.ftl_restore -- 
ftl/restore.sh@33 -- # remove_shm 00:24:31.348 19:14:48 ftl.ftl_restore -- ftl/common.sh@204 -- # echo Remove shared memory files 00:24:31.348 19:14:48 ftl.ftl_restore -- ftl/common.sh@205 -- # rm -f rm -f 00:24:31.348 19:14:48 ftl.ftl_restore -- ftl/common.sh@206 -- # rm -f rm -f 00:24:31.348 19:14:48 ftl.ftl_restore -- ftl/common.sh@207 -- # rm -f rm -f 00:24:31.348 19:14:48 ftl.ftl_restore -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:24:31.348 19:14:48 ftl.ftl_restore -- ftl/common.sh@209 -- # rm -f rm -f 00:24:31.348 ************************************ 00:24:31.348 END TEST ftl_restore 00:24:31.348 ************************************ 00:24:31.348 00:24:31.348 real 4m10.789s 00:24:31.348 user 3m59.007s 00:24:31.348 sys 0m11.822s 00:24:31.348 19:14:48 ftl.ftl_restore -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:31.348 19:14:48 ftl.ftl_restore -- common/autotest_common.sh@10 -- # set +x 00:24:31.348 19:14:48 ftl -- ftl/ftl.sh@77 -- # run_test ftl_dirty_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0 00:24:31.348 19:14:48 ftl -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:24:31.348 19:14:48 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:31.348 19:14:48 ftl -- common/autotest_common.sh@10 -- # set +x 00:24:31.348 ************************************ 00:24:31.348 START TEST ftl_dirty_shutdown 00:24:31.348 ************************************ 00:24:31.348 19:14:48 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0 00:24:31.348 * Looking for test storage... 00:24:31.348 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:24:31.348 19:14:48 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:24:31.348 19:14:48 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1711 -- # lcov --version 00:24:31.348 19:14:48 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:24:31.610 19:14:48 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:24:31.610 19:14:48 ftl.ftl_dirty_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:31.610 19:14:48 ftl.ftl_dirty_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:31.610 19:14:48 ftl.ftl_dirty_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:31.610 19:14:48 ftl.ftl_dirty_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:24:31.610 19:14:48 ftl.ftl_dirty_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:24:31.610 19:14:48 ftl.ftl_dirty_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:24:31.610 19:14:48 ftl.ftl_dirty_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:24:31.610 19:14:48 ftl.ftl_dirty_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:24:31.610 19:14:48 ftl.ftl_dirty_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:24:31.610 19:14:48 ftl.ftl_dirty_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:24:31.610 19:14:48 ftl.ftl_dirty_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:31.610 19:14:48 ftl.ftl_dirty_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:24:31.610 19:14:48 ftl.ftl_dirty_shutdown -- scripts/common.sh@345 -- # : 1 00:24:31.610 19:14:48 ftl.ftl_dirty_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:31.610 19:14:48 ftl.ftl_dirty_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:31.610 19:14:48 ftl.ftl_dirty_shutdown -- scripts/common.sh@365 -- # decimal 1 00:24:31.610 19:14:48 ftl.ftl_dirty_shutdown -- scripts/common.sh@353 -- # local d=1 00:24:31.610 19:14:48 ftl.ftl_dirty_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:31.610 19:14:48 ftl.ftl_dirty_shutdown -- scripts/common.sh@355 -- # echo 1 00:24:31.610 19:14:48 ftl.ftl_dirty_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:24:31.610 19:14:48 ftl.ftl_dirty_shutdown -- scripts/common.sh@366 -- # decimal 2 00:24:31.610 19:14:48 ftl.ftl_dirty_shutdown -- scripts/common.sh@353 -- # local d=2 00:24:31.610 19:14:48 ftl.ftl_dirty_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:31.610 19:14:48 ftl.ftl_dirty_shutdown -- scripts/common.sh@355 -- # echo 2 00:24:31.610 19:14:48 ftl.ftl_dirty_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:24:31.610 19:14:48 ftl.ftl_dirty_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:31.610 19:14:48 ftl.ftl_dirty_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:31.610 19:14:48 ftl.ftl_dirty_shutdown -- scripts/common.sh@368 -- # return 0 00:24:31.610 19:14:48 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:31.610 19:14:48 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:24:31.610 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:31.610 --rc genhtml_branch_coverage=1 00:24:31.610 --rc genhtml_function_coverage=1 00:24:31.610 --rc genhtml_legend=1 00:24:31.610 --rc geninfo_all_blocks=1 00:24:31.610 --rc geninfo_unexecuted_blocks=1 00:24:31.610 00:24:31.610 ' 00:24:31.610 19:14:48 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:24:31.610 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:31.610 --rc genhtml_branch_coverage=1 00:24:31.610 --rc genhtml_function_coverage=1 00:24:31.610 --rc genhtml_legend=1 00:24:31.610 --rc geninfo_all_blocks=1 00:24:31.610 --rc geninfo_unexecuted_blocks=1 00:24:31.610 00:24:31.610 ' 00:24:31.610 19:14:48 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:24:31.610 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:31.610 --rc genhtml_branch_coverage=1 00:24:31.610 --rc genhtml_function_coverage=1 00:24:31.610 --rc genhtml_legend=1 00:24:31.610 --rc geninfo_all_blocks=1 00:24:31.610 --rc geninfo_unexecuted_blocks=1 00:24:31.610 00:24:31.610 ' 00:24:31.610 19:14:48 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:24:31.610 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:31.611 --rc genhtml_branch_coverage=1 00:24:31.611 --rc genhtml_function_coverage=1 00:24:31.611 --rc genhtml_legend=1 00:24:31.611 --rc geninfo_all_blocks=1 00:24:31.611 --rc geninfo_unexecuted_blocks=1 00:24:31.611 00:24:31.611 ' 00:24:31.611 19:14:48 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:24:31.611 19:14:48 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh 00:24:31.611 19:14:48 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:24:31.611 19:14:48 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:24:31.611 19:14:48 ftl.ftl_dirty_shutdown -- ftl/common.sh@9 -- # readlink -f 
/home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:24:31.611 19:14:48 ftl.ftl_dirty_shutdown -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:24:31.611 19:14:48 ftl.ftl_dirty_shutdown -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:24:31.611 19:14:48 ftl.ftl_dirty_shutdown -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:24:31.611 19:14:48 ftl.ftl_dirty_shutdown -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:24:31.611 19:14:48 ftl.ftl_dirty_shutdown -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:24:31.611 19:14:48 ftl.ftl_dirty_shutdown -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:24:31.611 19:14:48 ftl.ftl_dirty_shutdown -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:24:31.611 19:14:48 ftl.ftl_dirty_shutdown -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:24:31.611 19:14:48 ftl.ftl_dirty_shutdown -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:24:31.611 19:14:48 ftl.ftl_dirty_shutdown -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:24:31.611 19:14:48 ftl.ftl_dirty_shutdown -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:24:31.611 19:14:48 ftl.ftl_dirty_shutdown -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:24:31.611 19:14:48 ftl.ftl_dirty_shutdown -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:24:31.611 19:14:48 ftl.ftl_dirty_shutdown -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:24:31.611 19:14:48 ftl.ftl_dirty_shutdown -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:24:31.611 19:14:48 ftl.ftl_dirty_shutdown -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:24:31.611 19:14:48 ftl.ftl_dirty_shutdown -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:24:31.611 19:14:48 ftl.ftl_dirty_shutdown -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:24:31.611 19:14:48 ftl.ftl_dirty_shutdown -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:24:31.611 19:14:48 ftl.ftl_dirty_shutdown -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:24:31.611 19:14:48 ftl.ftl_dirty_shutdown -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:24:31.611 19:14:48 ftl.ftl_dirty_shutdown -- ftl/common.sh@23 -- # spdk_ini_pid= 00:24:31.611 19:14:48 ftl.ftl_dirty_shutdown -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:24:31.611 19:14:48 ftl.ftl_dirty_shutdown -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:24:31.611 19:14:48 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:24:31.611 19:14:48 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@12 -- # spdk_dd=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:24:31.611 19:14:48 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:24:31.611 19:14:48 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@15 -- # case $opt in 00:24:31.611 19:14:48 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@17 -- # nv_cache=0000:00:10.0 00:24:31.611 19:14:48 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:24:31.611 19:14:48 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@21 -- # shift 2 00:24:31.611 19:14:48 
ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@23 -- # device=0000:00:11.0 00:24:31.611 19:14:48 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@24 -- # timeout=240 00:24:31.611 19:14:48 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@26 -- # block_size=4096 00:24:31.611 19:14:48 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@27 -- # chunk_size=262144 00:24:31.611 19:14:48 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@28 -- # data_size=262144 00:24:31.611 19:14:48 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@42 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:24:31.611 19:14:48 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@45 -- # svcpid=90751 00:24:31.611 19:14:48 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@47 -- # waitforlisten 90751 00:24:31.611 19:14:48 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:24:31.611 19:14:48 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@835 -- # '[' -z 90751 ']' 00:24:31.611 19:14:48 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:31.611 19:14:48 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:31.611 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:31.611 19:14:48 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:31.611 19:14:48 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:31.611 19:14:48 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@10 -- # set +x 00:24:31.611 [2024-12-05 19:14:49.061658] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 22.11.4 initialization... 
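(What the harness has just done, in miniature: dirty_shutdown.sh backgrounds spdk_tgt pinned to core 0 via -m 0x1, records its pid in svcpid, and waitforlisten blocks until the target's RPC socket answers. A minimal standalone sketch of that handshake, assuming the default /var/tmp/spdk.sock socket and an illustrative 0.5 s poll interval, looks like:

  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 &
  svcpid=$!
  # Poll the RPC socket until the target answers; rpc_get_methods is a
  # cheap query that succeeds once the RPC server is listening.
  until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.5
  done

waitforlisten in autotest_common.sh wraps roughly this loop with a timeout and a liveness check on $svcpid; every rpc.py call that follows in this log goes over the same socket.)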
00:24:31.611 [2024-12-05 19:14:49.062031] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90751 ] 00:24:31.873 [2024-12-05 19:14:49.209347] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:31.873 [2024-12-05 19:14:49.250668] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:32.443 19:14:49 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:32.443 19:14:49 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@868 -- # return 0 00:24:32.443 19:14:49 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@49 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:24:32.443 19:14:49 ftl.ftl_dirty_shutdown -- ftl/common.sh@54 -- # local name=nvme0 00:24:32.443 19:14:49 ftl.ftl_dirty_shutdown -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:24:32.443 19:14:49 ftl.ftl_dirty_shutdown -- ftl/common.sh@56 -- # local size=103424 00:24:32.443 19:14:49 ftl.ftl_dirty_shutdown -- ftl/common.sh@59 -- # local base_bdev 00:24:32.443 19:14:49 ftl.ftl_dirty_shutdown -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:24:32.704 19:14:50 ftl.ftl_dirty_shutdown -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:24:32.704 19:14:50 ftl.ftl_dirty_shutdown -- ftl/common.sh@62 -- # local base_size 00:24:32.704 19:14:50 ftl.ftl_dirty_shutdown -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:24:32.704 19:14:50 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:24:32.704 19:14:50 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:24:32.704 19:14:50 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:24:32.704 19:14:50 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:24:32.704 19:14:50 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:24:32.965 19:14:50 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:24:32.965 { 00:24:32.965 "name": "nvme0n1", 00:24:32.965 "aliases": [ 00:24:32.965 "eafd4c8f-736f-4415-9182-541e04c06475" 00:24:32.965 ], 00:24:32.965 "product_name": "NVMe disk", 00:24:32.965 "block_size": 4096, 00:24:32.965 "num_blocks": 1310720, 00:24:32.965 "uuid": "eafd4c8f-736f-4415-9182-541e04c06475", 00:24:32.965 "numa_id": -1, 00:24:32.965 "assigned_rate_limits": { 00:24:32.965 "rw_ios_per_sec": 0, 00:24:32.965 "rw_mbytes_per_sec": 0, 00:24:32.965 "r_mbytes_per_sec": 0, 00:24:32.965 "w_mbytes_per_sec": 0 00:24:32.965 }, 00:24:32.965 "claimed": true, 00:24:32.965 "claim_type": "read_many_write_one", 00:24:32.965 "zoned": false, 00:24:32.965 "supported_io_types": { 00:24:32.965 "read": true, 00:24:32.965 "write": true, 00:24:32.965 "unmap": true, 00:24:32.965 "flush": true, 00:24:32.965 "reset": true, 00:24:32.965 "nvme_admin": true, 00:24:32.965 "nvme_io": true, 00:24:32.965 "nvme_io_md": false, 00:24:32.965 "write_zeroes": true, 00:24:32.965 "zcopy": false, 00:24:32.965 "get_zone_info": false, 00:24:32.965 "zone_management": false, 00:24:32.965 "zone_append": false, 00:24:32.965 "compare": true, 00:24:32.965 "compare_and_write": false, 00:24:32.965 "abort": true, 00:24:32.965 "seek_hole": false, 00:24:32.965 "seek_data": false, 00:24:32.965 
"copy": true, 00:24:32.965 "nvme_iov_md": false 00:24:32.965 }, 00:24:32.965 "driver_specific": { 00:24:32.965 "nvme": [ 00:24:32.965 { 00:24:32.965 "pci_address": "0000:00:11.0", 00:24:32.965 "trid": { 00:24:32.965 "trtype": "PCIe", 00:24:32.965 "traddr": "0000:00:11.0" 00:24:32.965 }, 00:24:32.965 "ctrlr_data": { 00:24:32.965 "cntlid": 0, 00:24:32.965 "vendor_id": "0x1b36", 00:24:32.965 "model_number": "QEMU NVMe Ctrl", 00:24:32.965 "serial_number": "12341", 00:24:32.965 "firmware_revision": "8.0.0", 00:24:32.965 "subnqn": "nqn.2019-08.org.qemu:12341", 00:24:32.965 "oacs": { 00:24:32.965 "security": 0, 00:24:32.965 "format": 1, 00:24:32.965 "firmware": 0, 00:24:32.965 "ns_manage": 1 00:24:32.965 }, 00:24:32.965 "multi_ctrlr": false, 00:24:32.965 "ana_reporting": false 00:24:32.965 }, 00:24:32.965 "vs": { 00:24:32.965 "nvme_version": "1.4" 00:24:32.965 }, 00:24:32.965 "ns_data": { 00:24:32.965 "id": 1, 00:24:32.965 "can_share": false 00:24:32.965 } 00:24:32.965 } 00:24:32.965 ], 00:24:32.965 "mp_policy": "active_passive" 00:24:32.965 } 00:24:32.965 } 00:24:32.965 ]' 00:24:32.965 19:14:50 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:24:32.965 19:14:50 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:24:32.965 19:14:50 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:24:32.965 19:14:50 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=1310720 00:24:32.965 19:14:50 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:24:32.965 19:14:50 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 5120 00:24:32.965 19:14:50 ftl.ftl_dirty_shutdown -- ftl/common.sh@63 -- # base_size=5120 00:24:32.965 19:14:50 ftl.ftl_dirty_shutdown -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:24:32.965 19:14:50 ftl.ftl_dirty_shutdown -- ftl/common.sh@67 -- # clear_lvols 00:24:32.965 19:14:50 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:24:32.965 19:14:50 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:24:33.226 19:14:50 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # stores=97a3e74e-df4f-4eed-b69d-eb4e6c249cb6 00:24:33.226 19:14:50 ftl.ftl_dirty_shutdown -- ftl/common.sh@29 -- # for lvs in $stores 00:24:33.226 19:14:50 ftl.ftl_dirty_shutdown -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 97a3e74e-df4f-4eed-b69d-eb4e6c249cb6 00:24:33.486 19:14:50 ftl.ftl_dirty_shutdown -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:24:33.745 19:14:51 ftl.ftl_dirty_shutdown -- ftl/common.sh@68 -- # lvs=32587d71-1472-4033-90cb-3b7c3232841d 00:24:33.745 19:14:51 ftl.ftl_dirty_shutdown -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 32587d71-1472-4033-90cb-3b7c3232841d 00:24:34.006 19:14:51 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@49 -- # split_bdev=c6bb62cc-3439-4e60-a214-35ff938109fa 00:24:34.006 19:14:51 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@51 -- # '[' -n 0000:00:10.0 ']' 00:24:34.006 19:14:51 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # create_nv_cache_bdev nvc0 0000:00:10.0 c6bb62cc-3439-4e60-a214-35ff938109fa 00:24:34.006 19:14:51 ftl.ftl_dirty_shutdown -- ftl/common.sh@35 -- # local name=nvc0 00:24:34.006 19:14:51 ftl.ftl_dirty_shutdown -- ftl/common.sh@36 -- # local 
cache_bdf=0000:00:10.0 00:24:34.006 19:14:51 ftl.ftl_dirty_shutdown -- ftl/common.sh@37 -- # local base_bdev=c6bb62cc-3439-4e60-a214-35ff938109fa 00:24:34.006 19:14:51 ftl.ftl_dirty_shutdown -- ftl/common.sh@38 -- # local cache_size= 00:24:34.006 19:14:51 ftl.ftl_dirty_shutdown -- ftl/common.sh@41 -- # get_bdev_size c6bb62cc-3439-4e60-a214-35ff938109fa 00:24:34.006 19:14:51 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=c6bb62cc-3439-4e60-a214-35ff938109fa 00:24:34.006 19:14:51 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:24:34.006 19:14:51 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:24:34.006 19:14:51 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:24:34.006 19:14:51 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b c6bb62cc-3439-4e60-a214-35ff938109fa 00:24:34.006 19:14:51 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:24:34.006 { 00:24:34.006 "name": "c6bb62cc-3439-4e60-a214-35ff938109fa", 00:24:34.006 "aliases": [ 00:24:34.006 "lvs/nvme0n1p0" 00:24:34.006 ], 00:24:34.006 "product_name": "Logical Volume", 00:24:34.006 "block_size": 4096, 00:24:34.006 "num_blocks": 26476544, 00:24:34.006 "uuid": "c6bb62cc-3439-4e60-a214-35ff938109fa", 00:24:34.006 "assigned_rate_limits": { 00:24:34.006 "rw_ios_per_sec": 0, 00:24:34.006 "rw_mbytes_per_sec": 0, 00:24:34.006 "r_mbytes_per_sec": 0, 00:24:34.006 "w_mbytes_per_sec": 0 00:24:34.006 }, 00:24:34.006 "claimed": false, 00:24:34.006 "zoned": false, 00:24:34.006 "supported_io_types": { 00:24:34.006 "read": true, 00:24:34.006 "write": true, 00:24:34.006 "unmap": true, 00:24:34.006 "flush": false, 00:24:34.006 "reset": true, 00:24:34.006 "nvme_admin": false, 00:24:34.006 "nvme_io": false, 00:24:34.006 "nvme_io_md": false, 00:24:34.006 "write_zeroes": true, 00:24:34.006 "zcopy": false, 00:24:34.006 "get_zone_info": false, 00:24:34.006 "zone_management": false, 00:24:34.006 "zone_append": false, 00:24:34.006 "compare": false, 00:24:34.006 "compare_and_write": false, 00:24:34.006 "abort": false, 00:24:34.006 "seek_hole": true, 00:24:34.006 "seek_data": true, 00:24:34.006 "copy": false, 00:24:34.006 "nvme_iov_md": false 00:24:34.006 }, 00:24:34.006 "driver_specific": { 00:24:34.006 "lvol": { 00:24:34.006 "lvol_store_uuid": "32587d71-1472-4033-90cb-3b7c3232841d", 00:24:34.006 "base_bdev": "nvme0n1", 00:24:34.006 "thin_provision": true, 00:24:34.006 "num_allocated_clusters": 0, 00:24:34.006 "snapshot": false, 00:24:34.006 "clone": false, 00:24:34.006 "esnap_clone": false 00:24:34.006 } 00:24:34.006 } 00:24:34.006 } 00:24:34.006 ]' 00:24:34.268 19:14:51 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:24:34.268 19:14:51 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:24:34.268 19:14:51 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:24:34.268 19:14:51 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=26476544 00:24:34.268 19:14:51 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:24:34.268 19:14:51 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 103424 00:24:34.268 19:14:51 ftl.ftl_dirty_shutdown -- ftl/common.sh@41 -- # local base_size=5171 00:24:34.268 19:14:51 ftl.ftl_dirty_shutdown -- ftl/common.sh@44 -- # local nvc_bdev 00:24:34.268 19:14:51 ftl.ftl_dirty_shutdown -- 
ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:24:34.529 19:14:51 ftl.ftl_dirty_shutdown -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:24:34.529 19:14:51 ftl.ftl_dirty_shutdown -- ftl/common.sh@47 -- # [[ -z '' ]] 00:24:34.529 19:14:51 ftl.ftl_dirty_shutdown -- ftl/common.sh@48 -- # get_bdev_size c6bb62cc-3439-4e60-a214-35ff938109fa 00:24:34.529 19:14:51 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=c6bb62cc-3439-4e60-a214-35ff938109fa 00:24:34.529 19:14:51 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:24:34.529 19:14:51 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:24:34.529 19:14:51 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:24:34.529 19:14:51 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b c6bb62cc-3439-4e60-a214-35ff938109fa 00:24:34.529 19:14:52 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:24:34.529 { 00:24:34.529 "name": "c6bb62cc-3439-4e60-a214-35ff938109fa", 00:24:34.529 "aliases": [ 00:24:34.529 "lvs/nvme0n1p0" 00:24:34.529 ], 00:24:34.529 "product_name": "Logical Volume", 00:24:34.529 "block_size": 4096, 00:24:34.529 "num_blocks": 26476544, 00:24:34.529 "uuid": "c6bb62cc-3439-4e60-a214-35ff938109fa", 00:24:34.529 "assigned_rate_limits": { 00:24:34.529 "rw_ios_per_sec": 0, 00:24:34.529 "rw_mbytes_per_sec": 0, 00:24:34.529 "r_mbytes_per_sec": 0, 00:24:34.529 "w_mbytes_per_sec": 0 00:24:34.529 }, 00:24:34.529 "claimed": false, 00:24:34.529 "zoned": false, 00:24:34.529 "supported_io_types": { 00:24:34.529 "read": true, 00:24:34.529 "write": true, 00:24:34.529 "unmap": true, 00:24:34.529 "flush": false, 00:24:34.529 "reset": true, 00:24:34.529 "nvme_admin": false, 00:24:34.529 "nvme_io": false, 00:24:34.529 "nvme_io_md": false, 00:24:34.529 "write_zeroes": true, 00:24:34.529 "zcopy": false, 00:24:34.529 "get_zone_info": false, 00:24:34.529 "zone_management": false, 00:24:34.529 "zone_append": false, 00:24:34.529 "compare": false, 00:24:34.529 "compare_and_write": false, 00:24:34.529 "abort": false, 00:24:34.529 "seek_hole": true, 00:24:34.529 "seek_data": true, 00:24:34.529 "copy": false, 00:24:34.529 "nvme_iov_md": false 00:24:34.529 }, 00:24:34.529 "driver_specific": { 00:24:34.529 "lvol": { 00:24:34.529 "lvol_store_uuid": "32587d71-1472-4033-90cb-3b7c3232841d", 00:24:34.529 "base_bdev": "nvme0n1", 00:24:34.529 "thin_provision": true, 00:24:34.529 "num_allocated_clusters": 0, 00:24:34.529 "snapshot": false, 00:24:34.529 "clone": false, 00:24:34.529 "esnap_clone": false 00:24:34.529 } 00:24:34.529 } 00:24:34.529 } 00:24:34.529 ]' 00:24:34.529 19:14:52 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:24:34.790 19:14:52 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:24:34.790 19:14:52 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:24:34.790 19:14:52 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=26476544 00:24:34.790 19:14:52 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:24:34.790 19:14:52 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 103424 00:24:34.790 19:14:52 ftl.ftl_dirty_shutdown -- ftl/common.sh@48 -- # cache_size=5171 00:24:34.790 19:14:52 ftl.ftl_dirty_shutdown -- ftl/common.sh@50 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:24:34.790 19:14:52 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # nvc_bdev=nvc0n1p0 00:24:35.048 19:14:52 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@55 -- # get_bdev_size c6bb62cc-3439-4e60-a214-35ff938109fa 00:24:35.048 19:14:52 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=c6bb62cc-3439-4e60-a214-35ff938109fa 00:24:35.048 19:14:52 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:24:35.048 19:14:52 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:24:35.048 19:14:52 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:24:35.048 19:14:52 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b c6bb62cc-3439-4e60-a214-35ff938109fa 00:24:35.048 19:14:52 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:24:35.048 { 00:24:35.048 "name": "c6bb62cc-3439-4e60-a214-35ff938109fa", 00:24:35.048 "aliases": [ 00:24:35.048 "lvs/nvme0n1p0" 00:24:35.048 ], 00:24:35.048 "product_name": "Logical Volume", 00:24:35.048 "block_size": 4096, 00:24:35.048 "num_blocks": 26476544, 00:24:35.048 "uuid": "c6bb62cc-3439-4e60-a214-35ff938109fa", 00:24:35.048 "assigned_rate_limits": { 00:24:35.048 "rw_ios_per_sec": 0, 00:24:35.048 "rw_mbytes_per_sec": 0, 00:24:35.048 "r_mbytes_per_sec": 0, 00:24:35.048 "w_mbytes_per_sec": 0 00:24:35.048 }, 00:24:35.048 "claimed": false, 00:24:35.048 "zoned": false, 00:24:35.048 "supported_io_types": { 00:24:35.048 "read": true, 00:24:35.048 "write": true, 00:24:35.048 "unmap": true, 00:24:35.048 "flush": false, 00:24:35.048 "reset": true, 00:24:35.048 "nvme_admin": false, 00:24:35.048 "nvme_io": false, 00:24:35.048 "nvme_io_md": false, 00:24:35.048 "write_zeroes": true, 00:24:35.048 "zcopy": false, 00:24:35.048 "get_zone_info": false, 00:24:35.048 "zone_management": false, 00:24:35.048 "zone_append": false, 00:24:35.048 "compare": false, 00:24:35.048 "compare_and_write": false, 00:24:35.048 "abort": false, 00:24:35.048 "seek_hole": true, 00:24:35.048 "seek_data": true, 00:24:35.048 "copy": false, 00:24:35.048 "nvme_iov_md": false 00:24:35.048 }, 00:24:35.048 "driver_specific": { 00:24:35.048 "lvol": { 00:24:35.048 "lvol_store_uuid": "32587d71-1472-4033-90cb-3b7c3232841d", 00:24:35.048 "base_bdev": "nvme0n1", 00:24:35.048 "thin_provision": true, 00:24:35.048 "num_allocated_clusters": 0, 00:24:35.048 "snapshot": false, 00:24:35.048 "clone": false, 00:24:35.048 "esnap_clone": false 00:24:35.048 } 00:24:35.048 } 00:24:35.048 } 00:24:35.048 ]' 00:24:35.048 19:14:52 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:24:35.048 19:14:52 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:24:35.048 19:14:52 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:24:35.310 19:14:52 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=26476544 00:24:35.310 19:14:52 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:24:35.310 19:14:52 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 103424 00:24:35.310 19:14:52 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@55 -- # l2p_dram_size_mb=10 00:24:35.310 19:14:52 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@56 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d c6bb62cc-3439-4e60-a214-35ff938109fa 
--l2p_dram_limit 10' 00:24:35.310 19:14:52 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@58 -- # '[' -n '' ']' 00:24:35.310 19:14:52 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@59 -- # '[' -n 0000:00:10.0 ']' 00:24:35.310 19:14:52 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@59 -- # ftl_construct_args+=' -c nvc0n1p0' 00:24:35.310 19:14:52 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d c6bb62cc-3439-4e60-a214-35ff938109fa --l2p_dram_limit 10 -c nvc0n1p0 00:24:35.310 [2024-12-05 19:14:52.784420] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:35.310 [2024-12-05 19:14:52.784467] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:24:35.310 [2024-12-05 19:14:52.784479] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:24:35.310 [2024-12-05 19:14:52.784488] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:35.310 [2024-12-05 19:14:52.784524] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:35.310 [2024-12-05 19:14:52.784534] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:35.310 [2024-12-05 19:14:52.784542] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.024 ms 00:24:35.310 [2024-12-05 19:14:52.784551] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:35.310 [2024-12-05 19:14:52.784569] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:24:35.310 [2024-12-05 19:14:52.784752] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:24:35.310 [2024-12-05 19:14:52.784764] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:35.310 [2024-12-05 19:14:52.784771] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:35.310 [2024-12-05 19:14:52.784781] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.202 ms 00:24:35.310 [2024-12-05 19:14:52.784788] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:35.310 [2024-12-05 19:14:52.784808] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID a560a9df-7547-43f8-95d1-de6f5f67663b 00:24:35.310 [2024-12-05 19:14:52.786114] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:35.310 [2024-12-05 19:14:52.786144] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:24:35.310 [2024-12-05 19:14:52.786154] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:24:35.310 [2024-12-05 19:14:52.786160] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:35.310 [2024-12-05 19:14:52.793091] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:35.310 [2024-12-05 19:14:52.793118] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:35.310 [2024-12-05 19:14:52.793128] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.891 ms 00:24:35.310 [2024-12-05 19:14:52.793134] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:35.310 [2024-12-05 19:14:52.793234] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:35.311 [2024-12-05 19:14:52.793242] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:35.311 [2024-12-05 19:14:52.793270] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.046 ms 00:24:35.311 [2024-12-05 19:14:52.793277] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:35.311 [2024-12-05 19:14:52.793314] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:35.311 [2024-12-05 19:14:52.793322] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:24:35.311 [2024-12-05 19:14:52.793331] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:24:35.311 [2024-12-05 19:14:52.793340] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:35.311 [2024-12-05 19:14:52.793362] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:24:35.311 [2024-12-05 19:14:52.795033] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:35.311 [2024-12-05 19:14:52.795061] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:35.311 [2024-12-05 19:14:52.795068] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.677 ms 00:24:35.311 [2024-12-05 19:14:52.795076] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:35.311 [2024-12-05 19:14:52.795110] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:35.311 [2024-12-05 19:14:52.795121] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:24:35.311 [2024-12-05 19:14:52.795128] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:24:35.311 [2024-12-05 19:14:52.795137] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:35.311 [2024-12-05 19:14:52.795150] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:24:35.311 [2024-12-05 19:14:52.795286] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:24:35.311 [2024-12-05 19:14:52.795297] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:24:35.311 [2024-12-05 19:14:52.795308] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:24:35.311 [2024-12-05 19:14:52.795316] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:24:35.311 [2024-12-05 19:14:52.795327] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:24:35.311 [2024-12-05 19:14:52.795334] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:24:35.311 [2024-12-05 19:14:52.795343] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:24:35.311 [2024-12-05 19:14:52.795350] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:24:35.311 [2024-12-05 19:14:52.795357] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:24:35.311 [2024-12-05 19:14:52.795363] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:35.311 [2024-12-05 19:14:52.795370] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:24:35.311 [2024-12-05 19:14:52.795376] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.214 ms 00:24:35.311 [2024-12-05 19:14:52.795385] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:35.311 [2024-12-05 19:14:52.795451] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:35.311 [2024-12-05 19:14:52.795461] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:24:35.311 [2024-12-05 19:14:52.795466] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:24:35.311 [2024-12-05 19:14:52.795478] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:35.311 [2024-12-05 19:14:52.795550] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:24:35.311 [2024-12-05 19:14:52.795561] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:24:35.311 [2024-12-05 19:14:52.795567] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:35.311 [2024-12-05 19:14:52.795575] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:35.311 [2024-12-05 19:14:52.795581] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:24:35.311 [2024-12-05 19:14:52.795587] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:24:35.311 [2024-12-05 19:14:52.795593] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:24:35.311 [2024-12-05 19:14:52.795600] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:24:35.311 [2024-12-05 19:14:52.795605] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:24:35.311 [2024-12-05 19:14:52.795612] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:35.311 [2024-12-05 19:14:52.795617] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:24:35.311 [2024-12-05 19:14:52.795625] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:24:35.311 [2024-12-05 19:14:52.795630] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:35.311 [2024-12-05 19:14:52.795639] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:24:35.311 [2024-12-05 19:14:52.795644] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:24:35.311 [2024-12-05 19:14:52.795651] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:35.311 [2024-12-05 19:14:52.795656] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:24:35.311 [2024-12-05 19:14:52.795663] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:24:35.311 [2024-12-05 19:14:52.795668] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:35.311 [2024-12-05 19:14:52.795677] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:24:35.311 [2024-12-05 19:14:52.795682] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:24:35.311 [2024-12-05 19:14:52.795690] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:35.311 [2024-12-05 19:14:52.795696] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:24:35.311 [2024-12-05 19:14:52.795704] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:24:35.311 [2024-12-05 19:14:52.795710] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:35.311 [2024-12-05 19:14:52.795717] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:24:35.311 [2024-12-05 19:14:52.795723] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:24:35.311 [2024-12-05 19:14:52.795731] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:35.311 [2024-12-05 19:14:52.795737] 
ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:24:35.311 [2024-12-05 19:14:52.795747] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:24:35.311 [2024-12-05 19:14:52.795753] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:35.311 [2024-12-05 19:14:52.795760] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:24:35.311 [2024-12-05 19:14:52.795767] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:24:35.311 [2024-12-05 19:14:52.795774] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:35.311 [2024-12-05 19:14:52.795780] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:24:35.311 [2024-12-05 19:14:52.795787] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:24:35.311 [2024-12-05 19:14:52.795792] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:35.311 [2024-12-05 19:14:52.795799] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:24:35.311 [2024-12-05 19:14:52.795805] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:24:35.311 [2024-12-05 19:14:52.795813] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:35.311 [2024-12-05 19:14:52.795819] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:24:35.311 [2024-12-05 19:14:52.795826] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:24:35.311 [2024-12-05 19:14:52.795832] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:35.311 [2024-12-05 19:14:52.795839] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:24:35.311 [2024-12-05 19:14:52.795850] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:24:35.311 [2024-12-05 19:14:52.795860] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:35.311 [2024-12-05 19:14:52.795867] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:35.311 [2024-12-05 19:14:52.795878] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:24:35.311 [2024-12-05 19:14:52.795884] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:24:35.311 [2024-12-05 19:14:52.795891] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:24:35.311 [2024-12-05 19:14:52.795897] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:24:35.311 [2024-12-05 19:14:52.795908] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:24:35.311 [2024-12-05 19:14:52.795914] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:24:35.311 [2024-12-05 19:14:52.795924] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:24:35.311 [2024-12-05 19:14:52.795934] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:35.311 [2024-12-05 19:14:52.795943] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:24:35.311 [2024-12-05 19:14:52.795950] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:24:35.311 [2024-12-05 19:14:52.795958] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: 
*NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:24:35.311 [2024-12-05 19:14:52.795964] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:24:35.311 [2024-12-05 19:14:52.795972] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:24:35.311 [2024-12-05 19:14:52.795978] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:24:35.311 [2024-12-05 19:14:52.795988] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:24:35.311 [2024-12-05 19:14:52.795995] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:24:35.312 [2024-12-05 19:14:52.796003] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:24:35.312 [2024-12-05 19:14:52.796009] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:24:35.312 [2024-12-05 19:14:52.796037] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:24:35.312 [2024-12-05 19:14:52.796044] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:24:35.312 [2024-12-05 19:14:52.796052] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:24:35.312 [2024-12-05 19:14:52.796059] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:24:35.312 [2024-12-05 19:14:52.796066] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:24:35.312 [2024-12-05 19:14:52.796073] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:35.312 [2024-12-05 19:14:52.796082] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:24:35.312 [2024-12-05 19:14:52.796088] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:24:35.312 [2024-12-05 19:14:52.796094] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:24:35.312 [2024-12-05 19:14:52.796100] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:24:35.312 [2024-12-05 19:14:52.796107] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:35.312 [2024-12-05 19:14:52.796113] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:24:35.312 [2024-12-05 19:14:52.796123] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.607 ms 00:24:35.312 [2024-12-05 19:14:52.796129] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:35.312 [2024-12-05 19:14:52.796158] mngt/ftl_mngt_misc.c: 
165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:24:35.312 [2024-12-05 19:14:52.796165] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:24:39.636 [2024-12-05 19:14:56.383268] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:39.636 [2024-12-05 19:14:56.383571] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:24:39.636 [2024-12-05 19:14:56.383606] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3587.067 ms 00:24:39.636 [2024-12-05 19:14:56.383617] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:39.636 [2024-12-05 19:14:56.402808] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:39.636 [2024-12-05 19:14:56.403018] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:39.636 [2024-12-05 19:14:56.403048] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.026 ms 00:24:39.636 [2024-12-05 19:14:56.403059] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:39.636 [2024-12-05 19:14:56.403211] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:39.636 [2024-12-05 19:14:56.403225] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:24:39.636 [2024-12-05 19:14:56.403238] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.080 ms 00:24:39.636 [2024-12-05 19:14:56.403247] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:39.636 [2024-12-05 19:14:56.420592] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:39.636 [2024-12-05 19:14:56.420645] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:39.636 [2024-12-05 19:14:56.420660] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.258 ms 00:24:39.636 [2024-12-05 19:14:56.420674] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:39.636 [2024-12-05 19:14:56.420714] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:39.636 [2024-12-05 19:14:56.420723] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:39.636 [2024-12-05 19:14:56.420736] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:24:39.636 [2024-12-05 19:14:56.420744] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:39.636 [2024-12-05 19:14:56.421521] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:39.636 [2024-12-05 19:14:56.421557] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:39.636 [2024-12-05 19:14:56.421573] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.718 ms 00:24:39.636 [2024-12-05 19:14:56.421601] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:39.636 [2024-12-05 19:14:56.421745] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:39.636 [2024-12-05 19:14:56.421767] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:39.636 [2024-12-05 19:14:56.421780] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.103 ms 00:24:39.636 [2024-12-05 19:14:56.421789] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:39.636 [2024-12-05 19:14:56.433682] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:39.636 [2024-12-05 19:14:56.433731] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:39.636 [2024-12-05 19:14:56.433747] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.866 ms 00:24:39.636 [2024-12-05 19:14:56.433755] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:39.636 [2024-12-05 19:14:56.461822] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:24:39.636 [2024-12-05 19:14:56.466920] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:39.636 [2024-12-05 19:14:56.467158] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:24:39.636 [2024-12-05 19:14:56.467180] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.063 ms 00:24:39.636 [2024-12-05 19:14:56.467194] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:39.636 [2024-12-05 19:14:56.553720] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:39.636 [2024-12-05 19:14:56.553924] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:24:39.636 [2024-12-05 19:14:56.554335] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 86.478 ms 00:24:39.636 [2024-12-05 19:14:56.554395] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:39.636 [2024-12-05 19:14:56.554720] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:39.636 [2024-12-05 19:14:56.554830] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:24:39.636 [2024-12-05 19:14:56.554892] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.170 ms 00:24:39.636 [2024-12-05 19:14:56.554919] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:39.636 [2024-12-05 19:14:56.560949] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:39.636 [2024-12-05 19:14:56.561138] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:24:39.636 [2024-12-05 19:14:56.561361] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.989 ms 00:24:39.636 [2024-12-05 19:14:56.561413] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:39.636 [2024-12-05 19:14:56.566593] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:39.636 [2024-12-05 19:14:56.566773] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:24:39.636 [2024-12-05 19:14:56.567238] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.117 ms 00:24:39.636 [2024-12-05 19:14:56.567299] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:39.636 [2024-12-05 19:14:56.567682] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:39.636 [2024-12-05 19:14:56.567823] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:24:39.636 [2024-12-05 19:14:56.567896] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.305 ms 00:24:39.636 [2024-12-05 19:14:56.567930] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:39.636 [2024-12-05 19:14:56.613708] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:39.636 [2024-12-05 19:14:56.613908] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:24:39.636 [2024-12-05 19:14:56.613979] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 45.674 ms 00:24:39.636 [2024-12-05 19:14:56.614006] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:39.636 [2024-12-05 19:14:56.622418] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:39.636 [2024-12-05 19:14:56.622597] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:24:39.636 [2024-12-05 19:14:56.622658] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.322 ms 00:24:39.636 [2024-12-05 19:14:56.622686] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:39.636 [2024-12-05 19:14:56.628680] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:39.636 [2024-12-05 19:14:56.628852] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:24:39.636 [2024-12-05 19:14:56.628911] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.942 ms 00:24:39.636 [2024-12-05 19:14:56.628936] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:39.636 [2024-12-05 19:14:56.635611] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:39.636 [2024-12-05 19:14:56.635790] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:24:39.636 [2024-12-05 19:14:56.635849] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.623 ms 00:24:39.637 [2024-12-05 19:14:56.635878] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:39.637 [2024-12-05 19:14:56.636166] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:39.637 [2024-12-05 19:14:56.636198] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:24:39.637 [2024-12-05 19:14:56.636220] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:24:39.637 [2024-12-05 19:14:56.636243] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:39.637 [2024-12-05 19:14:56.636535] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:39.637 [2024-12-05 19:14:56.636572] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:24:39.637 [2024-12-05 19:14:56.636596] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:24:39.637 [2024-12-05 19:14:56.636688] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:39.637 [2024-12-05 19:14:56.638190] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3853.146 ms, result 0 00:24:39.637 { 00:24:39.637 "name": "ftl0", 00:24:39.637 "uuid": "a560a9df-7547-43f8-95d1-de6f5f67663b" 00:24:39.637 } 00:24:39.637 19:14:56 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@64 -- # echo '{"subsystems": [' 00:24:39.637 19:14:56 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:24:39.637 19:14:56 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@66 -- # echo ']}' 00:24:39.637 19:14:56 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@70 -- # modprobe nbd 00:24:39.637 19:14:56 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_start_disk ftl0 /dev/nbd0 00:24:39.637 /dev/nbd0 00:24:39.637 19:14:57 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@72 -- # waitfornbd nbd0 00:24:39.637 19:14:57 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:24:39.637 19:14:57 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@873 -- # local i 00:24:39.637 19:14:57 ftl.ftl_dirty_shutdown -- 
common/autotest_common.sh@875 -- # (( i = 1 )) 00:24:39.637 19:14:57 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:24:39.637 19:14:57 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:24:39.637 19:14:57 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@877 -- # break 00:24:39.637 19:14:57 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:24:39.637 19:14:57 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:24:39.637 19:14:57 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/ftl/nbdtest bs=4096 count=1 iflag=direct 00:24:39.637 1+0 records in 00:24:39.637 1+0 records out 00:24:39.637 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000906963 s, 4.5 MB/s 00:24:39.637 19:14:57 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:24:39.637 19:14:57 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@890 -- # size=4096 00:24:39.637 19:14:57 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:24:39.637 19:14:57 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:24:39.637 19:14:57 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@893 -- # return 0 00:24:39.637 19:14:57 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@75 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --bs=4096 --count=262144 00:24:39.898 [2024-12-05 19:14:57.193312] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 22.11.4 initialization... 00:24:39.899 [2024-12-05 19:14:57.193447] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90893 ] 00:24:39.899 [2024-12-05 19:14:57.340136] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:39.899 [2024-12-05 19:14:57.368930] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:41.284  [2024-12-05T19:14:59.786Z] Copying: 193/1024 [MB] (193 MBps) [2024-12-05T19:15:00.726Z] Copying: 389/1024 [MB] (196 MBps) [2024-12-05T19:15:01.658Z] Copying: 586/1024 [MB] (196 MBps) [2024-12-05T19:15:02.258Z] Copying: 838/1024 [MB] (252 MBps) [2024-12-05T19:15:02.517Z] Copying: 1024/1024 [MB] (average 217 MBps) 00:24:44.958 00:24:44.958 19:15:02 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@76 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:24:46.867 19:15:04 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@77 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --of=/dev/nbd0 --bs=4096 --count=262144 --oflag=direct 00:24:46.867 [2024-12-05 19:15:04.370732] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 22.11.4 initialization... 
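The fill phase of the test, traced above and continuing below, reduces to a short sequence of RPCs and spdk_dd runs. A condensed sketch follows; commands and paths are copied from the trace, the waitfornbd polling loop and error handling are omitted, and the redirection target of the config dump is not visible here, though the --json=.../config/ftl.json argument used later in the run suggests it is saved as ftl.json:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    dd=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
    testfile=/home/vagrant/spdk_repo/spdk/test/ftl/testfile

    # capture the live bdev configuration, wrapped in a subsystems envelope
    {
      echo '{"subsystems": ['
      "$rpc" save_subsystem_config -n bdev
      echo ']}'
    } > ftl.json    # assumed destination, see note above

    modprobe nbd                            # expose ftl0 through the kernel NBD driver
    "$rpc" nbd_start_disk ftl0 /dev/nbd0

    # 262144 blocks x 4096 B = 1 GiB of random data, checksummed for later comparison
    "$dd" -m 0x2 --if=/dev/urandom --of="$testfile" --bs=4096 --count=262144
    md5sum "$testfile"

    # push the same 1 GiB through the NBD device into the FTL bdev
    "$dd" -m 0x2 --if="$testfile" --of=/dev/nbd0 --bs=4096 --count=262144 --oflag=direct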
00:24:46.867 [2024-12-05 19:15:04.370851] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90969 ] 00:24:47.129 [2024-12-05 19:15:04.515937] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:47.129 [2024-12-05 19:15:04.535172] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:48.071  [2024-12-05T19:15:07.012Z] Copying: 18/1024 [MB] (18 MBps) [2024-12-05T19:15:07.946Z] Copying: 39/1024 [MB] (21 MBps) [2024-12-05T19:15:08.880Z] Copying: 60/1024 [MB] (20 MBps) [2024-12-05T19:15:09.813Z] Copying: 89/1024 [MB] (28 MBps) [2024-12-05T19:15:10.747Z] Copying: 118/1024 [MB] (28 MBps) [2024-12-05T19:15:11.681Z] Copying: 148/1024 [MB] (30 MBps) [2024-12-05T19:15:12.615Z] Copying: 181/1024 [MB] (32 MBps) [2024-12-05T19:15:13.989Z] Copying: 212/1024 [MB] (30 MBps) [2024-12-05T19:15:14.922Z] Copying: 249/1024 [MB] (37 MBps) [2024-12-05T19:15:15.854Z] Copying: 281/1024 [MB] (32 MBps) [2024-12-05T19:15:16.788Z] Copying: 316/1024 [MB] (34 MBps) [2024-12-05T19:15:17.721Z] Copying: 347/1024 [MB] (31 MBps) [2024-12-05T19:15:18.653Z] Copying: 378/1024 [MB] (30 MBps) [2024-12-05T19:15:20.025Z] Copying: 411/1024 [MB] (33 MBps) [2024-12-05T19:15:20.591Z] Copying: 441/1024 [MB] (30 MBps) [2024-12-05T19:15:21.967Z] Copying: 474/1024 [MB] (32 MBps) [2024-12-05T19:15:22.901Z] Copying: 508/1024 [MB] (34 MBps) [2024-12-05T19:15:23.836Z] Copying: 539/1024 [MB] (31 MBps) [2024-12-05T19:15:24.771Z] Copying: 574/1024 [MB] (34 MBps) [2024-12-05T19:15:25.722Z] Copying: 612/1024 [MB] (38 MBps) [2024-12-05T19:15:26.671Z] Copying: 650/1024 [MB] (37 MBps) [2024-12-05T19:15:27.605Z] Copying: 683/1024 [MB] (32 MBps) [2024-12-05T19:15:28.979Z] Copying: 721/1024 [MB] (38 MBps) [2024-12-05T19:15:29.913Z] Copying: 753/1024 [MB] (32 MBps) [2024-12-05T19:15:30.846Z] Copying: 787/1024 [MB] (33 MBps) [2024-12-05T19:15:31.778Z] Copying: 820/1024 [MB] (32 MBps) [2024-12-05T19:15:32.711Z] Copying: 853/1024 [MB] (33 MBps) [2024-12-05T19:15:33.644Z] Copying: 886/1024 [MB] (32 MBps) [2024-12-05T19:15:35.017Z] Copying: 924/1024 [MB] (38 MBps) [2024-12-05T19:15:35.949Z] Copying: 962/1024 [MB] (37 MBps) [2024-12-05T19:15:36.514Z] Copying: 999/1024 [MB] (37 MBps) [2024-12-05T19:15:36.774Z] Copying: 1024/1024 [MB] (average 32 MBps) 00:25:19.215 00:25:19.215 19:15:36 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@78 -- # sync /dev/nbd0 00:25:19.215 19:15:36 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_stop_disk /dev/nbd0 00:25:19.215 19:15:36 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:25:19.475 [2024-12-05 19:15:36.881086] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:19.475 [2024-12-05 19:15:36.881132] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:25:19.475 [2024-12-05 19:15:36.881145] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:25:19.475 [2024-12-05 19:15:36.881152] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:19.475 [2024-12-05 19:15:36.881172] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:25:19.475 [2024-12-05 19:15:36.881725] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:25:19.475 [2024-12-05 19:15:36.881746] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:25:19.475 [2024-12-05 19:15:36.881753] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.540 ms 00:25:19.475 [2024-12-05 19:15:36.881762] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:19.475 [2024-12-05 19:15:36.884361] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:19.475 [2024-12-05 19:15:36.884390] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:25:19.475 [2024-12-05 19:15:36.884398] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.582 ms 00:25:19.475 [2024-12-05 19:15:36.884407] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:19.475 [2024-12-05 19:15:36.900651] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:19.475 [2024-12-05 19:15:36.900682] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:25:19.475 [2024-12-05 19:15:36.900693] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.231 ms 00:25:19.475 [2024-12-05 19:15:36.900701] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:19.475 [2024-12-05 19:15:36.905313] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:19.475 [2024-12-05 19:15:36.905346] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:25:19.475 [2024-12-05 19:15:36.905354] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.586 ms 00:25:19.475 [2024-12-05 19:15:36.905362] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:19.475 [2024-12-05 19:15:36.907511] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:19.475 [2024-12-05 19:15:36.907653] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:25:19.475 [2024-12-05 19:15:36.907666] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.097 ms 00:25:19.475 [2024-12-05 19:15:36.907674] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:19.475 [2024-12-05 19:15:36.912595] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:19.475 [2024-12-05 19:15:36.912627] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:25:19.475 [2024-12-05 19:15:36.912634] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.894 ms 00:25:19.475 [2024-12-05 19:15:36.912645] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:19.475 [2024-12-05 19:15:36.912741] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:19.475 [2024-12-05 19:15:36.912750] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:25:19.475 [2024-12-05 19:15:36.912757] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.070 ms 00:25:19.475 [2024-12-05 19:15:36.912770] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:19.475 [2024-12-05 19:15:36.915404] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:19.475 [2024-12-05 19:15:36.915433] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:25:19.475 [2024-12-05 19:15:36.915440] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.621 ms 00:25:19.475 [2024-12-05 19:15:36.915448] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:19.475 [2024-12-05 
19:15:36.917566] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:19.475 [2024-12-05 19:15:36.917600] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:25:19.476 [2024-12-05 19:15:36.917608] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.092 ms 00:25:19.476 [2024-12-05 19:15:36.917617] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:19.476 [2024-12-05 19:15:36.919393] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:19.476 [2024-12-05 19:15:36.919423] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:25:19.476 [2024-12-05 19:15:36.919430] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.750 ms 00:25:19.476 [2024-12-05 19:15:36.919437] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:19.476 [2024-12-05 19:15:36.921284] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:19.476 [2024-12-05 19:15:36.921311] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:25:19.476 [2024-12-05 19:15:36.921318] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.802 ms 00:25:19.476 [2024-12-05 19:15:36.921326] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:19.476 [2024-12-05 19:15:36.921350] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:25:19.476 [2024-12-05 19:15:36.921369] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:25:19.476 [2024-12-05 19:15:36.921377] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:25:19.476 [2024-12-05 19:15:36.921385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:25:19.476 [2024-12-05 19:15:36.921392] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:25:19.476 [2024-12-05 19:15:36.921402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:25:19.476 [2024-12-05 19:15:36.921407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:25:19.476 [2024-12-05 19:15:36.921415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:25:19.476 [2024-12-05 19:15:36.921421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:25:19.476 [2024-12-05 19:15:36.921429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:25:19.476 [2024-12-05 19:15:36.921436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:25:19.476 [2024-12-05 19:15:36.921443] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:25:19.476 [2024-12-05 19:15:36.921450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:25:19.476 [2024-12-05 19:15:36.921457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:25:19.476 [2024-12-05 19:15:36.921462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:25:19.476 [2024-12-05 19:15:36.921471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 
261120 wr_cnt: 0 state: free 00:25:19.476 [2024-12-05 19:15:36.921477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:25:19.476 [2024-12-05 19:15:36.921484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:25:19.476 [2024-12-05 19:15:36.921490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:25:19.476 [2024-12-05 19:15:36.921497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:25:19.476 [2024-12-05 19:15:36.921503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:25:19.476 [2024-12-05 19:15:36.921511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:25:19.476 [2024-12-05 19:15:36.921517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:25:19.476 [2024-12-05 19:15:36.921524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:25:19.476 [2024-12-05 19:15:36.921529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:25:19.476 [2024-12-05 19:15:36.921538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:25:19.476 [2024-12-05 19:15:36.921544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:25:19.476 [2024-12-05 19:15:36.921551] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:25:19.476 [2024-12-05 19:15:36.921557] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:25:19.476 [2024-12-05 19:15:36.921565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:25:19.476 [2024-12-05 19:15:36.921571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:25:19.476 [2024-12-05 19:15:36.921580] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:25:19.476 [2024-12-05 19:15:36.921586] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:25:19.476 [2024-12-05 19:15:36.921593] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:25:19.476 [2024-12-05 19:15:36.921598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:25:19.476 [2024-12-05 19:15:36.921605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:25:19.476 [2024-12-05 19:15:36.921612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:25:19.476 [2024-12-05 19:15:36.921622] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:25:19.476 [2024-12-05 19:15:36.921628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:25:19.476 [2024-12-05 19:15:36.921635] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:25:19.476 [2024-12-05 19:15:36.921640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:25:19.476 [2024-12-05 19:15:36.921648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:25:19.476 [2024-12-05 19:15:36.921661] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:25:19.476 [2024-12-05 19:15:36.921669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:25:19.476 [2024-12-05 19:15:36.921675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:25:19.476 [2024-12-05 19:15:36.921683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:25:19.476 [2024-12-05 19:15:36.921689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:25:19.476 [2024-12-05 19:15:36.921696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:25:19.476 [2024-12-05 19:15:36.921701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:25:19.476 [2024-12-05 19:15:36.921708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:25:19.476 [2024-12-05 19:15:36.921714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:25:19.476 [2024-12-05 19:15:36.921721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:25:19.476 [2024-12-05 19:15:36.921728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:25:19.476 [2024-12-05 19:15:36.921738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:25:19.476 [2024-12-05 19:15:36.921744] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:25:19.476 [2024-12-05 19:15:36.921752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:25:19.476 [2024-12-05 19:15:36.921758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:25:19.476 [2024-12-05 19:15:36.921765] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:25:19.476 [2024-12-05 19:15:36.921771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:25:19.476 [2024-12-05 19:15:36.921779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:25:19.476 [2024-12-05 19:15:36.921785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:25:19.476 [2024-12-05 19:15:36.921792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:25:19.476 [2024-12-05 19:15:36.921799] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:25:19.476 [2024-12-05 19:15:36.921808] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:25:19.476 [2024-12-05 19:15:36.921814] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:25:19.476 [2024-12-05 19:15:36.921822] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:25:19.476 [2024-12-05 19:15:36.921829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:25:19.476 [2024-12-05 19:15:36.921837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:25:19.476 [2024-12-05 19:15:36.921842] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:25:19.476 [2024-12-05 19:15:36.921852] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:25:19.476 [2024-12-05 19:15:36.921858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:25:19.476 [2024-12-05 19:15:36.921866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:25:19.476 [2024-12-05 19:15:36.921871] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:25:19.476 [2024-12-05 19:15:36.921878] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:25:19.476 [2024-12-05 19:15:36.921885] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:25:19.476 [2024-12-05 19:15:36.921892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:25:19.476 [2024-12-05 19:15:36.921898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:25:19.476 [2024-12-05 19:15:36.921905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:25:19.477 [2024-12-05 19:15:36.921910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:25:19.477 [2024-12-05 19:15:36.921918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:25:19.477 [2024-12-05 19:15:36.921923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:25:19.477 [2024-12-05 19:15:36.921931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:25:19.477 [2024-12-05 19:15:36.921938] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:25:19.477 [2024-12-05 19:15:36.921945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:25:19.477 [2024-12-05 19:15:36.921950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:25:19.477 [2024-12-05 19:15:36.921959] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:25:19.477 [2024-12-05 19:15:36.921965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:25:19.477 [2024-12-05 19:15:36.921972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:25:19.477 [2024-12-05 19:15:36.921978] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:25:19.477 [2024-12-05 19:15:36.921986] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:25:19.477 [2024-12-05 
19:15:36.921992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:25:19.477 [2024-12-05 19:15:36.921999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:25:19.477 [2024-12-05 19:15:36.922005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:25:19.477 [2024-12-05 19:15:36.922012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:25:19.477 [2024-12-05 19:15:36.922019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:25:19.477 [2024-12-05 19:15:36.922027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:25:19.477 [2024-12-05 19:15:36.922034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:25:19.477 [2024-12-05 19:15:36.922041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:25:19.477 [2024-12-05 19:15:36.922047] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:25:19.477 [2024-12-05 19:15:36.922055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:25:19.477 [2024-12-05 19:15:36.922061] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:25:19.477 [2024-12-05 19:15:36.922077] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:25:19.477 [2024-12-05 19:15:36.922085] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: a560a9df-7547-43f8-95d1-de6f5f67663b 00:25:19.477 [2024-12-05 19:15:36.922093] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:25:19.477 [2024-12-05 19:15:36.922099] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:25:19.477 [2024-12-05 19:15:36.922108] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:25:19.477 [2024-12-05 19:15:36.922115] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:25:19.477 [2024-12-05 19:15:36.922122] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:25:19.477 [2024-12-05 19:15:36.922129] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:25:19.477 [2024-12-05 19:15:36.922136] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:25:19.477 [2024-12-05 19:15:36.922141] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:25:19.477 [2024-12-05 19:15:36.922152] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:25:19.477 [2024-12-05 19:15:36.922158] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:19.477 [2024-12-05 19:15:36.922165] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:25:19.477 [2024-12-05 19:15:36.922171] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.808 ms 00:25:19.477 [2024-12-05 19:15:36.922181] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:19.477 [2024-12-05 19:15:36.923918] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:19.477 [2024-12-05 19:15:36.923942] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:25:19.477 [2024-12-05 19:15:36.923950] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.724 ms 00:25:19.477 [2024-12-05 19:15:36.923961] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:19.477 [2024-12-05 19:15:36.924055] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:19.477 [2024-12-05 19:15:36.924067] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:25:19.477 [2024-12-05 19:15:36.924074] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.072 ms 00:25:19.477 [2024-12-05 19:15:36.924082] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:19.477 [2024-12-05 19:15:36.930130] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:19.477 [2024-12-05 19:15:36.930168] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:19.477 [2024-12-05 19:15:36.930175] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:19.477 [2024-12-05 19:15:36.930184] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:19.477 [2024-12-05 19:15:36.930232] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:19.477 [2024-12-05 19:15:36.930242] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:19.477 [2024-12-05 19:15:36.930248] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:19.477 [2024-12-05 19:15:36.930272] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:19.477 [2024-12-05 19:15:36.930339] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:19.477 [2024-12-05 19:15:36.930352] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:19.477 [2024-12-05 19:15:36.930359] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:19.477 [2024-12-05 19:15:36.930366] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:19.477 [2024-12-05 19:15:36.930380] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:19.477 [2024-12-05 19:15:36.930389] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:19.477 [2024-12-05 19:15:36.930397] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:19.477 [2024-12-05 19:15:36.930407] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:19.477 [2024-12-05 19:15:36.941487] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:19.477 [2024-12-05 19:15:36.941658] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:19.477 [2024-12-05 19:15:36.941673] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:19.477 [2024-12-05 19:15:36.941682] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:19.477 [2024-12-05 19:15:36.950579] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:19.477 [2024-12-05 19:15:36.950617] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:19.477 [2024-12-05 19:15:36.950628] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:19.477 [2024-12-05 19:15:36.950640] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:19.477 [2024-12-05 19:15:36.950703] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:19.477 [2024-12-05 19:15:36.950715] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: 
Initialize core IO channel 00:25:19.477 [2024-12-05 19:15:36.950723] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:19.477 [2024-12-05 19:15:36.950734] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:19.477 [2024-12-05 19:15:36.950766] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:19.477 [2024-12-05 19:15:36.950775] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:19.477 [2024-12-05 19:15:36.950781] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:19.477 [2024-12-05 19:15:36.950789] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:19.477 [2024-12-05 19:15:36.950855] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:19.477 [2024-12-05 19:15:36.950865] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:19.477 [2024-12-05 19:15:36.950872] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:19.477 [2024-12-05 19:15:36.950880] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:19.477 [2024-12-05 19:15:36.950905] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:19.477 [2024-12-05 19:15:36.950914] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:25:19.477 [2024-12-05 19:15:36.950921] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:19.477 [2024-12-05 19:15:36.950928] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:19.477 [2024-12-05 19:15:36.950966] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:19.477 [2024-12-05 19:15:36.950978] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:19.477 [2024-12-05 19:15:36.950984] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:19.477 [2024-12-05 19:15:36.950992] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:19.477 [2024-12-05 19:15:36.951032] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:19.477 [2024-12-05 19:15:36.951045] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:19.477 [2024-12-05 19:15:36.951052] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:19.477 [2024-12-05 19:15:36.951060] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:19.477 [2024-12-05 19:15:36.951185] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 70.061 ms, result 0 00:25:19.477 true 00:25:19.477 19:15:36 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@83 -- # kill -9 90751 00:25:19.477 19:15:36 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@84 -- # rm -f /dev/shm/spdk_tgt_trace.pid90751 00:25:19.477 19:15:36 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --bs=4096 --count=262144 00:25:19.737 [2024-12-05 19:15:37.034750] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 22.11.4 initialization... 
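The "WAF: inf" in the shutdown dump above is plain arithmetic on the two counters beside it: write amplification is total media writes divided by user writes, and 960 total writes against 0 user writes has no finite ratio. The commands that follow are the dirty-shutdown step the test is named for: the target is SIGKILLed rather than stopped gracefully, and spdk_dd prepares 1 GiB of random source data (262144 blocks × 4096 B = 2^30 B). A minimal sketch of that pattern, using only the paths and flags visible in this log, with a tgt_pid placeholder standing in for the concrete PID 90751:

#!/usr/bin/env bash
# Sketch of the dirty-shutdown step above, assuming the paths seen in this log.
set -euo pipefail

SPDK_BIN=/home/vagrant/spdk_repo/spdk/build/bin
TESTFILE=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2
tgt_pid=90751   # placeholder: the spdk_tgt PID recorded above

# SIGKILL the target instead of shutting it down gracefully, then drop its
# shared-memory trace file, as the test does at lines 83-84.
kill -9 "$tgt_pid"
rm -f "/dev/shm/spdk_tgt_trace.pid${tgt_pid}"

# Prepare the 1 GiB random source file: 262144 * 4096 B = 2^18 * 2^12 = 2^30 B.
"$SPDK_BIN/spdk_dd" --if=/dev/urandom --of="$TESTFILE" --bs=4096 --count=262144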
00:25:19.737 [2024-12-05 19:15:37.034868] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91310 ] 00:25:19.737 [2024-12-05 19:15:37.174849] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:19.737 [2024-12-05 19:15:37.199743] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:21.124  [2024-12-05T19:15:39.625Z] Copying: 255/1024 [MB] (255 MBps) [2024-12-05T19:15:40.565Z] Copying: 511/1024 [MB] (255 MBps) [2024-12-05T19:15:41.505Z] Copying: 763/1024 [MB] (252 MBps) [2024-12-05T19:15:41.505Z] Copying: 1010/1024 [MB] (246 MBps) [2024-12-05T19:15:41.505Z] Copying: 1024/1024 [MB] (average 252 MBps) 00:25:23.946 00:25:23.946 /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh: line 87: 90751 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x1 00:25:23.946 19:15:41 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@88 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --ob=ftl0 --count=262144 --seek=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:25:24.206 [2024-12-05 19:15:41.553164] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 22.11.4 initialization... 00:25:24.206 [2024-12-05 19:15:41.553466] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91357 ] 00:25:24.206 [2024-12-05 19:15:41.695744] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:24.206 [2024-12-05 19:15:41.728040] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:24.467 [2024-12-05 19:15:41.832247] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:25:24.467 [2024-12-05 19:15:41.832449] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:25:24.468 [2024-12-05 19:15:41.894796] blobstore.c:4899:bs_recover: *NOTICE*: Performing recovery on blobstore 00:25:24.468 [2024-12-05 19:15:41.895228] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:25:24.468 [2024-12-05 19:15:41.895555] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:25:25.041 [2024-12-05 19:15:42.360946] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:25.041 [2024-12-05 19:15:42.361072] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:25:25.041 [2024-12-05 19:15:42.361089] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:25:25.041 [2024-12-05 19:15:42.361111] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:25.041 [2024-12-05 19:15:42.361166] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:25.041 [2024-12-05 19:15:42.361175] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:25.041 [2024-12-05 19:15:42.361181] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:25:25.041 [2024-12-05 19:15:42.361187] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:25.041 [2024-12-05 19:15:42.361212] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:25:25.041 
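Here the replay begins: the second spdk_dd invocation above feeds testfile2 into the FTL bdev (--ob=ftl0) rather than a plain file, and since the previous target died under SIGKILL, opening ftl0 evidently runs recovery first — the blobstore notices ("Performing recovery on blobstore", "Recover: blob 0x0"/"0x1") and the FTL startup trace beginning here replay the persisted metadata. Assuming --seek offsets the output by whole I/O units, dd-style, the copy lands 262144 × 4096 B = 1 GiB into the device. The invocation again, annotated as a sketch:

# Replay step as invoked above. --ob names the output bdev (the FTL device
# under test); --seek is assumed to count output I/O units, dd-style, so
# writing starts 262144 * 4096 B = 1 GiB into ftl0.
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd \
  --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --ob=ftl0 \
  --count=262144 --seek=262144 \
  --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json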
[2024-12-05 19:15:42.361419] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:25:25.041 [2024-12-05 19:15:42.361433] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:25.041 [2024-12-05 19:15:42.361442] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:25.041 [2024-12-05 19:15:42.361449] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.231 ms 00:25:25.041 [2024-12-05 19:15:42.361456] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:25.041 [2024-12-05 19:15:42.362857] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:25:25.041 [2024-12-05 19:15:42.365726] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:25.041 [2024-12-05 19:15:42.365753] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:25:25.041 [2024-12-05 19:15:42.365761] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.872 ms 00:25:25.041 [2024-12-05 19:15:42.365767] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:25.041 [2024-12-05 19:15:42.365814] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:25.041 [2024-12-05 19:15:42.365821] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:25:25.041 [2024-12-05 19:15:42.365828] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:25:25.041 [2024-12-05 19:15:42.365834] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:25.041 [2024-12-05 19:15:42.372060] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:25.041 [2024-12-05 19:15:42.372175] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:25.041 [2024-12-05 19:15:42.372191] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.191 ms 00:25:25.041 [2024-12-05 19:15:42.372198] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:25.041 [2024-12-05 19:15:42.372280] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:25.041 [2024-12-05 19:15:42.372288] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:25.041 [2024-12-05 19:15:42.372295] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:25:25.041 [2024-12-05 19:15:42.372305] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:25.041 [2024-12-05 19:15:42.372337] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:25.041 [2024-12-05 19:15:42.372344] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:25:25.041 [2024-12-05 19:15:42.372351] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:25:25.041 [2024-12-05 19:15:42.372360] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:25.041 [2024-12-05 19:15:42.372380] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:25:25.041 [2024-12-05 19:15:42.373922] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:25.041 [2024-12-05 19:15:42.373946] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:25.041 [2024-12-05 19:15:42.373953] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.548 ms 00:25:25.041 [2024-12-05 19:15:42.373961] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:25:25.041 [2024-12-05 19:15:42.373989] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:25.041 [2024-12-05 19:15:42.373995] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:25:25.041 [2024-12-05 19:15:42.374001] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:25:25.041 [2024-12-05 19:15:42.374007] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:25.041 [2024-12-05 19:15:42.374023] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:25:25.041 [2024-12-05 19:15:42.374042] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:25:25.041 [2024-12-05 19:15:42.374077] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:25:25.041 [2024-12-05 19:15:42.374092] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:25:25.041 [2024-12-05 19:15:42.374182] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:25:25.041 [2024-12-05 19:15:42.374190] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:25:25.041 [2024-12-05 19:15:42.374198] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:25:25.041 [2024-12-05 19:15:42.374206] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:25:25.041 [2024-12-05 19:15:42.374214] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:25:25.041 [2024-12-05 19:15:42.374220] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:25:25.041 [2024-12-05 19:15:42.374227] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:25:25.041 [2024-12-05 19:15:42.374235] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:25:25.041 [2024-12-05 19:15:42.374243] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:25:25.041 [2024-12-05 19:15:42.374263] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:25.041 [2024-12-05 19:15:42.374272] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:25:25.041 [2024-12-05 19:15:42.374278] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.228 ms 00:25:25.041 [2024-12-05 19:15:42.374285] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:25.041 [2024-12-05 19:15:42.374349] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:25.041 [2024-12-05 19:15:42.374357] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:25:25.041 [2024-12-05 19:15:42.374367] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:25:25.041 [2024-12-05 19:15:42.374381] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:25.041 [2024-12-05 19:15:42.374467] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:25:25.041 [2024-12-05 19:15:42.374477] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:25:25.041 [2024-12-05 19:15:42.374484] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:25.041 [2024-12-05 19:15:42.374490] 
ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:25.041 [2024-12-05 19:15:42.374497] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:25:25.041 [2024-12-05 19:15:42.374502] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:25:25.041 [2024-12-05 19:15:42.374507] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:25:25.041 [2024-12-05 19:15:42.374512] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:25:25.041 [2024-12-05 19:15:42.374518] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:25:25.041 [2024-12-05 19:15:42.374523] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:25.041 [2024-12-05 19:15:42.374529] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:25:25.041 [2024-12-05 19:15:42.374534] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:25:25.041 [2024-12-05 19:15:42.374540] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:25.041 [2024-12-05 19:15:42.374547] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:25:25.041 [2024-12-05 19:15:42.374557] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:25:25.041 [2024-12-05 19:15:42.374563] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:25.041 [2024-12-05 19:15:42.374569] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:25:25.041 [2024-12-05 19:15:42.374575] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:25:25.041 [2024-12-05 19:15:42.374581] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:25.041 [2024-12-05 19:15:42.374587] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:25:25.041 [2024-12-05 19:15:42.374592] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:25:25.041 [2024-12-05 19:15:42.374598] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:25.041 [2024-12-05 19:15:42.374604] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:25:25.041 [2024-12-05 19:15:42.374609] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:25:25.041 [2024-12-05 19:15:42.374615] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:25.041 [2024-12-05 19:15:42.374621] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:25:25.041 [2024-12-05 19:15:42.374627] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:25:25.041 [2024-12-05 19:15:42.374633] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:25.041 [2024-12-05 19:15:42.374639] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:25:25.041 [2024-12-05 19:15:42.374645] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:25:25.041 [2024-12-05 19:15:42.374652] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:25.041 [2024-12-05 19:15:42.374658] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:25:25.041 [2024-12-05 19:15:42.374664] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:25:25.041 [2024-12-05 19:15:42.374669] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:25.042 [2024-12-05 19:15:42.374675] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:25:25.042 
[2024-12-05 19:15:42.374681] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:25:25.042 [2024-12-05 19:15:42.374687] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:25.042 [2024-12-05 19:15:42.374692] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:25:25.042 [2024-12-05 19:15:42.374698] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:25:25.042 [2024-12-05 19:15:42.374704] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:25.042 [2024-12-05 19:15:42.374710] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:25:25.042 [2024-12-05 19:15:42.374716] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:25:25.042 [2024-12-05 19:15:42.374721] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:25.042 [2024-12-05 19:15:42.374726] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:25:25.042 [2024-12-05 19:15:42.374733] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:25:25.042 [2024-12-05 19:15:42.374741] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:25.042 [2024-12-05 19:15:42.374750] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:25.042 [2024-12-05 19:15:42.374757] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:25:25.042 [2024-12-05 19:15:42.374763] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:25:25.042 [2024-12-05 19:15:42.374769] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:25:25.042 [2024-12-05 19:15:42.374775] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:25:25.042 [2024-12-05 19:15:42.374781] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:25:25.042 [2024-12-05 19:15:42.374788] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:25:25.042 [2024-12-05 19:15:42.374795] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:25:25.042 [2024-12-05 19:15:42.374803] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:25.042 [2024-12-05 19:15:42.374810] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:25:25.042 [2024-12-05 19:15:42.374817] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:25:25.042 [2024-12-05 19:15:42.374823] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:25:25.042 [2024-12-05 19:15:42.374829] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:25:25.042 [2024-12-05 19:15:42.374836] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:25:25.042 [2024-12-05 19:15:42.374847] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:25:25.042 [2024-12-05 19:15:42.374854] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 
blk_sz:0x800 00:25:25.042 [2024-12-05 19:15:42.374862] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:25:25.042 [2024-12-05 19:15:42.374869] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:25:25.042 [2024-12-05 19:15:42.374875] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:25:25.042 [2024-12-05 19:15:42.374880] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:25:25.042 [2024-12-05 19:15:42.374887] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:25:25.042 [2024-12-05 19:15:42.374893] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:25:25.042 [2024-12-05 19:15:42.374899] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:25:25.042 [2024-12-05 19:15:42.374906] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:25:25.042 [2024-12-05 19:15:42.374915] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:25.042 [2024-12-05 19:15:42.374922] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:25:25.042 [2024-12-05 19:15:42.374928] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:25:25.042 [2024-12-05 19:15:42.374934] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:25:25.042 [2024-12-05 19:15:42.374940] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:25:25.042 [2024-12-05 19:15:42.374947] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:25.042 [2024-12-05 19:15:42.374954] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:25:25.042 [2024-12-05 19:15:42.374961] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.533 ms 00:25:25.042 [2024-12-05 19:15:42.374970] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:25.042 [2024-12-05 19:15:42.386161] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:25.042 [2024-12-05 19:15:42.386187] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:25.042 [2024-12-05 19:15:42.386196] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.159 ms 00:25:25.042 [2024-12-05 19:15:42.386201] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:25.042 [2024-12-05 19:15:42.386283] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:25.042 [2024-12-05 19:15:42.386292] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:25:25.042 [2024-12-05 19:15:42.386298] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:25:25.042 [2024-12-05 
19:15:42.386304] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:25.042 [2024-12-05 19:15:42.405085] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:25.042 [2024-12-05 19:15:42.405140] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:25.042 [2024-12-05 19:15:42.405157] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.728 ms 00:25:25.042 [2024-12-05 19:15:42.405170] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:25.042 [2024-12-05 19:15:42.405233] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:25.042 [2024-12-05 19:15:42.405264] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:25.042 [2024-12-05 19:15:42.405279] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:25:25.042 [2024-12-05 19:15:42.405291] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:25.042 [2024-12-05 19:15:42.405860] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:25.042 [2024-12-05 19:15:42.405914] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:25.042 [2024-12-05 19:15:42.405950] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.482 ms 00:25:25.042 [2024-12-05 19:15:42.405968] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:25.042 [2024-12-05 19:15:42.406177] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:25.042 [2024-12-05 19:15:42.406200] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:25.042 [2024-12-05 19:15:42.406212] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.176 ms 00:25:25.042 [2024-12-05 19:15:42.406224] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:25.042 [2024-12-05 19:15:42.413921] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:25.042 [2024-12-05 19:15:42.414050] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:25.042 [2024-12-05 19:15:42.414063] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.672 ms 00:25:25.042 [2024-12-05 19:15:42.414075] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:25.042 [2024-12-05 19:15:42.417062] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:25:25.042 [2024-12-05 19:15:42.417162] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:25:25.042 [2024-12-05 19:15:42.417174] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:25.042 [2024-12-05 19:15:42.417188] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:25:25.042 [2024-12-05 19:15:42.417195] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.028 ms 00:25:25.042 [2024-12-05 19:15:42.417201] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:25.042 [2024-12-05 19:15:42.428925] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:25.042 [2024-12-05 19:15:42.428951] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:25:25.042 [2024-12-05 19:15:42.428960] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.696 ms 00:25:25.042 [2024-12-05 19:15:42.428967] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 
0 00:25:25.042 [2024-12-05 19:15:42.430917] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:25.042 [2024-12-05 19:15:42.430943] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:25:25.043 [2024-12-05 19:15:42.430951] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.920 ms 00:25:25.043 [2024-12-05 19:15:42.430957] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:25.043 [2024-12-05 19:15:42.432711] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:25.043 [2024-12-05 19:15:42.432735] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:25:25.043 [2024-12-05 19:15:42.432742] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.727 ms 00:25:25.043 [2024-12-05 19:15:42.432748] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:25.043 [2024-12-05 19:15:42.433002] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:25.043 [2024-12-05 19:15:42.433015] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:25:25.043 [2024-12-05 19:15:42.433023] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.205 ms 00:25:25.043 [2024-12-05 19:15:42.433029] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:25.043 [2024-12-05 19:15:42.450927] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:25.043 [2024-12-05 19:15:42.451043] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:25:25.043 [2024-12-05 19:15:42.451057] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.886 ms 00:25:25.043 [2024-12-05 19:15:42.451064] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:25.043 [2024-12-05 19:15:42.456821] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:25:25.043 [2024-12-05 19:15:42.459124] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:25.043 [2024-12-05 19:15:42.459143] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:25:25.043 [2024-12-05 19:15:42.459160] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.033 ms 00:25:25.043 [2024-12-05 19:15:42.459167] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:25.043 [2024-12-05 19:15:42.459210] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:25.043 [2024-12-05 19:15:42.459218] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:25:25.043 [2024-12-05 19:15:42.459227] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:25:25.043 [2024-12-05 19:15:42.459235] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:25.043 [2024-12-05 19:15:42.459318] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:25.043 [2024-12-05 19:15:42.459327] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:25:25.043 [2024-12-05 19:15:42.459333] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:25:25.043 [2024-12-05 19:15:42.459339] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:25.043 [2024-12-05 19:15:42.459354] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:25.043 [2024-12-05 19:15:42.459365] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:25:25.043 
[2024-12-05 19:15:42.459371] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:25:25.043 [2024-12-05 19:15:42.459379] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:25.043 [2024-12-05 19:15:42.459412] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:25:25.043 [2024-12-05 19:15:42.459420] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:25.043 [2024-12-05 19:15:42.459429] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:25:25.043 [2024-12-05 19:15:42.459438] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:25:25.043 [2024-12-05 19:15:42.459444] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:25.043 [2024-12-05 19:15:42.463284] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:25.043 [2024-12-05 19:15:42.463369] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:25:25.043 [2024-12-05 19:15:42.463412] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.826 ms 00:25:25.043 [2024-12-05 19:15:42.463430] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:25.043 [2024-12-05 19:15:42.463495] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:25.043 [2024-12-05 19:15:42.463519] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:25:25.043 [2024-12-05 19:15:42.463535] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:25:25.043 [2024-12-05 19:15:42.463550] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:25.043 [2024-12-05 19:15:42.464453] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 103.133 ms, result 0 00:25:25.986  [2024-12-05T19:15:44.489Z] Copying: 12/1024 [MB] (12 MBps) [2024-12-05T19:15:45.889Z] Copying: 24/1024 [MB] (11 MBps) [2024-12-05T19:15:46.832Z] Copying: 35296/1048576 [kB] (10160 kBps) [2024-12-05T19:15:47.777Z] Copying: 45/1024 [MB] (11 MBps) [2024-12-05T19:15:48.720Z] Copying: 57/1024 [MB] (11 MBps) [2024-12-05T19:15:49.664Z] Copying: 68/1024 [MB] (11 MBps) [2024-12-05T19:15:50.609Z] Copying: 79/1024 [MB] (11 MBps) [2024-12-05T19:15:51.552Z] Copying: 91/1024 [MB] (11 MBps) [2024-12-05T19:15:52.494Z] Copying: 102/1024 [MB] (11 MBps) [2024-12-05T19:15:53.931Z] Copying: 114/1024 [MB] (11 MBps) [2024-12-05T19:15:54.505Z] Copying: 125/1024 [MB] (11 MBps) [2024-12-05T19:15:55.890Z] Copying: 138456/1048576 [kB] (10140 kBps) [2024-12-05T19:15:56.832Z] Copying: 145/1024 [MB] (10 MBps) [2024-12-05T19:15:57.777Z] Copying: 155/1024 [MB] (10 MBps) [2024-12-05T19:15:58.724Z] Copying: 166/1024 [MB] (11 MBps) [2024-12-05T19:15:59.670Z] Copying: 178/1024 [MB] (12 MBps) [2024-12-05T19:16:00.612Z] Copying: 194/1024 [MB] (15 MBps) [2024-12-05T19:16:01.556Z] Copying: 206/1024 [MB] (11 MBps) [2024-12-05T19:16:02.500Z] Copying: 216/1024 [MB] (10 MBps) [2024-12-05T19:16:03.888Z] Copying: 226/1024 [MB] (10 MBps) [2024-12-05T19:16:04.831Z] Copying: 237/1024 [MB] (11 MBps) [2024-12-05T19:16:05.771Z] Copying: 248/1024 [MB] (11 MBps) [2024-12-05T19:16:06.711Z] Copying: 260/1024 [MB] (11 MBps) [2024-12-05T19:16:07.655Z] Copying: 271/1024 [MB] (11 MBps) [2024-12-05T19:16:08.600Z] Copying: 282/1024 [MB] (11 MBps) [2024-12-05T19:16:09.547Z] Copying: 293/1024 [MB] (11 MBps) [2024-12-05T19:16:10.491Z] Copying: 304/1024 [MB] (10 MBps) [2024-12-05T19:16:11.879Z] 
Copying: 315/1024 [MB] (11 MBps) [2024-12-05T19:16:12.823Z] Copying: 326/1024 [MB] (11 MBps) [2024-12-05T19:16:13.767Z] Copying: 338/1024 [MB] (11 MBps) [2024-12-05T19:16:14.712Z] Copying: 349/1024 [MB] (11 MBps) [2024-12-05T19:16:15.656Z] Copying: 360/1024 [MB] (11 MBps) [2024-12-05T19:16:16.595Z] Copying: 371/1024 [MB] (10 MBps) [2024-12-05T19:16:17.536Z] Copying: 382/1024 [MB] (11 MBps) [2024-12-05T19:16:18.479Z] Copying: 393/1024 [MB] (11 MBps) [2024-12-05T19:16:19.867Z] Copying: 404/1024 [MB] (11 MBps) [2024-12-05T19:16:20.811Z] Copying: 415/1024 [MB] (11 MBps) [2024-12-05T19:16:21.755Z] Copying: 426/1024 [MB] (10 MBps) [2024-12-05T19:16:22.717Z] Copying: 437/1024 [MB] (11 MBps) [2024-12-05T19:16:23.669Z] Copying: 448/1024 [MB] (11 MBps) [2024-12-05T19:16:24.614Z] Copying: 460/1024 [MB] (11 MBps) [2024-12-05T19:16:25.557Z] Copying: 471/1024 [MB] (11 MBps) [2024-12-05T19:16:26.495Z] Copying: 481/1024 [MB] (10 MBps) [2024-12-05T19:16:27.874Z] Copying: 493/1024 [MB] (11 MBps) [2024-12-05T19:16:28.835Z] Copying: 504/1024 [MB] (11 MBps) [2024-12-05T19:16:29.778Z] Copying: 515/1024 [MB] (10 MBps) [2024-12-05T19:16:30.721Z] Copying: 525/1024 [MB] (10 MBps) [2024-12-05T19:16:31.666Z] Copying: 536/1024 [MB] (11 MBps) [2024-12-05T19:16:32.609Z] Copying: 548/1024 [MB] (11 MBps) [2024-12-05T19:16:33.551Z] Copying: 559/1024 [MB] (11 MBps) [2024-12-05T19:16:34.495Z] Copying: 570/1024 [MB] (11 MBps) [2024-12-05T19:16:35.880Z] Copying: 581/1024 [MB] (11 MBps) [2024-12-05T19:16:36.820Z] Copying: 592/1024 [MB] (10 MBps) [2024-12-05T19:16:37.764Z] Copying: 603/1024 [MB] (10 MBps) [2024-12-05T19:16:38.708Z] Copying: 614/1024 [MB] (11 MBps) [2024-12-05T19:16:39.653Z] Copying: 624/1024 [MB] (10 MBps) [2024-12-05T19:16:40.597Z] Copying: 650000/1048576 [kB] (10192 kBps) [2024-12-05T19:16:41.549Z] Copying: 645/1024 [MB] (10 MBps) [2024-12-05T19:16:42.495Z] Copying: 655/1024 [MB] (10 MBps) [2024-12-05T19:16:43.905Z] Copying: 665/1024 [MB] (10 MBps) [2024-12-05T19:16:44.478Z] Copying: 691352/1048576 [kB] (9928 kBps) [2024-12-05T19:16:45.863Z] Copying: 695/1024 [MB] (19 MBps) [2024-12-05T19:16:46.804Z] Copying: 718/1024 [MB] (22 MBps) [2024-12-05T19:16:47.746Z] Copying: 735/1024 [MB] (16 MBps) [2024-12-05T19:16:48.688Z] Copying: 750/1024 [MB] (15 MBps) [2024-12-05T19:16:49.628Z] Copying: 762/1024 [MB] (11 MBps) [2024-12-05T19:16:50.568Z] Copying: 775/1024 [MB] (13 MBps) [2024-12-05T19:16:51.579Z] Copying: 804000/1048576 [kB] (10072 kBps) [2024-12-05T19:16:52.525Z] Copying: 802/1024 [MB] (17 MBps) [2024-12-05T19:16:53.909Z] Copying: 814/1024 [MB] (12 MBps) [2024-12-05T19:16:54.482Z] Copying: 829/1024 [MB] (14 MBps) [2024-12-05T19:16:55.895Z] Copying: 847/1024 [MB] (18 MBps) [2024-12-05T19:16:56.835Z] Copying: 862/1024 [MB] (15 MBps) [2024-12-05T19:16:57.775Z] Copying: 878/1024 [MB] (15 MBps) [2024-12-05T19:16:58.718Z] Copying: 894/1024 [MB] (15 MBps) [2024-12-05T19:16:59.663Z] Copying: 906/1024 [MB] (12 MBps) [2024-12-05T19:17:00.609Z] Copying: 916/1024 [MB] (10 MBps) [2024-12-05T19:17:01.555Z] Copying: 949040/1048576 [kB] (10192 kBps) [2024-12-05T19:17:02.500Z] Copying: 937/1024 [MB] (10 MBps) [2024-12-05T19:17:03.883Z] Copying: 969536/1048576 [kB] (10024 kBps) [2024-12-05T19:17:04.817Z] Copying: 960/1024 [MB] (13 MBps) [2024-12-05T19:17:05.758Z] Copying: 1013/1024 [MB] (53 MBps) [2024-12-05T19:17:05.758Z] Copying: 1023/1024 [MB] (10 MBps) [2024-12-05T19:17:05.758Z] Copying: 1024/1024 [MB] (average 12 MBps)[2024-12-05 19:17:05.541442] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:48.199 
[2024-12-05 19:17:05.541516] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:26:48.199 [2024-12-05 19:17:05.541534] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:26:48.199 [2024-12-05 19:17:05.541544] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:48.199 [2024-12-05 19:17:05.543862] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:26:48.199 [2024-12-05 19:17:05.546597] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:48.199 [2024-12-05 19:17:05.546645] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:26:48.199 [2024-12-05 19:17:05.546656] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.683 ms 00:26:48.199 [2024-12-05 19:17:05.546664] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:48.199 [2024-12-05 19:17:05.559431] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:48.199 [2024-12-05 19:17:05.559478] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:26:48.199 [2024-12-05 19:17:05.559492] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.611 ms 00:26:48.199 [2024-12-05 19:17:05.559501] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:48.199 [2024-12-05 19:17:05.583956] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:48.199 [2024-12-05 19:17:05.584002] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:26:48.199 [2024-12-05 19:17:05.584014] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.428 ms 00:26:48.199 [2024-12-05 19:17:05.584023] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:48.199 [2024-12-05 19:17:05.590204] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:48.199 [2024-12-05 19:17:05.590242] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:26:48.199 [2024-12-05 19:17:05.590267] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.143 ms 00:26:48.199 [2024-12-05 19:17:05.590276] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:48.199 [2024-12-05 19:17:05.592902] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:48.199 [2024-12-05 19:17:05.592950] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:26:48.199 [2024-12-05 19:17:05.592961] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.575 ms 00:26:48.199 [2024-12-05 19:17:05.592968] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:48.199 [2024-12-05 19:17:05.597237] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:48.199 [2024-12-05 19:17:05.597296] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:26:48.199 [2024-12-05 19:17:05.597308] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.229 ms 00:26:48.199 [2024-12-05 19:17:05.597316] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:48.461 [2024-12-05 19:17:05.930727] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:48.461 [2024-12-05 19:17:05.930787] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:26:48.461 [2024-12-05 19:17:05.930813] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 333.384 ms 00:26:48.461 
[2024-12-05 19:17:05.930823] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:48.461 [2024-12-05 19:17:05.934027] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:48.461 [2024-12-05 19:17:05.934071] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:26:48.461 [2024-12-05 19:17:05.934081] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.187 ms 00:26:48.461 [2024-12-05 19:17:05.934088] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:48.461 [2024-12-05 19:17:05.936726] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:48.461 [2024-12-05 19:17:05.936771] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:26:48.461 [2024-12-05 19:17:05.936781] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.598 ms 00:26:48.461 [2024-12-05 19:17:05.936788] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:48.461 [2024-12-05 19:17:05.938779] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:48.461 [2024-12-05 19:17:05.938823] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:26:48.461 [2024-12-05 19:17:05.938832] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.954 ms 00:26:48.461 [2024-12-05 19:17:05.938839] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:48.461 [2024-12-05 19:17:05.940762] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:48.461 [2024-12-05 19:17:05.940807] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:26:48.461 [2024-12-05 19:17:05.940817] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.857 ms 00:26:48.461 [2024-12-05 19:17:05.940824] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:48.461 [2024-12-05 19:17:05.940860] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:26:48.461 [2024-12-05 19:17:05.940882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 119296 / 261120 wr_cnt: 1 state: open 00:26:48.461 [2024-12-05 19:17:05.940893] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:26:48.461 [2024-12-05 19:17:05.940901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:26:48.461 [2024-12-05 19:17:05.940910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:26:48.461 [2024-12-05 19:17:05.940918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:26:48.461 [2024-12-05 19:17:05.940926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:26:48.461 [2024-12-05 19:17:05.940934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:26:48.461 [2024-12-05 19:17:05.940941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:26:48.461 [2024-12-05 19:17:05.940950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:26:48.461 [2024-12-05 19:17:05.940957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:26:48.461 [2024-12-05 19:17:05.940965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 
0 / 261120 wr_cnt: 0 state: free 00:26:48.461 [2024-12-05 19:17:05.940973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:26:48.461 [2024-12-05 19:17:05.940981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:26:48.462 [2024-12-05 19:17:05.940989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:26:48.462 [2024-12-05 19:17:05.940997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:26:48.462 [2024-12-05 19:17:05.941005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:26:48.462 [2024-12-05 19:17:05.941013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:26:48.462 [2024-12-05 19:17:05.941020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:26:48.462 [2024-12-05 19:17:05.941028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:26:48.462 [2024-12-05 19:17:05.941035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:26:48.462 [2024-12-05 19:17:05.941042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:26:48.462 [2024-12-05 19:17:05.941049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:26:48.462 [2024-12-05 19:17:05.941056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:26:48.462 [2024-12-05 19:17:05.941064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:26:48.462 [2024-12-05 19:17:05.941072] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:26:48.462 [2024-12-05 19:17:05.941080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:26:48.462 [2024-12-05 19:17:05.941089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:26:48.462 [2024-12-05 19:17:05.941096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:26:48.462 [2024-12-05 19:17:05.941104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:26:48.462 [2024-12-05 19:17:05.941116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:26:48.462 [2024-12-05 19:17:05.941124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:26:48.462 [2024-12-05 19:17:05.941132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:26:48.462 [2024-12-05 19:17:05.941140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:26:48.462 [2024-12-05 19:17:05.941148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:26:48.462 [2024-12-05 19:17:05.941156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:26:48.462 [2024-12-05 19:17:05.941164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:26:48.462 [2024-12-05 19:17:05.941172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:26:48.462 [2024-12-05 19:17:05.941180] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:26:48.462 [2024-12-05 19:17:05.941198] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:26:48.462 [2024-12-05 19:17:05.941207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:26:48.462 [2024-12-05 19:17:05.941215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:26:48.462 [2024-12-05 19:17:05.941222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:26:48.462 [2024-12-05 19:17:05.941230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:26:48.462 [2024-12-05 19:17:05.941238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:26:48.462 [2024-12-05 19:17:05.941246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:26:48.462 [2024-12-05 19:17:05.941274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:26:48.462 [2024-12-05 19:17:05.941283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:26:48.462 [2024-12-05 19:17:05.941290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:26:48.462 [2024-12-05 19:17:05.941298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:26:48.462 [2024-12-05 19:17:05.941306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:26:48.462 [2024-12-05 19:17:05.941315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:26:48.462 [2024-12-05 19:17:05.941323] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:26:48.462 [2024-12-05 19:17:05.941331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:26:48.462 [2024-12-05 19:17:05.941338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:26:48.462 [2024-12-05 19:17:05.941346] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:26:48.462 [2024-12-05 19:17:05.941355] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:26:48.462 [2024-12-05 19:17:05.941363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:26:48.462 [2024-12-05 19:17:05.941372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:26:48.462 [2024-12-05 19:17:05.941380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:26:48.462 [2024-12-05 19:17:05.941387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:26:48.462 [2024-12-05 19:17:05.941395] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:26:48.462 [2024-12-05 19:17:05.941425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:26:48.462 [2024-12-05 19:17:05.941433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:26:48.462 [2024-12-05 19:17:05.941440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:26:48.462 [2024-12-05 19:17:05.941449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:26:48.462 [2024-12-05 19:17:05.941457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:26:48.462 [2024-12-05 19:17:05.941465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:26:48.462 [2024-12-05 19:17:05.941473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:26:48.462 [2024-12-05 19:17:05.941481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:26:48.462 [2024-12-05 19:17:05.941489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:26:48.462 [2024-12-05 19:17:05.941496] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:26:48.462 [2024-12-05 19:17:05.941504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:26:48.462 [2024-12-05 19:17:05.941512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:26:48.462 [2024-12-05 19:17:05.941520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:26:48.462 [2024-12-05 19:17:05.941529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:26:48.462 [2024-12-05 19:17:05.941536] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:26:48.462 [2024-12-05 19:17:05.941544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:26:48.462 [2024-12-05 19:17:05.941552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:26:48.462 [2024-12-05 19:17:05.941560] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:26:48.462 [2024-12-05 19:17:05.941568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:26:48.462 [2024-12-05 19:17:05.941576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:26:48.462 [2024-12-05 19:17:05.941584] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:26:48.462 [2024-12-05 19:17:05.941593] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:26:48.462 [2024-12-05 19:17:05.941601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:26:48.462 [2024-12-05 19:17:05.941609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:26:48.462 [2024-12-05 
19:17:05.941627] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:26:48.462 [2024-12-05 19:17:05.941635] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:26:48.462 [2024-12-05 19:17:05.941642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:26:48.462 [2024-12-05 19:17:05.941649] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:26:48.462 [2024-12-05 19:17:05.941657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:26:48.462 [2024-12-05 19:17:05.941664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:26:48.462 [2024-12-05 19:17:05.941672] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:26:48.462 [2024-12-05 19:17:05.941679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:26:48.462 [2024-12-05 19:17:05.941687] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:26:48.462 [2024-12-05 19:17:05.941695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:26:48.462 [2024-12-05 19:17:05.941702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:26:48.462 [2024-12-05 19:17:05.941710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:26:48.462 [2024-12-05 19:17:05.941717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:26:48.463 [2024-12-05 19:17:05.941726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:26:48.463 [2024-12-05 19:17:05.941734] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:26:48.463 [2024-12-05 19:17:05.941749] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:26:48.463 [2024-12-05 19:17:05.941763] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: a560a9df-7547-43f8-95d1-de6f5f67663b 00:26:48.463 [2024-12-05 19:17:05.941771] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 119296 00:26:48.463 [2024-12-05 19:17:05.941783] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 120256 00:26:48.463 [2024-12-05 19:17:05.941796] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 119296 00:26:48.463 [2024-12-05 19:17:05.941805] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0080 00:26:48.463 [2024-12-05 19:17:05.941812] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:26:48.463 [2024-12-05 19:17:05.941824] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:26:48.463 [2024-12-05 19:17:05.941832] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:26:48.463 [2024-12-05 19:17:05.941838] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:26:48.463 [2024-12-05 19:17:05.941845] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:26:48.463 [2024-12-05 19:17:05.941853] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:48.463 [2024-12-05 19:17:05.941861] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:26:48.463 [2024-12-05 19:17:05.941896] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.994 ms 00:26:48.463 [2024-12-05 19:17:05.941904] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:48.463 [2024-12-05 19:17:05.944104] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:48.463 [2024-12-05 19:17:05.944134] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:26:48.463 [2024-12-05 19:17:05.944145] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.182 ms 00:26:48.463 [2024-12-05 19:17:05.944154] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:48.463 [2024-12-05 19:17:05.944298] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:48.463 [2024-12-05 19:17:05.944313] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:26:48.463 [2024-12-05 19:17:05.944322] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.124 ms 00:26:48.463 [2024-12-05 19:17:05.944330] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:48.463 [2024-12-05 19:17:05.951720] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:48.463 [2024-12-05 19:17:05.951873] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:26:48.463 [2024-12-05 19:17:05.951931] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:48.463 [2024-12-05 19:17:05.951955] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:48.463 [2024-12-05 19:17:05.952036] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:48.463 [2024-12-05 19:17:05.952058] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:26:48.463 [2024-12-05 19:17:05.952078] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:48.463 [2024-12-05 19:17:05.952097] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:48.463 [2024-12-05 19:17:05.952153] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:48.463 [2024-12-05 19:17:05.952229] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:26:48.463 [2024-12-05 19:17:05.952270] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:48.463 [2024-12-05 19:17:05.952293] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:48.463 [2024-12-05 19:17:05.952323] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:48.463 [2024-12-05 19:17:05.952351] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:26:48.463 [2024-12-05 19:17:05.952371] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:48.463 [2024-12-05 19:17:05.952390] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:48.463 [2024-12-05 19:17:05.965824] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:48.463 [2024-12-05 19:17:05.966044] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:26:48.463 [2024-12-05 19:17:05.966101] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:48.463 [2024-12-05 19:17:05.966125] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:48.463 [2024-12-05 19:17:05.977278] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl0] Rollback 00:26:48.463 [2024-12-05 19:17:05.977443] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:26:48.463 [2024-12-05 19:17:05.977525] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:48.463 [2024-12-05 19:17:05.977551] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:48.463 [2024-12-05 19:17:05.977623] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:48.463 [2024-12-05 19:17:05.977647] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:26:48.463 [2024-12-05 19:17:05.977668] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:48.463 [2024-12-05 19:17:05.977729] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:48.463 [2024-12-05 19:17:05.977816] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:48.463 [2024-12-05 19:17:05.977843] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:26:48.463 [2024-12-05 19:17:05.977885] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:48.463 [2024-12-05 19:17:05.977905] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:48.463 [2024-12-05 19:17:05.978010] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:48.463 [2024-12-05 19:17:05.978104] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:26:48.463 [2024-12-05 19:17:05.978131] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:48.463 [2024-12-05 19:17:05.978152] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:48.463 [2024-12-05 19:17:05.978240] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:48.463 [2024-12-05 19:17:05.978304] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:26:48.463 [2024-12-05 19:17:05.978314] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:48.463 [2024-12-05 19:17:05.978326] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:48.463 [2024-12-05 19:17:05.978373] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:48.463 [2024-12-05 19:17:05.978383] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:26:48.463 [2024-12-05 19:17:05.978392] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:48.463 [2024-12-05 19:17:05.978407] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:48.463 [2024-12-05 19:17:05.978461] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:48.463 [2024-12-05 19:17:05.978474] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:26:48.463 [2024-12-05 19:17:05.978485] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:48.463 [2024-12-05 19:17:05.978497] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:48.463 [2024-12-05 19:17:05.978640] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 440.684 ms, result 0 00:26:49.406 00:26:49.406 00:26:49.406 19:17:06 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@90 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:26:51.954 19:17:09 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@93 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
--ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --count=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:26:51.955 [2024-12-05 19:17:09.129526] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 22.11.4 initialization... 00:26:51.955 [2024-12-05 19:17:09.130279] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92251 ] 00:26:51.955 [2024-12-05 19:17:09.279322] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:51.955 [2024-12-05 19:17:09.307529] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:51.955 [2024-12-05 19:17:09.423118] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:26:51.955 [2024-12-05 19:17:09.423210] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:26:52.218 [2024-12-05 19:17:09.584820] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:52.218 [2024-12-05 19:17:09.584876] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:26:52.218 [2024-12-05 19:17:09.584892] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:26:52.218 [2024-12-05 19:17:09.584905] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:52.218 [2024-12-05 19:17:09.584965] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:52.218 [2024-12-05 19:17:09.584976] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:26:52.218 [2024-12-05 19:17:09.584984] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:26:52.218 [2024-12-05 19:17:09.585003] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:52.218 [2024-12-05 19:17:09.585030] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:26:52.218 [2024-12-05 19:17:09.585613] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:26:52.218 [2024-12-05 19:17:09.585684] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:52.218 [2024-12-05 19:17:09.585710] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:26:52.218 [2024-12-05 19:17:09.585734] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.662 ms 00:26:52.218 [2024-12-05 19:17:09.585756] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:52.218 [2024-12-05 19:17:09.587495] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:26:52.218 [2024-12-05 19:17:09.591152] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:52.218 [2024-12-05 19:17:09.591322] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:26:52.218 [2024-12-05 19:17:09.591341] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.660 ms 00:26:52.218 [2024-12-05 19:17:09.591361] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:52.218 [2024-12-05 19:17:09.591780] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:52.218 [2024-12-05 19:17:09.591818] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:26:52.218 [2024-12-05 19:17:09.591831] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 0.044 ms 00:26:52.218 [2024-12-05 19:17:09.591843] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:52.218 [2024-12-05 19:17:09.600073] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:52.218 [2024-12-05 19:17:09.600117] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:26:52.218 [2024-12-05 19:17:09.600131] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.176 ms 00:26:52.218 [2024-12-05 19:17:09.600139] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:52.218 [2024-12-05 19:17:09.600236] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:52.218 [2024-12-05 19:17:09.600246] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:26:52.218 [2024-12-05 19:17:09.600281] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.075 ms 00:26:52.218 [2024-12-05 19:17:09.600289] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:52.218 [2024-12-05 19:17:09.600374] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:52.218 [2024-12-05 19:17:09.600390] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:26:52.218 [2024-12-05 19:17:09.600399] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:26:52.218 [2024-12-05 19:17:09.600410] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:52.218 [2024-12-05 19:17:09.600434] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:26:52.218 [2024-12-05 19:17:09.602509] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:52.218 [2024-12-05 19:17:09.602542] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:26:52.218 [2024-12-05 19:17:09.602552] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.082 ms 00:26:52.218 [2024-12-05 19:17:09.602559] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:52.218 [2024-12-05 19:17:09.602596] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:52.218 [2024-12-05 19:17:09.602605] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:26:52.218 [2024-12-05 19:17:09.602614] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:26:52.218 [2024-12-05 19:17:09.602624] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:52.218 [2024-12-05 19:17:09.602645] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:26:52.218 [2024-12-05 19:17:09.602672] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:26:52.219 [2024-12-05 19:17:09.602718] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:26:52.219 [2024-12-05 19:17:09.602735] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:26:52.219 [2024-12-05 19:17:09.602841] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:26:52.219 [2024-12-05 19:17:09.602852] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:26:52.219 [2024-12-05 19:17:09.602866] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: 
*NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:26:52.219 [2024-12-05 19:17:09.602877] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:26:52.219 [2024-12-05 19:17:09.602886] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:26:52.219 [2024-12-05 19:17:09.602898] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:26:52.219 [2024-12-05 19:17:09.602906] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:26:52.219 [2024-12-05 19:17:09.602914] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:26:52.219 [2024-12-05 19:17:09.602921] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:26:52.219 [2024-12-05 19:17:09.602930] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:52.219 [2024-12-05 19:17:09.602938] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:26:52.219 [2024-12-05 19:17:09.602945] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.287 ms 00:26:52.219 [2024-12-05 19:17:09.602953] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:52.219 [2024-12-05 19:17:09.603038] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:52.219 [2024-12-05 19:17:09.603049] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:26:52.219 [2024-12-05 19:17:09.603056] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:26:52.219 [2024-12-05 19:17:09.603063] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:52.219 [2024-12-05 19:17:09.603164] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:26:52.219 [2024-12-05 19:17:09.603175] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:26:52.219 [2024-12-05 19:17:09.603185] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:26:52.219 [2024-12-05 19:17:09.603195] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:52.219 [2024-12-05 19:17:09.603209] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:26:52.219 [2024-12-05 19:17:09.603216] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:26:52.219 [2024-12-05 19:17:09.603225] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:26:52.219 [2024-12-05 19:17:09.603234] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:26:52.219 [2024-12-05 19:17:09.603243] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:26:52.219 [2024-12-05 19:17:09.603275] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:26:52.219 [2024-12-05 19:17:09.603284] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:26:52.219 [2024-12-05 19:17:09.603293] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:26:52.219 [2024-12-05 19:17:09.603301] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:26:52.219 [2024-12-05 19:17:09.603312] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:26:52.219 [2024-12-05 19:17:09.603320] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:26:52.219 [2024-12-05 19:17:09.603328] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:52.219 [2024-12-05 19:17:09.603337] ftl_layout.c: 
130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:26:52.219 [2024-12-05 19:17:09.603346] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:26:52.219 [2024-12-05 19:17:09.603354] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:52.219 [2024-12-05 19:17:09.603362] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:26:52.219 [2024-12-05 19:17:09.603370] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:26:52.219 [2024-12-05 19:17:09.603378] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:52.219 [2024-12-05 19:17:09.603386] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:26:52.219 [2024-12-05 19:17:09.603395] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:26:52.219 [2024-12-05 19:17:09.603403] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:52.219 [2024-12-05 19:17:09.603410] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:26:52.219 [2024-12-05 19:17:09.603418] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:26:52.219 [2024-12-05 19:17:09.603426] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:52.219 [2024-12-05 19:17:09.603433] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:26:52.219 [2024-12-05 19:17:09.603445] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:26:52.219 [2024-12-05 19:17:09.603453] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:52.219 [2024-12-05 19:17:09.603461] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:26:52.219 [2024-12-05 19:17:09.603469] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:26:52.219 [2024-12-05 19:17:09.603477] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:26:52.219 [2024-12-05 19:17:09.603485] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:26:52.219 [2024-12-05 19:17:09.603493] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:26:52.219 [2024-12-05 19:17:09.603501] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:26:52.219 [2024-12-05 19:17:09.603509] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:26:52.219 [2024-12-05 19:17:09.603516] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:26:52.219 [2024-12-05 19:17:09.603524] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:52.219 [2024-12-05 19:17:09.603532] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:26:52.219 [2024-12-05 19:17:09.603540] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:26:52.219 [2024-12-05 19:17:09.603548] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:52.219 [2024-12-05 19:17:09.603556] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:26:52.219 [2024-12-05 19:17:09.603567] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:26:52.219 [2024-12-05 19:17:09.603577] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:26:52.219 [2024-12-05 19:17:09.603584] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:52.219 [2024-12-05 19:17:09.603592] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:26:52.219 
[2024-12-05 19:17:09.603600] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:26:52.219 [2024-12-05 19:17:09.603609] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:26:52.219 [2024-12-05 19:17:09.603616] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:26:52.219 [2024-12-05 19:17:09.603623] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:26:52.219 [2024-12-05 19:17:09.603630] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:26:52.219 [2024-12-05 19:17:09.603638] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:26:52.219 [2024-12-05 19:17:09.603649] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:52.219 [2024-12-05 19:17:09.603658] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:26:52.220 [2024-12-05 19:17:09.603666] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:26:52.220 [2024-12-05 19:17:09.603673] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:26:52.220 [2024-12-05 19:17:09.603680] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:26:52.220 [2024-12-05 19:17:09.603687] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:26:52.220 [2024-12-05 19:17:09.603695] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:26:52.220 [2024-12-05 19:17:09.603704] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:26:52.220 [2024-12-05 19:17:09.603711] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:26:52.220 [2024-12-05 19:17:09.603719] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:26:52.220 [2024-12-05 19:17:09.603732] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:26:52.220 [2024-12-05 19:17:09.603739] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:26:52.220 [2024-12-05 19:17:09.603747] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:26:52.220 [2024-12-05 19:17:09.603754] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:26:52.220 [2024-12-05 19:17:09.603762] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:26:52.220 [2024-12-05 19:17:09.603769] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:26:52.220 [2024-12-05 19:17:09.603778] upgrade/ftl_sb_v5.c: 
430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:52.220 [2024-12-05 19:17:09.603787] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:26:52.220 [2024-12-05 19:17:09.603796] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:26:52.220 [2024-12-05 19:17:09.603803] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:26:52.220 [2024-12-05 19:17:09.603810] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:26:52.220 [2024-12-05 19:17:09.603818] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:52.220 [2024-12-05 19:17:09.603825] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:26:52.220 [2024-12-05 19:17:09.603835] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.723 ms 00:26:52.220 [2024-12-05 19:17:09.603845] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:52.220 [2024-12-05 19:17:09.617711] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:52.220 [2024-12-05 19:17:09.617869] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:26:52.220 [2024-12-05 19:17:09.617941] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.823 ms 00:26:52.220 [2024-12-05 19:17:09.617973] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:52.220 [2024-12-05 19:17:09.618078] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:52.220 [2024-12-05 19:17:09.618101] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:26:52.220 [2024-12-05 19:17:09.618120] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:26:52.220 [2024-12-05 19:17:09.618139] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:52.220 [2024-12-05 19:17:09.646705] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:52.220 [2024-12-05 19:17:09.646903] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:26:52.220 [2024-12-05 19:17:09.646988] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.492 ms 00:26:52.220 [2024-12-05 19:17:09.647016] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:52.220 [2024-12-05 19:17:09.647085] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:52.220 [2024-12-05 19:17:09.647112] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:26:52.220 [2024-12-05 19:17:09.647135] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:26:52.220 [2024-12-05 19:17:09.647157] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:52.220 [2024-12-05 19:17:09.647758] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:52.220 [2024-12-05 19:17:09.647900] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:26:52.220 [2024-12-05 19:17:09.647960] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.497 ms 00:26:52.220 [2024-12-05 19:17:09.647986] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:52.220 [2024-12-05 
19:17:09.648167] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:52.220 [2024-12-05 19:17:09.648199] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:26:52.220 [2024-12-05 19:17:09.648221] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.135 ms 00:26:52.220 [2024-12-05 19:17:09.648243] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:52.220 [2024-12-05 19:17:09.656175] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:52.220 [2024-12-05 19:17:09.656337] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:26:52.220 [2024-12-05 19:17:09.656401] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.870 ms 00:26:52.220 [2024-12-05 19:17:09.656799] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:52.220 [2024-12-05 19:17:09.660691] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:26:52.220 [2024-12-05 19:17:09.660868] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:26:52.220 [2024-12-05 19:17:09.660957] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:52.220 [2024-12-05 19:17:09.660980] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:26:52.220 [2024-12-05 19:17:09.661001] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.972 ms 00:26:52.220 [2024-12-05 19:17:09.661021] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:52.220 [2024-12-05 19:17:09.676824] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:52.220 [2024-12-05 19:17:09.676982] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:26:52.220 [2024-12-05 19:17:09.677011] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.679 ms 00:26:52.220 [2024-12-05 19:17:09.677025] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:52.220 [2024-12-05 19:17:09.679679] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:52.220 [2024-12-05 19:17:09.679725] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:26:52.220 [2024-12-05 19:17:09.679735] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.612 ms 00:26:52.220 [2024-12-05 19:17:09.679743] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:52.220 [2024-12-05 19:17:09.682473] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:52.220 [2024-12-05 19:17:09.682516] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:26:52.220 [2024-12-05 19:17:09.682527] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.686 ms 00:26:52.220 [2024-12-05 19:17:09.682534] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:52.220 [2024-12-05 19:17:09.682871] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:52.220 [2024-12-05 19:17:09.682883] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:26:52.220 [2024-12-05 19:17:09.682892] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.260 ms 00:26:52.221 [2024-12-05 19:17:09.682903] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:52.221 [2024-12-05 19:17:09.704995] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:26:52.221 [2024-12-05 19:17:09.705213] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:26:52.221 [2024-12-05 19:17:09.705244] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.070 ms 00:26:52.221 [2024-12-05 19:17:09.705276] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:52.221 [2024-12-05 19:17:09.713410] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:26:52.221 [2024-12-05 19:17:09.716452] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:52.221 [2024-12-05 19:17:09.716494] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:26:52.221 [2024-12-05 19:17:09.716507] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.022 ms 00:26:52.221 [2024-12-05 19:17:09.716516] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:52.221 [2024-12-05 19:17:09.716599] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:52.221 [2024-12-05 19:17:09.716611] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:26:52.221 [2024-12-05 19:17:09.716620] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:26:52.221 [2024-12-05 19:17:09.716635] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:52.221 [2024-12-05 19:17:09.718601] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:52.221 [2024-12-05 19:17:09.718642] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:26:52.221 [2024-12-05 19:17:09.718653] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.928 ms 00:26:52.221 [2024-12-05 19:17:09.718661] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:52.221 [2024-12-05 19:17:09.718695] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:52.221 [2024-12-05 19:17:09.718704] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:26:52.221 [2024-12-05 19:17:09.718716] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:26:52.221 [2024-12-05 19:17:09.718724] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:52.221 [2024-12-05 19:17:09.718769] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:26:52.221 [2024-12-05 19:17:09.718781] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:52.221 [2024-12-05 19:17:09.718792] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:26:52.221 [2024-12-05 19:17:09.718801] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:26:52.221 [2024-12-05 19:17:09.718809] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:52.221 [2024-12-05 19:17:09.724567] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:52.221 [2024-12-05 19:17:09.724615] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:26:52.221 [2024-12-05 19:17:09.724627] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.737 ms 00:26:52.221 [2024-12-05 19:17:09.724635] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:52.221 [2024-12-05 19:17:09.724715] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:52.221 [2024-12-05 19:17:09.724733] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize 
initialization 00:26:52.221 [2024-12-05 19:17:09.724745] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:26:52.221 [2024-12-05 19:17:09.724754] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:52.221 [2024-12-05 19:17:09.725977] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 140.686 ms, result 0 00:26:53.609  [2024-12-05T19:17:12.113Z] Copying: 1032/1048576 [kB] (1032 kBps) [2024-12-05T19:17:13.059Z] Copying: 4840/1048576 [kB] (3808 kBps) [2024-12-05T19:17:14.004Z] Copying: 20/1024 [MB] (15 MBps) [2024-12-05T19:17:14.942Z] Copying: 36/1024 [MB] (15 MBps) [2024-12-05T19:17:16.328Z] Copying: 72/1024 [MB] (35 MBps) [2024-12-05T19:17:17.271Z] Copying: 88/1024 [MB] (16 MBps) [2024-12-05T19:17:18.214Z] Copying: 103/1024 [MB] (15 MBps) [2024-12-05T19:17:19.159Z] Copying: 123/1024 [MB] (19 MBps) [2024-12-05T19:17:20.103Z] Copying: 156/1024 [MB] (33 MBps) [2024-12-05T19:17:21.094Z] Copying: 184/1024 [MB] (27 MBps) [2024-12-05T19:17:22.085Z] Copying: 214/1024 [MB] (30 MBps) [2024-12-05T19:17:23.022Z] Copying: 241/1024 [MB] (26 MBps) [2024-12-05T19:17:23.966Z] Copying: 279/1024 [MB] (38 MBps) [2024-12-05T19:17:25.349Z] Copying: 315/1024 [MB] (35 MBps) [2024-12-05T19:17:25.918Z] Copying: 345/1024 [MB] (29 MBps) [2024-12-05T19:17:27.300Z] Copying: 375/1024 [MB] (29 MBps) [2024-12-05T19:17:28.241Z] Copying: 409/1024 [MB] (34 MBps) [2024-12-05T19:17:29.182Z] Copying: 439/1024 [MB] (29 MBps) [2024-12-05T19:17:30.126Z] Copying: 467/1024 [MB] (28 MBps) [2024-12-05T19:17:31.071Z] Copying: 492/1024 [MB] (25 MBps) [2024-12-05T19:17:32.015Z] Copying: 521/1024 [MB] (28 MBps) [2024-12-05T19:17:32.959Z] Copying: 555/1024 [MB] (34 MBps) [2024-12-05T19:17:34.343Z] Copying: 587/1024 [MB] (31 MBps) [2024-12-05T19:17:34.916Z] Copying: 616/1024 [MB] (28 MBps) [2024-12-05T19:17:36.302Z] Copying: 648/1024 [MB] (32 MBps) [2024-12-05T19:17:37.247Z] Copying: 676/1024 [MB] (28 MBps) [2024-12-05T19:17:38.193Z] Copying: 707/1024 [MB] (30 MBps) [2024-12-05T19:17:39.139Z] Copying: 737/1024 [MB] (30 MBps) [2024-12-05T19:17:40.085Z] Copying: 759/1024 [MB] (21 MBps) [2024-12-05T19:17:41.026Z] Copying: 775/1024 [MB] (16 MBps) [2024-12-05T19:17:41.969Z] Copying: 800/1024 [MB] (24 MBps) [2024-12-05T19:17:42.912Z] Copying: 817/1024 [MB] (17 MBps) [2024-12-05T19:17:44.296Z] Copying: 835/1024 [MB] (17 MBps) [2024-12-05T19:17:45.239Z] Copying: 853/1024 [MB] (17 MBps) [2024-12-05T19:17:46.180Z] Copying: 869/1024 [MB] (15 MBps) [2024-12-05T19:17:47.118Z] Copying: 886/1024 [MB] (17 MBps) [2024-12-05T19:17:48.057Z] Copying: 903/1024 [MB] (17 MBps) [2024-12-05T19:17:49.002Z] Copying: 921/1024 [MB] (17 MBps) [2024-12-05T19:17:50.013Z] Copying: 939/1024 [MB] (17 MBps) [2024-12-05T19:17:50.959Z] Copying: 957/1024 [MB] (17 MBps) [2024-12-05T19:17:52.348Z] Copying: 974/1024 [MB] (17 MBps) [2024-12-05T19:17:52.921Z] Copying: 991/1024 [MB] (16 MBps) [2024-12-05T19:17:53.865Z] Copying: 1008/1024 [MB] (16 MBps) [2024-12-05T19:17:54.128Z] Copying: 1024/1024 [MB] (average 23 MBps)[2024-12-05 19:17:53.994167] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:36.569 [2024-12-05 19:17:53.994270] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:27:36.569 [2024-12-05 19:17:53.994289] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:27:36.569 [2024-12-05 19:17:53.994300] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:36.569 [2024-12-05 
19:17:53.994326] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:27:36.569 [2024-12-05 19:17:53.995097] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:36.569 [2024-12-05 19:17:53.995132] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:27:36.569 [2024-12-05 19:17:53.995144] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.747 ms 00:27:36.569 [2024-12-05 19:17:53.995152] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:36.569 [2024-12-05 19:17:53.995506] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:36.569 [2024-12-05 19:17:53.995519] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:27:36.569 [2024-12-05 19:17:53.995529] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.330 ms 00:27:36.569 [2024-12-05 19:17:53.995537] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:36.569 [2024-12-05 19:17:54.008914] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:36.569 [2024-12-05 19:17:54.009153] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:27:36.569 [2024-12-05 19:17:54.009179] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.358 ms 00:27:36.569 [2024-12-05 19:17:54.009188] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:36.569 [2024-12-05 19:17:54.015684] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:36.569 [2024-12-05 19:17:54.015817] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:27:36.569 [2024-12-05 19:17:54.015882] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.344 ms 00:27:36.569 [2024-12-05 19:17:54.015923] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:36.569 [2024-12-05 19:17:54.018916] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:36.569 [2024-12-05 19:17:54.019067] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:27:36.569 [2024-12-05 19:17:54.019127] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.913 ms 00:27:36.569 [2024-12-05 19:17:54.019150] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:36.569 [2024-12-05 19:17:54.024240] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:36.569 [2024-12-05 19:17:54.024418] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:27:36.569 [2024-12-05 19:17:54.024478] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.043 ms 00:27:36.569 [2024-12-05 19:17:54.024502] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:36.569 [2024-12-05 19:17:54.028896] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:36.569 [2024-12-05 19:17:54.029045] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:27:36.569 [2024-12-05 19:17:54.029106] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.343 ms 00:27:36.569 [2024-12-05 19:17:54.029130] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:36.569 [2024-12-05 19:17:54.032499] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:36.569 [2024-12-05 19:17:54.032644] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:27:36.569 [2024-12-05 19:17:54.032701] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.335 ms 00:27:36.569 [2024-12-05 19:17:54.032726] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:36.569 [2024-12-05 19:17:54.035097] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:36.569 [2024-12-05 19:17:54.035247] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:27:36.569 [2024-12-05 19:17:54.035319] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.235 ms 00:27:36.569 [2024-12-05 19:17:54.035342] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:36.569 [2024-12-05 19:17:54.037831] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:36.569 [2024-12-05 19:17:54.038040] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:27:36.569 [2024-12-05 19:17:54.038123] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.170 ms 00:27:36.569 [2024-12-05 19:17:54.038148] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:36.569 [2024-12-05 19:17:54.040742] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:36.569 [2024-12-05 19:17:54.040924] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:27:36.569 [2024-12-05 19:17:54.041016] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.237 ms 00:27:36.569 [2024-12-05 19:17:54.041042] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:36.569 [2024-12-05 19:17:54.041088] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:27:36.569 [2024-12-05 19:17:54.041120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:27:36.569 [2024-12-05 19:17:54.041159] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 1536 / 261120 wr_cnt: 1 state: open 00:27:36.569 [2024-12-05 19:17:54.041189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:27:36.569 [2024-12-05 19:17:54.041220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:27:36.569 [2024-12-05 19:17:54.041344] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:27:36.569 [2024-12-05 19:17:54.041376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:27:36.570 [2024-12-05 19:17:54.041405] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:27:36.570 [2024-12-05 19:17:54.041434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:27:36.570 [2024-12-05 19:17:54.041463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:27:36.570 [2024-12-05 19:17:54.041505] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:27:36.570 [2024-12-05 19:17:54.041723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:27:36.570 [2024-12-05 19:17:54.041761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:27:36.570 [2024-12-05 19:17:54.041800] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:27:36.570 [2024-12-05 19:17:54.041829] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:27:36.570 [2024-12-05 19:17:54.041858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:27:36.570 [2024-12-05 19:17:54.041928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:27:36.570 [2024-12-05 19:17:54.041976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:27:36.570 [2024-12-05 19:17:54.042006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:27:36.570 [2024-12-05 19:17:54.042035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:27:36.570 [2024-12-05 19:17:54.042065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:27:36.570 [2024-12-05 19:17:54.042120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:27:36.570 [2024-12-05 19:17:54.042153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:27:36.570 [2024-12-05 19:17:54.042182] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:27:36.570 [2024-12-05 19:17:54.042211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:27:36.570 [2024-12-05 19:17:54.042239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:27:36.570 [2024-12-05 19:17:54.042286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:27:36.570 [2024-12-05 19:17:54.042347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:27:36.570 [2024-12-05 19:17:54.042378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:27:36.570 [2024-12-05 19:17:54.042407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:27:36.570 [2024-12-05 19:17:54.042436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:27:36.570 [2024-12-05 19:17:54.042465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:27:36.570 [2024-12-05 19:17:54.042494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:27:36.570 [2024-12-05 19:17:54.042558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:27:36.570 [2024-12-05 19:17:54.042588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:27:36.570 [2024-12-05 19:17:54.042617] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:27:36.570 [2024-12-05 19:17:54.042647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:27:36.570 [2024-12-05 19:17:54.042829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:27:36.570 [2024-12-05 19:17:54.042867] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:27:36.570 
[2024-12-05 19:17:54.042986] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:27:36.570 [2024-12-05 19:17:54.042996] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:27:36.570 [2024-12-05 19:17:54.043005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:27:36.570 [2024-12-05 19:17:54.043013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:27:36.570 [2024-12-05 19:17:54.043021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:27:36.570 [2024-12-05 19:17:54.043043] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:27:36.570 [2024-12-05 19:17:54.043052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:27:36.570 [2024-12-05 19:17:54.043060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:27:36.570 [2024-12-05 19:17:54.043068] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:27:36.570 [2024-12-05 19:17:54.043076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:27:36.570 [2024-12-05 19:17:54.043085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:27:36.570 [2024-12-05 19:17:54.043093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:27:36.570 [2024-12-05 19:17:54.043102] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:27:36.570 [2024-12-05 19:17:54.043110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:27:36.570 [2024-12-05 19:17:54.043118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:27:36.570 [2024-12-05 19:17:54.043126] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:27:36.570 [2024-12-05 19:17:54.043133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:27:36.570 [2024-12-05 19:17:54.043141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:27:36.570 [2024-12-05 19:17:54.043149] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:27:36.570 [2024-12-05 19:17:54.043157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:27:36.570 [2024-12-05 19:17:54.043164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:27:36.570 [2024-12-05 19:17:54.043172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:27:36.570 [2024-12-05 19:17:54.043180] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:27:36.570 [2024-12-05 19:17:54.043188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:27:36.570 [2024-12-05 19:17:54.043195] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 
state: free 00:27:36.570 [2024-12-05 19:17:54.043203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:27:36.570 [2024-12-05 19:17:54.043210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:27:36.570 [2024-12-05 19:17:54.043218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:27:36.570 [2024-12-05 19:17:54.043226] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:27:36.570 [2024-12-05 19:17:54.043233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:27:36.570 [2024-12-05 19:17:54.043242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:27:36.570 [2024-12-05 19:17:54.043264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:27:36.570 [2024-12-05 19:17:54.043272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:27:36.570 [2024-12-05 19:17:54.043280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:27:36.570 [2024-12-05 19:17:54.043289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:27:36.570 [2024-12-05 19:17:54.043296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:27:36.570 [2024-12-05 19:17:54.043304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:27:36.570 [2024-12-05 19:17:54.043312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:27:36.570 [2024-12-05 19:17:54.043319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:27:36.570 [2024-12-05 19:17:54.043328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:27:36.570 [2024-12-05 19:17:54.043335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:27:36.570 [2024-12-05 19:17:54.043343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:27:36.570 [2024-12-05 19:17:54.043365] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:27:36.570 [2024-12-05 19:17:54.043372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:27:36.570 [2024-12-05 19:17:54.043380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:27:36.570 [2024-12-05 19:17:54.043388] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:27:36.570 [2024-12-05 19:17:54.043396] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:27:36.570 [2024-12-05 19:17:54.043404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:27:36.571 [2024-12-05 19:17:54.043411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:27:36.571 [2024-12-05 19:17:54.043419] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 
0 / 261120 wr_cnt: 0 state: free 00:27:36.571 [2024-12-05 19:17:54.043427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:27:36.571 [2024-12-05 19:17:54.043435] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:27:36.571 [2024-12-05 19:17:54.043442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:27:36.571 [2024-12-05 19:17:54.043450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:27:36.571 [2024-12-05 19:17:54.043457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:27:36.571 [2024-12-05 19:17:54.043464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:27:36.571 [2024-12-05 19:17:54.043473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:27:36.571 [2024-12-05 19:17:54.043480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:27:36.571 [2024-12-05 19:17:54.043488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:27:36.571 [2024-12-05 19:17:54.043495] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:27:36.571 [2024-12-05 19:17:54.043503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:27:36.571 [2024-12-05 19:17:54.043511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:27:36.571 [2024-12-05 19:17:54.043528] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:27:36.571 [2024-12-05 19:17:54.043542] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: a560a9df-7547-43f8-95d1-de6f5f67663b 00:27:36.571 [2024-12-05 19:17:54.043551] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 262656 00:27:36.571 [2024-12-05 19:17:54.043559] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 145344 00:27:36.571 [2024-12-05 19:17:54.043567] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 143360 00:27:36.571 [2024-12-05 19:17:54.043576] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0138 00:27:36.571 [2024-12-05 19:17:54.043584] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:27:36.571 [2024-12-05 19:17:54.043592] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:27:36.571 [2024-12-05 19:17:54.043601] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:27:36.571 [2024-12-05 19:17:54.043607] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:27:36.571 [2024-12-05 19:17:54.043614] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:27:36.571 [2024-12-05 19:17:54.043624] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:36.571 [2024-12-05 19:17:54.043643] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:27:36.571 [2024-12-05 19:17:54.043653] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.537 ms 00:27:36.571 [2024-12-05 19:17:54.043668] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:36.571 [2024-12-05 19:17:54.046782] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl0] Action 00:27:36.571 [2024-12-05 19:17:54.046816] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:27:36.571 [2024-12-05 19:17:54.046828] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.087 ms 00:27:36.571 [2024-12-05 19:17:54.046837] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:36.571 [2024-12-05 19:17:54.046997] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:36.571 [2024-12-05 19:17:54.047013] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:27:36.571 [2024-12-05 19:17:54.047024] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.138 ms 00:27:36.571 [2024-12-05 19:17:54.047035] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:36.571 [2024-12-05 19:17:54.057062] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:36.571 [2024-12-05 19:17:54.057101] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:27:36.571 [2024-12-05 19:17:54.057113] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:36.571 [2024-12-05 19:17:54.057123] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:36.571 [2024-12-05 19:17:54.057190] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:36.571 [2024-12-05 19:17:54.057207] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:27:36.571 [2024-12-05 19:17:54.057215] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:36.571 [2024-12-05 19:17:54.057223] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:36.571 [2024-12-05 19:17:54.057349] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:36.571 [2024-12-05 19:17:54.057361] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:27:36.571 [2024-12-05 19:17:54.057370] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:36.571 [2024-12-05 19:17:54.057378] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:36.571 [2024-12-05 19:17:54.057395] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:36.571 [2024-12-05 19:17:54.057403] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:27:36.571 [2024-12-05 19:17:54.057418] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:36.571 [2024-12-05 19:17:54.057426] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:36.571 [2024-12-05 19:17:54.077436] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:36.571 [2024-12-05 19:17:54.077485] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:27:36.571 [2024-12-05 19:17:54.077498] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:36.571 [2024-12-05 19:17:54.077509] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:36.571 [2024-12-05 19:17:54.093461] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:36.571 [2024-12-05 19:17:54.093528] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:27:36.571 [2024-12-05 19:17:54.093541] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:36.571 [2024-12-05 19:17:54.093550] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:27:36.571 [2024-12-05 19:17:54.093621] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:36.571 [2024-12-05 19:17:54.093632] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:27:36.571 [2024-12-05 19:17:54.093642] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:36.571 [2024-12-05 19:17:54.093650] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:36.571 [2024-12-05 19:17:54.093690] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:36.571 [2024-12-05 19:17:54.093700] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:27:36.571 [2024-12-05 19:17:54.093709] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:36.571 [2024-12-05 19:17:54.093722] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:36.571 [2024-12-05 19:17:54.093808] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:36.571 [2024-12-05 19:17:54.093825] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:27:36.571 [2024-12-05 19:17:54.093835] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:36.571 [2024-12-05 19:17:54.093844] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:36.571 [2024-12-05 19:17:54.093881] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:36.571 [2024-12-05 19:17:54.093892] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:27:36.571 [2024-12-05 19:17:54.093902] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:36.571 [2024-12-05 19:17:54.093911] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:36.571 [2024-12-05 19:17:54.093989] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:36.571 [2024-12-05 19:17:54.094000] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:27:36.571 [2024-12-05 19:17:54.094011] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:36.571 [2024-12-05 19:17:54.094019] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:36.571 [2024-12-05 19:17:54.094112] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:36.571 [2024-12-05 19:17:54.094125] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:27:36.571 [2024-12-05 19:17:54.094141] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:36.571 [2024-12-05 19:17:54.094153] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:36.571 [2024-12-05 19:17:54.094382] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 100.166 ms, result 0 00:27:36.833 00:27:36.833 00:27:37.095 19:17:54 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@94 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:27:39.647 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:27:39.647 19:17:56 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@95 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --count=262144 --skip=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:27:39.647 [2024-12-05 19:17:56.638800] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 22.11.4 initialization... 
00:27:39.647 [2024-12-05 19:17:56.638896] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92737 ] 00:27:39.647 [2024-12-05 19:17:56.788754] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:39.647 [2024-12-05 19:17:56.815506] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:39.647 [2024-12-05 19:17:56.926854] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:27:39.647 [2024-12-05 19:17:56.926939] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:27:39.647 [2024-12-05 19:17:57.087363] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:39.647 [2024-12-05 19:17:57.087588] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:27:39.647 [2024-12-05 19:17:57.087611] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:27:39.647 [2024-12-05 19:17:57.087622] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:39.647 [2024-12-05 19:17:57.087686] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:39.647 [2024-12-05 19:17:57.087701] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:27:39.647 [2024-12-05 19:17:57.087710] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:27:39.647 [2024-12-05 19:17:57.087723] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:39.647 [2024-12-05 19:17:57.087759] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:27:39.647 [2024-12-05 19:17:57.088008] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:27:39.647 [2024-12-05 19:17:57.088027] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:39.647 [2024-12-05 19:17:57.088039] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:27:39.647 [2024-12-05 19:17:57.088050] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.277 ms 00:27:39.647 [2024-12-05 19:17:57.088057] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:39.647 [2024-12-05 19:17:57.089612] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:27:39.647 [2024-12-05 19:17:57.093174] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:39.647 [2024-12-05 19:17:57.093218] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:27:39.647 [2024-12-05 19:17:57.093229] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.564 ms 00:27:39.647 [2024-12-05 19:17:57.093243] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:39.647 [2024-12-05 19:17:57.093323] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:39.647 [2024-12-05 19:17:57.093333] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:27:39.647 [2024-12-05 19:17:57.093341] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:27:39.647 [2024-12-05 19:17:57.093349] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:39.647 [2024-12-05 19:17:57.100951] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:27:39.647 [2024-12-05 19:17:57.100984] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:27:39.647 [2024-12-05 19:17:57.101000] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.559 ms 00:27:39.647 [2024-12-05 19:17:57.101008] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:39.647 [2024-12-05 19:17:57.101098] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:39.647 [2024-12-05 19:17:57.101107] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:27:39.647 [2024-12-05 19:17:57.101118] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.072 ms 00:27:39.647 [2024-12-05 19:17:57.101126] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:39.647 [2024-12-05 19:17:57.101167] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:39.647 [2024-12-05 19:17:57.101178] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:27:39.647 [2024-12-05 19:17:57.101186] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:27:39.647 [2024-12-05 19:17:57.101196] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:39.647 [2024-12-05 19:17:57.101225] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:27:39.647 [2024-12-05 19:17:57.103140] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:39.647 [2024-12-05 19:17:57.103289] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:27:39.647 [2024-12-05 19:17:57.103305] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.920 ms 00:27:39.647 [2024-12-05 19:17:57.103313] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:39.647 [2024-12-05 19:17:57.103354] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:39.647 [2024-12-05 19:17:57.103362] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:27:39.648 [2024-12-05 19:17:57.103371] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:27:39.648 [2024-12-05 19:17:57.103385] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:39.648 [2024-12-05 19:17:57.103414] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:27:39.648 [2024-12-05 19:17:57.103438] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:27:39.648 [2024-12-05 19:17:57.103482] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:27:39.648 [2024-12-05 19:17:57.103498] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:27:39.648 [2024-12-05 19:17:57.103607] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:27:39.648 [2024-12-05 19:17:57.103618] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:27:39.648 [2024-12-05 19:17:57.103632] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:27:39.648 [2024-12-05 19:17:57.103647] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:27:39.648 [2024-12-05 19:17:57.103656] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:27:39.648 [2024-12-05 19:17:57.103664] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:27:39.648 [2024-12-05 19:17:57.103671] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:27:39.648 [2024-12-05 19:17:57.103679] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:27:39.648 [2024-12-05 19:17:57.103691] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:27:39.648 [2024-12-05 19:17:57.103702] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:39.648 [2024-12-05 19:17:57.103709] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:27:39.648 [2024-12-05 19:17:57.103720] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.292 ms 00:27:39.648 [2024-12-05 19:17:57.103731] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:39.648 [2024-12-05 19:17:57.103817] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:39.648 [2024-12-05 19:17:57.103826] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:27:39.648 [2024-12-05 19:17:57.103838] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:27:39.648 [2024-12-05 19:17:57.103846] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:39.648 [2024-12-05 19:17:57.103954] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:27:39.648 [2024-12-05 19:17:57.103966] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:27:39.648 [2024-12-05 19:17:57.103975] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:27:39.648 [2024-12-05 19:17:57.103988] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:39.648 [2024-12-05 19:17:57.103998] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:27:39.648 [2024-12-05 19:17:57.104007] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:27:39.648 [2024-12-05 19:17:57.104014] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:27:39.648 [2024-12-05 19:17:57.104022] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:27:39.648 [2024-12-05 19:17:57.104033] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:27:39.648 [2024-12-05 19:17:57.104044] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:27:39.648 [2024-12-05 19:17:57.104052] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:27:39.648 [2024-12-05 19:17:57.104060] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:27:39.648 [2024-12-05 19:17:57.104067] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:27:39.648 [2024-12-05 19:17:57.104075] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:27:39.648 [2024-12-05 19:17:57.104084] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:27:39.648 [2024-12-05 19:17:57.104092] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:39.648 [2024-12-05 19:17:57.104100] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:27:39.648 [2024-12-05 19:17:57.104112] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:27:39.648 [2024-12-05 19:17:57.104120] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:39.648 [2024-12-05 19:17:57.104129] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:27:39.648 [2024-12-05 19:17:57.104136] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:27:39.648 [2024-12-05 19:17:57.104143] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:39.648 [2024-12-05 19:17:57.104151] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:27:39.648 [2024-12-05 19:17:57.104158] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:27:39.648 [2024-12-05 19:17:57.104165] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:39.648 [2024-12-05 19:17:57.104178] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:27:39.648 [2024-12-05 19:17:57.104187] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:27:39.648 [2024-12-05 19:17:57.104195] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:39.648 [2024-12-05 19:17:57.104202] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:27:39.648 [2024-12-05 19:17:57.104210] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:27:39.648 [2024-12-05 19:17:57.104217] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:39.648 [2024-12-05 19:17:57.104224] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:27:39.648 [2024-12-05 19:17:57.104231] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:27:39.648 [2024-12-05 19:17:57.104237] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:27:39.648 [2024-12-05 19:17:57.104244] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:27:39.648 [2024-12-05 19:17:57.104271] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:27:39.648 [2024-12-05 19:17:57.104278] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:27:39.648 [2024-12-05 19:17:57.104285] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:27:39.648 [2024-12-05 19:17:57.104293] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:27:39.648 [2024-12-05 19:17:57.104300] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:39.648 [2024-12-05 19:17:57.104306] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:27:39.648 [2024-12-05 19:17:57.104315] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:27:39.648 [2024-12-05 19:17:57.104321] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:39.648 [2024-12-05 19:17:57.104329] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:27:39.648 [2024-12-05 19:17:57.104340] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:27:39.648 [2024-12-05 19:17:57.104348] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:27:39.648 [2024-12-05 19:17:57.104355] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:39.648 [2024-12-05 19:17:57.104362] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:27:39.648 [2024-12-05 19:17:57.104369] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:27:39.648 [2024-12-05 19:17:57.104376] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:27:39.648 
[2024-12-05 19:17:57.104383] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:27:39.648 [2024-12-05 19:17:57.104390] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:27:39.648 [2024-12-05 19:17:57.104397] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:27:39.648 [2024-12-05 19:17:57.104405] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:27:39.648 [2024-12-05 19:17:57.104415] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:39.648 [2024-12-05 19:17:57.104427] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:27:39.648 [2024-12-05 19:17:57.104435] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:27:39.648 [2024-12-05 19:17:57.104444] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:27:39.648 [2024-12-05 19:17:57.104451] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:27:39.648 [2024-12-05 19:17:57.104458] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:27:39.648 [2024-12-05 19:17:57.104466] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:27:39.648 [2024-12-05 19:17:57.104472] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:27:39.648 [2024-12-05 19:17:57.104479] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:27:39.648 [2024-12-05 19:17:57.104487] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:27:39.648 [2024-12-05 19:17:57.104498] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:27:39.648 [2024-12-05 19:17:57.104506] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:27:39.648 [2024-12-05 19:17:57.104513] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:27:39.648 [2024-12-05 19:17:57.104520] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:27:39.648 [2024-12-05 19:17:57.104527] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:27:39.648 [2024-12-05 19:17:57.104534] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:27:39.648 [2024-12-05 19:17:57.104543] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:39.649 [2024-12-05 19:17:57.104552] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:27:39.649 [2024-12-05 19:17:57.104560] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:27:39.649 [2024-12-05 19:17:57.104568] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:27:39.649 [2024-12-05 19:17:57.104576] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:27:39.649 [2024-12-05 19:17:57.104584] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:39.649 [2024-12-05 19:17:57.104592] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:27:39.649 [2024-12-05 19:17:57.104599] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.702 ms 00:27:39.649 [2024-12-05 19:17:57.104609] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:39.649 [2024-12-05 19:17:57.118229] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:39.649 [2024-12-05 19:17:57.118276] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:27:39.649 [2024-12-05 19:17:57.118286] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.573 ms 00:27:39.649 [2024-12-05 19:17:57.118295] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:39.649 [2024-12-05 19:17:57.118381] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:39.649 [2024-12-05 19:17:57.118390] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:27:39.649 [2024-12-05 19:17:57.118398] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:27:39.649 [2024-12-05 19:17:57.118411] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:39.649 [2024-12-05 19:17:57.138520] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:39.649 [2024-12-05 19:17:57.138568] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:27:39.649 [2024-12-05 19:17:57.138583] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.056 ms 00:27:39.649 [2024-12-05 19:17:57.138594] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:39.649 [2024-12-05 19:17:57.138644] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:39.649 [2024-12-05 19:17:57.138664] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:27:39.649 [2024-12-05 19:17:57.138678] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:27:39.649 [2024-12-05 19:17:57.138687] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:39.649 [2024-12-05 19:17:57.139237] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:39.649 [2024-12-05 19:17:57.139288] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:27:39.649 [2024-12-05 19:17:57.139302] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.483 ms 00:27:39.649 [2024-12-05 19:17:57.139313] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:39.649 [2024-12-05 19:17:57.139490] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:39.649 [2024-12-05 19:17:57.139505] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:27:39.649 [2024-12-05 19:17:57.139516] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.148 ms 00:27:39.649 [2024-12-05 19:17:57.139527] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:39.649 [2024-12-05 19:17:57.147754] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:39.649 [2024-12-05 19:17:57.147949] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:27:39.649 [2024-12-05 19:17:57.147970] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.203 ms 00:27:39.649 [2024-12-05 19:17:57.147979] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:39.649 [2024-12-05 19:17:57.151784] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:27:39.649 [2024-12-05 19:17:57.151827] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:27:39.649 [2024-12-05 19:17:57.151848] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:39.649 [2024-12-05 19:17:57.151857] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:27:39.649 [2024-12-05 19:17:57.151866] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.759 ms 00:27:39.649 [2024-12-05 19:17:57.151874] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:39.649 [2024-12-05 19:17:57.167244] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:39.649 [2024-12-05 19:17:57.167288] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:27:39.649 [2024-12-05 19:17:57.167300] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.324 ms 00:27:39.649 [2024-12-05 19:17:57.167308] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:39.649 [2024-12-05 19:17:57.169870] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:39.649 [2024-12-05 19:17:57.169909] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:27:39.649 [2024-12-05 19:17:57.169919] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.518 ms 00:27:39.649 [2024-12-05 19:17:57.169927] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:39.649 [2024-12-05 19:17:57.172335] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:39.649 [2024-12-05 19:17:57.172371] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:27:39.649 [2024-12-05 19:17:57.172381] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.358 ms 00:27:39.649 [2024-12-05 19:17:57.172398] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:39.649 [2024-12-05 19:17:57.172725] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:39.649 [2024-12-05 19:17:57.172738] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:27:39.649 [2024-12-05 19:17:57.172747] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.259 ms 00:27:39.649 [2024-12-05 19:17:57.172754] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:39.649 [2024-12-05 19:17:57.198008] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:39.649 [2024-12-05 19:17:57.198052] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:27:39.649 [2024-12-05 19:17:57.198064] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
25.228 ms 00:27:39.649 [2024-12-05 19:17:57.198074] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:39.909 [2024-12-05 19:17:57.206497] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:27:39.909 [2024-12-05 19:17:57.209942] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:39.909 [2024-12-05 19:17:57.210150] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:27:39.909 [2024-12-05 19:17:57.210179] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.825 ms 00:27:39.909 [2024-12-05 19:17:57.210188] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:39.909 [2024-12-05 19:17:57.210286] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:39.909 [2024-12-05 19:17:57.210300] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:27:39.909 [2024-12-05 19:17:57.210317] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:27:39.909 [2024-12-05 19:17:57.210326] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:39.909 [2024-12-05 19:17:57.211218] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:39.909 [2024-12-05 19:17:57.211278] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:27:39.909 [2024-12-05 19:17:57.211290] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.850 ms 00:27:39.909 [2024-12-05 19:17:57.211299] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:39.909 [2024-12-05 19:17:57.211332] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:39.909 [2024-12-05 19:17:57.211341] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:27:39.909 [2024-12-05 19:17:57.211350] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:27:39.909 [2024-12-05 19:17:57.211358] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:39.909 [2024-12-05 19:17:57.211399] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:27:39.909 [2024-12-05 19:17:57.211409] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:39.910 [2024-12-05 19:17:57.211420] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:27:39.910 [2024-12-05 19:17:57.211434] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:27:39.910 [2024-12-05 19:17:57.211443] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:39.910 [2024-12-05 19:17:57.216529] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:39.910 [2024-12-05 19:17:57.216671] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:27:39.910 [2024-12-05 19:17:57.216727] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.066 ms 00:27:39.910 [2024-12-05 19:17:57.216753] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:39.910 [2024-12-05 19:17:57.216842] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:39.910 [2024-12-05 19:17:57.216867] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:27:39.910 [2024-12-05 19:17:57.216888] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:27:39.910 [2024-12-05 19:17:57.216915] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:39.910 
[2024-12-05 19:17:57.218131] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 130.256 ms, result 0 00:27:40.854  [2024-12-05T19:17:59.801Z] Copying: 22/1024 [MB] (22 MBps) [2024-12-05T19:18:00.747Z] Copying: 48/1024 [MB] (26 MBps) [2024-12-05T19:18:01.692Z] Copying: 69/1024 [MB] (21 MBps) [2024-12-05T19:18:02.636Z] Copying: 92/1024 [MB] (23 MBps) [2024-12-05T19:18:03.580Z] Copying: 114/1024 [MB] (21 MBps) [2024-12-05T19:18:04.611Z] Copying: 134/1024 [MB] (20 MBps) [2024-12-05T19:18:05.553Z] Copying: 157/1024 [MB] (23 MBps) [2024-12-05T19:18:06.492Z] Copying: 177/1024 [MB] (19 MBps) [2024-12-05T19:18:07.428Z] Copying: 199/1024 [MB] (21 MBps) [2024-12-05T19:18:08.812Z] Copying: 217/1024 [MB] (18 MBps) [2024-12-05T19:18:09.752Z] Copying: 237/1024 [MB] (19 MBps) [2024-12-05T19:18:10.695Z] Copying: 256/1024 [MB] (18 MBps) [2024-12-05T19:18:11.640Z] Copying: 276/1024 [MB] (20 MBps) [2024-12-05T19:18:12.587Z] Copying: 288/1024 [MB] (12 MBps) [2024-12-05T19:18:13.529Z] Copying: 299/1024 [MB] (10 MBps) [2024-12-05T19:18:14.474Z] Copying: 310/1024 [MB] (11 MBps) [2024-12-05T19:18:15.418Z] Copying: 322/1024 [MB] (12 MBps) [2024-12-05T19:18:16.801Z] Copying: 335/1024 [MB] (12 MBps) [2024-12-05T19:18:17.746Z] Copying: 345/1024 [MB] (10 MBps) [2024-12-05T19:18:18.690Z] Copying: 357/1024 [MB] (11 MBps) [2024-12-05T19:18:19.632Z] Copying: 371/1024 [MB] (13 MBps) [2024-12-05T19:18:20.573Z] Copying: 389/1024 [MB] (18 MBps) [2024-12-05T19:18:21.517Z] Copying: 406/1024 [MB] (17 MBps) [2024-12-05T19:18:22.458Z] Copying: 422/1024 [MB] (15 MBps) [2024-12-05T19:18:23.400Z] Copying: 434/1024 [MB] (11 MBps) [2024-12-05T19:18:24.785Z] Copying: 445/1024 [MB] (11 MBps) [2024-12-05T19:18:25.724Z] Copying: 456/1024 [MB] (11 MBps) [2024-12-05T19:18:26.681Z] Copying: 467/1024 [MB] (10 MBps) [2024-12-05T19:18:27.621Z] Copying: 479/1024 [MB] (12 MBps) [2024-12-05T19:18:28.560Z] Copying: 490/1024 [MB] (11 MBps) [2024-12-05T19:18:29.503Z] Copying: 500/1024 [MB] (10 MBps) [2024-12-05T19:18:30.446Z] Copying: 518/1024 [MB] (17 MBps) [2024-12-05T19:18:31.832Z] Copying: 531/1024 [MB] (12 MBps) [2024-12-05T19:18:32.405Z] Copying: 541/1024 [MB] (10 MBps) [2024-12-05T19:18:33.872Z] Copying: 564440/1048576 [kB] (10112 kBps) [2024-12-05T19:18:34.446Z] Copying: 563/1024 [MB] (11 MBps) [2024-12-05T19:18:35.833Z] Copying: 586512/1048576 [kB] (9820 kBps) [2024-12-05T19:18:36.404Z] Copying: 596256/1048576 [kB] (9744 kBps) [2024-12-05T19:18:37.790Z] Copying: 592/1024 [MB] (10 MBps) [2024-12-05T19:18:38.732Z] Copying: 602/1024 [MB] (10 MBps) [2024-12-05T19:18:39.674Z] Copying: 612/1024 [MB] (10 MBps) [2024-12-05T19:18:40.618Z] Copying: 626/1024 [MB] (13 MBps) [2024-12-05T19:18:41.561Z] Copying: 636/1024 [MB] (10 MBps) [2024-12-05T19:18:42.507Z] Copying: 650/1024 [MB] (13 MBps) [2024-12-05T19:18:43.450Z] Copying: 675816/1048576 [kB] (10056 kBps) [2024-12-05T19:18:44.837Z] Copying: 670/1024 [MB] (10 MBps) [2024-12-05T19:18:45.408Z] Copying: 696600/1048576 [kB] (10216 kBps) [2024-12-05T19:18:46.794Z] Copying: 706288/1048576 [kB] (9688 kBps) [2024-12-05T19:18:47.738Z] Copying: 699/1024 [MB] (10 MBps) [2024-12-05T19:18:48.684Z] Copying: 710/1024 [MB] (10 MBps) [2024-12-05T19:18:49.630Z] Copying: 720/1024 [MB] (10 MBps) [2024-12-05T19:18:50.575Z] Copying: 747376/1048576 [kB] (9648 kBps) [2024-12-05T19:18:51.569Z] Copying: 756768/1048576 [kB] (9392 kBps) [2024-12-05T19:18:52.515Z] Copying: 765920/1048576 [kB] (9152 kBps) [2024-12-05T19:18:53.462Z] Copying: 774988/1048576 [kB] (9068 
kBps) [2024-12-05T19:18:54.408Z] Copying: 784852/1048576 [kB] (9864 kBps) [2024-12-05T19:18:55.795Z] Copying: 794332/1048576 [kB] (9480 kBps) [2024-12-05T19:18:56.739Z] Copying: 803232/1048576 [kB] (8900 kBps) [2024-12-05T19:18:57.679Z] Copying: 812352/1048576 [kB] (9120 kBps) [2024-12-05T19:18:58.618Z] Copying: 821448/1048576 [kB] (9096 kBps) [2024-12-05T19:18:59.558Z] Copying: 830640/1048576 [kB] (9192 kBps) [2024-12-05T19:19:00.501Z] Copying: 839840/1048576 [kB] (9200 kBps) [2024-12-05T19:19:01.445Z] Copying: 848896/1048576 [kB] (9056 kBps) [2024-12-05T19:19:02.834Z] Copying: 858092/1048576 [kB] (9196 kBps) [2024-12-05T19:19:03.408Z] Copying: 866840/1048576 [kB] (8748 kBps) [2024-12-05T19:19:04.794Z] Copying: 876148/1048576 [kB] (9308 kBps) [2024-12-05T19:19:05.737Z] Copying: 884880/1048576 [kB] (8732 kBps) [2024-12-05T19:19:06.676Z] Copying: 894544/1048576 [kB] (9664 kBps) [2024-12-05T19:19:07.618Z] Copying: 903520/1048576 [kB] (8976 kBps) [2024-12-05T19:19:08.565Z] Copying: 912808/1048576 [kB] (9288 kBps) [2024-12-05T19:19:09.511Z] Copying: 922208/1048576 [kB] (9400 kBps) [2024-12-05T19:19:10.457Z] Copying: 931440/1048576 [kB] (9232 kBps) [2024-12-05T19:19:11.402Z] Copying: 940480/1048576 [kB] (9040 kBps) [2024-12-05T19:19:12.791Z] Copying: 949464/1048576 [kB] (8984 kBps) [2024-12-05T19:19:13.737Z] Copying: 957976/1048576 [kB] (8512 kBps) [2024-12-05T19:19:14.683Z] Copying: 966912/1048576 [kB] (8936 kBps) [2024-12-05T19:19:15.629Z] Copying: 975936/1048576 [kB] (9024 kBps) [2024-12-05T19:19:16.571Z] Copying: 984848/1048576 [kB] (8912 kBps) [2024-12-05T19:19:17.516Z] Copying: 993508/1048576 [kB] (8660 kBps) [2024-12-05T19:19:18.458Z] Copying: 1002288/1048576 [kB] (8780 kBps) [2024-12-05T19:19:19.401Z] Copying: 1011104/1048576 [kB] (8816 kBps) [2024-12-05T19:19:20.788Z] Copying: 1019664/1048576 [kB] (8560 kBps) [2024-12-05T19:19:21.733Z] Copying: 1028464/1048576 [kB] (8800 kBps) [2024-12-05T19:19:21.994Z] Copying: 1037664/1048576 [kB] (9200 kBps) [2024-12-05T19:19:22.257Z] Copying: 1024/1024 [MB] (average 12 MBps)[2024-12-05 19:19:22.093905] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:04.698 [2024-12-05 19:19:22.093971] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:29:04.698 [2024-12-05 19:19:22.093995] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:29:04.698 [2024-12-05 19:19:22.094003] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:04.698 [2024-12-05 19:19:22.094025] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:29:04.698 [2024-12-05 19:19:22.094554] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:04.698 [2024-12-05 19:19:22.094575] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:29:04.698 [2024-12-05 19:19:22.094585] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.514 ms 00:29:04.698 [2024-12-05 19:19:22.094592] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:04.698 [2024-12-05 19:19:22.094816] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:04.698 [2024-12-05 19:19:22.094831] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:29:04.698 [2024-12-05 19:19:22.094841] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.204 ms 00:29:04.698 [2024-12-05 19:19:22.094860] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:04.698 
[2024-12-05 19:19:22.098775] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:04.698 [2024-12-05 19:19:22.098803] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:29:04.698 [2024-12-05 19:19:22.098814] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.900 ms 00:29:04.698 [2024-12-05 19:19:22.098821] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:04.698 [2024-12-05 19:19:22.104987] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:04.698 [2024-12-05 19:19:22.105123] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:29:04.698 [2024-12-05 19:19:22.105140] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.146 ms 00:29:04.698 [2024-12-05 19:19:22.105158] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:04.698 [2024-12-05 19:19:22.107761] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:04.698 [2024-12-05 19:19:22.107797] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:29:04.698 [2024-12-05 19:19:22.107807] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.548 ms 00:29:04.698 [2024-12-05 19:19:22.107815] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:04.698 [2024-12-05 19:19:22.111943] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:04.698 [2024-12-05 19:19:22.112056] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:29:04.698 [2024-12-05 19:19:22.112112] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.097 ms 00:29:04.698 [2024-12-05 19:19:22.112136] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:04.698 [2024-12-05 19:19:22.116411] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:04.698 [2024-12-05 19:19:22.116510] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:29:04.698 [2024-12-05 19:19:22.116560] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.220 ms 00:29:04.698 [2024-12-05 19:19:22.116593] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:04.698 [2024-12-05 19:19:22.119209] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:04.698 [2024-12-05 19:19:22.119328] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:29:04.698 [2024-12-05 19:19:22.119381] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.587 ms 00:29:04.698 [2024-12-05 19:19:22.119402] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:04.698 [2024-12-05 19:19:22.121664] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:04.698 [2024-12-05 19:19:22.121761] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:29:04.698 [2024-12-05 19:19:22.121774] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.225 ms 00:29:04.698 [2024-12-05 19:19:22.121781] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:04.698 [2024-12-05 19:19:22.123504] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:04.698 [2024-12-05 19:19:22.123536] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:29:04.698 [2024-12-05 19:19:22.123545] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.698 ms 00:29:04.699 [2024-12-05 19:19:22.123551] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:04.699 [2024-12-05 19:19:22.125383] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:04.699 [2024-12-05 19:19:22.125415] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:29:04.699 [2024-12-05 19:19:22.125424] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.780 ms 00:29:04.699 [2024-12-05 19:19:22.125431] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:04.699 [2024-12-05 19:19:22.125460] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:29:04.699 [2024-12-05 19:19:22.125474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:29:04.699 [2024-12-05 19:19:22.125485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 1536 / 261120 wr_cnt: 1 state: open 00:29:04.699 [2024-12-05 19:19:22.125493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:29:04.699 [2024-12-05 19:19:22.125501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:29:04.699 [2024-12-05 19:19:22.125509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:29:04.699 [2024-12-05 19:19:22.125517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:29:04.699 [2024-12-05 19:19:22.125525] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:29:04.699 [2024-12-05 19:19:22.125533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:29:04.699 [2024-12-05 19:19:22.125540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:29:04.699 [2024-12-05 19:19:22.125548] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:29:04.699 [2024-12-05 19:19:22.125556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:29:04.699 [2024-12-05 19:19:22.125563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:29:04.699 [2024-12-05 19:19:22.125571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:29:04.699 [2024-12-05 19:19:22.125579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:29:04.699 [2024-12-05 19:19:22.125587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:29:04.699 [2024-12-05 19:19:22.125595] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:29:04.699 [2024-12-05 19:19:22.125609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:29:04.699 [2024-12-05 19:19:22.125617] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:29:04.699 [2024-12-05 19:19:22.125624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:29:04.699 [2024-12-05 19:19:22.125632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:29:04.699 [2024-12-05 19:19:22.125640] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:29:04.699 [2024-12-05 19:19:22.125647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:29:04.699 [2024-12-05 19:19:22.125655] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:29:04.699 [2024-12-05 19:19:22.125663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:29:04.699 [2024-12-05 19:19:22.125671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:29:04.699 [2024-12-05 19:19:22.125679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:29:04.699 [2024-12-05 19:19:22.125689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:29:04.699 [2024-12-05 19:19:22.125697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:29:04.699 [2024-12-05 19:19:22.125705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:29:04.699 [2024-12-05 19:19:22.125713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:29:04.699 [2024-12-05 19:19:22.125722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:29:04.699 [2024-12-05 19:19:22.125730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:29:04.699 [2024-12-05 19:19:22.125738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:29:04.699 [2024-12-05 19:19:22.125746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:29:04.699 [2024-12-05 19:19:22.125753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:29:04.699 [2024-12-05 19:19:22.125761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:29:04.699 [2024-12-05 19:19:22.125768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:29:04.699 [2024-12-05 19:19:22.125775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:29:04.699 [2024-12-05 19:19:22.125782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:29:04.699 [2024-12-05 19:19:22.125790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:29:04.699 [2024-12-05 19:19:22.125797] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:29:04.699 [2024-12-05 19:19:22.125804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:29:04.699 [2024-12-05 19:19:22.125821] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:29:04.699 [2024-12-05 19:19:22.125828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:29:04.699 [2024-12-05 19:19:22.125836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:29:04.699 [2024-12-05 19:19:22.125843] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:29:04.699 [2024-12-05 19:19:22.125851] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:29:04.699 [2024-12-05 19:19:22.125859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:29:04.699 [2024-12-05 19:19:22.125866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:29:04.699 [2024-12-05 19:19:22.125873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:29:04.699 [2024-12-05 19:19:22.125881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:29:04.699 [2024-12-05 19:19:22.125888] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:29:04.699 [2024-12-05 19:19:22.125896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:29:04.699 [2024-12-05 19:19:22.125903] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:29:04.699 [2024-12-05 19:19:22.125911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:29:04.699 [2024-12-05 19:19:22.125919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:29:04.699 [2024-12-05 19:19:22.125927] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:29:04.699 [2024-12-05 19:19:22.125934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:29:04.699 [2024-12-05 19:19:22.125942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:29:04.699 [2024-12-05 19:19:22.125949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:29:04.699 [2024-12-05 19:19:22.125957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:29:04.699 [2024-12-05 19:19:22.125965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:29:04.699 [2024-12-05 19:19:22.125973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:29:04.699 [2024-12-05 19:19:22.125980] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:29:04.699 [2024-12-05 19:19:22.125988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:29:04.699 [2024-12-05 19:19:22.125995] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:29:04.699 [2024-12-05 19:19:22.126002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:29:04.699 [2024-12-05 19:19:22.126010] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:29:04.699 [2024-12-05 19:19:22.126017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:29:04.699 [2024-12-05 19:19:22.126025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:29:04.699 [2024-12-05 
19:19:22.126032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:29:04.699 [2024-12-05 19:19:22.126039] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:29:04.699 [2024-12-05 19:19:22.126047] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:29:04.699 [2024-12-05 19:19:22.126063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:29:04.699 [2024-12-05 19:19:22.126070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:29:04.699 [2024-12-05 19:19:22.126078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:29:04.699 [2024-12-05 19:19:22.126085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:29:04.699 [2024-12-05 19:19:22.126092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:29:04.700 [2024-12-05 19:19:22.126099] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:29:04.700 [2024-12-05 19:19:22.126107] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:29:04.700 [2024-12-05 19:19:22.126114] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:29:04.700 [2024-12-05 19:19:22.126121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:29:04.700 [2024-12-05 19:19:22.126128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:29:04.700 [2024-12-05 19:19:22.126136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:29:04.700 [2024-12-05 19:19:22.126144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:29:04.700 [2024-12-05 19:19:22.126152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:29:04.700 [2024-12-05 19:19:22.126159] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:29:04.700 [2024-12-05 19:19:22.126171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:29:04.700 [2024-12-05 19:19:22.126178] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:29:04.700 [2024-12-05 19:19:22.126186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:29:04.700 [2024-12-05 19:19:22.126193] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:29:04.700 [2024-12-05 19:19:22.126201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:29:04.700 [2024-12-05 19:19:22.126209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:29:04.700 [2024-12-05 19:19:22.126217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:29:04.700 [2024-12-05 19:19:22.126226] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 
00:29:04.700 [2024-12-05 19:19:22.126233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:29:04.700 [2024-12-05 19:19:22.126240] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:29:04.700 [2024-12-05 19:19:22.126247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:29:04.700 [2024-12-05 19:19:22.126266] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:29:04.700 [2024-12-05 19:19:22.126274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:29:04.700 [2024-12-05 19:19:22.126289] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:29:04.700 [2024-12-05 19:19:22.126297] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: a560a9df-7547-43f8-95d1-de6f5f67663b 00:29:04.700 [2024-12-05 19:19:22.126305] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 262656 00:29:04.700 [2024-12-05 19:19:22.126312] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:29:04.700 [2024-12-05 19:19:22.126320] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:29:04.700 [2024-12-05 19:19:22.126327] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:29:04.700 [2024-12-05 19:19:22.126339] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:29:04.700 [2024-12-05 19:19:22.126347] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:29:04.700 [2024-12-05 19:19:22.126356] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:29:04.700 [2024-12-05 19:19:22.126363] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:29:04.700 [2024-12-05 19:19:22.126369] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:29:04.700 [2024-12-05 19:19:22.126376] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:04.700 [2024-12-05 19:19:22.126387] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:29:04.700 [2024-12-05 19:19:22.126399] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.917 ms 00:29:04.700 [2024-12-05 19:19:22.126407] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:04.700 [2024-12-05 19:19:22.128123] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:04.700 [2024-12-05 19:19:22.128212] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:29:04.700 [2024-12-05 19:19:22.128277] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.684 ms 00:29:04.700 [2024-12-05 19:19:22.128300] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:04.700 [2024-12-05 19:19:22.128400] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:04.700 [2024-12-05 19:19:22.128422] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:29:04.700 [2024-12-05 19:19:22.128558] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.062 ms 00:29:04.700 [2024-12-05 19:19:22.128580] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:04.700 [2024-12-05 19:19:22.133899] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:04.700 [2024-12-05 19:19:22.134009] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: 
Initialize reloc 00:29:04.700 [2024-12-05 19:19:22.134063] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:04.700 [2024-12-05 19:19:22.134093] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:04.700 [2024-12-05 19:19:22.134154] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:04.700 [2024-12-05 19:19:22.134175] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:29:04.700 [2024-12-05 19:19:22.134193] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:04.700 [2024-12-05 19:19:22.134211] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:04.700 [2024-12-05 19:19:22.134273] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:04.700 [2024-12-05 19:19:22.134297] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:29:04.700 [2024-12-05 19:19:22.134324] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:04.700 [2024-12-05 19:19:22.134377] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:04.700 [2024-12-05 19:19:22.134410] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:04.700 [2024-12-05 19:19:22.134431] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:29:04.700 [2024-12-05 19:19:22.134451] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:04.700 [2024-12-05 19:19:22.134474] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:04.700 [2024-12-05 19:19:22.144059] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:04.700 [2024-12-05 19:19:22.144202] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:29:04.700 [2024-12-05 19:19:22.144285] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:04.700 [2024-12-05 19:19:22.144317] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:04.700 [2024-12-05 19:19:22.151912] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:04.700 [2024-12-05 19:19:22.152039] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:29:04.700 [2024-12-05 19:19:22.152086] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:04.700 [2024-12-05 19:19:22.152109] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:04.700 [2024-12-05 19:19:22.152162] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:04.700 [2024-12-05 19:19:22.152184] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:29:04.700 [2024-12-05 19:19:22.152203] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:04.700 [2024-12-05 19:19:22.152222] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:04.700 [2024-12-05 19:19:22.152326] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:04.700 [2024-12-05 19:19:22.152350] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:29:04.700 [2024-12-05 19:19:22.152371] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:04.700 [2024-12-05 19:19:22.152420] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:04.700 [2024-12-05 19:19:22.152506] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:04.700 [2024-12-05 
19:19:22.152529] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:29:04.700 [2024-12-05 19:19:22.152550] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:04.700 [2024-12-05 19:19:22.152568] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:04.700 [2024-12-05 19:19:22.152607] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:04.700 [2024-12-05 19:19:22.152705] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:29:04.700 [2024-12-05 19:19:22.152725] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:04.700 [2024-12-05 19:19:22.152743] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:04.700 [2024-12-05 19:19:22.152789] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:04.700 [2024-12-05 19:19:22.152810] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:29:04.700 [2024-12-05 19:19:22.152861] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:04.700 [2024-12-05 19:19:22.152882] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:04.700 [2024-12-05 19:19:22.152941] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:04.700 [2024-12-05 19:19:22.152966] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:29:04.700 [2024-12-05 19:19:22.152985] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:04.700 [2024-12-05 19:19:22.153010] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:04.700 [2024-12-05 19:19:22.153136] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 59.205 ms, result 0 00:29:04.962 00:29:04.962 00:29:04.963 19:19:22 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@96 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:29:07.622 /home/vagrant/spdk_repo/spdk/test/ftl/testfile2: OK 00:29:07.622 19:19:24 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@98 -- # trap - SIGINT SIGTERM EXIT 00:29:07.622 19:19:24 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@99 -- # restore_kill 00:29:07.622 19:19:24 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@31 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:29:07.622 19:19:24 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@32 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:29:07.622 19:19:24 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@33 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:29:07.622 19:19:24 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@34 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:29:07.622 19:19:24 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@35 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:29:07.622 Process with pid 90751 is not found 00:29:07.622 19:19:24 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@37 -- # killprocess 90751 00:29:07.622 19:19:24 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@954 -- # '[' -z 90751 ']' 00:29:07.622 19:19:24 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@958 -- # kill -0 90751 00:29:07.622 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (90751) - No such process 00:29:07.622 19:19:24 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@981 -- # echo 'Process with pid 90751 is not found' 00:29:07.622 
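The killprocess trace above reduces to a probe-then-report pattern: validate the pid argument ('[' -z 90751 ']'), probe liveness with kill -0, and fall back to a notice when the signal finds no process. A minimal sketch of that pattern in plain shell, as an illustration under those assumptions rather than a reproduction of the real autotest_common.sh helper:

    # Sketch of the kill-and-verify pattern traced above (hypothetical;
    # the actual helper lives in autotest_common.sh and is more involved).
    killprocess() {
        local pid=$1
        [ -z "$pid" ] && return 1             # no pid supplied
        if kill -0 "$pid" 2>/dev/null; then   # probe: is the process alive?
            kill "$pid"
            wait "$pid" 2>/dev/null || true   # reap it if it was our child
        else
            echo "Process with pid $pid is not found"
        fi
    }

Here pid 90751 had already exited by the time cleanup ran, so only the echo branch fires, which is why the log records the not-found message instead of a kill.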
19:19:24 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@38 -- # rmmod nbd 00:29:07.622 Remove shared memory files 00:29:07.622 19:19:25 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@39 -- # remove_shm 00:29:07.623 19:19:25 ftl.ftl_dirty_shutdown -- ftl/common.sh@204 -- # echo Remove shared memory files 00:29:07.883 19:19:25 ftl.ftl_dirty_shutdown -- ftl/common.sh@205 -- # rm -f rm -f 00:29:07.883 19:19:25 ftl.ftl_dirty_shutdown -- ftl/common.sh@206 -- # rm -f rm -f 00:29:07.883 19:19:25 ftl.ftl_dirty_shutdown -- ftl/common.sh@207 -- # rm -f rm -f 00:29:07.883 19:19:25 ftl.ftl_dirty_shutdown -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:29:07.883 19:19:25 ftl.ftl_dirty_shutdown -- ftl/common.sh@209 -- # rm -f rm -f 00:29:07.883 ************************************ 00:29:07.883 END TEST ftl_dirty_shutdown 00:29:07.883 ************************************ 00:29:07.883 00:29:07.883 real 4m36.408s 00:29:07.883 user 4m48.419s 00:29:07.883 sys 0m23.608s 00:29:07.883 19:19:25 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:07.883 19:19:25 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@10 -- # set +x 00:29:07.883 19:19:25 ftl -- ftl/ftl.sh@78 -- # run_test ftl_upgrade_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0 00:29:07.883 19:19:25 ftl -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:29:07.883 19:19:25 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:07.883 19:19:25 ftl -- common/autotest_common.sh@10 -- # set +x 00:29:07.883 ************************************ 00:29:07.883 START TEST ftl_upgrade_shutdown 00:29:07.883 ************************************ 00:29:07.883 19:19:25 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0 00:29:07.883 * Looking for test storage... 
00:29:07.883 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:29:07.883 19:19:25 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:29:07.883 19:19:25 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1711 -- # lcov --version 00:29:07.883 19:19:25 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:29:07.883 19:19:25 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:29:07.883 19:19:25 ftl.ftl_upgrade_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:07.883 19:19:25 ftl.ftl_upgrade_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:07.883 19:19:25 ftl.ftl_upgrade_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:07.883 19:19:25 ftl.ftl_upgrade_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:29:07.883 19:19:25 ftl.ftl_upgrade_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:29:07.883 19:19:25 ftl.ftl_upgrade_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:29:07.883 19:19:25 ftl.ftl_upgrade_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:29:07.883 19:19:25 ftl.ftl_upgrade_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:29:07.883 19:19:25 ftl.ftl_upgrade_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:29:07.883 19:19:25 ftl.ftl_upgrade_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:29:07.883 19:19:25 ftl.ftl_upgrade_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:07.883 19:19:25 ftl.ftl_upgrade_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:29:07.883 19:19:25 ftl.ftl_upgrade_shutdown -- scripts/common.sh@345 -- # : 1 00:29:07.883 19:19:25 ftl.ftl_upgrade_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:07.883 19:19:25 ftl.ftl_upgrade_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:07.883 19:19:25 ftl.ftl_upgrade_shutdown -- scripts/common.sh@365 -- # decimal 1 00:29:07.883 19:19:25 ftl.ftl_upgrade_shutdown -- scripts/common.sh@353 -- # local d=1 00:29:07.883 19:19:25 ftl.ftl_upgrade_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:07.883 19:19:25 ftl.ftl_upgrade_shutdown -- scripts/common.sh@355 -- # echo 1 00:29:07.883 19:19:25 ftl.ftl_upgrade_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:29:07.883 19:19:25 ftl.ftl_upgrade_shutdown -- scripts/common.sh@366 -- # decimal 2 00:29:07.883 19:19:25 ftl.ftl_upgrade_shutdown -- scripts/common.sh@353 -- # local d=2 00:29:07.883 19:19:25 ftl.ftl_upgrade_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:07.883 19:19:25 ftl.ftl_upgrade_shutdown -- scripts/common.sh@355 -- # echo 2 00:29:07.883 19:19:25 ftl.ftl_upgrade_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:29:07.883 19:19:25 ftl.ftl_upgrade_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:07.883 19:19:25 ftl.ftl_upgrade_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:07.883 19:19:25 ftl.ftl_upgrade_shutdown -- scripts/common.sh@368 -- # return 0 00:29:07.883 19:19:25 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:07.883 19:19:25 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:29:07.883 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:07.883 --rc genhtml_branch_coverage=1 00:29:07.883 --rc genhtml_function_coverage=1 00:29:07.883 --rc genhtml_legend=1 00:29:07.883 --rc geninfo_all_blocks=1 00:29:07.883 --rc geninfo_unexecuted_blocks=1 00:29:07.883 00:29:07.883 ' 00:29:07.883 19:19:25 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:29:07.883 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:07.883 --rc genhtml_branch_coverage=1 00:29:07.883 --rc genhtml_function_coverage=1 00:29:07.883 --rc genhtml_legend=1 00:29:07.883 --rc geninfo_all_blocks=1 00:29:07.883 --rc geninfo_unexecuted_blocks=1 00:29:07.883 00:29:07.883 ' 00:29:07.883 19:19:25 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:29:07.883 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:07.883 --rc genhtml_branch_coverage=1 00:29:07.883 --rc genhtml_function_coverage=1 00:29:07.883 --rc genhtml_legend=1 00:29:07.883 --rc geninfo_all_blocks=1 00:29:07.884 --rc geninfo_unexecuted_blocks=1 00:29:07.884 00:29:07.884 ' 00:29:07.884 19:19:25 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:29:07.884 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:07.884 --rc genhtml_branch_coverage=1 00:29:07.884 --rc genhtml_function_coverage=1 00:29:07.884 --rc genhtml_legend=1 00:29:07.884 --rc geninfo_all_blocks=1 00:29:07.884 --rc geninfo_unexecuted_blocks=1 00:29:07.884 00:29:07.884 ' 00:29:07.884 19:19:25 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:29:07.884 19:19:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 00:29:08.144 19:19:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:29:08.144 19:19:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:29:08.144 19:19:25 ftl.ftl_upgrade_shutdown -- 
ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:29:08.144 19:19:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:29:08.144 19:19:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:29:08.144 19:19:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:29:08.144 19:19:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:29:08.144 19:19:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:29:08.144 19:19:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:29:08.144 19:19:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:29:08.144 19:19:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:29:08.144 19:19:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:29:08.144 19:19:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:29:08.144 19:19:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:29:08.144 19:19:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:29:08.144 19:19:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:29:08.144 19:19:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:29:08.144 19:19:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:29:08.144 19:19:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:29:08.144 19:19:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:29:08.144 19:19:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:29:08.144 19:19:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:29:08.144 19:19:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:29:08.144 19:19:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:29:08.144 19:19:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@23 -- # spdk_ini_pid= 00:29:08.144 19:19:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:08.144 19:19:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:08.144 19:19:25 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@17 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:29:08.144 19:19:25 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@19 -- # export FTL_BDEV=ftl 00:29:08.144 19:19:25 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@19 -- # FTL_BDEV=ftl 00:29:08.144 19:19:25 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@20 -- # export FTL_BASE=0000:00:11.0 00:29:08.144 19:19:25 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@20 -- # FTL_BASE=0000:00:11.0 00:29:08.144 19:19:25 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@21 -- # export FTL_BASE_SIZE=20480 00:29:08.144 19:19:25 
ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@21 -- # FTL_BASE_SIZE=20480 00:29:08.144 19:19:25 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@22 -- # export FTL_CACHE=0000:00:10.0 00:29:08.144 19:19:25 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@22 -- # FTL_CACHE=0000:00:10.0 00:29:08.144 19:19:25 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@23 -- # export FTL_CACHE_SIZE=5120 00:29:08.144 19:19:25 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@23 -- # FTL_CACHE_SIZE=5120 00:29:08.144 19:19:25 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@24 -- # export FTL_L2P_DRAM_LIMIT=2 00:29:08.144 19:19:25 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@24 -- # FTL_L2P_DRAM_LIMIT=2 00:29:08.144 19:19:25 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@26 -- # tcp_target_setup 00:29:08.144 19:19:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:29:08.144 19:19:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:29:08.144 19:19:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:29:08.144 19:19:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=93705 00:29:08.144 19:19:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:29:08.144 19:19:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' 00:29:08.144 19:19:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 93705 00:29:08.144 19:19:25 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 93705 ']' 00:29:08.144 19:19:25 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:08.144 19:19:25 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:08.144 19:19:25 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:08.144 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:08.144 19:19:25 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:08.144 19:19:25 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:29:08.144 [2024-12-05 19:19:25.539384] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 22.11.4 initialization... 
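The exports traced above fix the test geometry before the target starts: a 20480 MiB base device on 0000:00:11.0, a 5120 MiB NV cache on 0000:00:10.0, a 2 MiB DRAM budget for the L2P table, and spdk_tgt pinned to core 0. A hedged sketch of replaying that setup by hand, using only the paths and RPCs visible in this log (the polling loop stands in for waitforlisten and is not the real implementation):

    # Sketch only: environment and target launch as traced above.
    export FTL_BDEV=ftl
    export FTL_BASE=0000:00:11.0   FTL_BASE_SIZE=20480     # base bdev, MiB
    export FTL_CACHE=0000:00:10.0  FTL_CACHE_SIZE=5120     # NV cache, MiB
    export FTL_L2P_DRAM_LIMIT=2                            # L2P DRAM, MiB

    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' &
    spdk_tgt_pid=$!
    # Poll the RPC socket until the target answers (stand-in for waitforlisten).
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock \
          rpc_get_methods >/dev/null 2>&1; do sleep 0.1; done

Once the socket answers, the bdev_nvme_attach_controller call below can claim the base device.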
00:29:08.144 [2024-12-05 19:19:25.539635] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93705 ] 00:29:08.144 [2024-12-05 19:19:25.686930] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:08.406 [2024-12-05 19:19:25.706456] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:08.979 19:19:26 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:08.979 19:19:26 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:29:08.979 19:19:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:29:08.979 19:19:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@99 -- # params=('FTL_BDEV' 'FTL_BASE' 'FTL_BASE_SIZE' 'FTL_CACHE' 'FTL_CACHE_SIZE' 'FTL_L2P_DRAM_LIMIT') 00:29:08.979 19:19:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@99 -- # local params 00:29:08.979 19:19:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:29:08.979 19:19:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z ftl ]] 00:29:08.979 19:19:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:29:08.979 19:19:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 0000:00:11.0 ]] 00:29:08.979 19:19:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:29:08.979 19:19:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 20480 ]] 00:29:08.979 19:19:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:29:08.979 19:19:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 0000:00:10.0 ]] 00:29:08.979 19:19:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:29:08.979 19:19:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 5120 ]] 00:29:08.979 19:19:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:29:08.979 19:19:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 2 ]] 00:29:08.979 19:19:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@107 -- # create_base_bdev base 0000:00:11.0 20480 00:29:08.979 19:19:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@54 -- # local name=base 00:29:08.979 19:19:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:29:08.979 19:19:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@56 -- # local size=20480 00:29:08.979 19:19:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@59 -- # local base_bdev 00:29:08.979 19:19:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b base -t PCIe -a 0000:00:11.0 00:29:09.240 19:19:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@60 -- # base_bdev=basen1 00:29:09.240 19:19:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@62 -- # local base_size 00:29:09.240 19:19:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@63 -- # get_bdev_size basen1 00:29:09.240 19:19:26 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=basen1 00:29:09.240 19:19:26 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:29:09.240 19:19:26 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:29:09.240 19:19:26 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1385 
-- # local nb 00:29:09.240 19:19:26 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b basen1 00:29:09.240 19:19:26 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:29:09.240 { 00:29:09.240 "name": "basen1", 00:29:09.240 "aliases": [ 00:29:09.240 "ee93f705-aa53-4722-a764-ad6deefb5a19" 00:29:09.240 ], 00:29:09.240 "product_name": "NVMe disk", 00:29:09.240 "block_size": 4096, 00:29:09.240 "num_blocks": 1310720, 00:29:09.240 "uuid": "ee93f705-aa53-4722-a764-ad6deefb5a19", 00:29:09.240 "numa_id": -1, 00:29:09.240 "assigned_rate_limits": { 00:29:09.240 "rw_ios_per_sec": 0, 00:29:09.240 "rw_mbytes_per_sec": 0, 00:29:09.240 "r_mbytes_per_sec": 0, 00:29:09.240 "w_mbytes_per_sec": 0 00:29:09.240 }, 00:29:09.240 "claimed": true, 00:29:09.240 "claim_type": "read_many_write_one", 00:29:09.240 "zoned": false, 00:29:09.240 "supported_io_types": { 00:29:09.240 "read": true, 00:29:09.240 "write": true, 00:29:09.240 "unmap": true, 00:29:09.240 "flush": true, 00:29:09.240 "reset": true, 00:29:09.240 "nvme_admin": true, 00:29:09.240 "nvme_io": true, 00:29:09.240 "nvme_io_md": false, 00:29:09.240 "write_zeroes": true, 00:29:09.240 "zcopy": false, 00:29:09.240 "get_zone_info": false, 00:29:09.240 "zone_management": false, 00:29:09.240 "zone_append": false, 00:29:09.240 "compare": true, 00:29:09.240 "compare_and_write": false, 00:29:09.240 "abort": true, 00:29:09.240 "seek_hole": false, 00:29:09.240 "seek_data": false, 00:29:09.240 "copy": true, 00:29:09.240 "nvme_iov_md": false 00:29:09.240 }, 00:29:09.240 "driver_specific": { 00:29:09.240 "nvme": [ 00:29:09.240 { 00:29:09.240 "pci_address": "0000:00:11.0", 00:29:09.240 "trid": { 00:29:09.240 "trtype": "PCIe", 00:29:09.240 "traddr": "0000:00:11.0" 00:29:09.240 }, 00:29:09.240 "ctrlr_data": { 00:29:09.240 "cntlid": 0, 00:29:09.240 "vendor_id": "0x1b36", 00:29:09.240 "model_number": "QEMU NVMe Ctrl", 00:29:09.240 "serial_number": "12341", 00:29:09.240 "firmware_revision": "8.0.0", 00:29:09.240 "subnqn": "nqn.2019-08.org.qemu:12341", 00:29:09.240 "oacs": { 00:29:09.240 "security": 0, 00:29:09.240 "format": 1, 00:29:09.240 "firmware": 0, 00:29:09.240 "ns_manage": 1 00:29:09.240 }, 00:29:09.240 "multi_ctrlr": false, 00:29:09.240 "ana_reporting": false 00:29:09.240 }, 00:29:09.240 "vs": { 00:29:09.240 "nvme_version": "1.4" 00:29:09.240 }, 00:29:09.240 "ns_data": { 00:29:09.240 "id": 1, 00:29:09.240 "can_share": false 00:29:09.240 } 00:29:09.240 } 00:29:09.240 ], 00:29:09.240 "mp_policy": "active_passive" 00:29:09.240 } 00:29:09.240 } 00:29:09.240 ]' 00:29:09.240 19:19:26 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:29:09.502 19:19:26 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:29:09.502 19:19:26 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:29:09.502 19:19:26 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # nb=1310720 00:29:09.502 19:19:26 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:29:09.502 19:19:26 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1392 -- # echo 5120 00:29:09.502 19:19:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@63 -- # base_size=5120 00:29:09.502 19:19:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@64 -- # [[ 20480 -le 5120 ]] 00:29:09.502 19:19:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@67 -- # clear_lvols 00:29:09.502 19:19:26 ftl.ftl_upgrade_shutdown -- 
ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:29:09.502 19:19:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:29:09.502 19:19:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # stores=32587d71-1472-4033-90cb-3b7c3232841d 00:29:09.502 19:19:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@29 -- # for lvs in $stores 00:29:09.502 19:19:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 32587d71-1472-4033-90cb-3b7c3232841d 00:29:09.765 19:19:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore basen1 lvs 00:29:10.023 19:19:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@68 -- # lvs=4aa7393c-67ca-4cbe-9867-69576c539c5c 00:29:10.024 19:19:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create basen1p0 20480 -t -u 4aa7393c-67ca-4cbe-9867-69576c539c5c 00:29:10.284 19:19:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@107 -- # base_bdev=2f6726c5-64ba-4d22-9079-877151812c7d 00:29:10.284 19:19:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@108 -- # [[ -z 2f6726c5-64ba-4d22-9079-877151812c7d ]] 00:29:10.284 19:19:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@113 -- # create_nv_cache_bdev cache 0000:00:10.0 2f6726c5-64ba-4d22-9079-877151812c7d 5120 00:29:10.284 19:19:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@35 -- # local name=cache 00:29:10.284 19:19:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:29:10.284 19:19:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@37 -- # local base_bdev=2f6726c5-64ba-4d22-9079-877151812c7d 00:29:10.284 19:19:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@38 -- # local cache_size=5120 00:29:10.284 19:19:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@41 -- # get_bdev_size 2f6726c5-64ba-4d22-9079-877151812c7d 00:29:10.285 19:19:27 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=2f6726c5-64ba-4d22-9079-877151812c7d 00:29:10.285 19:19:27 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:29:10.285 19:19:27 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:29:10.285 19:19:27 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:29:10.285 19:19:27 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 2f6726c5-64ba-4d22-9079-877151812c7d 00:29:10.546 19:19:27 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:29:10.546 { 00:29:10.546 "name": "2f6726c5-64ba-4d22-9079-877151812c7d", 00:29:10.546 "aliases": [ 00:29:10.546 "lvs/basen1p0" 00:29:10.546 ], 00:29:10.546 "product_name": "Logical Volume", 00:29:10.546 "block_size": 4096, 00:29:10.546 "num_blocks": 5242880, 00:29:10.546 "uuid": "2f6726c5-64ba-4d22-9079-877151812c7d", 00:29:10.546 "assigned_rate_limits": { 00:29:10.546 "rw_ios_per_sec": 0, 00:29:10.546 "rw_mbytes_per_sec": 0, 00:29:10.546 "r_mbytes_per_sec": 0, 00:29:10.546 "w_mbytes_per_sec": 0 00:29:10.546 }, 00:29:10.546 "claimed": false, 00:29:10.546 "zoned": false, 00:29:10.546 "supported_io_types": { 00:29:10.546 "read": true, 00:29:10.546 "write": true, 00:29:10.546 "unmap": true, 00:29:10.546 "flush": false, 00:29:10.546 "reset": true, 00:29:10.546 "nvme_admin": false, 00:29:10.546 "nvme_io": false, 00:29:10.546 "nvme_io_md": false, 00:29:10.546 "write_zeroes": 
true, 00:29:10.546 "zcopy": false, 00:29:10.546 "get_zone_info": false, 00:29:10.546 "zone_management": false, 00:29:10.546 "zone_append": false, 00:29:10.546 "compare": false, 00:29:10.546 "compare_and_write": false, 00:29:10.546 "abort": false, 00:29:10.546 "seek_hole": true, 00:29:10.546 "seek_data": true, 00:29:10.547 "copy": false, 00:29:10.547 "nvme_iov_md": false 00:29:10.547 }, 00:29:10.547 "driver_specific": { 00:29:10.547 "lvol": { 00:29:10.547 "lvol_store_uuid": "4aa7393c-67ca-4cbe-9867-69576c539c5c", 00:29:10.547 "base_bdev": "basen1", 00:29:10.547 "thin_provision": true, 00:29:10.547 "num_allocated_clusters": 0, 00:29:10.547 "snapshot": false, 00:29:10.547 "clone": false, 00:29:10.547 "esnap_clone": false 00:29:10.547 } 00:29:10.547 } 00:29:10.547 } 00:29:10.547 ]' 00:29:10.547 19:19:27 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:29:10.547 19:19:27 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:29:10.547 19:19:27 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:29:10.547 19:19:27 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # nb=5242880 00:29:10.547 19:19:27 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=20480 00:29:10.547 19:19:27 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1392 -- # echo 20480 00:29:10.547 19:19:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@41 -- # local base_size=1024 00:29:10.547 19:19:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@44 -- # local nvc_bdev 00:29:10.547 19:19:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b cache -t PCIe -a 0000:00:10.0 00:29:10.809 19:19:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@45 -- # nvc_bdev=cachen1 00:29:10.809 19:19:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@47 -- # [[ -z 5120 ]] 00:29:10.809 19:19:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create cachen1 -s 5120 1 00:29:11.071 19:19:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@113 -- # cache_bdev=cachen1p0 00:29:11.071 19:19:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@114 -- # [[ -z cachen1p0 ]] 00:29:11.071 19:19:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@119 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 60 bdev_ftl_create -b ftl -d 2f6726c5-64ba-4d22-9079-877151812c7d -c cachen1p0 --l2p_dram_limit 2 00:29:11.071 [2024-12-05 19:19:28.589814] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:11.071 [2024-12-05 19:19:28.589863] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:29:11.071 [2024-12-05 19:19:28.589875] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:29:11.071 [2024-12-05 19:19:28.589883] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:11.071 [2024-12-05 19:19:28.589922] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:11.071 [2024-12-05 19:19:28.589932] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:29:11.071 [2024-12-05 19:19:28.589940] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.027 ms 00:29:11.071 [2024-12-05 19:19:28.589950] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:11.071 [2024-12-05 19:19:28.589965] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:29:11.071 [2024-12-05 
19:19:28.590162] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:29:11.071 [2024-12-05 19:19:28.590175] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:11.071 [2024-12-05 19:19:28.590184] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:29:11.071 [2024-12-05 19:19:28.590191] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.214 ms 00:29:11.071 [2024-12-05 19:19:28.590200] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:11.072 [2024-12-05 19:19:28.590221] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl] Create new FTL, UUID a211da58-b31b-4752-af2b-fb84d4161411 00:29:11.072 [2024-12-05 19:19:28.591549] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:11.072 [2024-12-05 19:19:28.591573] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Default-initialize superblock 00:29:11.072 [2024-12-05 19:19:28.591587] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.024 ms 00:29:11.072 [2024-12-05 19:19:28.591593] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:11.072 [2024-12-05 19:19:28.598579] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:11.072 [2024-12-05 19:19:28.598604] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:29:11.072 [2024-12-05 19:19:28.598614] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 6.945 ms 00:29:11.072 [2024-12-05 19:19:28.598620] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:11.072 [2024-12-05 19:19:28.598689] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:11.072 [2024-12-05 19:19:28.598697] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:29:11.072 [2024-12-05 19:19:28.598705] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.019 ms 00:29:11.072 [2024-12-05 19:19:28.598711] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:11.072 [2024-12-05 19:19:28.598755] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:11.072 [2024-12-05 19:19:28.598764] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:29:11.072 [2024-12-05 19:19:28.598772] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.011 ms 00:29:11.072 [2024-12-05 19:19:28.598780] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:11.072 [2024-12-05 19:19:28.598798] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:29:11.072 [2024-12-05 19:19:28.600462] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:11.072 [2024-12-05 19:19:28.600488] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:29:11.072 [2024-12-05 19:19:28.600496] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.669 ms 00:29:11.072 [2024-12-05 19:19:28.600504] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:11.072 [2024-12-05 19:19:28.600526] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:11.072 [2024-12-05 19:19:28.600537] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:29:11.072 [2024-12-05 19:19:28.600544] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:29:11.072 [2024-12-05 19:19:28.600554] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl] status: 0 00:29:11.072 [2024-12-05 19:19:28.600567] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 1 00:29:11.072 [2024-12-05 19:19:28.600688] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:29:11.072 [2024-12-05 19:19:28.600698] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:29:11.072 [2024-12-05 19:19:28.600710] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:29:11.072 [2024-12-05 19:19:28.600719] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:29:11.072 [2024-12-05 19:19:28.600729] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:29:11.072 [2024-12-05 19:19:28.600735] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:29:11.072 [2024-12-05 19:19:28.600745] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:29:11.072 [2024-12-05 19:19:28.600750] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:29:11.072 [2024-12-05 19:19:28.600761] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:29:11.072 [2024-12-05 19:19:28.600770] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:11.072 [2024-12-05 19:19:28.600777] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:29:11.072 [2024-12-05 19:19:28.600783] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.204 ms 00:29:11.072 [2024-12-05 19:19:28.600790] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:11.072 [2024-12-05 19:19:28.600855] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:11.072 [2024-12-05 19:19:28.600866] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:29:11.072 [2024-12-05 19:19:28.600871] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.054 ms 00:29:11.072 [2024-12-05 19:19:28.600880] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:11.072 [2024-12-05 19:19:28.600952] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:29:11.072 [2024-12-05 19:19:28.600963] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:29:11.072 [2024-12-05 19:19:28.600969] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:29:11.072 [2024-12-05 19:19:28.600977] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:11.072 [2024-12-05 19:19:28.600983] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:29:11.072 [2024-12-05 19:19:28.600989] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:29:11.072 [2024-12-05 19:19:28.600995] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:29:11.072 [2024-12-05 19:19:28.601002] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:29:11.072 [2024-12-05 19:19:28.601007] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:29:11.072 [2024-12-05 19:19:28.601014] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:11.072 [2024-12-05 19:19:28.601019] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:29:11.072 [2024-12-05 19:19:28.601028] ftl_layout.c: 131:dump_region: *NOTICE*: 
[FTL][ftl] offset: 14.75 MiB 00:29:11.072 [2024-12-05 19:19:28.601033] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:11.072 [2024-12-05 19:19:28.601042] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:29:11.072 [2024-12-05 19:19:28.601047] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:29:11.072 [2024-12-05 19:19:28.601054] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:11.072 [2024-12-05 19:19:28.601060] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:29:11.072 [2024-12-05 19:19:28.601066] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:29:11.072 [2024-12-05 19:19:28.601072] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:11.072 [2024-12-05 19:19:28.601079] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:29:11.072 [2024-12-05 19:19:28.601084] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:29:11.072 [2024-12-05 19:19:28.601091] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:29:11.072 [2024-12-05 19:19:28.601096] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:29:11.072 [2024-12-05 19:19:28.601103] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:29:11.072 [2024-12-05 19:19:28.601108] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:29:11.072 [2024-12-05 19:19:28.601115] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:29:11.072 [2024-12-05 19:19:28.601121] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:29:11.072 [2024-12-05 19:19:28.601129] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:29:11.072 [2024-12-05 19:19:28.601135] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:29:11.072 [2024-12-05 19:19:28.601146] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:29:11.072 [2024-12-05 19:19:28.601152] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:29:11.072 [2024-12-05 19:19:28.601159] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:29:11.072 [2024-12-05 19:19:28.601165] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:29:11.072 [2024-12-05 19:19:28.601172] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:11.072 [2024-12-05 19:19:28.601177] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:29:11.072 [2024-12-05 19:19:28.601185] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:29:11.072 [2024-12-05 19:19:28.601190] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:11.072 [2024-12-05 19:19:28.601198] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:29:11.072 [2024-12-05 19:19:28.601205] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:29:11.072 [2024-12-05 19:19:28.601212] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:11.072 [2024-12-05 19:19:28.601217] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:29:11.072 [2024-12-05 19:19:28.601225] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:29:11.072 [2024-12-05 19:19:28.601230] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:11.072 [2024-12-05 19:19:28.601238] ftl_layout.c: 775:ftl_layout_dump: 
*NOTICE*: [FTL][ftl] Base device layout: 00:29:11.072 [2024-12-05 19:19:28.601245] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:29:11.072 [2024-12-05 19:19:28.601268] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:29:11.072 [2024-12-05 19:19:28.601275] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:11.072 [2024-12-05 19:19:28.601284] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:29:11.072 [2024-12-05 19:19:28.601295] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:29:11.072 [2024-12-05 19:19:28.601303] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:29:11.072 [2024-12-05 19:19:28.601310] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:29:11.072 [2024-12-05 19:19:28.601317] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:29:11.072 [2024-12-05 19:19:28.601323] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:29:11.072 [2024-12-05 19:19:28.601333] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:29:11.072 [2024-12-05 19:19:28.601343] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:11.072 [2024-12-05 19:19:28.601352] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:29:11.073 [2024-12-05 19:19:28.601358] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:29:11.073 [2024-12-05 19:19:28.601377] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:29:11.073 [2024-12-05 19:19:28.601385] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:29:11.073 [2024-12-05 19:19:28.601393] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:29:11.073 [2024-12-05 19:19:28.601399] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:29:11.073 [2024-12-05 19:19:28.601409] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:29:11.073 [2024-12-05 19:19:28.601415] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:29:11.073 [2024-12-05 19:19:28.601422] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:29:11.073 [2024-12-05 19:19:28.601429] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:29:11.073 [2024-12-05 19:19:28.601436] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:29:11.073 [2024-12-05 19:19:28.601443] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:29:11.073 [2024-12-05 19:19:28.601450] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region 
type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:29:11.073 [2024-12-05 19:19:28.601457] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:29:11.073 [2024-12-05 19:19:28.601465] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:29:11.073 [2024-12-05 19:19:28.601472] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:11.073 [2024-12-05 19:19:28.601480] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:29:11.073 [2024-12-05 19:19:28.601486] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:29:11.073 [2024-12-05 19:19:28.601494] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:29:11.073 [2024-12-05 19:19:28.601499] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:29:11.073 [2024-12-05 19:19:28.601508] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:11.073 [2024-12-05 19:19:28.601515] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:29:11.073 [2024-12-05 19:19:28.601525] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.606 ms 00:29:11.073 [2024-12-05 19:19:28.601541] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:11.073 [2024-12-05 19:19:28.601572] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] NV cache data region needs scrubbing, this may take a while. 
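The two layout dumps above agree once units are converted: the superblock records extents in 4096-byte blocks (the block size reported for basen1), so the l2p entry type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 spans 0xe80 = 3712 blocks, i.e. 3712 * 4096 B = 14.50 MiB, exactly the "Region l2p ... blocks: 14.50 MiB" line in the NV cache layout. A small hypothetical helper (not part of the test scripts) that decodes any blk_sz field the same way:

    # Hypothetical decoder for the blk_sz fields in the superblock dump,
    # assuming the 4096-byte block size reported for basen1 above.
    blk_sz_mib() {
        local blocks=$(( $1 ))                 # accepts hex such as 0xe80
        echo "scale=2; $blocks * 4096 / 1048576" | bc
    }
    blk_sz_mib 0xe80   # -> 14.50  (the l2p region)
    blk_sz_mib 0x800   # -> 8.00   (each p2l checkpoint region)

The NV cache scrub announced in the preceding record then proceeds below.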
00:29:11.073 [2024-12-05 19:19:28.601580] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 5 chunks 00:29:15.289 [2024-12-05 19:19:32.151514] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:15.289 [2024-12-05 19:19:32.151826] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:29:15.289 [2024-12-05 19:19:32.151854] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3549.919 ms 00:29:15.289 [2024-12-05 19:19:32.151863] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:15.289 [2024-12-05 19:19:32.167763] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:15.289 [2024-12-05 19:19:32.167816] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:29:15.289 [2024-12-05 19:19:32.167830] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 15.785 ms 00:29:15.289 [2024-12-05 19:19:32.167837] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:15.289 [2024-12-05 19:19:32.167933] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:15.289 [2024-12-05 19:19:32.167943] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:29:15.289 [2024-12-05 19:19:32.167954] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.015 ms 00:29:15.289 [2024-12-05 19:19:32.167964] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:15.289 [2024-12-05 19:19:32.182162] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:15.289 [2024-12-05 19:19:32.182205] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:29:15.289 [2024-12-05 19:19:32.182217] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.168 ms 00:29:15.289 [2024-12-05 19:19:32.182228] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:15.289 [2024-12-05 19:19:32.182285] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:15.289 [2024-12-05 19:19:32.182293] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:29:15.289 [2024-12-05 19:19:32.182302] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:29:15.289 [2024-12-05 19:19:32.182312] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:15.289 [2024-12-05 19:19:32.182903] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:15.289 [2024-12-05 19:19:32.182940] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:29:15.289 [2024-12-05 19:19:32.182956] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.547 ms 00:29:15.289 [2024-12-05 19:19:32.182965] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:15.289 [2024-12-05 19:19:32.183012] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:15.289 [2024-12-05 19:19:32.183022] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:29:15.289 [2024-12-05 19:19:32.183039] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.020 ms 00:29:15.289 [2024-12-05 19:19:32.183047] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:15.289 [2024-12-05 19:19:32.191244] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:15.289 [2024-12-05 19:19:32.191289] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:29:15.289 [2024-12-05 19:19:32.191299] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl] duration: 8.178 ms 00:29:15.289 [2024-12-05 19:19:32.191306] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:15.290 [2024-12-05 19:19:32.213665] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:29:15.290 [2024-12-05 19:19:32.215063] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:15.290 [2024-12-05 19:19:32.215096] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:29:15.290 [2024-12-05 19:19:32.215106] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 23.699 ms 00:29:15.290 [2024-12-05 19:19:32.215114] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:15.290 [2024-12-05 19:19:32.232223] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:15.290 [2024-12-05 19:19:32.232269] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear L2P 00:29:15.290 [2024-12-05 19:19:32.232281] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 17.082 ms 00:29:15.290 [2024-12-05 19:19:32.232292] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:15.290 [2024-12-05 19:19:32.232364] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:15.290 [2024-12-05 19:19:32.232375] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:29:15.290 [2024-12-05 19:19:32.232382] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.042 ms 00:29:15.290 [2024-12-05 19:19:32.232390] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:15.290 [2024-12-05 19:19:32.235826] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:15.290 [2024-12-05 19:19:32.235942] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Save initial band info metadata 00:29:15.290 [2024-12-05 19:19:32.235957] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.421 ms 00:29:15.290 [2024-12-05 19:19:32.235965] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:15.290 [2024-12-05 19:19:32.238647] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:15.290 [2024-12-05 19:19:32.238677] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Save initial chunk info metadata 00:29:15.290 [2024-12-05 19:19:32.238685] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.656 ms 00:29:15.290 [2024-12-05 19:19:32.238692] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:15.290 [2024-12-05 19:19:32.238931] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:15.290 [2024-12-05 19:19:32.238941] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:29:15.290 [2024-12-05 19:19:32.238948] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.214 ms 00:29:15.290 [2024-12-05 19:19:32.238957] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:15.290 [2024-12-05 19:19:32.266109] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:15.290 [2024-12-05 19:19:32.266234] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Wipe P2L region 00:29:15.290 [2024-12-05 19:19:32.266309] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 27.137 ms 00:29:15.290 [2024-12-05 19:19:32.266332] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:15.290 [2024-12-05 19:19:32.270362] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 
00:29:15.290 [2024-12-05 19:19:32.270467] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim map 00:29:15.290 [2024-12-05 19:19:32.270512] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.992 ms 00:29:15.290 [2024-12-05 19:19:32.270533] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:15.290 [2024-12-05 19:19:32.274509] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:15.290 [2024-12-05 19:19:32.274603] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim log 00:29:15.290 [2024-12-05 19:19:32.274645] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.948 ms 00:29:15.290 [2024-12-05 19:19:32.274664] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:15.290 [2024-12-05 19:19:32.278802] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:15.290 [2024-12-05 19:19:32.278895] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:29:15.290 [2024-12-05 19:19:32.278936] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.111 ms 00:29:15.290 [2024-12-05 19:19:32.278957] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:15.290 [2024-12-05 19:19:32.278990] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:15.290 [2024-12-05 19:19:32.279009] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:29:15.290 [2024-12-05 19:19:32.279025] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.009 ms 00:29:15.290 [2024-12-05 19:19:32.279041] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:15.290 [2024-12-05 19:19:32.279105] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:15.290 [2024-12-05 19:19:32.279128] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:29:15.290 [2024-12-05 19:19:32.279181] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.028 ms 00:29:15.290 [2024-12-05 19:19:32.279195] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:15.290 [2024-12-05 19:19:32.280040] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 3689.857 ms, result 0 00:29:15.290 { 00:29:15.290 "name": "ftl", 00:29:15.290 "uuid": "a211da58-b31b-4752-af2b-fb84d4161411" 00:29:15.290 } 00:29:15.290 19:19:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport --trtype TCP 00:29:15.290 [2024-12-05 19:19:32.489822] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:15.290 19:19:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@122 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2018-09.io.spdk:cnode0 -a -m 1 00:29:15.290 19:19:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@123 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2018-09.io.spdk:cnode0 ftl 00:29:15.552 [2024-12-05 19:19:32.894113] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:29:15.552 19:19:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@124 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2018-09.io.spdk:cnode0 -t TCP -f ipv4 -s 4420 -a 127.0.0.1 00:29:15.552 [2024-12-05 19:19:33.094408] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:29:15.813 19:19:33 
ftl.ftl_upgrade_shutdown -- ftl/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:29:16.074 Fill FTL, iteration 1 00:29:16.075 19:19:33 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@28 -- # size=1073741824 00:29:16.075 19:19:33 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@29 -- # seek=0 00:29:16.075 19:19:33 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@30 -- # skip=0 00:29:16.075 19:19:33 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@31 -- # bs=1048576 00:29:16.075 19:19:33 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@32 -- # count=1024 00:29:16.075 19:19:33 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@33 -- # iterations=2 00:29:16.075 19:19:33 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@34 -- # qd=2 00:29:16.075 19:19:33 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@35 -- # sums=() 00:29:16.075 19:19:33 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i = 0 )) 00:29:16.075 19:19:33 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:29:16.075 19:19:33 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 1' 00:29:16.075 19:19:33 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:29:16.075 19:19:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:29:16.075 19:19:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:29:16.075 19:19:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:29:16.075 19:19:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@157 -- # [[ -z ftl ]] 00:29:16.075 19:19:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@163 -- # spdk_ini_pid=93825 00:29:16.075 19:19:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@162 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock 00:29:16.075 19:19:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@164 -- # export spdk_ini_pid 00:29:16.075 19:19:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@165 -- # waitforlisten 93825 /var/tmp/spdk.tgt.sock 00:29:16.075 19:19:33 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 93825 ']' 00:29:16.075 19:19:33 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.tgt.sock 00:29:16.075 19:19:33 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:16.075 19:19:33 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock...' 00:29:16.075 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock... 00:29:16.075 19:19:33 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:16.075 19:19:33 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:29:16.075 [2024-12-05 19:19:33.507755] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 22.11.4 initialization... 
00:29:16.075 [2024-12-05 19:19:33.508023] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93825 ] 00:29:16.337 [2024-12-05 19:19:33.653541] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:16.337 [2024-12-05 19:19:33.672401] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:16.910 19:19:34 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:16.910 19:19:34 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:29:16.910 19:19:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@167 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock bdev_nvme_attach_controller -b ftl -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2018-09.io.spdk:cnode0 00:29:17.172 ftln1 00:29:17.172 19:19:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@171 -- # echo '{"subsystems": [' 00:29:17.172 19:19:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@172 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock save_subsystem_config -n bdev 00:29:17.434 19:19:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@173 -- # echo ']}' 00:29:17.434 19:19:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@176 -- # killprocess 93825 00:29:17.434 19:19:34 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # '[' -z 93825 ']' 00:29:17.434 19:19:34 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # kill -0 93825 00:29:17.434 19:19:34 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # uname 00:29:17.434 19:19:34 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:17.434 19:19:34 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 93825 00:29:17.434 killing process with pid 93825 00:29:17.434 19:19:34 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:29:17.434 19:19:34 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:29:17.434 19:19:34 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@972 -- # echo 'killing process with pid 93825' 00:29:17.434 19:19:34 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@973 -- # kill 93825 00:29:17.434 19:19:34 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@978 -- # wait 93825 00:29:17.695 19:19:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@177 -- # unset spdk_ini_pid 00:29:17.695 19:19:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:29:17.695 [2024-12-05 19:19:35.143058] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 22.11.4 initialization... 
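The fill above runs in a second SPDK app (spdk_dd on its own RPC socket) that first attaches the FTL bdev exported over NVMe/TCP, exposing its namespace locally as ftln1. Condensed from the trace, the initiator-side steps amount to the following sketch (paths, sockets, and flags exactly as in this run):

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # Attach the exported subsystem; the namespace bdev appears as ftln1
    $RPC -s /var/tmp/spdk.tgt.sock bdev_nvme_attach_controller -b ftl \
        -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2018-09.io.spdk:cnode0
    # Fill the first GiB (1048576 B x 1024 = 1073741824 B) at queue depth 2
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' \
        --rpc-socket=/var/tmp/spdk.tgt.sock \
        --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json \
        --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0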
00:29:17.695 [2024-12-05 19:19:35.143172] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93854 ] 00:29:17.955 [2024-12-05 19:19:35.287242] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:17.955 [2024-12-05 19:19:35.306134] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:19.343  [2024-12-05T19:19:37.848Z] Copying: 188/1024 [MB] (188 MBps) [2024-12-05T19:19:38.793Z] Copying: 382/1024 [MB] (194 MBps) [2024-12-05T19:19:39.739Z] Copying: 561/1024 [MB] (179 MBps) [2024-12-05T19:19:40.682Z] Copying: 734/1024 [MB] (173 MBps) [2024-12-05T19:19:41.255Z] Copying: 915/1024 [MB] (181 MBps) [2024-12-05T19:19:41.516Z] Copying: 1024/1024 [MB] (average 181 MBps) 00:29:23.957 00:29:23.957 Calculate MD5 checksum, iteration 1 00:29:23.957 19:19:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@41 -- # seek=1024 00:29:23.957 19:19:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 1' 00:29:23.957 19:19:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:29:23.957 19:19:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:29:23.957 19:19:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:29:23.957 19:19:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:29:23.957 19:19:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:29:23.957 19:19:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:29:23.957 [2024-12-05 19:19:41.423783] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 22.11.4 initialization... 
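Each fill is then verified by reading the same GiB back into a scratch file and hashing it; the digest is kept in sums[i] for comparison after the upgrade/restart cycle. The verification step traced above reduces to this sketch:

    # Read back the GiB just written, then record its MD5 (iteration 1)
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' \
        --rpc-socket=/var/tmp/spdk.tgt.sock \
        --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json \
        --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file \
        --bs=1048576 --count=1024 --qd=2 --skip=0
    sums[i]=$(md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file | cut -f1 -d' ')

The value recorded for iteration 1 is de3a7ca253de27fb34e56f009bdfdb64, as shown a few lines below.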
00:29:23.957 [2024-12-05 19:19:41.423925] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93919 ] 00:29:24.219 [2024-12-05 19:19:41.574094] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:24.219 [2024-12-05 19:19:41.602718] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:25.606  [2024-12-05T19:19:43.739Z] Copying: 597/1024 [MB] (597 MBps) [2024-12-05T19:19:44.001Z] Copying: 1024/1024 [MB] (average 551 MBps) 00:29:26.442 00:29:26.442 19:19:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@45 -- # skip=1024 00:29:26.442 19:19:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:29:28.988 19:19:46 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:29:28.988 Fill FTL, iteration 2 00:29:28.988 19:19:46 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=de3a7ca253de27fb34e56f009bdfdb64 00:29:28.988 19:19:46 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:29:28.988 19:19:46 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:29:28.988 19:19:46 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 2' 00:29:28.988 19:19:46 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:29:28.988 19:19:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:29:28.988 19:19:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:29:28.988 19:19:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:29:28.988 19:19:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:29:28.988 19:19:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:29:28.988 [2024-12-05 19:19:46.105597] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 22.11.4 initialization... 
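Iteration 2 above repeats the fill one GiB further in: seek and skip advance by 1024 MiB-sized blocks per pass, so each iteration writes and verifies a distinct region. From the variables traced earlier (seek, skip, bs, count, iterations=2, qd=2, sums=()), the driving loop is plausibly of this shape; tcp_dd is the harness wrapper seen in the trace, while $testdir is shorthand introduced here, not a name from the log:

    iterations=2; seek=0; skip=0; sums=()
    for (( i = 0; i < iterations; i++ )); do
        tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=$seek
        (( seek += 1024 ))
        tcp_dd --ib=ftln1 --of=$testdir/file --bs=1048576 --count=1024 --qd=2 --skip=$skip
        (( skip += 1024 ))
        sums[i]=$(md5sum $testdir/file | cut -f1 -d' ')
    done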
00:29:28.988 [2024-12-05 19:19:46.105712] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93975 ] 00:29:28.988 [2024-12-05 19:19:46.251259] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:28.988 [2024-12-05 19:19:46.270558] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:29.934  [2024-12-05T19:19:48.883Z] Copying: 175/1024 [MB] (175 MBps) [2024-12-05T19:19:49.457Z] Copying: 359/1024 [MB] (184 MBps) [2024-12-05T19:19:50.843Z] Copying: 539/1024 [MB] (180 MBps) [2024-12-05T19:19:51.786Z] Copying: 737/1024 [MB] (198 MBps) [2024-12-05T19:19:52.359Z] Copying: 909/1024 [MB] (172 MBps) [2024-12-05T19:19:52.359Z] Copying: 1024/1024 [MB] (average 181 MBps) 00:29:34.800 00:29:34.800 19:19:52 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@41 -- # seek=2048 00:29:34.800 19:19:52 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 2' 00:29:34.800 Calculate MD5 checksum, iteration 2 00:29:34.800 19:19:52 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:29:34.800 19:19:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:29:34.800 19:19:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:29:34.800 19:19:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:29:34.801 19:19:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:29:34.801 19:19:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:29:34.801 [2024-12-05 19:19:52.310855] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 22.11.4 initialization... 
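Once the second checksum is recorded just below, the test switches from data-path work to FTL property handling over RPC. Condensed from the traces that follow, the key pair of calls enables prep_upgrade_on_shutdown and counts the cache chunks still holding data (the jq filter is copied verbatim from the trace):

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $RPC bdev_ftl_set_property -b ftl -p prep_upgrade_on_shutdown -v true
    # Count non-empty cache chunks; the test requires this to be non-zero
    $RPC bdev_ftl_get_properties -b ftl | jq \
        '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length'

In this run the count comes back as 3: two CLOSED chunks at utilization 1.0 plus one OPEN chunk at 0.001953125.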
00:29:34.801 [2024-12-05 19:19:52.311081] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94039 ] 00:29:35.063 [2024-12-05 19:19:52.457354] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:35.063 [2024-12-05 19:19:52.476415] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:36.451  [2024-12-05T19:19:54.581Z] Copying: 680/1024 [MB] (680 MBps) [2024-12-05T19:19:55.554Z] Copying: 1024/1024 [MB] (average 646 MBps) 00:29:37.995 00:29:37.995 19:19:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@45 -- # skip=2048 00:29:37.995 19:19:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:29:39.914 19:19:57 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:29:39.914 19:19:57 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=d8e23664d5538f6f2346e07126c3ab5f 00:29:39.914 19:19:57 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:29:39.914 19:19:57 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:29:39.914 19:19:57 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:29:40.175 [2024-12-05 19:19:57.602035] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:40.175 [2024-12-05 19:19:57.602214] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:29:40.175 [2024-12-05 19:19:57.602233] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:29:40.175 [2024-12-05 19:19:57.602247] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:40.175 [2024-12-05 19:19:57.602283] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:40.175 [2024-12-05 19:19:57.602290] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:29:40.175 [2024-12-05 19:19:57.602297] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:29:40.175 [2024-12-05 19:19:57.602303] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:40.175 [2024-12-05 19:19:57.602320] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:40.175 [2024-12-05 19:19:57.602326] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:29:40.175 [2024-12-05 19:19:57.602332] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:29:40.175 [2024-12-05 19:19:57.602341] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:40.175 [2024-12-05 19:19:57.602397] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.344 ms, result 0 00:29:40.175 true 00:29:40.175 19:19:57 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:29:40.436 { 00:29:40.436 "name": "ftl", 00:29:40.436 "properties": [ 00:29:40.436 { 00:29:40.436 "name": "superblock_version", 00:29:40.436 "value": 5, 00:29:40.436 "read-only": true 00:29:40.436 }, 00:29:40.436 { 00:29:40.436 "name": "base_device", 00:29:40.436 "bands": [ 00:29:40.436 { 00:29:40.436 "id": 0, 00:29:40.436 "state": "FREE", 00:29:40.436 "validity": 0.0 
00:29:40.436 }, 00:29:40.436 { 00:29:40.436 "id": 1, 00:29:40.436 "state": "FREE", 00:29:40.436 "validity": 0.0 00:29:40.436 }, 00:29:40.436 { 00:29:40.436 "id": 2, 00:29:40.436 "state": "FREE", 00:29:40.436 "validity": 0.0 00:29:40.436 }, 00:29:40.436 { 00:29:40.436 "id": 3, 00:29:40.436 "state": "FREE", 00:29:40.436 "validity": 0.0 00:29:40.436 }, 00:29:40.436 { 00:29:40.436 "id": 4, 00:29:40.436 "state": "FREE", 00:29:40.436 "validity": 0.0 00:29:40.436 }, 00:29:40.436 { 00:29:40.436 "id": 5, 00:29:40.436 "state": "FREE", 00:29:40.436 "validity": 0.0 00:29:40.436 }, 00:29:40.436 { 00:29:40.436 "id": 6, 00:29:40.436 "state": "FREE", 00:29:40.436 "validity": 0.0 00:29:40.436 }, 00:29:40.436 { 00:29:40.436 "id": 7, 00:29:40.436 "state": "FREE", 00:29:40.436 "validity": 0.0 00:29:40.436 }, 00:29:40.436 { 00:29:40.436 "id": 8, 00:29:40.436 "state": "FREE", 00:29:40.436 "validity": 0.0 00:29:40.436 }, 00:29:40.436 { 00:29:40.436 "id": 9, 00:29:40.436 "state": "FREE", 00:29:40.436 "validity": 0.0 00:29:40.436 }, 00:29:40.436 { 00:29:40.436 "id": 10, 00:29:40.436 "state": "FREE", 00:29:40.436 "validity": 0.0 00:29:40.436 }, 00:29:40.436 { 00:29:40.436 "id": 11, 00:29:40.436 "state": "FREE", 00:29:40.436 "validity": 0.0 00:29:40.436 }, 00:29:40.436 { 00:29:40.436 "id": 12, 00:29:40.436 "state": "FREE", 00:29:40.436 "validity": 0.0 00:29:40.436 }, 00:29:40.436 { 00:29:40.436 "id": 13, 00:29:40.436 "state": "FREE", 00:29:40.436 "validity": 0.0 00:29:40.436 }, 00:29:40.436 { 00:29:40.436 "id": 14, 00:29:40.436 "state": "FREE", 00:29:40.436 "validity": 0.0 00:29:40.436 }, 00:29:40.436 { 00:29:40.436 "id": 15, 00:29:40.436 "state": "FREE", 00:29:40.436 "validity": 0.0 00:29:40.436 }, 00:29:40.436 { 00:29:40.436 "id": 16, 00:29:40.436 "state": "FREE", 00:29:40.436 "validity": 0.0 00:29:40.436 }, 00:29:40.436 { 00:29:40.436 "id": 17, 00:29:40.436 "state": "FREE", 00:29:40.436 "validity": 0.0 00:29:40.436 } 00:29:40.436 ], 00:29:40.436 "read-only": true 00:29:40.436 }, 00:29:40.436 { 00:29:40.436 "name": "cache_device", 00:29:40.436 "type": "bdev", 00:29:40.436 "chunks": [ 00:29:40.436 { 00:29:40.436 "id": 0, 00:29:40.436 "state": "INACTIVE", 00:29:40.436 "utilization": 0.0 00:29:40.436 }, 00:29:40.436 { 00:29:40.436 "id": 1, 00:29:40.436 "state": "CLOSED", 00:29:40.436 "utilization": 1.0 00:29:40.436 }, 00:29:40.436 { 00:29:40.436 "id": 2, 00:29:40.436 "state": "CLOSED", 00:29:40.436 "utilization": 1.0 00:29:40.436 }, 00:29:40.436 { 00:29:40.436 "id": 3, 00:29:40.436 "state": "OPEN", 00:29:40.436 "utilization": 0.001953125 00:29:40.436 }, 00:29:40.436 { 00:29:40.436 "id": 4, 00:29:40.436 "state": "OPEN", 00:29:40.436 "utilization": 0.0 00:29:40.437 } 00:29:40.437 ], 00:29:40.437 "read-only": true 00:29:40.437 }, 00:29:40.437 { 00:29:40.437 "name": "verbose_mode", 00:29:40.437 "value": true, 00:29:40.437 "unit": "", 00:29:40.437 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:29:40.437 }, 00:29:40.437 { 00:29:40.437 "name": "prep_upgrade_on_shutdown", 00:29:40.437 "value": false, 00:29:40.437 "unit": "", 00:29:40.437 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:29:40.437 } 00:29:40.437 ] 00:29:40.437 } 00:29:40.437 19:19:57 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p prep_upgrade_on_shutdown -v true 00:29:40.697 [2024-12-05 19:19:58.026374] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 
00:29:40.697 [2024-12-05 19:19:58.026403] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:29:40.697 [2024-12-05 19:19:58.026411] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:29:40.697 [2024-12-05 19:19:58.026416] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:40.697 [2024-12-05 19:19:58.026432] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:40.697 [2024-12-05 19:19:58.026439] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:29:40.697 [2024-12-05 19:19:58.026444] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:29:40.697 [2024-12-05 19:19:58.026450] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:40.697 [2024-12-05 19:19:58.026465] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:40.697 [2024-12-05 19:19:58.026470] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:29:40.697 [2024-12-05 19:19:58.026476] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:29:40.697 [2024-12-05 19:19:58.026482] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:40.697 [2024-12-05 19:19:58.026528] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.137 ms, result 0 00:29:40.697 true 00:29:40.697 19:19:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # ftl_get_properties 00:29:40.697 19:19:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:29:40.697 19:19:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:29:40.697 19:19:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # used=3 00:29:40.697 19:19:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@64 -- # [[ 3 -eq 0 ]] 00:29:40.697 19:19:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:29:40.957 [2024-12-05 19:19:58.438697] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:40.957 [2024-12-05 19:19:58.438724] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:29:40.957 [2024-12-05 19:19:58.438731] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:29:40.957 [2024-12-05 19:19:58.438737] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:40.957 [2024-12-05 19:19:58.438753] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:40.957 [2024-12-05 19:19:58.438759] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:29:40.957 [2024-12-05 19:19:58.438765] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:29:40.957 [2024-12-05 19:19:58.438770] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:40.957 [2024-12-05 19:19:58.438784] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:40.957 [2024-12-05 19:19:58.438789] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:29:40.957 [2024-12-05 19:19:58.438795] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:29:40.957 [2024-12-05 19:19:58.438800] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl] status: 0 00:29:40.957 [2024-12-05 19:19:58.438839] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.130 ms, result 0 00:29:40.957 true 00:29:40.957 19:19:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:29:41.217 { 00:29:41.217 "name": "ftl", 00:29:41.217 "properties": [ 00:29:41.217 { 00:29:41.217 "name": "superblock_version", 00:29:41.217 "value": 5, 00:29:41.217 "read-only": true 00:29:41.217 }, 00:29:41.217 { 00:29:41.217 "name": "base_device", 00:29:41.217 "bands": [ 00:29:41.217 { 00:29:41.217 "id": 0, 00:29:41.217 "state": "FREE", 00:29:41.217 "validity": 0.0 00:29:41.217 }, 00:29:41.217 { 00:29:41.217 "id": 1, 00:29:41.217 "state": "FREE", 00:29:41.217 "validity": 0.0 00:29:41.217 }, 00:29:41.217 { 00:29:41.217 "id": 2, 00:29:41.217 "state": "FREE", 00:29:41.217 "validity": 0.0 00:29:41.217 }, 00:29:41.217 { 00:29:41.217 "id": 3, 00:29:41.217 "state": "FREE", 00:29:41.217 "validity": 0.0 00:29:41.217 }, 00:29:41.217 { 00:29:41.217 "id": 4, 00:29:41.217 "state": "FREE", 00:29:41.217 "validity": 0.0 00:29:41.218 }, 00:29:41.218 { 00:29:41.218 "id": 5, 00:29:41.218 "state": "FREE", 00:29:41.218 "validity": 0.0 00:29:41.218 }, 00:29:41.218 { 00:29:41.218 "id": 6, 00:29:41.218 "state": "FREE", 00:29:41.218 "validity": 0.0 00:29:41.218 }, 00:29:41.218 { 00:29:41.218 "id": 7, 00:29:41.218 "state": "FREE", 00:29:41.218 "validity": 0.0 00:29:41.218 }, 00:29:41.218 { 00:29:41.218 "id": 8, 00:29:41.218 "state": "FREE", 00:29:41.218 "validity": 0.0 00:29:41.218 }, 00:29:41.218 { 00:29:41.218 "id": 9, 00:29:41.218 "state": "FREE", 00:29:41.218 "validity": 0.0 00:29:41.218 }, 00:29:41.218 { 00:29:41.218 "id": 10, 00:29:41.218 "state": "FREE", 00:29:41.218 "validity": 0.0 00:29:41.218 }, 00:29:41.218 { 00:29:41.218 "id": 11, 00:29:41.218 "state": "FREE", 00:29:41.218 "validity": 0.0 00:29:41.218 }, 00:29:41.218 { 00:29:41.218 "id": 12, 00:29:41.218 "state": "FREE", 00:29:41.218 "validity": 0.0 00:29:41.218 }, 00:29:41.218 { 00:29:41.218 "id": 13, 00:29:41.218 "state": "FREE", 00:29:41.218 "validity": 0.0 00:29:41.218 }, 00:29:41.218 { 00:29:41.218 "id": 14, 00:29:41.218 "state": "FREE", 00:29:41.218 "validity": 0.0 00:29:41.218 }, 00:29:41.218 { 00:29:41.218 "id": 15, 00:29:41.218 "state": "FREE", 00:29:41.218 "validity": 0.0 00:29:41.218 }, 00:29:41.218 { 00:29:41.218 "id": 16, 00:29:41.218 "state": "FREE", 00:29:41.218 "validity": 0.0 00:29:41.218 }, 00:29:41.218 { 00:29:41.218 "id": 17, 00:29:41.218 "state": "FREE", 00:29:41.218 "validity": 0.0 00:29:41.218 } 00:29:41.218 ], 00:29:41.218 "read-only": true 00:29:41.218 }, 00:29:41.218 { 00:29:41.218 "name": "cache_device", 00:29:41.218 "type": "bdev", 00:29:41.218 "chunks": [ 00:29:41.218 { 00:29:41.218 "id": 0, 00:29:41.218 "state": "INACTIVE", 00:29:41.218 "utilization": 0.0 00:29:41.218 }, 00:29:41.218 { 00:29:41.218 "id": 1, 00:29:41.218 "state": "CLOSED", 00:29:41.218 "utilization": 1.0 00:29:41.218 }, 00:29:41.218 { 00:29:41.218 "id": 2, 00:29:41.218 "state": "CLOSED", 00:29:41.218 "utilization": 1.0 00:29:41.218 }, 00:29:41.218 { 00:29:41.218 "id": 3, 00:29:41.218 "state": "OPEN", 00:29:41.218 "utilization": 0.001953125 00:29:41.218 }, 00:29:41.218 { 00:29:41.218 "id": 4, 00:29:41.218 "state": "OPEN", 00:29:41.218 "utilization": 0.0 00:29:41.218 } 00:29:41.218 ], 00:29:41.218 "read-only": true 00:29:41.218 }, 00:29:41.218 { 00:29:41.218 "name": "verbose_mode", 
00:29:41.218 "value": true, 00:29:41.218 "unit": "", 00:29:41.218 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:29:41.218 }, 00:29:41.218 { 00:29:41.218 "name": "prep_upgrade_on_shutdown", 00:29:41.218 "value": true, 00:29:41.218 "unit": "", 00:29:41.218 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:29:41.218 } 00:29:41.218 ] 00:29:41.218 } 00:29:41.218 19:19:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@74 -- # tcp_target_shutdown 00:29:41.218 19:19:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@130 -- # [[ -n 93705 ]] 00:29:41.218 19:19:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@131 -- # killprocess 93705 00:29:41.218 19:19:58 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # '[' -z 93705 ']' 00:29:41.218 19:19:58 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # kill -0 93705 00:29:41.218 19:19:58 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # uname 00:29:41.218 19:19:58 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:41.218 19:19:58 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 93705 00:29:41.218 killing process with pid 93705 00:29:41.218 19:19:58 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:41.218 19:19:58 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:41.218 19:19:58 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@972 -- # echo 'killing process with pid 93705' 00:29:41.218 19:19:58 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@973 -- # kill 93705 00:29:41.218 19:19:58 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@978 -- # wait 93705 00:29:41.479 [2024-12-05 19:19:58.808506] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_000 00:29:41.479 [2024-12-05 19:19:58.812644] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:41.479 [2024-12-05 19:19:58.812678] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:29:41.479 [2024-12-05 19:19:58.812688] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:29:41.479 [2024-12-05 19:19:58.812696] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:41.479 [2024-12-05 19:19:58.812715] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:29:41.479 [2024-12-05 19:19:58.813230] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:41.479 [2024-12-05 19:19:58.813271] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:29:41.479 [2024-12-05 19:19:58.813280] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.505 ms 00:29:41.479 [2024-12-05 19:19:58.813291] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:51.477 [2024-12-05 19:20:07.198788] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:51.477 [2024-12-05 19:20:07.198850] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:29:51.477 [2024-12-05 19:20:07.198864] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 8385.446 ms 00:29:51.477 [2024-12-05 19:20:07.198872] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:51.477 [2024-12-05 19:20:07.200398] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl] Action 00:29:51.477 [2024-12-05 19:20:07.200422] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:29:51.477 [2024-12-05 19:20:07.200430] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.512 ms 00:29:51.477 [2024-12-05 19:20:07.200437] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:51.477 [2024-12-05 19:20:07.201311] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:51.477 [2024-12-05 19:20:07.201446] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P trims 00:29:51.477 [2024-12-05 19:20:07.201460] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.853 ms 00:29:51.477 [2024-12-05 19:20:07.201467] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:51.477 [2024-12-05 19:20:07.203774] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:51.477 [2024-12-05 19:20:07.203802] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:29:51.477 [2024-12-05 19:20:07.203810] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.271 ms 00:29:51.477 [2024-12-05 19:20:07.203816] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:51.477 [2024-12-05 19:20:07.207364] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:51.477 [2024-12-05 19:20:07.207392] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata 00:29:51.477 [2024-12-05 19:20:07.207400] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.522 ms 00:29:51.477 [2024-12-05 19:20:07.207411] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:51.477 [2024-12-05 19:20:07.207476] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:51.477 [2024-12-05 19:20:07.207485] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:29:51.477 [2024-12-05 19:20:07.207492] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.040 ms 00:29:51.477 [2024-12-05 19:20:07.207499] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:51.477 [2024-12-05 19:20:07.209906] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:51.477 [2024-12-05 19:20:07.209931] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist band info metadata 00:29:51.477 [2024-12-05 19:20:07.209939] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.395 ms 00:29:51.477 [2024-12-05 19:20:07.209945] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:51.477 [2024-12-05 19:20:07.211918] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:51.477 [2024-12-05 19:20:07.211944] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist trim metadata 00:29:51.477 [2024-12-05 19:20:07.211951] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.949 ms 00:29:51.477 [2024-12-05 19:20:07.211957] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:51.477 [2024-12-05 19:20:07.213742] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:51.477 [2024-12-05 19:20:07.213768] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:29:51.477 [2024-12-05 19:20:07.213775] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.761 ms 00:29:51.477 [2024-12-05 19:20:07.213781] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:51.477 [2024-12-05 19:20:07.215422] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:51.477 [2024-12-05 19:20:07.215448] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:29:51.477 [2024-12-05 19:20:07.215455] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.596 ms 00:29:51.477 [2024-12-05 19:20:07.215461] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:51.477 [2024-12-05 19:20:07.215485] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:29:51.477 [2024-12-05 19:20:07.215496] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:29:51.477 [2024-12-05 19:20:07.215504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:29:51.477 [2024-12-05 19:20:07.215511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:29:51.477 [2024-12-05 19:20:07.215517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:29:51.477 [2024-12-05 19:20:07.215523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:29:51.477 [2024-12-05 19:20:07.215529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:29:51.477 [2024-12-05 19:20:07.215535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:29:51.477 [2024-12-05 19:20:07.215541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:29:51.477 [2024-12-05 19:20:07.215546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:29:51.477 [2024-12-05 19:20:07.215552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:29:51.477 [2024-12-05 19:20:07.215558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:29:51.477 [2024-12-05 19:20:07.215564] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:29:51.477 [2024-12-05 19:20:07.215570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:29:51.477 [2024-12-05 19:20:07.215575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:29:51.477 [2024-12-05 19:20:07.215581] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:29:51.477 [2024-12-05 19:20:07.215587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:29:51.477 [2024-12-05 19:20:07.215592] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:29:51.477 [2024-12-05 19:20:07.215598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:29:51.477 [2024-12-05 19:20:07.215607] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:29:51.477 [2024-12-05 19:20:07.215613] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: a211da58-b31b-4752-af2b-fb84d4161411 00:29:51.477 [2024-12-05 19:20:07.215620] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:29:51.477 [2024-12-05 19:20:07.215630] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: 
[FTL][ftl] total writes: 786752 00:29:51.477 [2024-12-05 19:20:07.215636] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 524288 00:29:51.477 [2024-12-05 19:20:07.215642] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: 1.5006 00:29:51.477 [2024-12-05 19:20:07.215648] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:29:51.477 [2024-12-05 19:20:07.215654] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:29:51.477 [2024-12-05 19:20:07.215661] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:29:51.477 [2024-12-05 19:20:07.215666] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:29:51.477 [2024-12-05 19:20:07.215672] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:29:51.477 [2024-12-05 19:20:07.215681] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:51.477 [2024-12-05 19:20:07.215689] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:29:51.477 [2024-12-05 19:20:07.215696] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.197 ms 00:29:51.477 [2024-12-05 19:20:07.215702] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:51.477 [2024-12-05 19:20:07.217421] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:51.477 [2024-12-05 19:20:07.217448] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P 00:29:51.477 [2024-12-05 19:20:07.217455] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.707 ms 00:29:51.477 [2024-12-05 19:20:07.217462] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:51.477 [2024-12-05 19:20:07.217545] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:51.477 [2024-12-05 19:20:07.217552] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:29:51.477 [2024-12-05 19:20:07.217559] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.070 ms 00:29:51.477 [2024-12-05 19:20:07.217565] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:51.477 [2024-12-05 19:20:07.223514] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:51.477 [2024-12-05 19:20:07.223650] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:29:51.477 [2024-12-05 19:20:07.223668] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:51.477 [2024-12-05 19:20:07.223677] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:51.477 [2024-12-05 19:20:07.223700] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:51.477 [2024-12-05 19:20:07.223708] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:29:51.477 [2024-12-05 19:20:07.223714] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:51.477 [2024-12-05 19:20:07.223720] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:51.477 [2024-12-05 19:20:07.223769] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:51.477 [2024-12-05 19:20:07.223778] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:29:51.477 [2024-12-05 19:20:07.223785] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:51.477 [2024-12-05 19:20:07.223793] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:51.477 [2024-12-05 19:20:07.223806] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:51.477 [2024-12-05 19:20:07.223813] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:29:51.477 [2024-12-05 19:20:07.223822] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:51.477 [2024-12-05 19:20:07.223828] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:51.478 [2024-12-05 19:20:07.234822] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:51.478 [2024-12-05 19:20:07.234955] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:29:51.478 [2024-12-05 19:20:07.234968] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:51.478 [2024-12-05 19:20:07.234975] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:51.478 [2024-12-05 19:20:07.243416] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:51.478 [2024-12-05 19:20:07.243448] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:29:51.478 [2024-12-05 19:20:07.243456] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:51.478 [2024-12-05 19:20:07.243463] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:51.478 [2024-12-05 19:20:07.243523] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:51.478 [2024-12-05 19:20:07.243536] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:29:51.478 [2024-12-05 19:20:07.243543] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:51.478 [2024-12-05 19:20:07.243555] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:51.478 [2024-12-05 19:20:07.243579] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:51.478 [2024-12-05 19:20:07.243587] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:29:51.478 [2024-12-05 19:20:07.243593] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:51.478 [2024-12-05 19:20:07.243600] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:51.478 [2024-12-05 19:20:07.243659] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:51.478 [2024-12-05 19:20:07.243666] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:29:51.478 [2024-12-05 19:20:07.243676] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:51.478 [2024-12-05 19:20:07.243682] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:51.478 [2024-12-05 19:20:07.243710] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:51.478 [2024-12-05 19:20:07.243718] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:29:51.478 [2024-12-05 19:20:07.243725] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:51.478 [2024-12-05 19:20:07.243732] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:51.478 [2024-12-05 19:20:07.243768] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:51.478 [2024-12-05 19:20:07.243775] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:29:51.478 [2024-12-05 19:20:07.243783] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:51.478 [2024-12-05 19:20:07.243791] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:51.478 
[2024-12-05 19:20:07.243831] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:51.478 [2024-12-05 19:20:07.243841] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:29:51.478 [2024-12-05 19:20:07.243847] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:51.478 [2024-12-05 19:20:07.243854] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:51.478 [2024-12-05 19:20:07.243969] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 8431.261 ms, result 0 00:29:53.391 19:20:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:29:53.391 19:20:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@75 -- # tcp_target_setup 00:29:53.391 19:20:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:29:53.391 19:20:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:29:53.391 19:20:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:29:53.391 19:20:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=94222 00:29:53.391 19:20:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:29:53.391 19:20:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 94222 00:29:53.391 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:53.391 19:20:10 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 94222 ']' 00:29:53.391 19:20:10 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:53.391 19:20:10 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:53.391 19:20:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:29:53.391 19:20:10 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:53.391 19:20:10 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:53.391 19:20:10 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:29:53.391 [2024-12-05 19:20:10.817355] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 22.11.4 initialization... 
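The 'FTL shutdown' sequence above (run with prep_upgrade_on_shutdown=true) persisted the L2P, band/chunk metadata, and superblock before exiting; its statistics also self-check, since the reported WAF of 1.5006 is simply total writes over user writes, 786752 / 524288 = 1.5006. The relaunch below then only needs the on-disk state plus the JSON captured earlier with save_config; as a sketch of what the harness does here:

    # Restart the target from the config saved before shutdown
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' \
        --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json &
    spdk_tgt_pid=$!
    # waitforlisten is the harness helper traced above; it polls the
    # default RPC socket (/var/tmp/spdk.sock) until the app answers
    waitforlisten "$spdk_tgt_pid"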
00:29:53.391 [2024-12-05 19:20:10.817843] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94222 ] 00:29:53.657 [2024-12-05 19:20:10.973314] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:53.657 [2024-12-05 19:20:11.014514] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:53.944 [2024-12-05 19:20:11.447014] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:29:53.944 [2024-12-05 19:20:11.447121] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:29:54.206 [2024-12-05 19:20:11.599892] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:54.206 [2024-12-05 19:20:11.599961] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:29:54.206 [2024-12-05 19:20:11.599982] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:29:54.206 [2024-12-05 19:20:11.599992] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:54.206 [2024-12-05 19:20:11.600054] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:54.206 [2024-12-05 19:20:11.600065] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:29:54.206 [2024-12-05 19:20:11.600079] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.040 ms 00:29:54.206 [2024-12-05 19:20:11.600087] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:54.206 [2024-12-05 19:20:11.600111] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:29:54.206 [2024-12-05 19:20:11.600427] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:29:54.206 [2024-12-05 19:20:11.600447] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:54.206 [2024-12-05 19:20:11.600461] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:29:54.206 [2024-12-05 19:20:11.600474] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.342 ms 00:29:54.206 [2024-12-05 19:20:11.600482] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:54.206 [2024-12-05 19:20:11.602790] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:29:54.206 [2024-12-05 19:20:11.607794] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:54.206 [2024-12-05 19:20:11.607861] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:29:54.206 [2024-12-05 19:20:11.607874] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 5.007 ms 00:29:54.206 [2024-12-05 19:20:11.607883] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:54.206 [2024-12-05 19:20:11.607975] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:54.206 [2024-12-05 19:20:11.607987] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:29:54.206 [2024-12-05 19:20:11.607996] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.038 ms 00:29:54.206 [2024-12-05 19:20:11.608004] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:54.206 [2024-12-05 19:20:11.619874] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:54.206 [2024-12-05 
19:20:11.619925] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:29:54.206 [2024-12-05 19:20:11.619938] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 11.796 ms 00:29:54.206 [2024-12-05 19:20:11.619947] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:54.206 [2024-12-05 19:20:11.620001] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:54.206 [2024-12-05 19:20:11.620011] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:29:54.206 [2024-12-05 19:20:11.620021] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.029 ms 00:29:54.206 [2024-12-05 19:20:11.620029] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:54.206 [2024-12-05 19:20:11.620103] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:54.206 [2024-12-05 19:20:11.620124] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:29:54.206 [2024-12-05 19:20:11.620137] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.011 ms 00:29:54.206 [2024-12-05 19:20:11.620146] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:54.206 [2024-12-05 19:20:11.620175] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:29:54.206 [2024-12-05 19:20:11.623036] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:54.206 [2024-12-05 19:20:11.623082] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:29:54.206 [2024-12-05 19:20:11.623093] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.867 ms 00:29:54.206 [2024-12-05 19:20:11.623102] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:54.206 [2024-12-05 19:20:11.623139] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:54.206 [2024-12-05 19:20:11.623153] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:29:54.206 [2024-12-05 19:20:11.623162] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:29:54.206 [2024-12-05 19:20:11.623170] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:54.206 [2024-12-05 19:20:11.623198] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:29:54.206 [2024-12-05 19:20:11.623227] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x150 bytes 00:29:54.206 [2024-12-05 19:20:11.623285] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:29:54.206 [2024-12-05 19:20:11.623303] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x190 bytes 00:29:54.206 [2024-12-05 19:20:11.623424] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:29:54.206 [2024-12-05 19:20:11.623438] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:29:54.206 [2024-12-05 19:20:11.623452] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:29:54.206 [2024-12-05 19:20:11.623464] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:29:54.206 [2024-12-05 19:20:11.623474] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device 
capacity: 5120.00 MiB 00:29:54.206 [2024-12-05 19:20:11.623482] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:29:54.206 [2024-12-05 19:20:11.623490] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:29:54.206 [2024-12-05 19:20:11.623506] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:29:54.206 [2024-12-05 19:20:11.623514] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:29:54.206 [2024-12-05 19:20:11.623524] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:54.206 [2024-12-05 19:20:11.623536] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:29:54.206 [2024-12-05 19:20:11.623547] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.331 ms 00:29:54.206 [2024-12-05 19:20:11.623559] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:54.206 [2024-12-05 19:20:11.623649] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:54.206 [2024-12-05 19:20:11.623664] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:29:54.206 [2024-12-05 19:20:11.623675] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.073 ms 00:29:54.206 [2024-12-05 19:20:11.623685] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:54.206 [2024-12-05 19:20:11.623794] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:29:54.206 [2024-12-05 19:20:11.623808] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:29:54.206 [2024-12-05 19:20:11.623818] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:29:54.206 [2024-12-05 19:20:11.623832] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:54.206 [2024-12-05 19:20:11.623842] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:29:54.206 [2024-12-05 19:20:11.623850] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:29:54.206 [2024-12-05 19:20:11.623859] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:29:54.206 [2024-12-05 19:20:11.623866] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:29:54.206 [2024-12-05 19:20:11.623876] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:29:54.206 [2024-12-05 19:20:11.623887] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:54.206 [2024-12-05 19:20:11.623896] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:29:54.206 [2024-12-05 19:20:11.623904] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:29:54.206 [2024-12-05 19:20:11.623912] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:54.206 [2024-12-05 19:20:11.623920] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:29:54.206 [2024-12-05 19:20:11.623946] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:29:54.206 [2024-12-05 19:20:11.623956] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:54.206 [2024-12-05 19:20:11.623964] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:29:54.206 [2024-12-05 19:20:11.623974] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:29:54.206 [2024-12-05 19:20:11.623984] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:54.206 [2024-12-05 19:20:11.623993] 
ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:29:54.206 [2024-12-05 19:20:11.624001] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:29:54.206 [2024-12-05 19:20:11.624010] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:29:54.206 [2024-12-05 19:20:11.624018] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:29:54.206 [2024-12-05 19:20:11.624026] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:29:54.206 [2024-12-05 19:20:11.624034] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:29:54.206 [2024-12-05 19:20:11.624044] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:29:54.206 [2024-12-05 19:20:11.624052] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:29:54.206 [2024-12-05 19:20:11.624059] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:29:54.206 [2024-12-05 19:20:11.624066] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:29:54.207 [2024-12-05 19:20:11.624072] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:29:54.207 [2024-12-05 19:20:11.624083] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:29:54.207 [2024-12-05 19:20:11.624092] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:29:54.207 [2024-12-05 19:20:11.624099] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:29:54.207 [2024-12-05 19:20:11.624106] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:54.207 [2024-12-05 19:20:11.624113] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:29:54.207 [2024-12-05 19:20:11.624120] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:29:54.207 [2024-12-05 19:20:11.624126] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:54.207 [2024-12-05 19:20:11.624132] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:29:54.207 [2024-12-05 19:20:11.624141] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:29:54.207 [2024-12-05 19:20:11.624148] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:54.207 [2024-12-05 19:20:11.624155] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:29:54.207 [2024-12-05 19:20:11.624161] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:29:54.207 [2024-12-05 19:20:11.624168] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:54.207 [2024-12-05 19:20:11.624176] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:29:54.207 [2024-12-05 19:20:11.624187] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:29:54.207 [2024-12-05 19:20:11.624195] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:29:54.207 [2024-12-05 19:20:11.624206] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:54.207 [2024-12-05 19:20:11.624217] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:29:54.207 [2024-12-05 19:20:11.624224] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:29:54.207 [2024-12-05 19:20:11.624232] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:29:54.207 [2024-12-05 19:20:11.624240] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:29:54.207 [2024-12-05 19:20:11.624246] 
ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:29:54.207 [2024-12-05 19:20:11.624271] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:29:54.207 [2024-12-05 19:20:11.624281] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:29:54.207 [2024-12-05 19:20:11.624292] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:54.207 [2024-12-05 19:20:11.624302] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:29:54.207 [2024-12-05 19:20:11.624310] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:29:54.207 [2024-12-05 19:20:11.624318] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:29:54.207 [2024-12-05 19:20:11.624326] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:29:54.207 [2024-12-05 19:20:11.624334] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:29:54.207 [2024-12-05 19:20:11.624342] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:29:54.207 [2024-12-05 19:20:11.624349] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:29:54.207 [2024-12-05 19:20:11.624359] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:29:54.207 [2024-12-05 19:20:11.624369] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:29:54.207 [2024-12-05 19:20:11.624376] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:29:54.207 [2024-12-05 19:20:11.624383] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:29:54.207 [2024-12-05 19:20:11.624390] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:29:54.207 [2024-12-05 19:20:11.624397] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:29:54.207 [2024-12-05 19:20:11.624404] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:29:54.207 [2024-12-05 19:20:11.624413] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:29:54.207 [2024-12-05 19:20:11.624423] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:54.207 [2024-12-05 19:20:11.624432] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:29:54.207 [2024-12-05 19:20:11.624440] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region 
type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:29:54.207 [2024-12-05 19:20:11.624447] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:29:54.207 [2024-12-05 19:20:11.624463] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:29:54.207 [2024-12-05 19:20:11.624471] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:54.207 [2024-12-05 19:20:11.624482] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:29:54.207 [2024-12-05 19:20:11.624490] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.746 ms 00:29:54.207 [2024-12-05 19:20:11.624500] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:54.207 [2024-12-05 19:20:11.624567] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] NV cache data region needs scrubbing, this may take a while. 00:29:54.207 [2024-12-05 19:20:11.624580] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 5 chunks 00:29:58.423 [2024-12-05 19:20:15.704439] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:58.423 [2024-12-05 19:20:15.704642] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:29:58.423 [2024-12-05 19:20:15.704705] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4079.857 ms 00:29:58.423 [2024-12-05 19:20:15.704737] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:58.423 [2024-12-05 19:20:15.712762] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:58.423 [2024-12-05 19:20:15.712892] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:29:58.423 [2024-12-05 19:20:15.712946] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.939 ms 00:29:58.423 [2024-12-05 19:20:15.712969] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:58.423 [2024-12-05 19:20:15.713024] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:58.423 [2024-12-05 19:20:15.713047] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:29:58.423 [2024-12-05 19:20:15.713067] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.015 ms 00:29:58.423 [2024-12-05 19:20:15.713086] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:58.423 [2024-12-05 19:20:15.721425] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:58.423 [2024-12-05 19:20:15.721541] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:29:58.423 [2024-12-05 19:20:15.721591] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 8.290 ms 00:29:58.423 [2024-12-05 19:20:15.721613] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:58.423 [2024-12-05 19:20:15.721672] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:58.423 [2024-12-05 19:20:15.721696] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:29:58.423 [2024-12-05 19:20:15.721715] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:29:58.423 [2024-12-05 19:20:15.721738] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:58.423 [2024-12-05 19:20:15.722127] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:58.423 [2024-12-05 19:20:15.722214] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:29:58.423 [2024-12-05 19:20:15.722288] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.306 ms 00:29:58.423 [2024-12-05 19:20:15.722313] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:58.423 [2024-12-05 19:20:15.722392] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:58.423 [2024-12-05 19:20:15.722509] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:29:58.423 [2024-12-05 19:20:15.722556] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.020 ms 00:29:58.423 [2024-12-05 19:20:15.722579] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:58.423 [2024-12-05 19:20:15.727897] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:58.423 [2024-12-05 19:20:15.727999] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:29:58.423 [2024-12-05 19:20:15.728059] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 5.280 ms 00:29:58.423 [2024-12-05 19:20:15.728081] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:58.423 [2024-12-05 19:20:15.738714] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 0, empty chunks = 4 00:29:58.423 [2024-12-05 19:20:15.738852] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:29:58.423 [2024-12-05 19:20:15.738920] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:58.423 [2024-12-05 19:20:15.738942] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore NV cache metadata 00:29:58.423 [2024-12-05 19:20:15.738963] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 10.624 ms 00:29:58.423 [2024-12-05 19:20:15.738982] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:58.423 [2024-12-05 19:20:15.743059] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:58.423 [2024-12-05 19:20:15.743183] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid map metadata 00:29:58.423 [2024-12-05 19:20:15.743246] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.033 ms 00:29:58.424 [2024-12-05 19:20:15.743297] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:58.424 [2024-12-05 19:20:15.745317] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:58.424 [2024-12-05 19:20:15.745424] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore band info metadata 00:29:58.424 [2024-12-05 19:20:15.745480] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.938 ms 00:29:58.424 [2024-12-05 19:20:15.745507] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:58.424 [2024-12-05 19:20:15.747339] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:58.424 [2024-12-05 19:20:15.747447] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore trim metadata 00:29:58.424 [2024-12-05 19:20:15.747502] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.784 ms 00:29:58.424 [2024-12-05 19:20:15.747529] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:58.424 [2024-12-05 19:20:15.748022] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:58.424 [2024-12-05 19:20:15.748115] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:29:58.424 [2024-12-05 
19:20:15.748173] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.321 ms 00:29:58.424 [2024-12-05 19:20:15.748202] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:58.424 [2024-12-05 19:20:15.763705] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:58.424 [2024-12-05 19:20:15.763835] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:29:58.424 [2024-12-05 19:20:15.763892] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 15.454 ms 00:29:58.424 [2024-12-05 19:20:15.763915] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:58.424 [2024-12-05 19:20:15.771339] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:29:58.424 [2024-12-05 19:20:15.772042] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:58.424 [2024-12-05 19:20:15.772132] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:29:58.424 [2024-12-05 19:20:15.772179] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 8.026 ms 00:29:58.424 [2024-12-05 19:20:15.772203] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:58.424 [2024-12-05 19:20:15.772290] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:58.424 [2024-12-05 19:20:15.772320] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P 00:29:58.424 [2024-12-05 19:20:15.772343] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.011 ms 00:29:58.424 [2024-12-05 19:20:15.772364] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:58.424 [2024-12-05 19:20:15.772422] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:58.424 [2024-12-05 19:20:15.772448] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:29:58.424 [2024-12-05 19:20:15.772475] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.016 ms 00:29:58.424 [2024-12-05 19:20:15.772537] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:58.424 [2024-12-05 19:20:15.772579] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:58.424 [2024-12-05 19:20:15.772603] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:29:58.424 [2024-12-05 19:20:15.772626] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:29:58.424 [2024-12-05 19:20:15.772646] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:58.424 [2024-12-05 19:20:15.772723] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:29:58.424 [2024-12-05 19:20:15.772750] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:58.424 [2024-12-05 19:20:15.772768] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:29:58.424 [2024-12-05 19:20:15.772788] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.028 ms 00:29:58.424 [2024-12-05 19:20:15.772810] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:58.424 [2024-12-05 19:20:15.776240] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:58.424 [2024-12-05 19:20:15.776280] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:29:58.424 [2024-12-05 19:20:15.776290] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.365 ms 00:29:58.424 [2024-12-05 19:20:15.776298] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl] status: 0 00:29:58.424 [2024-12-05 19:20:15.776365] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:58.424 [2024-12-05 19:20:15.776374] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:29:58.424 [2024-12-05 19:20:15.776382] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.032 ms 00:29:58.424 [2024-12-05 19:20:15.776390] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:58.424 [2024-12-05 19:20:15.777271] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 4177.001 ms, result 0 00:29:58.424 [2024-12-05 19:20:15.792595] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:58.424 [2024-12-05 19:20:15.808609] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:29:58.424 [2024-12-05 19:20:15.816680] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:29:58.424 19:20:15 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:58.424 19:20:15 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:29:58.424 19:20:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:29:58.424 19:20:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@95 -- # return 0 00:29:58.424 19:20:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:29:58.685 [2024-12-05 19:20:16.036747] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:58.685 [2024-12-05 19:20:16.036786] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:29:58.685 [2024-12-05 19:20:16.036798] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:29:58.685 [2024-12-05 19:20:16.036807] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:58.685 [2024-12-05 19:20:16.036828] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:58.685 [2024-12-05 19:20:16.036837] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:29:58.685 [2024-12-05 19:20:16.036848] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:29:58.685 [2024-12-05 19:20:16.036856] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:58.685 [2024-12-05 19:20:16.036875] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:58.685 [2024-12-05 19:20:16.036883] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:29:58.685 [2024-12-05 19:20:16.036891] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:29:58.685 [2024-12-05 19:20:16.036898] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:58.685 [2024-12-05 19:20:16.036955] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.193 ms, result 0 00:29:58.685 true 00:29:58.685 19:20:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:29:58.947 { 00:29:58.947 "name": "ftl", 00:29:58.947 "properties": [ 00:29:58.947 { 00:29:58.947 "name": "superblock_version", 00:29:58.947 "value": 5, 00:29:58.947 "read-only": true 00:29:58.947 }, 00:29:58.947 { 
00:29:58.947 "name": "base_device", 00:29:58.947 "bands": [ 00:29:58.947 { 00:29:58.947 "id": 0, 00:29:58.947 "state": "CLOSED", 00:29:58.947 "validity": 1.0 00:29:58.947 }, 00:29:58.947 { 00:29:58.947 "id": 1, 00:29:58.947 "state": "CLOSED", 00:29:58.947 "validity": 1.0 00:29:58.947 }, 00:29:58.947 { 00:29:58.947 "id": 2, 00:29:58.947 "state": "CLOSED", 00:29:58.947 "validity": 0.007843137254901933 00:29:58.947 }, 00:29:58.947 { 00:29:58.947 "id": 3, 00:29:58.947 "state": "FREE", 00:29:58.947 "validity": 0.0 00:29:58.947 }, 00:29:58.947 { 00:29:58.947 "id": 4, 00:29:58.947 "state": "FREE", 00:29:58.947 "validity": 0.0 00:29:58.947 }, 00:29:58.947 { 00:29:58.947 "id": 5, 00:29:58.947 "state": "FREE", 00:29:58.947 "validity": 0.0 00:29:58.947 }, 00:29:58.947 { 00:29:58.947 "id": 6, 00:29:58.947 "state": "FREE", 00:29:58.947 "validity": 0.0 00:29:58.947 }, 00:29:58.947 { 00:29:58.947 "id": 7, 00:29:58.947 "state": "FREE", 00:29:58.947 "validity": 0.0 00:29:58.947 }, 00:29:58.947 { 00:29:58.947 "id": 8, 00:29:58.947 "state": "FREE", 00:29:58.947 "validity": 0.0 00:29:58.947 }, 00:29:58.947 { 00:29:58.947 "id": 9, 00:29:58.947 "state": "FREE", 00:29:58.947 "validity": 0.0 00:29:58.947 }, 00:29:58.947 { 00:29:58.947 "id": 10, 00:29:58.947 "state": "FREE", 00:29:58.947 "validity": 0.0 00:29:58.947 }, 00:29:58.947 { 00:29:58.947 "id": 11, 00:29:58.947 "state": "FREE", 00:29:58.947 "validity": 0.0 00:29:58.947 }, 00:29:58.947 { 00:29:58.947 "id": 12, 00:29:58.947 "state": "FREE", 00:29:58.947 "validity": 0.0 00:29:58.947 }, 00:29:58.947 { 00:29:58.947 "id": 13, 00:29:58.947 "state": "FREE", 00:29:58.947 "validity": 0.0 00:29:58.947 }, 00:29:58.947 { 00:29:58.947 "id": 14, 00:29:58.947 "state": "FREE", 00:29:58.947 "validity": 0.0 00:29:58.947 }, 00:29:58.947 { 00:29:58.947 "id": 15, 00:29:58.947 "state": "FREE", 00:29:58.947 "validity": 0.0 00:29:58.947 }, 00:29:58.947 { 00:29:58.947 "id": 16, 00:29:58.947 "state": "FREE", 00:29:58.947 "validity": 0.0 00:29:58.947 }, 00:29:58.947 { 00:29:58.947 "id": 17, 00:29:58.947 "state": "FREE", 00:29:58.947 "validity": 0.0 00:29:58.947 } 00:29:58.947 ], 00:29:58.947 "read-only": true 00:29:58.947 }, 00:29:58.947 { 00:29:58.947 "name": "cache_device", 00:29:58.947 "type": "bdev", 00:29:58.947 "chunks": [ 00:29:58.947 { 00:29:58.947 "id": 0, 00:29:58.947 "state": "INACTIVE", 00:29:58.947 "utilization": 0.0 00:29:58.947 }, 00:29:58.947 { 00:29:58.947 "id": 1, 00:29:58.947 "state": "OPEN", 00:29:58.947 "utilization": 0.0 00:29:58.947 }, 00:29:58.947 { 00:29:58.947 "id": 2, 00:29:58.947 "state": "OPEN", 00:29:58.947 "utilization": 0.0 00:29:58.947 }, 00:29:58.947 { 00:29:58.947 "id": 3, 00:29:58.948 "state": "FREE", 00:29:58.948 "utilization": 0.0 00:29:58.948 }, 00:29:58.948 { 00:29:58.948 "id": 4, 00:29:58.948 "state": "FREE", 00:29:58.948 "utilization": 0.0 00:29:58.948 } 00:29:58.948 ], 00:29:58.948 "read-only": true 00:29:58.948 }, 00:29:58.948 { 00:29:58.948 "name": "verbose_mode", 00:29:58.948 "value": true, 00:29:58.948 "unit": "", 00:29:58.948 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:29:58.948 }, 00:29:58.948 { 00:29:58.948 "name": "prep_upgrade_on_shutdown", 00:29:58.948 "value": false, 00:29:58.948 "unit": "", 00:29:58.948 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:29:58.948 } 00:29:58.948 ] 00:29:58.948 } 00:29:58.948 19:20:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # ftl_get_properties 00:29:58.948 19:20:16 
ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:29:58.948 19:20:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:29:58.948 19:20:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # used=0 00:29:58.948 19:20:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@83 -- # [[ 0 -ne 0 ]] 00:29:58.948 19:20:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # ftl_get_properties 00:29:58.948 19:20:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # jq '[.properties[] | select(.name == "bands") | .bands[] | select(.state == "OPENED")] | length' 00:29:58.948 19:20:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:29:59.210 Validate MD5 checksum, iteration 1 00:29:59.210 19:20:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # opened=0 00:29:59.210 19:20:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@90 -- # [[ 0 -ne 0 ]] 00:29:59.210 19:20:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@111 -- # test_validate_checksum 00:29:59.210 19:20:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:29:59.210 19:20:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:29:59.210 19:20:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:29:59.210 19:20:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:29:59.210 19:20:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:29:59.210 19:20:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:29:59.210 19:20:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:29:59.210 19:20:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:29:59.210 19:20:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:29:59.210 19:20:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:29:59.210 [2024-12-05 19:20:16.737184] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 22.11.4 initialization... 
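[Annotation] The two jq filters above are how the test decides whether the write buffer saw any traffic: they pull the bdev_ftl_get_properties JSON apart and count cache chunks with non-zero utilization (used=0) and bands in the OPENED state (opened=0). Reproduced as a standalone sketch under the same rpc.py path; the echo on mismatch is illustrative:

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    # used: NV cache chunks that currently hold data (non-zero utilization).
    used=$("$RPC" bdev_ftl_get_properties -b ftl | jq '[.properties[]
        | select(.name == "cache_device") | .chunks[]
        | select(.utilization != 0.0)] | length')

    # Both counters come back 0 here: the cache was just scrubbed and
    # nothing has been written through it yet.
    [[ $used -ne 0 ]] && echo "cache still holds $used dirty chunk(s)"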
00:29:59.210 [2024-12-05 19:20:16.737441] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94301 ] 00:29:59.470 [2024-12-05 19:20:16.884673] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:59.470 [2024-12-05 19:20:16.905224] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:00.858  [2024-12-05T19:20:18.988Z] Copying: 638/1024 [MB] (638 MBps) [2024-12-05T19:20:19.930Z] Copying: 1024/1024 [MB] (average 602 MBps) 00:30:02.371 00:30:02.371 19:20:19 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:30:02.371 19:20:19 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:30:04.917 19:20:21 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:30:04.917 19:20:21 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=de3a7ca253de27fb34e56f009bdfdb64 00:30:04.917 19:20:21 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ de3a7ca253de27fb34e56f009bdfdb64 != \d\e\3\a\7\c\a\2\5\3\d\e\2\7\f\b\3\4\e\5\6\f\0\0\9\b\d\f\d\b\6\4 ]] 00:30:04.917 19:20:21 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:30:04.917 19:20:21 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:30:04.917 19:20:21 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:30:04.917 Validate MD5 checksum, iteration 2 00:30:04.917 19:20:21 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:30:04.917 19:20:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:30:04.917 19:20:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:30:04.917 19:20:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:30:04.917 19:20:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:30:04.917 19:20:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:30:04.917 [2024-12-05 19:20:21.970758] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 22.11.4 initialization... 
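[Annotation] Iteration 1 above reads 1024 MiB from the ftln1 namespace over NVMe/TCP at skip=0, hashes the local copy, and compares the digest against the value recorded for that region earlier in the test (the backslash-escaped string on the right of != is just that digest as bash xtrace prints a pattern). Iteration 2 repeats the same loop at skip=1024 immediately below. A condensed sketch of the loop; iterations and checksums[] stand in for state set up earlier in the test, while tcp_dd and the flags mirror the trace:

    # Sketch: re-validate each 1 GiB slice against its recorded MD5.
    skip=0
    for ((i = 0; i < iterations; i++)); do
        echo "Validate MD5 checksum, iteration $((i + 1))"
        tcp_dd --ib=ftln1 --of="$testdir/file" --bs=1048576 --count=1024 --qd=2 --skip=$skip
        skip=$((skip + 1024))
        sum=$(md5sum "$testdir/file" | cut -f1 '-d ')
        [[ $sum != "${checksums[i]}" ]] && exit 1   # digest must match the recorded one
    done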
00:30:04.917 [2024-12-05 19:20:21.970982] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94359 ] 00:30:04.917 [2024-12-05 19:20:22.114119] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:04.917 [2024-12-05 19:20:22.133847] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:06.303  [2024-12-05T19:20:24.433Z] Copying: 584/1024 [MB] (584 MBps) [2024-12-05T19:20:26.973Z] Copying: 1024/1024 [MB] (average 595 MBps) 00:30:09.414 00:30:09.414 19:20:26 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:30:09.414 19:20:26 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:30:11.324 19:20:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:30:11.324 19:20:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=d8e23664d5538f6f2346e07126c3ab5f 00:30:11.324 19:20:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ d8e23664d5538f6f2346e07126c3ab5f != \d\8\e\2\3\6\6\4\d\5\5\3\8\f\6\f\2\3\4\6\e\0\7\1\2\6\c\3\a\b\5\f ]] 00:30:11.324 19:20:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:30:11.324 19:20:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:30:11.324 19:20:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@114 -- # tcp_target_shutdown_dirty 00:30:11.324 19:20:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@137 -- # [[ -n 94222 ]] 00:30:11.324 19:20:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@138 -- # kill -9 94222 00:30:11.324 19:20:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@139 -- # unset spdk_tgt_pid 00:30:11.324 19:20:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@115 -- # tcp_target_setup 00:30:11.324 19:20:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:30:11.324 19:20:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:30:11.324 19:20:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:30:11.324 19:20:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=94430 00:30:11.324 19:20:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:30:11.324 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:11.324 19:20:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:30:11.324 19:20:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 94430 00:30:11.324 19:20:28 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 94430 ']' 00:30:11.324 19:20:28 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:11.324 19:20:28 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:11.324 19:20:28 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
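[Annotation] The tcp_target_shutdown_dirty step above is the crux of the test: instead of a clean shutdown, the target (pid 94222) is killed with SIGKILL while FTL still carries the dirty flag set during 'Set FTL dirty state' earlier, and a fresh target (pid 94430) is started from the same tgt.json. The startup traced below therefore has to load a dirty superblock (note 'SHM: clean 0, shm_clean 0') rather than a cleanly persisted one. Condensed from the common.sh lines in the trace, reusing the variables from the earlier sketch:

    # Sketch: simulate a crash, then bring the target back on the same config.
    kill -9 "$spdk_tgt_pid"   # no RPC shutdown: FTL gets no chance to persist clean state
    unset spdk_tgt_pid

    # Restart from the same saved config; the FTL startup that follows must
    # detect the dirty state on load instead of a clean shutdown marker.
    "$SPDK_BIN" '--cpumask=[0]' --config="$TGT_JSON" &
    spdk_tgt_pid=$!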
00:30:11.324 19:20:28 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:11.324 19:20:28 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:30:11.324 [2024-12-05 19:20:28.655196] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 22.11.4 initialization... 00:30:11.324 [2024-12-05 19:20:28.655480] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94430 ] 00:30:11.324 [2024-12-05 19:20:28.793953] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:11.325 [2024-12-05 19:20:28.817381] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:11.325 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 834: 94222 Killed $spdk_tgt_bin "--cpumask=$spdk_tgt_cpumask" --config="$spdk_tgt_cnfg" 00:30:11.586 [2024-12-05 19:20:29.117312] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:30:11.586 [2024-12-05 19:20:29.117551] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:30:11.846 [2024-12-05 19:20:29.264174] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:11.846 [2024-12-05 19:20:29.264352] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:30:11.846 [2024-12-05 19:20:29.264685] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:30:11.846 [2024-12-05 19:20:29.264727] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:11.846 [2024-12-05 19:20:29.264820] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:11.846 [2024-12-05 19:20:29.264914] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:30:11.846 [2024-12-05 19:20:29.264943] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.043 ms 00:30:11.846 [2024-12-05 19:20:29.264962] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:11.847 [2024-12-05 19:20:29.265038] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:30:11.847 [2024-12-05 19:20:29.265349] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:30:11.847 [2024-12-05 19:20:29.265396] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:11.847 [2024-12-05 19:20:29.265457] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:30:11.847 [2024-12-05 19:20:29.265479] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.365 ms 00:30:11.847 [2024-12-05 19:20:29.265498] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:11.847 [2024-12-05 19:20:29.265794] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:30:11.847 [2024-12-05 19:20:29.271001] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:11.847 [2024-12-05 19:20:29.271121] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:30:11.847 [2024-12-05 19:20:29.271201] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 5.207 ms 00:30:11.847 [2024-12-05 19:20:29.271224] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:11.847 [2024-12-05 19:20:29.272285] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] 
Action 00:30:11.847 [2024-12-05 19:20:29.272383] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:30:11.847 [2024-12-05 19:20:29.272397] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.039 ms 00:30:11.847 [2024-12-05 19:20:29.272412] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:11.847 [2024-12-05 19:20:29.272688] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:11.847 [2024-12-05 19:20:29.272700] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:30:11.847 [2024-12-05 19:20:29.272709] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.217 ms 00:30:11.847 [2024-12-05 19:20:29.272720] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:11.847 [2024-12-05 19:20:29.272759] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:11.847 [2024-12-05 19:20:29.272767] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:30:11.847 [2024-12-05 19:20:29.272775] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.020 ms 00:30:11.847 [2024-12-05 19:20:29.272783] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:11.847 [2024-12-05 19:20:29.272806] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:11.847 [2024-12-05 19:20:29.272818] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:30:11.847 [2024-12-05 19:20:29.272828] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:30:11.847 [2024-12-05 19:20:29.272836] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:11.847 [2024-12-05 19:20:29.272859] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:30:11.847 [2024-12-05 19:20:29.273733] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:11.847 [2024-12-05 19:20:29.273748] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:30:11.847 [2024-12-05 19:20:29.273757] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.877 ms 00:30:11.847 [2024-12-05 19:20:29.273764] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:11.847 [2024-12-05 19:20:29.273794] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:11.847 [2024-12-05 19:20:29.273806] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:30:11.847 [2024-12-05 19:20:29.273814] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:30:11.847 [2024-12-05 19:20:29.273821] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:11.847 [2024-12-05 19:20:29.273841] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:30:11.847 [2024-12-05 19:20:29.273863] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x150 bytes 00:30:11.847 [2024-12-05 19:20:29.273897] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:30:11.847 [2024-12-05 19:20:29.273914] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x190 bytes 00:30:11.847 [2024-12-05 19:20:29.274026] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:30:11.847 [2024-12-05 19:20:29.274038] upgrade/ftl_sb_v5.c: 
101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:30:11.847 [2024-12-05 19:20:29.274048] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:30:11.847 [2024-12-05 19:20:29.274061] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:30:11.847 [2024-12-05 19:20:29.274072] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:30:11.847 [2024-12-05 19:20:29.274103] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:30:11.847 [2024-12-05 19:20:29.274110] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:30:11.847 [2024-12-05 19:20:29.274117] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:30:11.847 [2024-12-05 19:20:29.274124] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:30:11.847 [2024-12-05 19:20:29.274132] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:11.847 [2024-12-05 19:20:29.274143] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:30:11.847 [2024-12-05 19:20:29.274153] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.294 ms 00:30:11.847 [2024-12-05 19:20:29.274160] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:11.847 [2024-12-05 19:20:29.274247] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:11.847 [2024-12-05 19:20:29.274277] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:30:11.847 [2024-12-05 19:20:29.274288] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.072 ms 00:30:11.847 [2024-12-05 19:20:29.274295] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:11.847 [2024-12-05 19:20:29.274399] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:30:11.847 [2024-12-05 19:20:29.274411] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:30:11.847 [2024-12-05 19:20:29.274421] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:30:11.847 [2024-12-05 19:20:29.274432] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:11.847 [2024-12-05 19:20:29.274440] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:30:11.847 [2024-12-05 19:20:29.274448] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:30:11.847 [2024-12-05 19:20:29.274457] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:30:11.847 [2024-12-05 19:20:29.274466] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:30:11.847 [2024-12-05 19:20:29.274474] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:30:11.847 [2024-12-05 19:20:29.274481] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:11.847 [2024-12-05 19:20:29.274489] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:30:11.847 [2024-12-05 19:20:29.274497] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:30:11.847 [2024-12-05 19:20:29.274507] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:11.847 [2024-12-05 19:20:29.274516] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:30:11.847 [2024-12-05 19:20:29.274530] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 
00:30:11.847 [2024-12-05 19:20:29.274538] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:11.847 [2024-12-05 19:20:29.274546] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:30:11.847 [2024-12-05 19:20:29.274553] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:30:11.847 [2024-12-05 19:20:29.274560] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:11.847 [2024-12-05 19:20:29.274568] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:30:11.847 [2024-12-05 19:20:29.274575] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:30:11.847 [2024-12-05 19:20:29.274582] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:30:11.847 [2024-12-05 19:20:29.274591] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:30:11.847 [2024-12-05 19:20:29.274598] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:30:11.847 [2024-12-05 19:20:29.274605] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:30:11.847 [2024-12-05 19:20:29.274613] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:30:11.847 [2024-12-05 19:20:29.274620] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:30:11.847 [2024-12-05 19:20:29.274627] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:30:11.847 [2024-12-05 19:20:29.274635] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:30:11.847 [2024-12-05 19:20:29.274642] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:30:11.847 [2024-12-05 19:20:29.274651] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:30:11.847 [2024-12-05 19:20:29.274659] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:30:11.847 [2024-12-05 19:20:29.274667] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:30:11.847 [2024-12-05 19:20:29.274674] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:11.847 [2024-12-05 19:20:29.274682] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:30:11.847 [2024-12-05 19:20:29.274689] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:30:11.847 [2024-12-05 19:20:29.274696] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:11.847 [2024-12-05 19:20:29.274703] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:30:11.847 [2024-12-05 19:20:29.274711] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:30:11.847 [2024-12-05 19:20:29.274718] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:11.847 [2024-12-05 19:20:29.274725] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:30:11.847 [2024-12-05 19:20:29.274733] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:30:11.847 [2024-12-05 19:20:29.274741] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:11.847 [2024-12-05 19:20:29.274748] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:30:11.847 [2024-12-05 19:20:29.274757] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:30:11.847 [2024-12-05 19:20:29.274766] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:30:11.847 [2024-12-05 19:20:29.274775] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 
0.12 MiB 00:30:11.847 [2024-12-05 19:20:29.274784] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:30:11.847 [2024-12-05 19:20:29.274791] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:30:11.847 [2024-12-05 19:20:29.274798] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:30:11.847 [2024-12-05 19:20:29.274805] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:30:11.847 [2024-12-05 19:20:29.274812] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:30:11.847 [2024-12-05 19:20:29.274818] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:30:11.847 [2024-12-05 19:20:29.274826] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:30:11.847 [2024-12-05 19:20:29.274834] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:30:11.847 [2024-12-05 19:20:29.274843] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:30:11.847 [2024-12-05 19:20:29.274850] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:30:11.847 [2024-12-05 19:20:29.274857] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:30:11.847 [2024-12-05 19:20:29.274863] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:30:11.847 [2024-12-05 19:20:29.274870] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:30:11.847 [2024-12-05 19:20:29.274877] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:30:11.847 [2024-12-05 19:20:29.274885] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:30:11.847 [2024-12-05 19:20:29.274893] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:30:11.847 [2024-12-05 19:20:29.274900] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:30:11.847 [2024-12-05 19:20:29.274908] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:30:11.847 [2024-12-05 19:20:29.274915] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:30:11.847 [2024-12-05 19:20:29.274922] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:30:11.847 [2024-12-05 19:20:29.274928] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:30:11.847 [2024-12-05 19:20:29.274935] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:30:11.847 [2024-12-05 19:20:29.274942] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata 
layout - base dev: 00:30:11.847 [2024-12-05 19:20:29.274951] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:30:11.847 [2024-12-05 19:20:29.274959] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:30:11.847 [2024-12-05 19:20:29.274972] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:30:11.847 [2024-12-05 19:20:29.274979] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:30:11.847 [2024-12-05 19:20:29.274986] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:30:11.847 [2024-12-05 19:20:29.274993] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:11.847 [2024-12-05 19:20:29.275003] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:30:11.847 [2024-12-05 19:20:29.275011] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.663 ms 00:30:11.847 [2024-12-05 19:20:29.275019] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:11.847 [2024-12-05 19:20:29.283657] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:11.847 [2024-12-05 19:20:29.283773] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:30:11.847 [2024-12-05 19:20:29.283787] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 8.592 ms 00:30:11.847 [2024-12-05 19:20:29.283796] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:11.847 [2024-12-05 19:20:29.283836] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:11.847 [2024-12-05 19:20:29.283844] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:30:11.847 [2024-12-05 19:20:29.283853] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.014 ms 00:30:11.847 [2024-12-05 19:20:29.283863] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:11.847 [2024-12-05 19:20:29.294516] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:11.847 [2024-12-05 19:20:29.294629] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:30:11.847 [2024-12-05 19:20:29.294643] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 10.602 ms 00:30:11.847 [2024-12-05 19:20:29.294652] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:11.847 [2024-12-05 19:20:29.294684] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:11.847 [2024-12-05 19:20:29.294696] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:30:11.847 [2024-12-05 19:20:29.294707] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:30:11.847 [2024-12-05 19:20:29.294714] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:11.847 [2024-12-05 19:20:29.294804] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:11.847 [2024-12-05 19:20:29.294817] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:30:11.847 [2024-12-05 19:20:29.294825] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.037 ms 00:30:11.847 [2024-12-05 19:20:29.294833] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl] status: 0 00:30:11.847 [2024-12-05 19:20:29.294873] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:11.847 [2024-12-05 19:20:29.294881] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:30:11.847 [2024-12-05 19:20:29.294889] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.022 ms 00:30:11.847 [2024-12-05 19:20:29.294899] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:11.848 [2024-12-05 19:20:29.301775] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:11.848 [2024-12-05 19:20:29.301877] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:30:11.848 [2024-12-05 19:20:29.301891] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 6.854 ms 00:30:11.848 [2024-12-05 19:20:29.301899] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:11.848 [2024-12-05 19:20:29.301983] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:11.848 [2024-12-05 19:20:29.301995] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize recovery 00:30:11.848 [2024-12-05 19:20:29.302006] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:30:11.848 [2024-12-05 19:20:29.302014] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:11.848 [2024-12-05 19:20:29.317182] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:11.848 [2024-12-05 19:20:29.317227] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover band state 00:30:11.848 [2024-12-05 19:20:29.317268] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 15.150 ms 00:30:11.848 [2024-12-05 19:20:29.317280] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:11.848 [2024-12-05 19:20:29.318746] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:11.848 [2024-12-05 19:20:29.318782] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:30:11.848 [2024-12-05 19:20:29.318798] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.319 ms 00:30:11.848 [2024-12-05 19:20:29.318808] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:11.848 [2024-12-05 19:20:29.337232] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:11.848 [2024-12-05 19:20:29.337286] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:30:11.848 [2024-12-05 19:20:29.337301] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 18.379 ms 00:30:11.848 [2024-12-05 19:20:29.337309] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:11.848 [2024-12-05 19:20:29.337437] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=0 found seq_id=8 00:30:11.848 [2024-12-05 19:20:29.337554] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=1 found seq_id=9 00:30:11.848 [2024-12-05 19:20:29.337649] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=2 found seq_id=12 00:30:11.848 [2024-12-05 19:20:29.337744] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=3 found seq_id=0 00:30:11.848 [2024-12-05 19:20:29.337754] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:11.848 [2024-12-05 19:20:29.337762] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Preprocess P2L checkpoints 00:30:11.848 [2024-12-05 
19:20:29.337773] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.403 ms 00:30:11.848 [2024-12-05 19:20:29.337781] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:11.848 [2024-12-05 19:20:29.337826] mngt/ftl_mngt_recovery.c: 650:ftl_mngt_recovery_open_bands_p2l: *NOTICE*: [FTL][ftl] No more open bands to recover from P2L 00:30:11.848 [2024-12-05 19:20:29.337838] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:11.848 [2024-12-05 19:20:29.337846] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover open bands P2L 00:30:11.848 [2024-12-05 19:20:29.337856] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.013 ms 00:30:11.848 [2024-12-05 19:20:29.337864] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:11.848 [2024-12-05 19:20:29.341435] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:11.848 [2024-12-05 19:20:29.341467] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover chunk state 00:30:11.848 [2024-12-05 19:20:29.341478] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.551 ms 00:30:11.848 [2024-12-05 19:20:29.341490] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:11.848 [2024-12-05 19:20:29.342059] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:11.848 [2024-12-05 19:20:29.342089] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover max seq ID 00:30:11.848 [2024-12-05 19:20:29.342099] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.010 ms 00:30:11.848 [2024-12-05 19:20:29.342107] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:11.848 [2024-12-05 19:20:29.342176] ftl_nv_cache.c:2274:recover_open_chunk_prepare: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 262144, seq id 14 00:30:11.848 [2024-12-05 19:20:29.342376] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:11.848 [2024-12-05 19:20:29.342388] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, prepare 00:30:11.848 [2024-12-05 19:20:29.342400] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.201 ms 00:30:11.848 [2024-12-05 19:20:29.342412] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:12.414 [2024-12-05 19:20:29.773512] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:12.414 [2024-12-05 19:20:29.773571] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, read vss 00:30:12.414 [2024-12-05 19:20:29.773585] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 430.827 ms 00:30:12.414 [2024-12-05 19:20:29.773594] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:12.414 [2024-12-05 19:20:29.774869] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:12.414 [2024-12-05 19:20:29.774903] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, persist P2L map 00:30:12.414 [2024-12-05 19:20:29.774920] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.837 ms 00:30:12.414 [2024-12-05 19:20:29.774928] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:12.414 [2024-12-05 19:20:29.775279] ftl_nv_cache.c:2323:recover_open_chunk_close_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 262144, seq id 14 00:30:12.414 [2024-12-05 19:20:29.775307] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:12.414 [2024-12-05 19:20:29.775315] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, close chunk 00:30:12.414 [2024-12-05 19:20:29.775323] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.351 ms 00:30:12.414 [2024-12-05 19:20:29.775331] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:12.414 [2024-12-05 19:20:29.775359] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:12.414 [2024-12-05 19:20:29.775372] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, cleanup 00:30:12.414 [2024-12-05 19:20:29.775386] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:30:12.414 [2024-12-05 19:20:29.775393] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:12.414 [2024-12-05 19:20:29.775426] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Recover open chunk', duration = 433.251 ms, result 0 00:30:12.414 [2024-12-05 19:20:29.775468] ftl_nv_cache.c:2274:recover_open_chunk_prepare: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 524288, seq id 15 00:30:12.414 [2024-12-05 19:20:29.775556] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:12.414 [2024-12-05 19:20:29.775566] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, prepare 00:30:12.414 [2024-12-05 19:20:29.775573] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.089 ms 00:30:12.414 [2024-12-05 19:20:29.775580] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:12.673 [2024-12-05 19:20:30.190328] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:12.673 [2024-12-05 19:20:30.190392] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, read vss 00:30:12.673 [2024-12-05 19:20:30.190407] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 414.333 ms 00:30:12.673 [2024-12-05 19:20:30.190416] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:12.673 [2024-12-05 19:20:30.191552] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:12.673 [2024-12-05 19:20:30.191588] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, persist P2L map 00:30:12.673 [2024-12-05 19:20:30.191599] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.754 ms 00:30:12.673 [2024-12-05 19:20:30.191607] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:12.673 [2024-12-05 19:20:30.191889] ftl_nv_cache.c:2323:recover_open_chunk_close_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 524288, seq id 15 00:30:12.673 [2024-12-05 19:20:30.191914] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:12.674 [2024-12-05 19:20:30.191922] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, close chunk 00:30:12.674 [2024-12-05 19:20:30.191930] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.277 ms 00:30:12.674 [2024-12-05 19:20:30.191938] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:12.674 [2024-12-05 19:20:30.191988] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:12.674 [2024-12-05 19:20:30.191997] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, cleanup 00:30:12.674 [2024-12-05 19:20:30.192005] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:30:12.674 [2024-12-05 19:20:30.192012] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:12.674 [2024-12-05 
19:20:30.192046] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Recover open chunk', duration = 416.579 ms, result 0 00:30:12.674 [2024-12-05 19:20:30.192086] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 2, empty chunks = 2 00:30:12.674 [2024-12-05 19:20:30.192101] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:30:12.674 [2024-12-05 19:20:30.192110] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:12.674 [2024-12-05 19:20:30.192117] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover open chunks P2L 00:30:12.674 [2024-12-05 19:20:30.192126] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 849.955 ms 00:30:12.674 [2024-12-05 19:20:30.192136] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:12.674 [2024-12-05 19:20:30.192164] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:12.674 [2024-12-05 19:20:30.192172] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize recovery 00:30:12.674 [2024-12-05 19:20:30.192185] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:30:12.674 [2024-12-05 19:20:30.192192] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:12.674 [2024-12-05 19:20:30.199908] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:30:12.674 [2024-12-05 19:20:30.200002] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:12.674 [2024-12-05 19:20:30.200016] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:30:12.674 [2024-12-05 19:20:30.200026] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.795 ms 00:30:12.674 [2024-12-05 19:20:30.200033] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:12.674 [2024-12-05 19:20:30.200705] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:12.674 [2024-12-05 19:20:30.200728] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P from shared memory 00:30:12.674 [2024-12-05 19:20:30.200737] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.608 ms 00:30:12.674 [2024-12-05 19:20:30.200745] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:12.674 [2024-12-05 19:20:30.202972] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:12.674 [2024-12-05 19:20:30.202998] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid maps counters 00:30:12.674 [2024-12-05 19:20:30.203013] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.211 ms 00:30:12.674 [2024-12-05 19:20:30.203026] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:12.674 [2024-12-05 19:20:30.203065] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:12.674 [2024-12-05 19:20:30.203077] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Complete trim transaction 00:30:12.674 [2024-12-05 19:20:30.203085] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:30:12.674 [2024-12-05 19:20:30.203092] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:12.674 [2024-12-05 19:20:30.203190] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:12.674 [2024-12-05 19:20:30.203204] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:30:12.674 
[2024-12-05 19:20:30.203215] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.014 ms 00:30:12.674 [2024-12-05 19:20:30.203222] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:12.674 [2024-12-05 19:20:30.203241] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:12.674 [2024-12-05 19:20:30.203258] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:30:12.674 [2024-12-05 19:20:30.203266] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:30:12.674 [2024-12-05 19:20:30.203273] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:12.674 [2024-12-05 19:20:30.203302] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:30:12.674 [2024-12-05 19:20:30.203315] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:12.674 [2024-12-05 19:20:30.203328] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:30:12.674 [2024-12-05 19:20:30.203335] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.014 ms 00:30:12.674 [2024-12-05 19:20:30.203345] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:12.674 [2024-12-05 19:20:30.203394] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:12.674 [2024-12-05 19:20:30.203406] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:30:12.674 [2024-12-05 19:20:30.203413] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.030 ms 00:30:12.674 [2024-12-05 19:20:30.203423] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:12.674 [2024-12-05 19:20:30.204240] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 939.688 ms, result 0 00:30:12.674 [2024-12-05 19:20:30.216605] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:12.933 [2024-12-05 19:20:30.232597] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:30:12.933 [2024-12-05 19:20:30.240685] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:30:12.933 19:20:30 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:12.933 19:20:30 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:30:12.933 19:20:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:30:12.933 19:20:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@95 -- # return 0 00:30:12.933 19:20:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@116 -- # test_validate_checksum 00:30:12.933 19:20:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:30:12.933 19:20:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:30:12.933 19:20:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:30:12.933 19:20:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:30:12.933 Validate MD5 checksum, iteration 1 00:30:12.933 19:20:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:30:12.933 19:20:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:30:12.933 19:20:30 
ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:30:12.933 19:20:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:30:12.933 19:20:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:30:12.933 19:20:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:30:12.933 [2024-12-05 19:20:30.335702] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 22.11.4 initialization... 00:30:12.933 [2024-12-05 19:20:30.335814] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94453 ] 00:30:12.933 [2024-12-05 19:20:30.472323] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:12.933 [2024-12-05 19:20:30.490702] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:14.312  [2024-12-05T19:20:32.441Z] Copying: 656/1024 [MB] (656 MBps) [2024-12-05T19:20:35.040Z] Copying: 1024/1024 [MB] (average 642 MBps) 00:30:17.481 00:30:17.481 19:20:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:30:17.481 19:20:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:30:19.398 19:20:36 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:30:19.398 Validate MD5 checksum, iteration 2 00:30:19.398 19:20:36 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=de3a7ca253de27fb34e56f009bdfdb64 00:30:19.398 19:20:36 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ de3a7ca253de27fb34e56f009bdfdb64 != \d\e\3\a\7\c\a\2\5\3\d\e\2\7\f\b\3\4\e\5\6\f\0\0\9\b\d\f\d\b\6\4 ]] 00:30:19.398 19:20:36 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:30:19.398 19:20:36 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:30:19.398 19:20:36 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:30:19.398 19:20:36 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:30:19.398 19:20:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:30:19.398 19:20:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:30:19.398 19:20:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:30:19.398 19:20:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:30:19.398 19:20:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:30:19.398 [2024-12-05 19:20:36.663287] Starting SPDK v25.01-pre git sha1 
8d3947977 / DPDK 22.11.4 initialization... 00:30:19.398 [2024-12-05 19:20:36.663838] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94521 ] 00:30:19.398 [2024-12-05 19:20:36.804366] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:19.398 [2024-12-05 19:20:36.831202] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:20.777  [2024-12-05T19:20:38.908Z] Copying: 629/1024 [MB] (629 MBps) [2024-12-05T19:20:39.480Z] Copying: 1024/1024 [MB] (average 632 MBps) 00:30:21.921 00:30:21.921 19:20:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:30:21.921 19:20:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:30:24.469 19:20:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:30:24.469 19:20:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=d8e23664d5538f6f2346e07126c3ab5f 00:30:24.469 19:20:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ d8e23664d5538f6f2346e07126c3ab5f != \d\8\e\2\3\6\6\4\d\5\5\3\8\f\6\f\2\3\4\6\e\0\7\1\2\6\c\3\a\b\5\f ]] 00:30:24.469 19:20:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:30:24.469 19:20:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:30:24.469 19:20:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@118 -- # trap - SIGINT SIGTERM EXIT 00:30:24.469 19:20:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@119 -- # cleanup 00:30:24.469 19:20:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@11 -- # trap - SIGINT SIGTERM EXIT 00:30:24.469 19:20:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file 00:30:24.469 19:20:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@13 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file.md5 00:30:24.469 19:20:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@14 -- # tcp_cleanup 00:30:24.469 19:20:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@193 -- # tcp_target_cleanup 00:30:24.469 19:20:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@144 -- # tcp_target_shutdown 00:30:24.469 19:20:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@130 -- # [[ -n 94430 ]] 00:30:24.469 19:20:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@131 -- # killprocess 94430 00:30:24.469 19:20:41 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # '[' -z 94430 ']' 00:30:24.469 19:20:41 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # kill -0 94430 00:30:24.469 19:20:41 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # uname 00:30:24.469 19:20:41 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:24.469 19:20:41 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 94430 00:30:24.469 killing process with pid 94430 00:30:24.469 19:20:41 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:30:24.469 19:20:41 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:30:24.469 19:20:41 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@972 -- # echo 'killing process with pid 94430' 00:30:24.469 19:20:41 ftl.ftl_upgrade_shutdown -- 
common/autotest_common.sh@973 -- # kill 94430 00:30:24.469 19:20:41 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@978 -- # wait 94430 00:30:24.731 [2024-12-05 19:20:42.055122] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_000 00:30:24.731 [2024-12-05 19:20:42.062712] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:24.731 [2024-12-05 19:20:42.062765] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:30:24.731 [2024-12-05 19:20:42.062780] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:30:24.731 [2024-12-05 19:20:42.062789] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:24.731 [2024-12-05 19:20:42.062815] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:30:24.731 [2024-12-05 19:20:42.063493] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:24.731 [2024-12-05 19:20:42.063529] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:30:24.731 [2024-12-05 19:20:42.063541] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.663 ms 00:30:24.731 [2024-12-05 19:20:42.063560] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:24.731 [2024-12-05 19:20:42.063832] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:24.731 [2024-12-05 19:20:42.063846] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:30:24.731 [2024-12-05 19:20:42.063857] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.224 ms 00:30:24.731 [2024-12-05 19:20:42.063866] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:24.731 [2024-12-05 19:20:42.066116] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:24.731 [2024-12-05 19:20:42.066159] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:30:24.731 [2024-12-05 19:20:42.066171] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.231 ms 00:30:24.731 [2024-12-05 19:20:42.066180] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:24.731 [2024-12-05 19:20:42.067363] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:24.731 [2024-12-05 19:20:42.067393] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P trims 00:30:24.731 [2024-12-05 19:20:42.067405] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.132 ms 00:30:24.731 [2024-12-05 19:20:42.067413] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:24.731 [2024-12-05 19:20:42.070279] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:24.731 [2024-12-05 19:20:42.070328] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:30:24.731 [2024-12-05 19:20:42.070341] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.808 ms 00:30:24.731 [2024-12-05 19:20:42.070350] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:24.731 [2024-12-05 19:20:42.072159] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:24.731 [2024-12-05 19:20:42.072205] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata 00:30:24.732 [2024-12-05 19:20:42.072216] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.750 ms 00:30:24.732 [2024-12-05 19:20:42.072224] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl] status: 0 00:30:24.732 [2024-12-05 19:20:42.072321] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:24.732 [2024-12-05 19:20:42.072331] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:30:24.732 [2024-12-05 19:20:42.072341] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.046 ms 00:30:24.732 [2024-12-05 19:20:42.072350] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:24.732 [2024-12-05 19:20:42.075025] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:24.732 [2024-12-05 19:20:42.075069] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist band info metadata 00:30:24.732 [2024-12-05 19:20:42.075090] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.657 ms 00:30:24.732 [2024-12-05 19:20:42.075098] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:24.732 [2024-12-05 19:20:42.078553] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:24.732 [2024-12-05 19:20:42.078611] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist trim metadata 00:30:24.732 [2024-12-05 19:20:42.078623] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.683 ms 00:30:24.732 [2024-12-05 19:20:42.078631] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:24.732 [2024-12-05 19:20:42.080616] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:24.732 [2024-12-05 19:20:42.080665] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:30:24.732 [2024-12-05 19:20:42.080676] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.937 ms 00:30:24.732 [2024-12-05 19:20:42.080684] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:24.732 [2024-12-05 19:20:42.082814] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:24.732 [2024-12-05 19:20:42.082858] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:30:24.732 [2024-12-05 19:20:42.082869] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.048 ms 00:30:24.732 [2024-12-05 19:20:42.082877] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:24.732 [2024-12-05 19:20:42.082918] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:30:24.732 [2024-12-05 19:20:42.082946] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:30:24.732 [2024-12-05 19:20:42.082956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:30:24.732 [2024-12-05 19:20:42.082965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:30:24.732 [2024-12-05 19:20:42.082974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:30:24.732 [2024-12-05 19:20:42.082983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:30:24.732 [2024-12-05 19:20:42.082990] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:30:24.732 [2024-12-05 19:20:42.082998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:30:24.732 [2024-12-05 19:20:42.083006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:30:24.732 
[2024-12-05 19:20:42.083014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:30:24.732 [2024-12-05 19:20:42.083022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:30:24.732 [2024-12-05 19:20:42.083029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:30:24.732 [2024-12-05 19:20:42.083038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:30:24.732 [2024-12-05 19:20:42.083046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:30:24.732 [2024-12-05 19:20:42.083054] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:30:24.732 [2024-12-05 19:20:42.083062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:30:24.732 [2024-12-05 19:20:42.083070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:30:24.732 [2024-12-05 19:20:42.083078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:30:24.732 [2024-12-05 19:20:42.083086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:30:24.732 [2024-12-05 19:20:42.083096] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:30:24.732 [2024-12-05 19:20:42.083104] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: a211da58-b31b-4752-af2b-fb84d4161411 00:30:24.732 [2024-12-05 19:20:42.083113] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:30:24.732 [2024-12-05 19:20:42.083120] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total writes: 320 00:30:24.732 [2024-12-05 19:20:42.083127] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 0 00:30:24.732 [2024-12-05 19:20:42.083135] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: inf 00:30:24.732 [2024-12-05 19:20:42.083143] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:30:24.732 [2024-12-05 19:20:42.083152] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:30:24.732 [2024-12-05 19:20:42.083159] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:30:24.732 [2024-12-05 19:20:42.083165] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:30:24.732 [2024-12-05 19:20:42.083172] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:30:24.732 [2024-12-05 19:20:42.083180] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:24.732 [2024-12-05 19:20:42.083192] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:30:24.732 [2024-12-05 19:20:42.083203] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.263 ms 00:30:24.732 [2024-12-05 19:20:42.083211] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:24.732 [2024-12-05 19:20:42.085484] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:24.732 [2024-12-05 19:20:42.085520] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P 00:30:24.732 [2024-12-05 19:20:42.085532] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.255 ms 00:30:24.732 [2024-12-05 19:20:42.085541] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 
00:30:24.732 [2024-12-05 19:20:42.085676] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:24.732 [2024-12-05 19:20:42.085687] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:30:24.732 [2024-12-05 19:20:42.085697] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.110 ms 00:30:24.732 [2024-12-05 19:20:42.085706] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:24.732 [2024-12-05 19:20:42.093649] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:24.732 [2024-12-05 19:20:42.093696] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:30:24.732 [2024-12-05 19:20:42.093706] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:24.732 [2024-12-05 19:20:42.093714] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:24.732 [2024-12-05 19:20:42.093756] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:24.732 [2024-12-05 19:20:42.093767] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:30:24.732 [2024-12-05 19:20:42.093775] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:24.732 [2024-12-05 19:20:42.093783] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:24.732 [2024-12-05 19:20:42.093865] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:24.732 [2024-12-05 19:20:42.093877] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:30:24.732 [2024-12-05 19:20:42.093886] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:24.732 [2024-12-05 19:20:42.093894] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:24.732 [2024-12-05 19:20:42.093915] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:24.732 [2024-12-05 19:20:42.093924] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:30:24.732 [2024-12-05 19:20:42.093935] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:24.732 [2024-12-05 19:20:42.093943] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:24.732 [2024-12-05 19:20:42.108535] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:24.732 [2024-12-05 19:20:42.108595] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:30:24.732 [2024-12-05 19:20:42.108606] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:24.732 [2024-12-05 19:20:42.108615] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:24.732 [2024-12-05 19:20:42.120439] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:24.732 [2024-12-05 19:20:42.120502] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:30:24.732 [2024-12-05 19:20:42.120513] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:24.732 [2024-12-05 19:20:42.120522] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:24.732 [2024-12-05 19:20:42.120596] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:24.732 [2024-12-05 19:20:42.120607] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:30:24.732 [2024-12-05 19:20:42.120616] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:24.732 [2024-12-05 19:20:42.120625] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:24.732 [2024-12-05 19:20:42.120670] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:24.732 [2024-12-05 19:20:42.120687] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:30:24.732 [2024-12-05 19:20:42.120698] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:24.732 [2024-12-05 19:20:42.120708] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:24.732 [2024-12-05 19:20:42.120784] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:24.732 [2024-12-05 19:20:42.120794] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:30:24.733 [2024-12-05 19:20:42.120803] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:24.733 [2024-12-05 19:20:42.120812] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:24.733 [2024-12-05 19:20:42.120842] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:24.733 [2024-12-05 19:20:42.120852] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:30:24.733 [2024-12-05 19:20:42.120860] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:24.733 [2024-12-05 19:20:42.120872] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:24.733 [2024-12-05 19:20:42.120917] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:24.733 [2024-12-05 19:20:42.120927] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:30:24.733 [2024-12-05 19:20:42.120935] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:24.733 [2024-12-05 19:20:42.120944] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:24.733 [2024-12-05 19:20:42.120994] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:24.733 [2024-12-05 19:20:42.121005] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:30:24.733 [2024-12-05 19:20:42.121017] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:24.733 [2024-12-05 19:20:42.121026] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:24.733 [2024-12-05 19:20:42.121164] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 58.420 ms, result 0 00:30:24.994 19:20:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:30:24.994 19:20:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@145 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:30:24.994 19:20:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@194 -- # tcp_initiator_cleanup 00:30:24.994 19:20:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@188 -- # tcp_initiator_shutdown 00:30:24.994 19:20:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@181 -- # [[ -n '' ]] 00:30:24.994 19:20:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@189 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:30:24.994 Remove shared memory files 00:30:24.994 19:20:42 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@15 -- # remove_shm 00:30:24.994 19:20:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@204 -- # echo Remove shared memory files 00:30:24.994 19:20:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@205 -- # rm -f rm -f 00:30:24.994 19:20:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@206 -- # rm -f rm -f 00:30:24.994 19:20:42 
ftl.ftl_upgrade_shutdown -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid94222 00:30:24.994 19:20:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:30:24.994 19:20:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@209 -- # rm -f rm -f 00:30:24.994 00:30:24.994 real 1m17.116s 00:30:24.994 user 1m42.901s 00:30:24.994 sys 0m21.438s 00:30:24.994 19:20:42 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:24.994 ************************************ 00:30:24.994 END TEST ftl_upgrade_shutdown 00:30:24.994 ************************************ 00:30:24.994 19:20:42 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:30:24.994 19:20:42 ftl -- ftl/ftl.sh@80 -- # [[ 1 -eq 1 ]] 00:30:24.994 19:20:42 ftl -- ftl/ftl.sh@81 -- # run_test ftl_restore_fast /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -f -c 0000:00:10.0 0000:00:11.0 00:30:24.994 19:20:42 ftl -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:30:24.994 19:20:42 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:24.994 19:20:42 ftl -- common/autotest_common.sh@10 -- # set +x 00:30:24.994 ************************************ 00:30:24.994 START TEST ftl_restore_fast 00:30:24.994 ************************************ 00:30:24.994 19:20:42 ftl.ftl_restore_fast -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -f -c 0000:00:10.0 0000:00:11.0 00:30:24.994 * Looking for test storage... 00:30:24.994 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:30:24.994 19:20:42 ftl.ftl_restore_fast -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:30:24.994 19:20:42 ftl.ftl_restore_fast -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:30:24.994 19:20:42 ftl.ftl_restore_fast -- common/autotest_common.sh@1711 -- # lcov --version 00:30:25.255 19:20:42 ftl.ftl_restore_fast -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:30:25.255 19:20:42 ftl.ftl_restore_fast -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:25.255 19:20:42 ftl.ftl_restore_fast -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:25.255 19:20:42 ftl.ftl_restore_fast -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:25.255 19:20:42 ftl.ftl_restore_fast -- scripts/common.sh@336 -- # IFS=.-: 00:30:25.255 19:20:42 ftl.ftl_restore_fast -- scripts/common.sh@336 -- # read -ra ver1 00:30:25.255 19:20:42 ftl.ftl_restore_fast -- scripts/common.sh@337 -- # IFS=.-: 00:30:25.255 19:20:42 ftl.ftl_restore_fast -- scripts/common.sh@337 -- # read -ra ver2 00:30:25.255 19:20:42 ftl.ftl_restore_fast -- scripts/common.sh@338 -- # local 'op=<' 00:30:25.255 19:20:42 ftl.ftl_restore_fast -- scripts/common.sh@340 -- # ver1_l=2 00:30:25.255 19:20:42 ftl.ftl_restore_fast -- scripts/common.sh@341 -- # ver2_l=1 00:30:25.255 19:20:42 ftl.ftl_restore_fast -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:25.255 19:20:42 ftl.ftl_restore_fast -- scripts/common.sh@344 -- # case "$op" in 00:30:25.255 19:20:42 ftl.ftl_restore_fast -- scripts/common.sh@345 -- # : 1 00:30:25.255 19:20:42 ftl.ftl_restore_fast -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:25.255 19:20:42 ftl.ftl_restore_fast -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:25.255 19:20:42 ftl.ftl_restore_fast -- scripts/common.sh@365 -- # decimal 1 00:30:25.255 19:20:42 ftl.ftl_restore_fast -- scripts/common.sh@353 -- # local d=1 00:30:25.255 19:20:42 ftl.ftl_restore_fast -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:25.255 19:20:42 ftl.ftl_restore_fast -- scripts/common.sh@355 -- # echo 1 00:30:25.255 19:20:42 ftl.ftl_restore_fast -- scripts/common.sh@365 -- # ver1[v]=1 00:30:25.255 19:20:42 ftl.ftl_restore_fast -- scripts/common.sh@366 -- # decimal 2 00:30:25.255 19:20:42 ftl.ftl_restore_fast -- scripts/common.sh@353 -- # local d=2 00:30:25.255 19:20:42 ftl.ftl_restore_fast -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:25.255 19:20:42 ftl.ftl_restore_fast -- scripts/common.sh@355 -- # echo 2 00:30:25.255 19:20:42 ftl.ftl_restore_fast -- scripts/common.sh@366 -- # ver2[v]=2 00:30:25.255 19:20:42 ftl.ftl_restore_fast -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:25.255 19:20:42 ftl.ftl_restore_fast -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:25.255 19:20:42 ftl.ftl_restore_fast -- scripts/common.sh@368 -- # return 0 00:30:25.255 19:20:42 ftl.ftl_restore_fast -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:25.255 19:20:42 ftl.ftl_restore_fast -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:30:25.255 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:25.255 --rc genhtml_branch_coverage=1 00:30:25.255 --rc genhtml_function_coverage=1 00:30:25.255 --rc genhtml_legend=1 00:30:25.255 --rc geninfo_all_blocks=1 00:30:25.255 --rc geninfo_unexecuted_blocks=1 00:30:25.255 00:30:25.255 ' 00:30:25.255 19:20:42 ftl.ftl_restore_fast -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:30:25.255 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:25.255 --rc genhtml_branch_coverage=1 00:30:25.255 --rc genhtml_function_coverage=1 00:30:25.255 --rc genhtml_legend=1 00:30:25.255 --rc geninfo_all_blocks=1 00:30:25.255 --rc geninfo_unexecuted_blocks=1 00:30:25.255 00:30:25.255 ' 00:30:25.255 19:20:42 ftl.ftl_restore_fast -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:30:25.255 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:25.255 --rc genhtml_branch_coverage=1 00:30:25.255 --rc genhtml_function_coverage=1 00:30:25.255 --rc genhtml_legend=1 00:30:25.255 --rc geninfo_all_blocks=1 00:30:25.255 --rc geninfo_unexecuted_blocks=1 00:30:25.255 00:30:25.255 ' 00:30:25.255 19:20:42 ftl.ftl_restore_fast -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:30:25.255 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:25.255 --rc genhtml_branch_coverage=1 00:30:25.255 --rc genhtml_function_coverage=1 00:30:25.255 --rc genhtml_legend=1 00:30:25.255 --rc geninfo_all_blocks=1 00:30:25.255 --rc geninfo_unexecuted_blocks=1 00:30:25.255 00:30:25.255 ' 00:30:25.255 19:20:42 ftl.ftl_restore_fast -- ftl/restore.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:30:25.255 19:20:42 ftl.ftl_restore_fast -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh 00:30:25.255 19:20:42 ftl.ftl_restore_fast -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:30:25.255 19:20:42 ftl.ftl_restore_fast -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:30:25.255 19:20:42 ftl.ftl_restore_fast -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
00:30:25.255 19:20:42 ftl.ftl_restore_fast -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:30:25.255 19:20:42 ftl.ftl_restore_fast -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:30:25.255 19:20:42 ftl.ftl_restore_fast -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:30:25.255 19:20:42 ftl.ftl_restore_fast -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:30:25.255 19:20:42 ftl.ftl_restore_fast -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:30:25.255 19:20:42 ftl.ftl_restore_fast -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:30:25.255 19:20:42 ftl.ftl_restore_fast -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:30:25.255 19:20:42 ftl.ftl_restore_fast -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:30:25.256 19:20:42 ftl.ftl_restore_fast -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:30:25.256 19:20:42 ftl.ftl_restore_fast -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:30:25.256 19:20:42 ftl.ftl_restore_fast -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:30:25.256 19:20:42 ftl.ftl_restore_fast -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:30:25.256 19:20:42 ftl.ftl_restore_fast -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:30:25.256 19:20:42 ftl.ftl_restore_fast -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:30:25.256 19:20:42 ftl.ftl_restore_fast -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:30:25.256 19:20:42 ftl.ftl_restore_fast -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:30:25.256 19:20:42 ftl.ftl_restore_fast -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:30:25.256 19:20:42 ftl.ftl_restore_fast -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:30:25.256 19:20:42 ftl.ftl_restore_fast -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:30:25.256 19:20:42 ftl.ftl_restore_fast -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:30:25.256 19:20:42 ftl.ftl_restore_fast -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:30:25.256 19:20:42 ftl.ftl_restore_fast -- ftl/common.sh@23 -- # spdk_ini_pid= 00:30:25.256 19:20:42 ftl.ftl_restore_fast -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:30:25.256 19:20:42 ftl.ftl_restore_fast -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:30:25.256 19:20:42 ftl.ftl_restore_fast -- ftl/restore.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:30:25.256 19:20:42 ftl.ftl_restore_fast -- ftl/restore.sh@13 -- # mktemp -d 00:30:25.256 19:20:42 ftl.ftl_restore_fast -- ftl/restore.sh@13 -- # mount_dir=/tmp/tmp.28W1VmXIHa 00:30:25.256 19:20:42 ftl.ftl_restore_fast -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:30:25.256 19:20:42 ftl.ftl_restore_fast -- ftl/restore.sh@16 -- # case $opt in 00:30:25.256 19:20:42 ftl.ftl_restore_fast -- ftl/restore.sh@19 -- # fast_shutdown=1 00:30:25.256 19:20:42 ftl.ftl_restore_fast -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:30:25.256 19:20:42 ftl.ftl_restore_fast -- ftl/restore.sh@16 -- # case $opt in 00:30:25.256 19:20:42 ftl.ftl_restore_fast -- ftl/restore.sh@18 -- # nv_cache=0000:00:10.0 00:30:25.256 19:20:42 ftl.ftl_restore_fast 
-- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:30:25.256 19:20:42 ftl.ftl_restore_fast -- ftl/restore.sh@23 -- # shift 3 00:30:25.256 19:20:42 ftl.ftl_restore_fast -- ftl/restore.sh@24 -- # device=0000:00:11.0 00:30:25.256 19:20:42 ftl.ftl_restore_fast -- ftl/restore.sh@25 -- # timeout=240 00:30:25.256 19:20:42 ftl.ftl_restore_fast -- ftl/restore.sh@36 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:30:25.256 19:20:42 ftl.ftl_restore_fast -- ftl/restore.sh@39 -- # svcpid=94660 00:30:25.256 19:20:42 ftl.ftl_restore_fast -- ftl/restore.sh@41 -- # waitforlisten 94660 00:30:25.256 19:20:42 ftl.ftl_restore_fast -- common/autotest_common.sh@835 -- # '[' -z 94660 ']' 00:30:25.256 19:20:42 ftl.ftl_restore_fast -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:25.256 19:20:42 ftl.ftl_restore_fast -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:25.256 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:25.256 19:20:42 ftl.ftl_restore_fast -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:25.256 19:20:42 ftl.ftl_restore_fast -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:25.256 19:20:42 ftl.ftl_restore_fast -- common/autotest_common.sh@10 -- # set +x 00:30:25.256 19:20:42 ftl.ftl_restore_fast -- ftl/restore.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:30:25.256 [2024-12-05 19:20:42.746914] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 22.11.4 initialization... 00:30:25.256 [2024-12-05 19:20:42.747066] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94660 ] 00:30:25.518 [2024-12-05 19:20:42.894358] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:25.518 [2024-12-05 19:20:42.936151] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:26.090 19:20:43 ftl.ftl_restore_fast -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:26.090 19:20:43 ftl.ftl_restore_fast -- common/autotest_common.sh@868 -- # return 0 00:30:26.090 19:20:43 ftl.ftl_restore_fast -- ftl/restore.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:30:26.090 19:20:43 ftl.ftl_restore_fast -- ftl/common.sh@54 -- # local name=nvme0 00:30:26.090 19:20:43 ftl.ftl_restore_fast -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:30:26.090 19:20:43 ftl.ftl_restore_fast -- ftl/common.sh@56 -- # local size=103424 00:30:26.090 19:20:43 ftl.ftl_restore_fast -- ftl/common.sh@59 -- # local base_bdev 00:30:26.090 19:20:43 ftl.ftl_restore_fast -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:30:26.351 19:20:43 ftl.ftl_restore_fast -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:30:26.351 19:20:43 ftl.ftl_restore_fast -- ftl/common.sh@62 -- # local base_size 00:30:26.351 19:20:43 ftl.ftl_restore_fast -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:30:26.351 19:20:43 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:30:26.351 19:20:43 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # local bdev_info 00:30:26.351 19:20:43 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # local bs 00:30:26.351 19:20:43 ftl.ftl_restore_fast -- 
common/autotest_common.sh@1385 -- # local nb 00:30:26.351 19:20:43 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:30:26.612 19:20:44 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:30:26.612 { 00:30:26.612 "name": "nvme0n1", 00:30:26.612 "aliases": [ 00:30:26.612 "b599d54e-c68b-4735-a4c6-e32192b5cd65" 00:30:26.612 ], 00:30:26.612 "product_name": "NVMe disk", 00:30:26.612 "block_size": 4096, 00:30:26.612 "num_blocks": 1310720, 00:30:26.612 "uuid": "b599d54e-c68b-4735-a4c6-e32192b5cd65", 00:30:26.612 "numa_id": -1, 00:30:26.612 "assigned_rate_limits": { 00:30:26.612 "rw_ios_per_sec": 0, 00:30:26.612 "rw_mbytes_per_sec": 0, 00:30:26.612 "r_mbytes_per_sec": 0, 00:30:26.612 "w_mbytes_per_sec": 0 00:30:26.612 }, 00:30:26.612 "claimed": true, 00:30:26.612 "claim_type": "read_many_write_one", 00:30:26.612 "zoned": false, 00:30:26.612 "supported_io_types": { 00:30:26.612 "read": true, 00:30:26.612 "write": true, 00:30:26.612 "unmap": true, 00:30:26.612 "flush": true, 00:30:26.612 "reset": true, 00:30:26.612 "nvme_admin": true, 00:30:26.612 "nvme_io": true, 00:30:26.612 "nvme_io_md": false, 00:30:26.612 "write_zeroes": true, 00:30:26.612 "zcopy": false, 00:30:26.612 "get_zone_info": false, 00:30:26.612 "zone_management": false, 00:30:26.612 "zone_append": false, 00:30:26.612 "compare": true, 00:30:26.612 "compare_and_write": false, 00:30:26.612 "abort": true, 00:30:26.612 "seek_hole": false, 00:30:26.612 "seek_data": false, 00:30:26.612 "copy": true, 00:30:26.612 "nvme_iov_md": false 00:30:26.612 }, 00:30:26.612 "driver_specific": { 00:30:26.612 "nvme": [ 00:30:26.612 { 00:30:26.612 "pci_address": "0000:00:11.0", 00:30:26.612 "trid": { 00:30:26.612 "trtype": "PCIe", 00:30:26.612 "traddr": "0000:00:11.0" 00:30:26.612 }, 00:30:26.612 "ctrlr_data": { 00:30:26.612 "cntlid": 0, 00:30:26.612 "vendor_id": "0x1b36", 00:30:26.612 "model_number": "QEMU NVMe Ctrl", 00:30:26.612 "serial_number": "12341", 00:30:26.612 "firmware_revision": "8.0.0", 00:30:26.612 "subnqn": "nqn.2019-08.org.qemu:12341", 00:30:26.612 "oacs": { 00:30:26.612 "security": 0, 00:30:26.612 "format": 1, 00:30:26.612 "firmware": 0, 00:30:26.612 "ns_manage": 1 00:30:26.613 }, 00:30:26.613 "multi_ctrlr": false, 00:30:26.613 "ana_reporting": false 00:30:26.613 }, 00:30:26.613 "vs": { 00:30:26.613 "nvme_version": "1.4" 00:30:26.613 }, 00:30:26.613 "ns_data": { 00:30:26.613 "id": 1, 00:30:26.613 "can_share": false 00:30:26.613 } 00:30:26.613 } 00:30:26.613 ], 00:30:26.613 "mp_policy": "active_passive" 00:30:26.613 } 00:30:26.613 } 00:30:26.613 ]' 00:30:26.613 19:20:44 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:30:26.613 19:20:44 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # bs=4096 00:30:26.613 19:20:44 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:30:26.875 19:20:44 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # nb=1310720 00:30:26.875 19:20:44 ftl.ftl_restore_fast -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:30:26.875 19:20:44 ftl.ftl_restore_fast -- common/autotest_common.sh@1392 -- # echo 5120 00:30:26.875 19:20:44 ftl.ftl_restore_fast -- ftl/common.sh@63 -- # base_size=5120 00:30:26.875 19:20:44 ftl.ftl_restore_fast -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:30:26.875 19:20:44 ftl.ftl_restore_fast -- ftl/common.sh@67 -- # clear_lvols 00:30:26.875 19:20:44 ftl.ftl_restore_fast -- ftl/common.sh@28 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:30:26.875 19:20:44 ftl.ftl_restore_fast -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:30:26.875 19:20:44 ftl.ftl_restore_fast -- ftl/common.sh@28 -- # stores=4aa7393c-67ca-4cbe-9867-69576c539c5c 00:30:26.875 19:20:44 ftl.ftl_restore_fast -- ftl/common.sh@29 -- # for lvs in $stores 00:30:26.875 19:20:44 ftl.ftl_restore_fast -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 4aa7393c-67ca-4cbe-9867-69576c539c5c 00:30:27.136 19:20:44 ftl.ftl_restore_fast -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:30:27.397 19:20:44 ftl.ftl_restore_fast -- ftl/common.sh@68 -- # lvs=ac906215-a184-4d8b-aac9-ad4edb6337ef 00:30:27.397 19:20:44 ftl.ftl_restore_fast -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u ac906215-a184-4d8b-aac9-ad4edb6337ef 00:30:27.660 19:20:45 ftl.ftl_restore_fast -- ftl/restore.sh@43 -- # split_bdev=c1f94281-b7bd-47b2-bb03-e4e77fba2463 00:30:27.660 19:20:45 ftl.ftl_restore_fast -- ftl/restore.sh@44 -- # '[' -n 0000:00:10.0 ']' 00:30:27.660 19:20:45 ftl.ftl_restore_fast -- ftl/restore.sh@45 -- # create_nv_cache_bdev nvc0 0000:00:10.0 c1f94281-b7bd-47b2-bb03-e4e77fba2463 00:30:27.660 19:20:45 ftl.ftl_restore_fast -- ftl/common.sh@35 -- # local name=nvc0 00:30:27.660 19:20:45 ftl.ftl_restore_fast -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:30:27.660 19:20:45 ftl.ftl_restore_fast -- ftl/common.sh@37 -- # local base_bdev=c1f94281-b7bd-47b2-bb03-e4e77fba2463 00:30:27.660 19:20:45 ftl.ftl_restore_fast -- ftl/common.sh@38 -- # local cache_size= 00:30:27.660 19:20:45 ftl.ftl_restore_fast -- ftl/common.sh@41 -- # get_bdev_size c1f94281-b7bd-47b2-bb03-e4e77fba2463 00:30:27.660 19:20:45 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # local bdev_name=c1f94281-b7bd-47b2-bb03-e4e77fba2463 00:30:27.660 19:20:45 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # local bdev_info 00:30:27.660 19:20:45 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # local bs 00:30:27.660 19:20:45 ftl.ftl_restore_fast -- common/autotest_common.sh@1385 -- # local nb 00:30:27.660 19:20:45 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b c1f94281-b7bd-47b2-bb03-e4e77fba2463 00:30:27.923 19:20:45 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:30:27.923 { 00:30:27.923 "name": "c1f94281-b7bd-47b2-bb03-e4e77fba2463", 00:30:27.923 "aliases": [ 00:30:27.923 "lvs/nvme0n1p0" 00:30:27.923 ], 00:30:27.923 "product_name": "Logical Volume", 00:30:27.923 "block_size": 4096, 00:30:27.923 "num_blocks": 26476544, 00:30:27.923 "uuid": "c1f94281-b7bd-47b2-bb03-e4e77fba2463", 00:30:27.923 "assigned_rate_limits": { 00:30:27.923 "rw_ios_per_sec": 0, 00:30:27.923 "rw_mbytes_per_sec": 0, 00:30:27.923 "r_mbytes_per_sec": 0, 00:30:27.923 "w_mbytes_per_sec": 0 00:30:27.923 }, 00:30:27.923 "claimed": false, 00:30:27.923 "zoned": false, 00:30:27.923 "supported_io_types": { 00:30:27.923 "read": true, 00:30:27.923 "write": true, 00:30:27.923 "unmap": true, 00:30:27.923 "flush": false, 00:30:27.923 "reset": true, 00:30:27.923 "nvme_admin": false, 00:30:27.923 "nvme_io": false, 00:30:27.923 "nvme_io_md": false, 00:30:27.923 "write_zeroes": true, 00:30:27.923 "zcopy": false, 00:30:27.923 "get_zone_info": false, 00:30:27.923 "zone_management": false, 00:30:27.923 
"zone_append": false, 00:30:27.923 "compare": false, 00:30:27.923 "compare_and_write": false, 00:30:27.923 "abort": false, 00:30:27.923 "seek_hole": true, 00:30:27.923 "seek_data": true, 00:30:27.923 "copy": false, 00:30:27.923 "nvme_iov_md": false 00:30:27.923 }, 00:30:27.923 "driver_specific": { 00:30:27.923 "lvol": { 00:30:27.923 "lvol_store_uuid": "ac906215-a184-4d8b-aac9-ad4edb6337ef", 00:30:27.923 "base_bdev": "nvme0n1", 00:30:27.923 "thin_provision": true, 00:30:27.923 "num_allocated_clusters": 0, 00:30:27.923 "snapshot": false, 00:30:27.923 "clone": false, 00:30:27.923 "esnap_clone": false 00:30:27.923 } 00:30:27.923 } 00:30:27.923 } 00:30:27.923 ]' 00:30:27.923 19:20:45 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:30:27.923 19:20:45 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # bs=4096 00:30:27.923 19:20:45 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:30:27.923 19:20:45 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # nb=26476544 00:30:27.923 19:20:45 ftl.ftl_restore_fast -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:30:27.923 19:20:45 ftl.ftl_restore_fast -- common/autotest_common.sh@1392 -- # echo 103424 00:30:27.923 19:20:45 ftl.ftl_restore_fast -- ftl/common.sh@41 -- # local base_size=5171 00:30:27.923 19:20:45 ftl.ftl_restore_fast -- ftl/common.sh@44 -- # local nvc_bdev 00:30:27.923 19:20:45 ftl.ftl_restore_fast -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:30:28.184 19:20:45 ftl.ftl_restore_fast -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:30:28.184 19:20:45 ftl.ftl_restore_fast -- ftl/common.sh@47 -- # [[ -z '' ]] 00:30:28.184 19:20:45 ftl.ftl_restore_fast -- ftl/common.sh@48 -- # get_bdev_size c1f94281-b7bd-47b2-bb03-e4e77fba2463 00:30:28.184 19:20:45 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # local bdev_name=c1f94281-b7bd-47b2-bb03-e4e77fba2463 00:30:28.184 19:20:45 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # local bdev_info 00:30:28.184 19:20:45 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # local bs 00:30:28.184 19:20:45 ftl.ftl_restore_fast -- common/autotest_common.sh@1385 -- # local nb 00:30:28.184 19:20:45 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b c1f94281-b7bd-47b2-bb03-e4e77fba2463 00:30:28.447 19:20:45 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:30:28.447 { 00:30:28.447 "name": "c1f94281-b7bd-47b2-bb03-e4e77fba2463", 00:30:28.447 "aliases": [ 00:30:28.447 "lvs/nvme0n1p0" 00:30:28.447 ], 00:30:28.447 "product_name": "Logical Volume", 00:30:28.447 "block_size": 4096, 00:30:28.447 "num_blocks": 26476544, 00:30:28.447 "uuid": "c1f94281-b7bd-47b2-bb03-e4e77fba2463", 00:30:28.447 "assigned_rate_limits": { 00:30:28.447 "rw_ios_per_sec": 0, 00:30:28.447 "rw_mbytes_per_sec": 0, 00:30:28.447 "r_mbytes_per_sec": 0, 00:30:28.447 "w_mbytes_per_sec": 0 00:30:28.447 }, 00:30:28.447 "claimed": false, 00:30:28.447 "zoned": false, 00:30:28.447 "supported_io_types": { 00:30:28.447 "read": true, 00:30:28.447 "write": true, 00:30:28.447 "unmap": true, 00:30:28.447 "flush": false, 00:30:28.447 "reset": true, 00:30:28.447 "nvme_admin": false, 00:30:28.447 "nvme_io": false, 00:30:28.447 "nvme_io_md": false, 00:30:28.447 "write_zeroes": true, 00:30:28.447 "zcopy": false, 00:30:28.447 "get_zone_info": false, 00:30:28.447 
"zone_management": false, 00:30:28.447 "zone_append": false, 00:30:28.447 "compare": false, 00:30:28.447 "compare_and_write": false, 00:30:28.447 "abort": false, 00:30:28.447 "seek_hole": true, 00:30:28.447 "seek_data": true, 00:30:28.447 "copy": false, 00:30:28.447 "nvme_iov_md": false 00:30:28.447 }, 00:30:28.447 "driver_specific": { 00:30:28.447 "lvol": { 00:30:28.447 "lvol_store_uuid": "ac906215-a184-4d8b-aac9-ad4edb6337ef", 00:30:28.447 "base_bdev": "nvme0n1", 00:30:28.447 "thin_provision": true, 00:30:28.448 "num_allocated_clusters": 0, 00:30:28.448 "snapshot": false, 00:30:28.448 "clone": false, 00:30:28.448 "esnap_clone": false 00:30:28.448 } 00:30:28.448 } 00:30:28.448 } 00:30:28.448 ]' 00:30:28.448 19:20:45 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:30:28.448 19:20:45 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # bs=4096 00:30:28.448 19:20:45 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:30:28.448 19:20:45 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # nb=26476544 00:30:28.448 19:20:45 ftl.ftl_restore_fast -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:30:28.448 19:20:45 ftl.ftl_restore_fast -- common/autotest_common.sh@1392 -- # echo 103424 00:30:28.448 19:20:45 ftl.ftl_restore_fast -- ftl/common.sh@48 -- # cache_size=5171 00:30:28.448 19:20:45 ftl.ftl_restore_fast -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:30:28.710 19:20:46 ftl.ftl_restore_fast -- ftl/restore.sh@45 -- # nvc_bdev=nvc0n1p0 00:30:28.710 19:20:46 ftl.ftl_restore_fast -- ftl/restore.sh@48 -- # get_bdev_size c1f94281-b7bd-47b2-bb03-e4e77fba2463 00:30:28.710 19:20:46 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # local bdev_name=c1f94281-b7bd-47b2-bb03-e4e77fba2463 00:30:28.710 19:20:46 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # local bdev_info 00:30:28.710 19:20:46 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # local bs 00:30:28.710 19:20:46 ftl.ftl_restore_fast -- common/autotest_common.sh@1385 -- # local nb 00:30:28.710 19:20:46 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b c1f94281-b7bd-47b2-bb03-e4e77fba2463 00:30:28.972 19:20:46 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:30:28.972 { 00:30:28.972 "name": "c1f94281-b7bd-47b2-bb03-e4e77fba2463", 00:30:28.972 "aliases": [ 00:30:28.972 "lvs/nvme0n1p0" 00:30:28.972 ], 00:30:28.972 "product_name": "Logical Volume", 00:30:28.972 "block_size": 4096, 00:30:28.972 "num_blocks": 26476544, 00:30:28.972 "uuid": "c1f94281-b7bd-47b2-bb03-e4e77fba2463", 00:30:28.972 "assigned_rate_limits": { 00:30:28.972 "rw_ios_per_sec": 0, 00:30:28.972 "rw_mbytes_per_sec": 0, 00:30:28.972 "r_mbytes_per_sec": 0, 00:30:28.972 "w_mbytes_per_sec": 0 00:30:28.972 }, 00:30:28.972 "claimed": false, 00:30:28.972 "zoned": false, 00:30:28.972 "supported_io_types": { 00:30:28.972 "read": true, 00:30:28.972 "write": true, 00:30:28.972 "unmap": true, 00:30:28.972 "flush": false, 00:30:28.972 "reset": true, 00:30:28.972 "nvme_admin": false, 00:30:28.972 "nvme_io": false, 00:30:28.972 "nvme_io_md": false, 00:30:28.972 "write_zeroes": true, 00:30:28.972 "zcopy": false, 00:30:28.972 "get_zone_info": false, 00:30:28.972 "zone_management": false, 00:30:28.972 "zone_append": false, 00:30:28.972 "compare": false, 00:30:28.972 "compare_and_write": false, 00:30:28.972 "abort": false, 
00:30:28.972 "seek_hole": true, 00:30:28.972 "seek_data": true, 00:30:28.972 "copy": false, 00:30:28.972 "nvme_iov_md": false 00:30:28.972 }, 00:30:28.972 "driver_specific": { 00:30:28.972 "lvol": { 00:30:28.972 "lvol_store_uuid": "ac906215-a184-4d8b-aac9-ad4edb6337ef", 00:30:28.972 "base_bdev": "nvme0n1", 00:30:28.972 "thin_provision": true, 00:30:28.972 "num_allocated_clusters": 0, 00:30:28.972 "snapshot": false, 00:30:28.972 "clone": false, 00:30:28.972 "esnap_clone": false 00:30:28.972 } 00:30:28.972 } 00:30:28.972 } 00:30:28.972 ]' 00:30:28.972 19:20:46 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:30:28.972 19:20:46 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # bs=4096 00:30:28.972 19:20:46 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:30:28.972 19:20:46 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # nb=26476544 00:30:28.972 19:20:46 ftl.ftl_restore_fast -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:30:28.972 19:20:46 ftl.ftl_restore_fast -- common/autotest_common.sh@1392 -- # echo 103424 00:30:28.972 19:20:46 ftl.ftl_restore_fast -- ftl/restore.sh@48 -- # l2p_dram_size_mb=10 00:30:28.972 19:20:46 ftl.ftl_restore_fast -- ftl/restore.sh@49 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d c1f94281-b7bd-47b2-bb03-e4e77fba2463 --l2p_dram_limit 10' 00:30:28.972 19:20:46 ftl.ftl_restore_fast -- ftl/restore.sh@51 -- # '[' -n '' ']' 00:30:28.972 19:20:46 ftl.ftl_restore_fast -- ftl/restore.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:30:28.972 19:20:46 ftl.ftl_restore_fast -- ftl/restore.sh@52 -- # ftl_construct_args+=' -c nvc0n1p0' 00:30:28.972 19:20:46 ftl.ftl_restore_fast -- ftl/restore.sh@54 -- # '[' 1 -eq 1 ']' 00:30:28.972 19:20:46 ftl.ftl_restore_fast -- ftl/restore.sh@55 -- # ftl_construct_args+=' --fast-shutdown' 00:30:28.972 19:20:46 ftl.ftl_restore_fast -- ftl/restore.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d c1f94281-b7bd-47b2-bb03-e4e77fba2463 --l2p_dram_limit 10 -c nvc0n1p0 --fast-shutdown 00:30:29.233 [2024-12-05 19:20:46.692424] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:29.233 [2024-12-05 19:20:46.692471] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:30:29.233 [2024-12-05 19:20:46.692483] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:30:29.233 [2024-12-05 19:20:46.692492] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:29.233 [2024-12-05 19:20:46.692552] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:29.233 [2024-12-05 19:20:46.692562] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:30:29.233 [2024-12-05 19:20:46.692571] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms 00:30:29.233 [2024-12-05 19:20:46.692581] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:29.233 [2024-12-05 19:20:46.692597] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:30:29.233 [2024-12-05 19:20:46.692830] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:30:29.233 [2024-12-05 19:20:46.692844] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:29.233 [2024-12-05 19:20:46.692852] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:30:29.233 [2024-12-05 19:20:46.692861] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.251 ms 00:30:29.233 [2024-12-05 19:20:46.692872] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:29.233 [2024-12-05 19:20:46.692899] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 42867315-b16e-4303-b9a5-ab6c21560170 00:30:29.233 [2024-12-05 19:20:46.694422] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:29.233 [2024-12-05 19:20:46.694454] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:30:29.233 [2024-12-05 19:20:46.694465] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:30:29.233 [2024-12-05 19:20:46.694473] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:29.233 [2024-12-05 19:20:46.702332] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:29.233 [2024-12-05 19:20:46.702356] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:30:29.233 [2024-12-05 19:20:46.702368] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.819 ms 00:30:29.233 [2024-12-05 19:20:46.702375] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:29.233 [2024-12-05 19:20:46.702482] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:29.233 [2024-12-05 19:20:46.702491] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:30:29.233 [2024-12-05 19:20:46.702500] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:30:29.233 [2024-12-05 19:20:46.702506] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:29.233 [2024-12-05 19:20:46.702557] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:29.233 [2024-12-05 19:20:46.702565] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:30:29.233 [2024-12-05 19:20:46.702573] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:30:29.233 [2024-12-05 19:20:46.702582] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:29.233 [2024-12-05 19:20:46.702605] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:30:29.234 [2024-12-05 19:20:46.704430] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:29.234 [2024-12-05 19:20:46.704450] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:30:29.234 [2024-12-05 19:20:46.704458] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.832 ms 00:30:29.234 [2024-12-05 19:20:46.704465] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:29.234 [2024-12-05 19:20:46.704495] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:29.234 [2024-12-05 19:20:46.704505] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:30:29.234 [2024-12-05 19:20:46.704512] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:30:29.234 [2024-12-05 19:20:46.704521] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:29.234 [2024-12-05 19:20:46.704538] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:30:29.234 [2024-12-05 19:20:46.704672] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:30:29.234 [2024-12-05 19:20:46.704684] 
upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:30:29.234 [2024-12-05 19:20:46.704699] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:30:29.234 [2024-12-05 19:20:46.704707] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:30:29.234 [2024-12-05 19:20:46.704718] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:30:29.234 [2024-12-05 19:20:46.704726] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:30:29.234 [2024-12-05 19:20:46.704736] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:30:29.234 [2024-12-05 19:20:46.704742] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:30:29.234 [2024-12-05 19:20:46.704750] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:30:29.234 [2024-12-05 19:20:46.704757] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:29.234 [2024-12-05 19:20:46.704764] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:30:29.234 [2024-12-05 19:20:46.704771] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.220 ms 00:30:29.234 [2024-12-05 19:20:46.704779] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:29.234 [2024-12-05 19:20:46.704845] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:29.234 [2024-12-05 19:20:46.704855] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:30:29.234 [2024-12-05 19:20:46.704861] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:30:29.234 [2024-12-05 19:20:46.704872] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:29.234 [2024-12-05 19:20:46.704945] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:30:29.234 [2024-12-05 19:20:46.704961] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:30:29.234 [2024-12-05 19:20:46.704969] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:30:29.234 [2024-12-05 19:20:46.704977] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:29.234 [2024-12-05 19:20:46.704984] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:30:29.234 [2024-12-05 19:20:46.704991] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:30:29.234 [2024-12-05 19:20:46.704997] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:30:29.234 [2024-12-05 19:20:46.705004] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:30:29.234 [2024-12-05 19:20:46.705011] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:30:29.234 [2024-12-05 19:20:46.705023] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:30:29.234 [2024-12-05 19:20:46.705029] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:30:29.234 [2024-12-05 19:20:46.705036] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:30:29.234 [2024-12-05 19:20:46.705041] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:30:29.234 [2024-12-05 19:20:46.705050] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:30:29.234 [2024-12-05 19:20:46.705056] ftl_layout.c: 
131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:30:29.234 [2024-12-05 19:20:46.705063] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:29.234 [2024-12-05 19:20:46.705069] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:30:29.234 [2024-12-05 19:20:46.705075] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:30:29.234 [2024-12-05 19:20:46.705081] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:29.234 [2024-12-05 19:20:46.705089] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:30:29.234 [2024-12-05 19:20:46.705095] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:30:29.234 [2024-12-05 19:20:46.705103] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:29.234 [2024-12-05 19:20:46.705109] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:30:29.234 [2024-12-05 19:20:46.705117] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:30:29.234 [2024-12-05 19:20:46.705123] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:29.234 [2024-12-05 19:20:46.705132] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:30:29.234 [2024-12-05 19:20:46.705138] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:30:29.234 [2024-12-05 19:20:46.705145] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:29.234 [2024-12-05 19:20:46.705152] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:30:29.234 [2024-12-05 19:20:46.705161] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:30:29.234 [2024-12-05 19:20:46.705166] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:29.234 [2024-12-05 19:20:46.705174] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:30:29.234 [2024-12-05 19:20:46.705181] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:30:29.234 [2024-12-05 19:20:46.705187] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:30:29.234 [2024-12-05 19:20:46.705195] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:30:29.234 [2024-12-05 19:20:46.705203] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:30:29.234 [2024-12-05 19:20:46.705209] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:30:29.234 [2024-12-05 19:20:46.705218] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:30:29.234 [2024-12-05 19:20:46.705224] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:30:29.234 [2024-12-05 19:20:46.705232] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:29.234 [2024-12-05 19:20:46.705238] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:30:29.234 [2024-12-05 19:20:46.705248] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:30:29.234 [2024-12-05 19:20:46.705268] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:29.234 [2024-12-05 19:20:46.705276] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:30:29.234 [2024-12-05 19:20:46.705289] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:30:29.234 [2024-12-05 19:20:46.705300] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 
00:30:29.234 [2024-12-05 19:20:46.705308] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:29.234 [2024-12-05 19:20:46.705320] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:30:29.234 [2024-12-05 19:20:46.705327] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:30:29.234 [2024-12-05 19:20:46.705335] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:30:29.234 [2024-12-05 19:20:46.705341] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:30:29.234 [2024-12-05 19:20:46.705360] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:30:29.234 [2024-12-05 19:20:46.705368] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:30:29.234 [2024-12-05 19:20:46.705378] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:30:29.234 [2024-12-05 19:20:46.705388] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:30:29.234 [2024-12-05 19:20:46.705398] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:30:29.234 [2024-12-05 19:20:46.705404] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:30:29.234 [2024-12-05 19:20:46.705413] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:30:29.234 [2024-12-05 19:20:46.705420] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:30:29.234 [2024-12-05 19:20:46.705429] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:30:29.234 [2024-12-05 19:20:46.705435] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:30:29.234 [2024-12-05 19:20:46.705444] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:30:29.234 [2024-12-05 19:20:46.705451] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:30:29.234 [2024-12-05 19:20:46.705459] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:30:29.234 [2024-12-05 19:20:46.705466] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:30:29.234 [2024-12-05 19:20:46.705474] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:30:29.234 [2024-12-05 19:20:46.705479] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:30:29.234 [2024-12-05 19:20:46.705488] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:30:29.234 [2024-12-05 19:20:46.705494] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 
00:30:29.235 [2024-12-05 19:20:46.705501] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:30:29.235 [2024-12-05 19:20:46.705511] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:30:29.235 [2024-12-05 19:20:46.705520] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:30:29.235 [2024-12-05 19:20:46.705525] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:30:29.235 [2024-12-05 19:20:46.705533] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:30:29.235 [2024-12-05 19:20:46.705539] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:30:29.235 [2024-12-05 19:20:46.705546] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:29.235 [2024-12-05 19:20:46.705553] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:30:29.235 [2024-12-05 19:20:46.705562] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.651 ms 00:30:29.235 [2024-12-05 19:20:46.705568] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:29.235 [2024-12-05 19:20:46.705603] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:30:29.235 [2024-12-05 19:20:46.705611] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:30:33.449 [2024-12-05 19:20:50.224556] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:33.449 [2024-12-05 19:20:50.224608] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:30:33.449 [2024-12-05 19:20:50.224622] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3518.935 ms 00:30:33.449 [2024-12-05 19:20:50.224630] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:33.449 [2024-12-05 19:20:50.235063] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:33.449 [2024-12-05 19:20:50.235103] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:30:33.449 [2024-12-05 19:20:50.235120] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.351 ms 00:30:33.449 [2024-12-05 19:20:50.235128] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:33.449 [2024-12-05 19:20:50.235227] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:33.449 [2024-12-05 19:20:50.235235] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:30:33.449 [2024-12-05 19:20:50.235243] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:30:33.449 [2024-12-05 19:20:50.235259] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:33.449 [2024-12-05 19:20:50.245261] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:33.449 [2024-12-05 19:20:50.245294] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:30:33.449 [2024-12-05 19:20:50.245305] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.956 ms 00:30:33.449 [2024-12-05 19:20:50.245314] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:33.449 [2024-12-05 19:20:50.245340] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:33.449 [2024-12-05 19:20:50.245347] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:30:33.449 [2024-12-05 19:20:50.245356] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:30:33.449 [2024-12-05 19:20:50.245365] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:33.449 [2024-12-05 19:20:50.245767] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:33.449 [2024-12-05 19:20:50.245785] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:30:33.449 [2024-12-05 19:20:50.245794] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.364 ms 00:30:33.449 [2024-12-05 19:20:50.245800] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:33.449 [2024-12-05 19:20:50.245895] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:33.449 [2024-12-05 19:20:50.245906] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:30:33.449 [2024-12-05 19:20:50.245915] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.072 ms 00:30:33.449 [2024-12-05 19:20:50.245922] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:33.449 [2024-12-05 19:20:50.252365] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:33.449 [2024-12-05 19:20:50.252391] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:30:33.449 [2024-12-05 19:20:50.252401] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.425 ms 00:30:33.449 [2024-12-05 19:20:50.252408] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:33.449 [2024-12-05 19:20:50.274835] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:30:33.449 [2024-12-05 19:20:50.278035] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:33.449 [2024-12-05 19:20:50.278069] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:30:33.449 [2024-12-05 19:20:50.278082] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.542 ms 00:30:33.449 [2024-12-05 19:20:50.278099] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:33.449 [2024-12-05 19:20:50.349688] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:33.449 [2024-12-05 19:20:50.349737] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:30:33.449 [2024-12-05 19:20:50.349753] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 71.552 ms 00:30:33.449 [2024-12-05 19:20:50.349767] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:33.449 [2024-12-05 19:20:50.349963] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:33.449 [2024-12-05 19:20:50.349977] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:30:33.449 [2024-12-05 19:20:50.349986] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.152 ms 00:30:33.449 [2024-12-05 19:20:50.349996] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:33.449 [2024-12-05 19:20:50.354308] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:33.449 [2024-12-05 19:20:50.354350] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial 
band info metadata 00:30:33.449 [2024-12-05 19:20:50.354363] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.294 ms 00:30:33.449 [2024-12-05 19:20:50.354372] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:33.449 [2024-12-05 19:20:50.357927] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:33.449 [2024-12-05 19:20:50.357962] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:30:33.449 [2024-12-05 19:20:50.357972] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.521 ms 00:30:33.449 [2024-12-05 19:20:50.357981] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:33.449 [2024-12-05 19:20:50.358346] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:33.449 [2024-12-05 19:20:50.358360] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:30:33.449 [2024-12-05 19:20:50.358369] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.335 ms 00:30:33.449 [2024-12-05 19:20:50.358382] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:33.449 [2024-12-05 19:20:50.389434] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:33.449 [2024-12-05 19:20:50.389473] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:30:33.449 [2024-12-05 19:20:50.389487] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.033 ms 00:30:33.449 [2024-12-05 19:20:50.389498] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:33.449 [2024-12-05 19:20:50.395007] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:33.449 [2024-12-05 19:20:50.395048] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:30:33.449 [2024-12-05 19:20:50.395058] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.465 ms 00:30:33.449 [2024-12-05 19:20:50.395068] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:33.449 [2024-12-05 19:20:50.398691] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:33.449 [2024-12-05 19:20:50.398846] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:30:33.449 [2024-12-05 19:20:50.398862] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.589 ms 00:30:33.449 [2024-12-05 19:20:50.398871] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:33.449 [2024-12-05 19:20:50.402832] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:33.449 [2024-12-05 19:20:50.402870] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:30:33.449 [2024-12-05 19:20:50.402879] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.929 ms 00:30:33.450 [2024-12-05 19:20:50.402891] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:33.450 [2024-12-05 19:20:50.402928] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:33.450 [2024-12-05 19:20:50.402939] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:30:33.450 [2024-12-05 19:20:50.402948] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:30:33.450 [2024-12-05 19:20:50.402958] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:33.450 [2024-12-05 19:20:50.403028] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:33.450 [2024-12-05 19:20:50.403044] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:30:33.450 [2024-12-05 19:20:50.403052] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:30:33.450 [2024-12-05 19:20:50.403064] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:33.450 [2024-12-05 19:20:50.404031] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3711.168 ms, result 0 00:30:33.450 { 00:30:33.450 "name": "ftl0", 00:30:33.450 "uuid": "42867315-b16e-4303-b9a5-ab6c21560170" 00:30:33.450 } 00:30:33.450 19:20:50 ftl.ftl_restore_fast -- ftl/restore.sh@61 -- # echo '{"subsystems": [' 00:30:33.450 19:20:50 ftl.ftl_restore_fast -- ftl/restore.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:30:33.450 19:20:50 ftl.ftl_restore_fast -- ftl/restore.sh@63 -- # echo ']}' 00:30:33.450 19:20:50 ftl.ftl_restore_fast -- ftl/restore.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:30:33.450 [2024-12-05 19:20:50.813180] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:33.450 [2024-12-05 19:20:50.813333] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:30:33.450 [2024-12-05 19:20:50.813400] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:30:33.450 [2024-12-05 19:20:50.813426] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:33.450 [2024-12-05 19:20:50.813471] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:30:33.450 [2024-12-05 19:20:50.814069] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:33.450 [2024-12-05 19:20:50.814181] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:30:33.450 [2024-12-05 19:20:50.814230] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.559 ms 00:30:33.450 [2024-12-05 19:20:50.814267] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:33.450 [2024-12-05 19:20:50.814533] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:33.450 [2024-12-05 19:20:50.814563] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:30:33.450 [2024-12-05 19:20:50.814583] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.229 ms 00:30:33.450 [2024-12-05 19:20:50.814610] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:33.450 [2024-12-05 19:20:50.817975] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:33.450 [2024-12-05 19:20:50.818056] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:30:33.450 [2024-12-05 19:20:50.818110] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.338 ms 00:30:33.450 [2024-12-05 19:20:50.818135] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:33.450 [2024-12-05 19:20:50.824516] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:33.450 [2024-12-05 19:20:50.824618] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:30:33.450 [2024-12-05 19:20:50.824676] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.350 ms 00:30:33.450 [2024-12-05 19:20:50.824704] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:33.450 [2024-12-05 19:20:50.827444] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:30:33.450 [2024-12-05 19:20:50.827553] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:30:33.450 [2024-12-05 19:20:50.827603] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.658 ms 00:30:33.450 [2024-12-05 19:20:50.827627] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:33.450 [2024-12-05 19:20:50.833059] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:33.450 [2024-12-05 19:20:50.833177] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:30:33.450 [2024-12-05 19:20:50.833230] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.332 ms 00:30:33.450 [2024-12-05 19:20:50.833265] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:33.450 [2024-12-05 19:20:50.833437] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:33.450 [2024-12-05 19:20:50.833522] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:30:33.450 [2024-12-05 19:20:50.833549] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.099 ms 00:30:33.450 [2024-12-05 19:20:50.833570] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:33.450 [2024-12-05 19:20:50.836400] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:33.450 [2024-12-05 19:20:50.836501] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:30:33.450 [2024-12-05 19:20:50.836551] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.743 ms 00:30:33.450 [2024-12-05 19:20:50.836576] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:33.450 [2024-12-05 19:20:50.839276] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:33.450 [2024-12-05 19:20:50.839379] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:30:33.450 [2024-12-05 19:20:50.839436] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.661 ms 00:30:33.450 [2024-12-05 19:20:50.839460] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:33.450 [2024-12-05 19:20:50.841482] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:33.450 [2024-12-05 19:20:50.841585] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:30:33.450 [2024-12-05 19:20:50.841633] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.948 ms 00:30:33.450 [2024-12-05 19:20:50.841656] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:33.450 [2024-12-05 19:20:50.843473] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:33.450 [2024-12-05 19:20:50.843574] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:30:33.450 [2024-12-05 19:20:50.843622] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.717 ms 00:30:33.450 [2024-12-05 19:20:50.843646] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:33.450 [2024-12-05 19:20:50.843685] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:30:33.450 [2024-12-05 19:20:50.843715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:30:33.450 [2024-12-05 19:20:50.843746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:30:33.450 [2024-12-05 19:20:50.843779] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:30:33.450 [2024-12-05 19:20:50.843808] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:30:33.450 [2024-12-05 19:20:50.843869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:30:33.450 [2024-12-05 19:20:50.843899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:30:33.450 [2024-12-05 19:20:50.843930] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:30:33.450 [2024-12-05 19:20:50.843997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:30:33.450 [2024-12-05 19:20:50.844030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:30:33.450 [2024-12-05 19:20:50.844060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:30:33.450 [2024-12-05 19:20:50.844111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:30:33.450 [2024-12-05 19:20:50.844143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:30:33.450 [2024-12-05 19:20:50.844173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:30:33.450 [2024-12-05 19:20:50.844202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:30:33.451 [2024-12-05 19:20:50.844233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:30:33.451 [2024-12-05 19:20:50.844350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:30:33.451 [2024-12-05 19:20:50.844386] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:30:33.451 [2024-12-05 19:20:50.844407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:30:33.451 [2024-12-05 19:20:50.844417] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:30:33.451 [2024-12-05 19:20:50.844425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:30:33.451 [2024-12-05 19:20:50.844436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:30:33.451 [2024-12-05 19:20:50.844444] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:30:33.451 [2024-12-05 19:20:50.844453] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:30:33.451 [2024-12-05 19:20:50.844460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:30:33.451 [2024-12-05 19:20:50.844470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:30:33.451 [2024-12-05 19:20:50.844477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:30:33.451 [2024-12-05 19:20:50.844487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:30:33.451 [2024-12-05 19:20:50.844494] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:30:33.451 [2024-12-05 19:20:50.844505] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:30:33.451 [2024-12-05 19:20:50.844513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:30:33.451 [2024-12-05 19:20:50.844522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:30:33.451 [2024-12-05 19:20:50.844530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:30:33.451 [2024-12-05 19:20:50.844539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:30:33.451 [2024-12-05 19:20:50.844547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:30:33.451 [2024-12-05 19:20:50.844556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:30:33.451 [2024-12-05 19:20:50.844563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:30:33.451 [2024-12-05 19:20:50.844574] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:30:33.451 [2024-12-05 19:20:50.844581] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:30:33.451 [2024-12-05 19:20:50.844590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:30:33.451 [2024-12-05 19:20:50.844597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:30:33.451 [2024-12-05 19:20:50.844607] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:30:33.451 [2024-12-05 19:20:50.844614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:30:33.451 [2024-12-05 19:20:50.844623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:30:33.451 [2024-12-05 19:20:50.844630] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:30:33.451 [2024-12-05 19:20:50.844639] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:30:33.451 [2024-12-05 19:20:50.844646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:30:33.451 [2024-12-05 19:20:50.844656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:30:33.451 [2024-12-05 19:20:50.844665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:30:33.451 [2024-12-05 19:20:50.844675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:30:33.451 [2024-12-05 19:20:50.844683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:30:33.451 [2024-12-05 19:20:50.844692] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:30:33.451 [2024-12-05 19:20:50.844700] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:30:33.451 [2024-12-05 
19:20:50.844711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:30:33.451 [2024-12-05 19:20:50.844719] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:30:33.451 [2024-12-05 19:20:50.844729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:30:33.451 [2024-12-05 19:20:50.844736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:30:33.451 [2024-12-05 19:20:50.844745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:30:33.451 [2024-12-05 19:20:50.844753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:30:33.451 [2024-12-05 19:20:50.844762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:30:33.451 [2024-12-05 19:20:50.844770] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:30:33.451 [2024-12-05 19:20:50.844779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:30:33.451 [2024-12-05 19:20:50.844786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:30:33.451 [2024-12-05 19:20:50.844796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:30:33.451 [2024-12-05 19:20:50.844803] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:30:33.451 [2024-12-05 19:20:50.844812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:30:33.451 [2024-12-05 19:20:50.844819] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:30:33.451 [2024-12-05 19:20:50.844828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:30:33.451 [2024-12-05 19:20:50.844836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:30:33.451 [2024-12-05 19:20:50.844846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:30:33.451 [2024-12-05 19:20:50.844854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:30:33.451 [2024-12-05 19:20:50.844862] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:30:33.451 [2024-12-05 19:20:50.844870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:30:33.451 [2024-12-05 19:20:50.844879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:30:33.451 [2024-12-05 19:20:50.844887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:30:33.451 [2024-12-05 19:20:50.844896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:30:33.451 [2024-12-05 19:20:50.844903] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:30:33.451 [2024-12-05 19:20:50.844913] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 
00:30:33.451 [2024-12-05 19:20:50.844920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:30:33.451 [2024-12-05 19:20:50.844930] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:30:33.451 [2024-12-05 19:20:50.844938] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:30:33.452 [2024-12-05 19:20:50.844948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:30:33.452 [2024-12-05 19:20:50.844955] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:30:33.452 [2024-12-05 19:20:50.844966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:30:33.452 [2024-12-05 19:20:50.844974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:30:33.452 [2024-12-05 19:20:50.844986] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:30:33.452 [2024-12-05 19:20:50.844993] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:30:33.452 [2024-12-05 19:20:50.845002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:30:33.452 [2024-12-05 19:20:50.845010] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:30:33.452 [2024-12-05 19:20:50.845019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:30:33.452 [2024-12-05 19:20:50.845026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:30:33.452 [2024-12-05 19:20:50.845035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:30:33.452 [2024-12-05 19:20:50.845043] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:30:33.452 [2024-12-05 19:20:50.845052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:30:33.452 [2024-12-05 19:20:50.845059] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:30:33.452 [2024-12-05 19:20:50.845068] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:30:33.452 [2024-12-05 19:20:50.845075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:30:33.452 [2024-12-05 19:20:50.845085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:30:33.452 [2024-12-05 19:20:50.845092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:30:33.452 [2024-12-05 19:20:50.845101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:30:33.452 [2024-12-05 19:20:50.845108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:30:33.452 [2024-12-05 19:20:50.845127] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:30:33.452 [2024-12-05 19:20:50.845135] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 42867315-b16e-4303-b9a5-ab6c21560170 00:30:33.452 
[2024-12-05 19:20:50.845145] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:30:33.452 [2024-12-05 19:20:50.845152] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:30:33.452 [2024-12-05 19:20:50.845161] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:30:33.452 [2024-12-05 19:20:50.845169] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:30:33.452 [2024-12-05 19:20:50.845178] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:30:33.452 [2024-12-05 19:20:50.845190] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:30:33.452 [2024-12-05 19:20:50.845199] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:30:33.452 [2024-12-05 19:20:50.845205] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:30:33.452 [2024-12-05 19:20:50.845217] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:30:33.452 [2024-12-05 19:20:50.845224] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:33.452 [2024-12-05 19:20:50.845238] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:30:33.452 [2024-12-05 19:20:50.845246] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.540 ms 00:30:33.452 [2024-12-05 19:20:50.845268] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:33.452 [2024-12-05 19:20:50.846931] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:33.452 [2024-12-05 19:20:50.846954] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:30:33.452 [2024-12-05 19:20:50.846963] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.644 ms 00:30:33.452 [2024-12-05 19:20:50.846975] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:33.452 [2024-12-05 19:20:50.847076] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:33.452 [2024-12-05 19:20:50.847087] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:30:33.452 [2024-12-05 19:20:50.847096] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:30:33.452 [2024-12-05 19:20:50.847105] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:33.452 [2024-12-05 19:20:50.853794] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:33.452 [2024-12-05 19:20:50.853906] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:30:33.452 [2024-12-05 19:20:50.853959] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:33.452 [2024-12-05 19:20:50.853988] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:33.452 [2024-12-05 19:20:50.854052] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:33.452 [2024-12-05 19:20:50.854118] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:30:33.452 [2024-12-05 19:20:50.854141] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:33.452 [2024-12-05 19:20:50.854162] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:33.452 [2024-12-05 19:20:50.854335] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:33.452 [2024-12-05 19:20:50.854379] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:30:33.452 [2024-12-05 19:20:50.854400] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:33.452 [2024-12-05 19:20:50.854423] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:33.452 [2024-12-05 19:20:50.854453] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:33.452 [2024-12-05 19:20:50.854475] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:30:33.452 [2024-12-05 19:20:50.854495] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:33.452 [2024-12-05 19:20:50.854515] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:33.452 [2024-12-05 19:20:50.866588] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:33.452 [2024-12-05 19:20:50.866740] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:30:33.452 [2024-12-05 19:20:50.866790] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:33.452 [2024-12-05 19:20:50.866818] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:33.452 [2024-12-05 19:20:50.876499] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:33.452 [2024-12-05 19:20:50.876655] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:30:33.452 [2024-12-05 19:20:50.876706] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:33.452 [2024-12-05 19:20:50.876731] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:33.452 [2024-12-05 19:20:50.876823] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:33.452 [2024-12-05 19:20:50.876853] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:30:33.452 [2024-12-05 19:20:50.876873] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:33.452 [2024-12-05 19:20:50.876895] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:33.452 [2024-12-05 19:20:50.876960] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:33.452 [2024-12-05 19:20:50.877057] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:30:33.452 [2024-12-05 19:20:50.877083] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:33.452 [2024-12-05 19:20:50.877104] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:33.452 [2024-12-05 19:20:50.877194] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:33.452 [2024-12-05 19:20:50.877221] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:30:33.452 [2024-12-05 19:20:50.877241] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:33.453 [2024-12-05 19:20:50.877305] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:33.453 [2024-12-05 19:20:50.877390] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:33.453 [2024-12-05 19:20:50.877423] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:30:33.453 [2024-12-05 19:20:50.877443] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:33.453 [2024-12-05 19:20:50.877464] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:33.453 [2024-12-05 19:20:50.877517] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:33.453 [2024-12-05 19:20:50.877626] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open 
cache bdev 00:30:33.453 [2024-12-05 19:20:50.877647] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:33.453 [2024-12-05 19:20:50.877667] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:33.453 [2024-12-05 19:20:50.877730] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:33.453 [2024-12-05 19:20:50.877826] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:30:33.453 [2024-12-05 19:20:50.877850] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:33.453 [2024-12-05 19:20:50.877871] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:33.453 [2024-12-05 19:20:50.878030] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 64.803 ms, result 0 00:30:33.453 true 00:30:33.453 19:20:50 ftl.ftl_restore_fast -- ftl/restore.sh@66 -- # killprocess 94660 00:30:33.453 19:20:50 ftl.ftl_restore_fast -- common/autotest_common.sh@954 -- # '[' -z 94660 ']' 00:30:33.453 19:20:50 ftl.ftl_restore_fast -- common/autotest_common.sh@958 -- # kill -0 94660 00:30:33.453 19:20:50 ftl.ftl_restore_fast -- common/autotest_common.sh@959 -- # uname 00:30:33.453 19:20:50 ftl.ftl_restore_fast -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:33.453 19:20:50 ftl.ftl_restore_fast -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 94660 00:30:33.453 19:20:50 ftl.ftl_restore_fast -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:30:33.453 killing process with pid 94660 00:30:33.453 19:20:50 ftl.ftl_restore_fast -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:30:33.453 19:20:50 ftl.ftl_restore_fast -- common/autotest_common.sh@972 -- # echo 'killing process with pid 94660' 00:30:33.453 19:20:50 ftl.ftl_restore_fast -- common/autotest_common.sh@973 -- # kill 94660 00:30:33.453 19:20:50 ftl.ftl_restore_fast -- common/autotest_common.sh@978 -- # wait 94660 00:30:40.041 19:20:56 ftl.ftl_restore_fast -- ftl/restore.sh@69 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile bs=4K count=256K 00:30:44.253 262144+0 records in 00:30:44.253 262144+0 records out 00:30:44.253 1073741824 bytes (1.1 GB, 1.0 GiB) copied, 4.35372 s, 247 MB/s 00:30:44.253 19:21:01 ftl.ftl_restore_fast -- ftl/restore.sh@70 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:30:46.171 19:21:03 ftl.ftl_restore_fast -- ftl/restore.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:30:46.171 [2024-12-05 19:21:03.615086] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 22.11.4 initialization... 
00:30:46.171 [2024-12-05 19:21:03.615234] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94874 ] 00:30:46.431 [2024-12-05 19:21:03.764159] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:46.431 [2024-12-05 19:21:03.793665] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:46.431 [2024-12-05 19:21:03.913037] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:30:46.431 [2024-12-05 19:21:03.913127] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:30:46.695 [2024-12-05 19:21:04.076859] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:46.695 [2024-12-05 19:21:04.077087] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:30:46.695 [2024-12-05 19:21:04.077113] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:30:46.695 [2024-12-05 19:21:04.077129] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:46.695 [2024-12-05 19:21:04.077208] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:46.695 [2024-12-05 19:21:04.077220] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:30:46.695 [2024-12-05 19:21:04.077230] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:30:46.695 [2024-12-05 19:21:04.077244] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:46.695 [2024-12-05 19:21:04.077306] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:30:46.695 [2024-12-05 19:21:04.077572] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:30:46.695 [2024-12-05 19:21:04.077593] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:46.695 [2024-12-05 19:21:04.077603] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:30:46.695 [2024-12-05 19:21:04.077617] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.299 ms 00:30:46.695 [2024-12-05 19:21:04.077628] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:46.695 [2024-12-05 19:21:04.079353] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:30:46.695 [2024-12-05 19:21:04.083018] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:46.695 [2024-12-05 19:21:04.083068] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:30:46.695 [2024-12-05 19:21:04.083087] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.668 ms 00:30:46.695 [2024-12-05 19:21:04.083102] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:46.695 [2024-12-05 19:21:04.083177] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:46.695 [2024-12-05 19:21:04.083191] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:30:46.695 [2024-12-05 19:21:04.083200] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:30:46.695 [2024-12-05 19:21:04.083212] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:46.695 [2024-12-05 19:21:04.091507] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:30:46.695 [2024-12-05 19:21:04.091547] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:30:46.695 [2024-12-05 19:21:04.091568] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.228 ms 00:30:46.695 [2024-12-05 19:21:04.091576] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:46.695 [2024-12-05 19:21:04.091676] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:46.695 [2024-12-05 19:21:04.091686] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:30:46.695 [2024-12-05 19:21:04.091696] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.074 ms 00:30:46.695 [2024-12-05 19:21:04.091704] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:46.695 [2024-12-05 19:21:04.091769] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:46.695 [2024-12-05 19:21:04.091783] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:30:46.695 [2024-12-05 19:21:04.091792] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:30:46.695 [2024-12-05 19:21:04.091804] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:46.695 [2024-12-05 19:21:04.091831] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:30:46.695 [2024-12-05 19:21:04.093929] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:46.695 [2024-12-05 19:21:04.093969] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:30:46.695 [2024-12-05 19:21:04.093980] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.107 ms 00:30:46.695 [2024-12-05 19:21:04.093989] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:46.695 [2024-12-05 19:21:04.094026] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:46.695 [2024-12-05 19:21:04.094035] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:30:46.695 [2024-12-05 19:21:04.094044] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:30:46.695 [2024-12-05 19:21:04.094056] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:46.695 [2024-12-05 19:21:04.094081] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:30:46.695 [2024-12-05 19:21:04.094131] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:30:46.695 [2024-12-05 19:21:04.094173] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:30:46.695 [2024-12-05 19:21:04.094196] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:30:46.695 [2024-12-05 19:21:04.094327] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:30:46.695 [2024-12-05 19:21:04.094341] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:30:46.695 [2024-12-05 19:21:04.094360] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:30:46.695 [2024-12-05 19:21:04.094372] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:30:46.695 [2024-12-05 19:21:04.094382] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:30:46.695 [2024-12-05 19:21:04.094391] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:30:46.695 [2024-12-05 19:21:04.094401] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:30:46.695 [2024-12-05 19:21:04.094410] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:30:46.695 [2024-12-05 19:21:04.094419] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:30:46.695 [2024-12-05 19:21:04.094428] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:46.695 [2024-12-05 19:21:04.094436] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:30:46.695 [2024-12-05 19:21:04.094444] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.353 ms 00:30:46.695 [2024-12-05 19:21:04.094453] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:46.695 [2024-12-05 19:21:04.094541] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:46.695 [2024-12-05 19:21:04.094552] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:30:46.695 [2024-12-05 19:21:04.094560] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:30:46.695 [2024-12-05 19:21:04.094568] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:46.695 [2024-12-05 19:21:04.094675] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:30:46.695 [2024-12-05 19:21:04.094707] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:30:46.695 [2024-12-05 19:21:04.094716] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:30:46.695 [2024-12-05 19:21:04.094724] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:46.695 [2024-12-05 19:21:04.094733] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:30:46.695 [2024-12-05 19:21:04.094742] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:30:46.695 [2024-12-05 19:21:04.094751] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:30:46.695 [2024-12-05 19:21:04.094759] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:30:46.695 [2024-12-05 19:21:04.094767] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:30:46.695 [2024-12-05 19:21:04.094774] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:30:46.695 [2024-12-05 19:21:04.094782] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:30:46.695 [2024-12-05 19:21:04.094795] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:30:46.695 [2024-12-05 19:21:04.094804] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:30:46.695 [2024-12-05 19:21:04.094812] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:30:46.695 [2024-12-05 19:21:04.094821] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:30:46.695 [2024-12-05 19:21:04.094832] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:46.695 [2024-12-05 19:21:04.094840] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:30:46.695 [2024-12-05 19:21:04.094848] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:30:46.695 [2024-12-05 19:21:04.094855] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:46.695 [2024-12-05 19:21:04.094862] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:30:46.695 [2024-12-05 19:21:04.094869] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:30:46.695 [2024-12-05 19:21:04.094875] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:46.695 [2024-12-05 19:21:04.094882] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:30:46.695 [2024-12-05 19:21:04.094890] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:30:46.695 [2024-12-05 19:21:04.094897] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:46.695 [2024-12-05 19:21:04.094903] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:30:46.695 [2024-12-05 19:21:04.094910] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:30:46.695 [2024-12-05 19:21:04.094921] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:46.696 [2024-12-05 19:21:04.094927] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:30:46.696 [2024-12-05 19:21:04.094934] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:30:46.696 [2024-12-05 19:21:04.094942] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:46.696 [2024-12-05 19:21:04.094949] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:30:46.696 [2024-12-05 19:21:04.094956] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:30:46.696 [2024-12-05 19:21:04.094962] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:30:46.696 [2024-12-05 19:21:04.094969] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:30:46.696 [2024-12-05 19:21:04.094975] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:30:46.696 [2024-12-05 19:21:04.094981] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:30:46.696 [2024-12-05 19:21:04.094988] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:30:46.696 [2024-12-05 19:21:04.094998] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:30:46.696 [2024-12-05 19:21:04.095006] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:46.696 [2024-12-05 19:21:04.095012] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:30:46.696 [2024-12-05 19:21:04.095019] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:30:46.696 [2024-12-05 19:21:04.095025] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:46.696 [2024-12-05 19:21:04.095034] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:30:46.696 [2024-12-05 19:21:04.095044] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:30:46.696 [2024-12-05 19:21:04.095053] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:30:46.696 [2024-12-05 19:21:04.095064] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:46.696 [2024-12-05 19:21:04.095073] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:30:46.696 [2024-12-05 19:21:04.095081] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:30:46.696 [2024-12-05 19:21:04.095089] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:30:46.696 
[2024-12-05 19:21:04.095097] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:30:46.696 [2024-12-05 19:21:04.095103] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:30:46.696 [2024-12-05 19:21:04.095110] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:30:46.696 [2024-12-05 19:21:04.095119] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:30:46.696 [2024-12-05 19:21:04.095128] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:30:46.696 [2024-12-05 19:21:04.095139] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:30:46.696 [2024-12-05 19:21:04.095147] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:30:46.696 [2024-12-05 19:21:04.095154] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:30:46.696 [2024-12-05 19:21:04.095161] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:30:46.696 [2024-12-05 19:21:04.095170] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:30:46.696 [2024-12-05 19:21:04.095177] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:30:46.696 [2024-12-05 19:21:04.095185] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:30:46.696 [2024-12-05 19:21:04.095193] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:30:46.696 [2024-12-05 19:21:04.095201] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:30:46.696 [2024-12-05 19:21:04.095213] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:30:46.696 [2024-12-05 19:21:04.095220] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:30:46.696 [2024-12-05 19:21:04.095228] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:30:46.696 [2024-12-05 19:21:04.095236] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:30:46.696 [2024-12-05 19:21:04.095244] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:30:46.696 [2024-12-05 19:21:04.095296] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:30:46.696 [2024-12-05 19:21:04.095309] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:30:46.696 [2024-12-05 19:21:04.095317] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:30:46.696 [2024-12-05 19:21:04.095325] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:30:46.696 [2024-12-05 19:21:04.095335] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:30:46.696 [2024-12-05 19:21:04.095345] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:30:46.696 [2024-12-05 19:21:04.095356] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:46.696 [2024-12-05 19:21:04.095365] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:30:46.696 [2024-12-05 19:21:04.095373] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.747 ms 00:30:46.696 [2024-12-05 19:21:04.095384] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:46.696 [2024-12-05 19:21:04.109988] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:46.696 [2024-12-05 19:21:04.110197] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:30:46.696 [2024-12-05 19:21:04.110217] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.555 ms 00:30:46.696 [2024-12-05 19:21:04.110226] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:46.696 [2024-12-05 19:21:04.110341] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:46.696 [2024-12-05 19:21:04.110351] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:30:46.696 [2024-12-05 19:21:04.110360] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:30:46.696 [2024-12-05 19:21:04.110369] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:46.696 [2024-12-05 19:21:04.130874] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:46.696 [2024-12-05 19:21:04.130947] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:30:46.696 [2024-12-05 19:21:04.130968] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.442 ms 00:30:46.696 [2024-12-05 19:21:04.130979] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:46.696 [2024-12-05 19:21:04.131037] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:46.696 [2024-12-05 19:21:04.131056] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:30:46.696 [2024-12-05 19:21:04.131072] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:30:46.696 [2024-12-05 19:21:04.131082] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:46.696 [2024-12-05 19:21:04.131690] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:46.696 [2024-12-05 19:21:04.131718] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:30:46.696 [2024-12-05 19:21:04.131734] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.527 ms 00:30:46.696 [2024-12-05 19:21:04.131747] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:46.696 [2024-12-05 19:21:04.131936] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:46.696 [2024-12-05 19:21:04.131952] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:30:46.696 [2024-12-05 19:21:04.131965] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.155 ms 00:30:46.696 [2024-12-05 19:21:04.131975] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:46.696 [2024-12-05 19:21:04.141203] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:46.696 [2024-12-05 19:21:04.141402] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:30:46.696 [2024-12-05 19:21:04.141476] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.200 ms 00:30:46.696 [2024-12-05 19:21:04.141515] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:46.696 [2024-12-05 19:21:04.145561] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:30:46.696 [2024-12-05 19:21:04.145738] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:30:46.696 [2024-12-05 19:21:04.145805] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:46.696 [2024-12-05 19:21:04.145827] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:30:46.696 [2024-12-05 19:21:04.145848] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.128 ms 00:30:46.696 [2024-12-05 19:21:04.145868] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:46.696 [2024-12-05 19:21:04.161524] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:46.696 [2024-12-05 19:21:04.161686] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:30:46.696 [2024-12-05 19:21:04.161753] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.600 ms 00:30:46.696 [2024-12-05 19:21:04.161777] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:46.696 [2024-12-05 19:21:04.164919] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:46.696 [2024-12-05 19:21:04.165071] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:30:46.696 [2024-12-05 19:21:04.165126] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.081 ms 00:30:46.696 [2024-12-05 19:21:04.165149] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:46.696 [2024-12-05 19:21:04.167722] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:46.696 [2024-12-05 19:21:04.167875] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:30:46.697 [2024-12-05 19:21:04.167938] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.528 ms 00:30:46.697 [2024-12-05 19:21:04.167963] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:46.697 [2024-12-05 19:21:04.168750] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:46.697 [2024-12-05 19:21:04.168798] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:30:46.697 [2024-12-05 19:21:04.168812] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.295 ms 00:30:46.697 [2024-12-05 19:21:04.168821] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:46.697 [2024-12-05 19:21:04.192895] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:46.697 [2024-12-05 19:21:04.192967] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:30:46.697 [2024-12-05 19:21:04.192981] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
24.054 ms 00:30:46.697 [2024-12-05 19:21:04.192990] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:46.697 [2024-12-05 19:21:04.201503] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:30:46.697 [2024-12-05 19:21:04.205186] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:46.697 [2024-12-05 19:21:04.205240] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:30:46.697 [2024-12-05 19:21:04.205271] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.138 ms 00:30:46.697 [2024-12-05 19:21:04.205283] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:46.697 [2024-12-05 19:21:04.205376] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:46.697 [2024-12-05 19:21:04.205388] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:30:46.697 [2024-12-05 19:21:04.205402] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:30:46.697 [2024-12-05 19:21:04.205419] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:46.697 [2024-12-05 19:21:04.205493] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:46.697 [2024-12-05 19:21:04.205506] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:30:46.697 [2024-12-05 19:21:04.205515] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:30:46.697 [2024-12-05 19:21:04.205527] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:46.697 [2024-12-05 19:21:04.205548] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:46.697 [2024-12-05 19:21:04.205558] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:30:46.697 [2024-12-05 19:21:04.205566] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:30:46.697 [2024-12-05 19:21:04.205576] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:46.697 [2024-12-05 19:21:04.205618] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:30:46.697 [2024-12-05 19:21:04.205630] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:46.697 [2024-12-05 19:21:04.205638] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:30:46.697 [2024-12-05 19:21:04.205646] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:30:46.697 [2024-12-05 19:21:04.205658] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:46.697 [2024-12-05 19:21:04.211963] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:46.697 [2024-12-05 19:21:04.212149] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:30:46.697 [2024-12-05 19:21:04.212171] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.284 ms 00:30:46.697 [2024-12-05 19:21:04.212181] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:46.697 [2024-12-05 19:21:04.212280] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:46.697 [2024-12-05 19:21:04.212294] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:30:46.697 [2024-12-05 19:21:04.212314] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 00:30:46.697 [2024-12-05 19:21:04.212326] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:46.697 
[2024-12-05 19:21:04.213532] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 136.175 ms, result 0 00:30:48.085  [2024-12-05T19:21:06.584Z] Copying: 9480/1048576 [kB] (9480 kBps) [2024-12-05T19:21:07.529Z] Copying: 19/1024 [MB] (10 MBps) [2024-12-05T19:21:08.556Z] Copying: 29064/1048576 [kB] (9160 kBps) [2024-12-05T19:21:09.499Z] Copying: 40/1024 [MB] (11 MBps) [2024-12-05T19:21:10.443Z] Copying: 53/1024 [MB] (13 MBps) [2024-12-05T19:21:11.388Z] Copying: 65/1024 [MB] (12 MBps) [2024-12-05T19:21:12.331Z] Copying: 77136/1048576 [kB] (9660 kBps) [2024-12-05T19:21:13.275Z] Copying: 87152/1048576 [kB] (10016 kBps) [2024-12-05T19:21:14.658Z] Copying: 97/1024 [MB] (12 MBps) [2024-12-05T19:21:15.231Z] Copying: 113/1024 [MB] (16 MBps) [2024-12-05T19:21:16.618Z] Copying: 124/1024 [MB] (10 MBps) [2024-12-05T19:21:17.565Z] Copying: 136920/1048576 [kB] (9232 kBps) [2024-12-05T19:21:18.573Z] Copying: 146960/1048576 [kB] (10040 kBps) [2024-12-05T19:21:19.547Z] Copying: 161/1024 [MB] (17 MBps) [2024-12-05T19:21:20.492Z] Copying: 184/1024 [MB] (23 MBps) [2024-12-05T19:21:21.436Z] Copying: 199/1024 [MB] (14 MBps) [2024-12-05T19:21:22.377Z] Copying: 209/1024 [MB] (10 MBps) [2024-12-05T19:21:23.319Z] Copying: 223/1024 [MB] (13 MBps) [2024-12-05T19:21:24.263Z] Copying: 234/1024 [MB] (10 MBps) [2024-12-05T19:21:25.648Z] Copying: 245/1024 [MB] (11 MBps) [2024-12-05T19:21:26.591Z] Copying: 261032/1048576 [kB] (9492 kBps) [2024-12-05T19:21:27.535Z] Copying: 270/1024 [MB] (15 MBps) [2024-12-05T19:21:28.480Z] Copying: 281/1024 [MB] (10 MBps) [2024-12-05T19:21:29.428Z] Copying: 294/1024 [MB] (12 MBps) [2024-12-05T19:21:30.372Z] Copying: 324/1024 [MB] (30 MBps) [2024-12-05T19:21:31.318Z] Copying: 338/1024 [MB] (13 MBps) [2024-12-05T19:21:32.263Z] Copying: 349/1024 [MB] (11 MBps) [2024-12-05T19:21:33.665Z] Copying: 366/1024 [MB] (16 MBps) [2024-12-05T19:21:34.238Z] Copying: 379/1024 [MB] (13 MBps) [2024-12-05T19:21:35.625Z] Copying: 392/1024 [MB] (12 MBps) [2024-12-05T19:21:36.569Z] Copying: 410/1024 [MB] (17 MBps) [2024-12-05T19:21:37.522Z] Copying: 429/1024 [MB] (19 MBps) [2024-12-05T19:21:38.469Z] Copying: 444/1024 [MB] (15 MBps) [2024-12-05T19:21:39.413Z] Copying: 472/1024 [MB] (27 MBps) [2024-12-05T19:21:40.358Z] Copying: 490/1024 [MB] (18 MBps) [2024-12-05T19:21:41.303Z] Copying: 505/1024 [MB] (14 MBps) [2024-12-05T19:21:42.247Z] Copying: 527152/1048576 [kB] (9528 kBps) [2024-12-05T19:21:43.655Z] Copying: 536544/1048576 [kB] (9392 kBps) [2024-12-05T19:21:44.292Z] Copying: 535/1024 [MB] (11 MBps) [2024-12-05T19:21:45.234Z] Copying: 548/1024 [MB] (13 MBps) [2024-12-05T19:21:46.615Z] Copying: 571472/1048576 [kB] (9616 kBps) [2024-12-05T19:21:47.557Z] Copying: 580480/1048576 [kB] (9008 kBps) [2024-12-05T19:21:48.498Z] Copying: 590672/1048576 [kB] (10192 kBps) [2024-12-05T19:21:49.439Z] Copying: 599240/1048576 [kB] (8568 kBps) [2024-12-05T19:21:50.380Z] Copying: 608096/1048576 [kB] (8856 kBps) [2024-12-05T19:21:51.328Z] Copying: 617276/1048576 [kB] (9180 kBps) [2024-12-05T19:21:52.306Z] Copying: 626356/1048576 [kB] (9080 kBps) [2024-12-05T19:21:53.249Z] Copying: 635096/1048576 [kB] (8740 kBps) [2024-12-05T19:21:54.637Z] Copying: 644224/1048576 [kB] (9128 kBps) [2024-12-05T19:21:55.586Z] Copying: 653368/1048576 [kB] (9144 kBps) [2024-12-05T19:21:56.527Z] Copying: 663460/1048576 [kB] (10092 kBps) [2024-12-05T19:21:57.473Z] Copying: 672352/1048576 [kB] (8892 kBps) [2024-12-05T19:21:58.420Z] Copying: 680920/1048576 [kB] (8568 kBps) 
[2024-12-05T19:21:59.367Z] Copying: 689400/1048576 [kB] (8480 kBps) [2024-12-05T19:22:00.309Z] Copying: 698504/1048576 [kB] (9104 kBps) [2024-12-05T19:22:01.253Z] Copying: 708720/1048576 [kB] (10216 kBps) [2024-12-05T19:22:02.639Z] Copying: 717064/1048576 [kB] (8344 kBps) [2024-12-05T19:22:03.258Z] Copying: 726600/1048576 [kB] (9536 kBps) [2024-12-05T19:22:04.646Z] Copying: 735576/1048576 [kB] (8976 kBps) [2024-12-05T19:22:05.589Z] Copying: 744740/1048576 [kB] (9164 kBps) [2024-12-05T19:22:06.523Z] Copying: 737/1024 [MB] (10 MBps) [2024-12-05T19:22:07.461Z] Copying: 765168/1048576 [kB] (9888 kBps) [2024-12-05T19:22:08.400Z] Copying: 774744/1048576 [kB] (9576 kBps) [2024-12-05T19:22:09.340Z] Copying: 784552/1048576 [kB] (9808 kBps) [2024-12-05T19:22:10.277Z] Copying: 794436/1048576 [kB] (9884 kBps) [2024-12-05T19:22:11.235Z] Copying: 804064/1048576 [kB] (9628 kBps) [2024-12-05T19:22:12.664Z] Copying: 797/1024 [MB] (12 MBps) [2024-12-05T19:22:13.235Z] Copying: 826648/1048576 [kB] (10048 kBps) [2024-12-05T19:22:14.615Z] Copying: 836464/1048576 [kB] (9816 kBps) [2024-12-05T19:22:15.558Z] Copying: 845672/1048576 [kB] (9208 kBps) [2024-12-05T19:22:16.499Z] Copying: 838/1024 [MB] (12 MBps) [2024-12-05T19:22:17.437Z] Copying: 849/1024 [MB] (11 MBps) [2024-12-05T19:22:18.375Z] Copying: 860/1024 [MB] (10 MBps) [2024-12-05T19:22:19.314Z] Copying: 890712/1048576 [kB] (9536 kBps) [2024-12-05T19:22:20.256Z] Copying: 882/1024 [MB] (12 MBps) [2024-12-05T19:22:21.645Z] Copying: 893/1024 [MB] (11 MBps) [2024-12-05T19:22:22.592Z] Copying: 925244/1048576 [kB] (10012 kBps) [2024-12-05T19:22:23.538Z] Copying: 934048/1048576 [kB] (8804 kBps) [2024-12-05T19:22:24.554Z] Copying: 942688/1048576 [kB] (8640 kBps) [2024-12-05T19:22:25.513Z] Copying: 951788/1048576 [kB] (9100 kBps) [2024-12-05T19:22:26.453Z] Copying: 960688/1048576 [kB] (8900 kBps) [2024-12-05T19:22:27.390Z] Copying: 970608/1048576 [kB] (9920 kBps) [2024-12-05T19:22:28.326Z] Copying: 980072/1048576 [kB] (9464 kBps) [2024-12-05T19:22:29.264Z] Copying: 989200/1048576 [kB] (9128 kBps) [2024-12-05T19:22:30.647Z] Copying: 998568/1048576 [kB] (9368 kBps) [2024-12-05T19:22:31.586Z] Copying: 1007396/1048576 [kB] (8828 kBps) [2024-12-05T19:22:32.547Z] Copying: 1016640/1048576 [kB] (9244 kBps) [2024-12-05T19:22:33.492Z] Copying: 1026540/1048576 [kB] (9900 kBps) [2024-12-05T19:22:34.433Z] Copying: 1036056/1048576 [kB] (9516 kBps) [2024-12-05T19:22:34.695Z] Copying: 1045520/1048576 [kB] (9464 kBps) [2024-12-05T19:22:34.695Z] Copying: 1024/1024 [MB] (average 11 MBps)[2024-12-05 19:22:34.551153] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:17.136 [2024-12-05 19:22:34.551208] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:32:17.136 [2024-12-05 19:22:34.551222] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:32:17.136 [2024-12-05 19:22:34.551239] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:17.136 [2024-12-05 19:22:34.551277] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:32:17.136 [2024-12-05 19:22:34.551721] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:17.136 [2024-12-05 19:22:34.551739] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:32:17.136 [2024-12-05 19:22:34.551748] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.429 ms 00:32:17.136 [2024-12-05 19:22:34.551755] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:32:17.136 [2024-12-05 19:22:34.554318] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:17.136 [2024-12-05 19:22:34.554350] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:32:17.136 [2024-12-05 19:22:34.554359] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.544 ms 00:32:17.136 [2024-12-05 19:22:34.554367] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:17.136 [2024-12-05 19:22:34.554403] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:17.136 [2024-12-05 19:22:34.554411] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Fast persist NV cache metadata 00:32:17.136 [2024-12-05 19:22:34.554420] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:32:17.136 [2024-12-05 19:22:34.554427] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:17.137 [2024-12-05 19:22:34.554472] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:17.137 [2024-12-05 19:22:34.554480] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL SHM clean state 00:32:17.137 [2024-12-05 19:22:34.554488] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:32:17.137 [2024-12-05 19:22:34.554495] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:17.137 [2024-12-05 19:22:34.554508] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:32:17.137 [2024-12-05 19:22:34.554521] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:32:17.137 [2024-12-05 19:22:34.554530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:32:17.137 [2024-12-05 19:22:34.554538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:32:17.137 [2024-12-05 19:22:34.554545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:32:17.137 [2024-12-05 19:22:34.554553] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:32:17.137 [2024-12-05 19:22:34.554560] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:32:17.137 [2024-12-05 19:22:34.554568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:32:17.137 [2024-12-05 19:22:34.554575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:32:17.137 [2024-12-05 19:22:34.554582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:32:17.137 [2024-12-05 19:22:34.554589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:32:17.137 [2024-12-05 19:22:34.554596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:32:17.137 [2024-12-05 19:22:34.554604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:32:17.137 [2024-12-05 19:22:34.554611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:32:17.137 [2024-12-05 19:22:34.554618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:32:17.137 [2024-12-05 19:22:34.554625] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:32:17.137 [2024-12-05 19:22:34.554633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:32:17.137 [2024-12-05 19:22:34.554640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:32:17.137 [2024-12-05 19:22:34.554647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:32:17.137 [2024-12-05 19:22:34.554654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:32:17.137 [2024-12-05 19:22:34.554662] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:32:17.137 [2024-12-05 19:22:34.554670] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:32:17.137 [2024-12-05 19:22:34.554678] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:32:17.137 [2024-12-05 19:22:34.554686] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:32:17.137 [2024-12-05 19:22:34.554693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:32:17.137 [2024-12-05 19:22:34.554700] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:32:17.137 [2024-12-05 19:22:34.554707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:32:17.137 [2024-12-05 19:22:34.554715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:32:17.137 [2024-12-05 19:22:34.554722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:32:17.137 [2024-12-05 19:22:34.554729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:32:17.137 [2024-12-05 19:22:34.554737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:32:17.137 [2024-12-05 19:22:34.554744] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:32:17.137 [2024-12-05 19:22:34.554751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:32:17.137 [2024-12-05 19:22:34.554758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:32:17.137 [2024-12-05 19:22:34.554765] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:32:17.137 [2024-12-05 19:22:34.554772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:32:17.137 [2024-12-05 19:22:34.554779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:32:17.137 [2024-12-05 19:22:34.554787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:32:17.137 [2024-12-05 19:22:34.554794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:32:17.137 [2024-12-05 19:22:34.554801] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:32:17.137 [2024-12-05 
19:22:34.554808] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:32:17.137 [2024-12-05 19:22:34.554815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:32:17.137 [2024-12-05 19:22:34.554823] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:32:17.137 [2024-12-05 19:22:34.554830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:32:17.137 [2024-12-05 19:22:34.554837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:32:17.137 [2024-12-05 19:22:34.554844] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:32:17.137 [2024-12-05 19:22:34.554851] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:32:17.137 [2024-12-05 19:22:34.554858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:32:17.137 [2024-12-05 19:22:34.554866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:32:17.137 [2024-12-05 19:22:34.554873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:32:17.137 [2024-12-05 19:22:34.554880] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:32:17.137 [2024-12-05 19:22:34.554889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:32:17.137 [2024-12-05 19:22:34.554897] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:32:17.137 [2024-12-05 19:22:34.554904] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:32:17.137 [2024-12-05 19:22:34.554912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:32:17.137 [2024-12-05 19:22:34.554920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:32:17.137 [2024-12-05 19:22:34.554927] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:32:17.137 [2024-12-05 19:22:34.554934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:32:17.137 [2024-12-05 19:22:34.554941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:32:17.137 [2024-12-05 19:22:34.554948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:32:17.137 [2024-12-05 19:22:34.554956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:32:17.137 [2024-12-05 19:22:34.554963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:32:17.137 [2024-12-05 19:22:34.554970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:32:17.137 [2024-12-05 19:22:34.554977] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:32:17.137 [2024-12-05 19:22:34.554984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 
00:32:17.137 [2024-12-05 19:22:34.554992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:32:17.137 [2024-12-05 19:22:34.555003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:32:17.137 [2024-12-05 19:22:34.555010] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:32:17.137 [2024-12-05 19:22:34.555017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:32:17.137 [2024-12-05 19:22:34.555024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:32:17.137 [2024-12-05 19:22:34.555031] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:32:17.137 [2024-12-05 19:22:34.555039] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:32:17.137 [2024-12-05 19:22:34.555052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:32:17.137 [2024-12-05 19:22:34.555059] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:32:17.137 [2024-12-05 19:22:34.555066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:32:17.137 [2024-12-05 19:22:34.555073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:32:17.137 [2024-12-05 19:22:34.555081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:32:17.137 [2024-12-05 19:22:34.555088] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:32:17.137 [2024-12-05 19:22:34.555095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:32:17.137 [2024-12-05 19:22:34.555102] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:32:17.137 [2024-12-05 19:22:34.555109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:32:17.138 [2024-12-05 19:22:34.555117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:32:17.138 [2024-12-05 19:22:34.555124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:32:17.138 [2024-12-05 19:22:34.555132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:32:17.138 [2024-12-05 19:22:34.555140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:32:17.138 [2024-12-05 19:22:34.555147] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:32:17.138 [2024-12-05 19:22:34.555155] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:32:17.138 [2024-12-05 19:22:34.555162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:32:17.138 [2024-12-05 19:22:34.555170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:32:17.138 [2024-12-05 19:22:34.555177] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 
wr_cnt: 0 state: free 00:32:17.138 [2024-12-05 19:22:34.555185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:32:17.138 [2024-12-05 19:22:34.555192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:32:17.138 [2024-12-05 19:22:34.555199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:32:17.138 [2024-12-05 19:22:34.555207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:32:17.138 [2024-12-05 19:22:34.555214] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:32:17.138 [2024-12-05 19:22:34.555221] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:32:17.138 [2024-12-05 19:22:34.555229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:32:17.138 [2024-12-05 19:22:34.555236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:32:17.138 [2024-12-05 19:22:34.555243] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:32:17.138 [2024-12-05 19:22:34.555260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:32:17.138 [2024-12-05 19:22:34.555268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:32:17.138 [2024-12-05 19:22:34.555284] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:32:17.138 [2024-12-05 19:22:34.555291] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 42867315-b16e-4303-b9a5-ab6c21560170 00:32:17.138 [2024-12-05 19:22:34.555299] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:32:17.138 [2024-12-05 19:22:34.555306] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 32 00:32:17.138 [2024-12-05 19:22:34.555313] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:32:17.138 [2024-12-05 19:22:34.555323] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:32:17.138 [2024-12-05 19:22:34.555330] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:32:17.138 [2024-12-05 19:22:34.555337] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:32:17.138 [2024-12-05 19:22:34.555345] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:32:17.138 [2024-12-05 19:22:34.555352] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:32:17.138 [2024-12-05 19:22:34.555358] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:32:17.138 [2024-12-05 19:22:34.555365] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:17.138 [2024-12-05 19:22:34.555373] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:32:17.138 [2024-12-05 19:22:34.555382] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.858 ms 00:32:17.138 [2024-12-05 19:22:34.555389] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:17.138 [2024-12-05 19:22:34.556796] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:17.138 [2024-12-05 19:22:34.556813] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:32:17.138 [2024-12-05 
19:22:34.556822] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.393 ms 00:32:17.138 [2024-12-05 19:22:34.556833] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:17.138 [2024-12-05 19:22:34.556908] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:17.138 [2024-12-05 19:22:34.556918] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:32:17.138 [2024-12-05 19:22:34.556927] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.059 ms 00:32:17.138 [2024-12-05 19:22:34.556937] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:17.138 [2024-12-05 19:22:34.561881] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:17.138 [2024-12-05 19:22:34.562038] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:32:17.138 [2024-12-05 19:22:34.562054] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:17.138 [2024-12-05 19:22:34.562064] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:17.138 [2024-12-05 19:22:34.562133] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:17.138 [2024-12-05 19:22:34.562147] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:32:17.138 [2024-12-05 19:22:34.562155] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:17.138 [2024-12-05 19:22:34.562162] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:17.138 [2024-12-05 19:22:34.562193] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:17.138 [2024-12-05 19:22:34.562202] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:32:17.138 [2024-12-05 19:22:34.562209] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:17.138 [2024-12-05 19:22:34.562216] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:17.138 [2024-12-05 19:22:34.562231] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:17.138 [2024-12-05 19:22:34.562239] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:32:17.138 [2024-12-05 19:22:34.562279] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:17.138 [2024-12-05 19:22:34.562288] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:17.138 [2024-12-05 19:22:34.571623] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:17.138 [2024-12-05 19:22:34.571673] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:32:17.138 [2024-12-05 19:22:34.571685] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:17.138 [2024-12-05 19:22:34.571692] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:17.138 [2024-12-05 19:22:34.578721] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:17.138 [2024-12-05 19:22:34.578771] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:32:17.138 [2024-12-05 19:22:34.578788] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:17.138 [2024-12-05 19:22:34.578796] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:17.138 [2024-12-05 19:22:34.578847] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:17.138 [2024-12-05 19:22:34.578856] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:32:17.138 [2024-12-05 19:22:34.578864] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:17.138 [2024-12-05 19:22:34.578871] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:17.138 [2024-12-05 19:22:34.578912] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:17.138 [2024-12-05 19:22:34.578921] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:32:17.138 [2024-12-05 19:22:34.578929] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:17.138 [2024-12-05 19:22:34.578940] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:17.138 [2024-12-05 19:22:34.578987] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:17.138 [2024-12-05 19:22:34.578995] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:32:17.138 [2024-12-05 19:22:34.579003] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:17.138 [2024-12-05 19:22:34.579015] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:17.138 [2024-12-05 19:22:34.579039] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:17.138 [2024-12-05 19:22:34.579048] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:32:17.138 [2024-12-05 19:22:34.579055] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:17.138 [2024-12-05 19:22:34.579062] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:17.138 [2024-12-05 19:22:34.579096] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:17.138 [2024-12-05 19:22:34.579104] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:32:17.138 [2024-12-05 19:22:34.579111] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:17.138 [2024-12-05 19:22:34.579118] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:17.138 [2024-12-05 19:22:34.579157] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:17.138 [2024-12-05 19:22:34.579166] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:32:17.138 [2024-12-05 19:22:34.579174] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:17.138 [2024-12-05 19:22:34.579183] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:17.138 [2024-12-05 19:22:34.579315] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL fast shutdown', duration = 28.112 ms, result 0 00:32:17.714 00:32:17.714 00:32:17.714 19:22:35 ftl.ftl_restore_fast -- ftl/restore.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --count=262144 00:32:17.714 [2024-12-05 19:22:35.072034] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 22.11.4 initialization... 
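For scale: the --count=262144 passed to spdk_dd above is in logical blocks, not bytes. Assuming the FTL device's 4 KiB logical block size (an assumption, but one consistent with the 1024 MB total the Copying progress lines report further down), the transfer works out to 1 GiB. A minimal sketch of the arithmetic in Python:

# Sketch: relate spdk_dd's --count to the totals its progress output prints.
# The 4096-byte block size is assumed; it matches 262144 * 4 KiB = 1024 MiB,
# i.e. the "1024/1024 [MB]" total reported below.
count = 262144                               # --count, in logical blocks
block_size = 4096                            # assumed logical block size, bytes
print(count * block_size // (1024 * 1024))   # -> 1024 (MiB)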
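The Action / name / duration / status quadruplets that mngt/ftl_mngt.c emits (in the 'FTL fast shutdown' above and throughout the startup sequence that follows) aggregate nicely once the captured log is re-split into one entry per line. A hedged sketch under that per-line assumption:

import re
from collections import defaultdict

# Sketch: sum per-step durations from trace_step NOTICE entries of the form
# "... name: <step>" followed by "... duration: <ms> ms". Assumes the log
# has been re-split so each timestamped entry sits on its own line.
NAME_RE = re.compile(r"trace_step: \*NOTICE\*: \[FTL\]\[\w+\] name: (.+)")
DUR_RE = re.compile(r"trace_step: \*NOTICE\*: \[FTL\]\[\w+\] duration: ([0-9.]+) ms")

def step_totals(lines):
    totals = defaultdict(float)
    pending = None
    for line in lines:
        m = NAME_RE.search(line)
        if m:
            pending = m.group(1).strip()
            continue
        m = DUR_RE.search(line)
        if m and pending:
            totals[pending] += float(m.group(1))
            pending = None
    return totals

Run over this section, the totals are dominated by Initialize NV cache (19.100 ms), Restore valid map metadata (12.356 ms) and Initialize L2P (8.283 ms), consistent with the 67.596 ms 'FTL startup' total reported below.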
00:32:17.714 [2024-12-05 19:22:35.072155] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95794 ] 00:32:17.714 [2024-12-05 19:22:35.214701] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:17.714 [2024-12-05 19:22:35.235147] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:17.984 [2024-12-05 19:22:35.326092] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:32:17.984 [2024-12-05 19:22:35.326393] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:32:17.984 [2024-12-05 19:22:35.482167] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:17.984 [2024-12-05 19:22:35.482399] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:32:17.984 [2024-12-05 19:22:35.482422] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:32:17.984 [2024-12-05 19:22:35.482431] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:17.984 [2024-12-05 19:22:35.482501] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:17.984 [2024-12-05 19:22:35.482512] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:32:17.984 [2024-12-05 19:22:35.482524] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.043 ms 00:32:17.984 [2024-12-05 19:22:35.482542] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:17.984 [2024-12-05 19:22:35.482569] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:32:17.984 [2024-12-05 19:22:35.482817] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:32:17.984 [2024-12-05 19:22:35.482832] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:17.984 [2024-12-05 19:22:35.482845] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:32:17.984 [2024-12-05 19:22:35.482856] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.271 ms 00:32:17.984 [2024-12-05 19:22:35.482863] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:17.984 [2024-12-05 19:22:35.483135] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 1, shm_clean 1 00:32:17.984 [2024-12-05 19:22:35.483157] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:17.984 [2024-12-05 19:22:35.483166] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:32:17.984 [2024-12-05 19:22:35.483176] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:32:17.984 [2024-12-05 19:22:35.483186] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:17.984 [2024-12-05 19:22:35.483236] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:17.984 [2024-12-05 19:22:35.483246] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:32:17.984 [2024-12-05 19:22:35.483274] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:32:17.984 [2024-12-05 19:22:35.483282] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:17.984 [2024-12-05 19:22:35.483512] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:32:17.984 [2024-12-05 19:22:35.483528] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:32:17.984 [2024-12-05 19:22:35.483538] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.196 ms 00:32:17.984 [2024-12-05 19:22:35.483552] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:17.984 [2024-12-05 19:22:35.483624] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:17.984 [2024-12-05 19:22:35.483633] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:32:17.984 [2024-12-05 19:22:35.483641] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:32:17.984 [2024-12-05 19:22:35.483652] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:17.984 [2024-12-05 19:22:35.483674] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:17.984 [2024-12-05 19:22:35.483682] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:32:17.984 [2024-12-05 19:22:35.483690] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:32:17.984 [2024-12-05 19:22:35.483697] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:17.984 [2024-12-05 19:22:35.483718] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:32:17.984 [2024-12-05 19:22:35.485202] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:17.984 [2024-12-05 19:22:35.485236] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:32:17.984 [2024-12-05 19:22:35.485245] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.488 ms 00:32:17.984 [2024-12-05 19:22:35.485275] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:17.984 [2024-12-05 19:22:35.485310] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:17.984 [2024-12-05 19:22:35.485319] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:32:17.984 [2024-12-05 19:22:35.485327] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:32:17.984 [2024-12-05 19:22:35.485334] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:17.984 [2024-12-05 19:22:35.485353] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:32:17.984 [2024-12-05 19:22:35.485374] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:32:17.984 [2024-12-05 19:22:35.485415] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:32:17.984 [2024-12-05 19:22:35.485430] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:32:17.984 [2024-12-05 19:22:35.485531] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:32:17.984 [2024-12-05 19:22:35.485541] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:32:17.984 [2024-12-05 19:22:35.485551] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:32:17.984 [2024-12-05 19:22:35.485560] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:32:17.984 [2024-12-05 19:22:35.485572] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:32:17.984 [2024-12-05 19:22:35.485582] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:32:17.984 [2024-12-05 19:22:35.485589] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:32:17.984 [2024-12-05 19:22:35.485596] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:32:17.984 [2024-12-05 19:22:35.485604] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:32:17.984 [2024-12-05 19:22:35.485614] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:17.984 [2024-12-05 19:22:35.485622] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:32:17.984 [2024-12-05 19:22:35.485629] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.263 ms 00:32:17.984 [2024-12-05 19:22:35.485636] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:17.984 [2024-12-05 19:22:35.485725] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:17.984 [2024-12-05 19:22:35.485737] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:32:17.984 [2024-12-05 19:22:35.485749] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:32:17.984 [2024-12-05 19:22:35.485756] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:17.984 [2024-12-05 19:22:35.485856] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:32:17.984 [2024-12-05 19:22:35.485869] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:32:17.984 [2024-12-05 19:22:35.485878] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:32:17.984 [2024-12-05 19:22:35.485888] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:17.984 [2024-12-05 19:22:35.485897] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:32:17.985 [2024-12-05 19:22:35.485904] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:32:17.985 [2024-12-05 19:22:35.485912] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:32:17.985 [2024-12-05 19:22:35.485920] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:32:17.985 [2024-12-05 19:22:35.485928] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:32:17.985 [2024-12-05 19:22:35.485936] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:32:17.985 [2024-12-05 19:22:35.485943] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:32:17.985 [2024-12-05 19:22:35.485951] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:32:17.985 [2024-12-05 19:22:35.485959] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:32:17.985 [2024-12-05 19:22:35.485966] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:32:17.985 [2024-12-05 19:22:35.485974] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:32:17.985 [2024-12-05 19:22:35.485982] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:17.985 [2024-12-05 19:22:35.485990] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:32:17.985 [2024-12-05 19:22:35.485997] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:32:17.985 [2024-12-05 19:22:35.486005] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:17.985 [2024-12-05 19:22:35.486015] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:32:17.985 [2024-12-05 19:22:35.486023] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:32:17.985 [2024-12-05 19:22:35.486030] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:32:17.985 [2024-12-05 19:22:35.486037] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:32:17.985 [2024-12-05 19:22:35.486045] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:32:17.985 [2024-12-05 19:22:35.486052] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:32:17.985 [2024-12-05 19:22:35.486059] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:32:17.985 [2024-12-05 19:22:35.486066] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:32:17.985 [2024-12-05 19:22:35.486073] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:32:17.985 [2024-12-05 19:22:35.486081] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:32:17.985 [2024-12-05 19:22:35.486088] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:32:17.985 [2024-12-05 19:22:35.486095] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:32:17.985 [2024-12-05 19:22:35.486114] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:32:17.985 [2024-12-05 19:22:35.486121] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:32:17.985 [2024-12-05 19:22:35.486128] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:32:17.985 [2024-12-05 19:22:35.486136] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:32:17.985 [2024-12-05 19:22:35.486148] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:32:17.985 [2024-12-05 19:22:35.486155] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:32:17.985 [2024-12-05 19:22:35.486163] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:32:17.985 [2024-12-05 19:22:35.486170] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:32:17.985 [2024-12-05 19:22:35.486178] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:17.985 [2024-12-05 19:22:35.486186] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:32:17.985 [2024-12-05 19:22:35.486193] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:32:17.985 [2024-12-05 19:22:35.486201] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:17.985 [2024-12-05 19:22:35.486208] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:32:17.985 [2024-12-05 19:22:35.486220] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:32:17.985 [2024-12-05 19:22:35.486229] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:32:17.985 [2024-12-05 19:22:35.486239] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:17.985 [2024-12-05 19:22:35.486425] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:32:17.985 [2024-12-05 19:22:35.486462] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:32:17.985 [2024-12-05 19:22:35.486484] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:32:17.985 
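One sanity check the NV cache layout dump above invites: each region's offset plus its size lands on the next region's offset (up to the two-decimal rounding in the dump), i.e. the regions tile the cache with no gaps or overlaps; the base-device regions continue below. A small sketch with the (offset, size) pairs in MiB transcribed from the dump and sorted by offset:

# Sketch: verify the NV cache regions dumped above are contiguous. Values
# are the MiB offsets/sizes from the log, sorted by offset (the dump itself
# lists mirrors next to their primaries, so nvc_md appears out of order).
regions = [
    ("sb",               0.00,  0.12),
    ("l2p",              0.12, 80.00),
    ("band_md",         80.12,  0.50),
    ("band_md_mirror",  80.62,  0.50),
    ("p2l0",            81.12,  8.00),
    ("p2l1",            89.12,  8.00),
    ("p2l2",            97.12,  8.00),
    ("p2l3",           105.12,  8.00),
    ("trim_md",        113.12,  0.25),
    ("trim_md_mirror", 113.38,  0.25),
    ("trim_log",       113.62,  0.12),
    ("trim_log_mirror",113.75,  0.12),
    ("nvc_md",         113.88,  0.12),
    ("nvc_md_mirror",  114.00,  0.12),
]
for (name, off, size), (nxt, noff, _) in zip(regions, regions[1:]):
    # 0.011 MiB tolerance: the dump rounds offsets/sizes to two decimals.
    assert abs(off + size - noff) <= 0.011, (name, nxt)
print("contiguous from 0.00 through 114.12 MiB")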
[2024-12-05 19:22:35.486506] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:32:17.985 [2024-12-05 19:22:35.486530] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:32:17.985 [2024-12-05 19:22:35.486552] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:32:17.985 [2024-12-05 19:22:35.486574] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:32:17.985 [2024-12-05 19:22:35.486609] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:32:17.985 [2024-12-05 19:22:35.486644] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:32:17.985 [2024-12-05 19:22:35.486676] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:32:17.985 [2024-12-05 19:22:35.486813] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:32:17.985 [2024-12-05 19:22:35.486915] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:32:17.985 [2024-12-05 19:22:35.486950] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:32:17.985 [2024-12-05 19:22:35.486982] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:32:17.985 [2024-12-05 19:22:35.487014] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:32:17.985 [2024-12-05 19:22:35.487047] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:32:17.985 [2024-12-05 19:22:35.487161] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:32:17.985 [2024-12-05 19:22:35.487195] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:32:17.985 [2024-12-05 19:22:35.487228] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:32:17.985 [2024-12-05 19:22:35.487283] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:32:17.985 [2024-12-05 19:22:35.487349] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:32:17.985 [2024-12-05 19:22:35.487383] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:32:17.985 [2024-12-05 19:22:35.487416] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:32:17.985 [2024-12-05 19:22:35.487450] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:32:17.985 [2024-12-05 19:22:35.487551] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:32:17.985 [2024-12-05 19:22:35.487583] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:32:17.985 [2024-12-05 19:22:35.487616] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:32:17.985 [2024-12-05 19:22:35.487708] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:32:17.985 [2024-12-05 19:22:35.487721] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:17.985 [2024-12-05 19:22:35.487730] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:32:17.985 [2024-12-05 19:22:35.487739] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.933 ms 00:32:17.985 [2024-12-05 19:22:35.487747] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:17.985 [2024-12-05 19:22:35.494145] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:17.985 [2024-12-05 19:22:35.494185] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:32:17.985 [2024-12-05 19:22:35.494199] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.319 ms 00:32:17.985 [2024-12-05 19:22:35.494207] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:17.985 [2024-12-05 19:22:35.494306] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:17.985 [2024-12-05 19:22:35.494315] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:32:17.985 [2024-12-05 19:22:35.494323] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.076 ms 00:32:17.985 [2024-12-05 19:22:35.494331] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:17.985 [2024-12-05 19:22:35.513477] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:17.985 [2024-12-05 19:22:35.513545] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:32:17.985 [2024-12-05 19:22:35.513563] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.100 ms 00:32:17.985 [2024-12-05 19:22:35.513575] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:17.985 [2024-12-05 19:22:35.513648] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:17.985 [2024-12-05 19:22:35.513661] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:32:17.985 [2024-12-05 19:22:35.513673] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:32:17.985 [2024-12-05 19:22:35.513683] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:17.985 [2024-12-05 19:22:35.513826] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:17.985 [2024-12-05 19:22:35.513845] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:32:17.985 [2024-12-05 19:22:35.513856] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:32:17.985 [2024-12-05 19:22:35.513867] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:17.985 [2024-12-05 19:22:35.514019] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:17.985 [2024-12-05 19:22:35.514031] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:32:17.985 [2024-12-05 19:22:35.514042] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.129 ms 00:32:17.985 [2024-12-05 19:22:35.514055] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:17.985 [2024-12-05 19:22:35.519999] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:17.985 [2024-12-05 19:22:35.520047] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:32:17.985 [2024-12-05 19:22:35.520060] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.918 ms 00:32:17.985 [2024-12-05 19:22:35.520068] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:17.985 [2024-12-05 19:22:35.520195] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:32:17.985 [2024-12-05 19:22:35.520207] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:32:17.985 [2024-12-05 19:22:35.520219] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:17.985 [2024-12-05 19:22:35.520228] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:32:17.985 [2024-12-05 19:22:35.520236] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.045 ms 00:32:17.985 [2024-12-05 19:22:35.520266] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:17.985 [2024-12-05 19:22:35.532647] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:17.985 [2024-12-05 19:22:35.532697] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:32:17.985 [2024-12-05 19:22:35.532709] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.356 ms 00:32:17.985 [2024-12-05 19:22:35.532717] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:17.985 [2024-12-05 19:22:35.532848] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:17.985 [2024-12-05 19:22:35.532856] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:32:17.985 [2024-12-05 19:22:35.532864] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.094 ms 00:32:17.985 [2024-12-05 19:22:35.532875] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:17.985 [2024-12-05 19:22:35.532932] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:17.985 [2024-12-05 19:22:35.532944] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:32:17.985 [2024-12-05 19:22:35.532952] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.001 ms 00:32:17.985 [2024-12-05 19:22:35.532959] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:17.985 [2024-12-05 19:22:35.533307] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:17.985 [2024-12-05 19:22:35.533320] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:32:17.985 [2024-12-05 19:22:35.533328] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.310 ms 00:32:17.985 [2024-12-05 19:22:35.533335] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:17.985 [2024-12-05 19:22:35.533353] mngt/ftl_mngt_p2l.c: 169:ftl_mngt_p2l_restore_ckpt: *NOTICE*: [FTL][ftl0] SHM: skipping p2l ckpt restore 00:32:17.985 [2024-12-05 19:22:35.533368] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:17.985 [2024-12-05 19:22:35.533378] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Restore P2L checkpoints 00:32:17.985 [2024-12-05 19:22:35.533386] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:32:17.985 [2024-12-05 19:22:35.533393] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:18.247 [2024-12-05 19:22:35.541529] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:32:18.247 [2024-12-05 19:22:35.541694] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:18.247 [2024-12-05 19:22:35.541705] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:32:18.247 [2024-12-05 19:22:35.541716] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.283 ms 00:32:18.247 [2024-12-05 19:22:35.541725] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:18.247 [2024-12-05 19:22:35.544038] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:18.247 [2024-12-05 19:22:35.544067] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:32:18.247 [2024-12-05 19:22:35.544082] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.278 ms 00:32:18.247 [2024-12-05 19:22:35.544091] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:18.247 [2024-12-05 19:22:35.544184] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:18.247 [2024-12-05 19:22:35.544195] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:32:18.247 [2024-12-05 19:22:35.544204] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:32:18.247 [2024-12-05 19:22:35.544215] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:18.247 [2024-12-05 19:22:35.544272] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:18.247 [2024-12-05 19:22:35.544281] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:32:18.247 [2024-12-05 19:22:35.544289] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:32:18.247 [2024-12-05 19:22:35.544297] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:18.247 [2024-12-05 19:22:35.544332] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:32:18.247 [2024-12-05 19:22:35.544342] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:18.247 [2024-12-05 19:22:35.544351] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:32:18.247 [2024-12-05 19:22:35.544359] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:32:18.247 [2024-12-05 19:22:35.544367] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:18.247 [2024-12-05 19:22:35.549061] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:18.247 [2024-12-05 19:22:35.549110] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:32:18.247 [2024-12-05 19:22:35.549122] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.674 ms 00:32:18.247 [2024-12-05 19:22:35.549130] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:18.247 [2024-12-05 19:22:35.549204] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:18.247 [2024-12-05 19:22:35.549213] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:32:18.247 [2024-12-05 19:22:35.549221] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.036 ms 00:32:18.247 [2024-12-05 19:22:35.549228] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:18.247 [2024-12-05 19:22:35.550196] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 67.596 ms, result 0 00:32:19.191  [2024-12-05T19:22:38.152Z] Copying: 12/1024 [MB] (12 MBps) [2024-12-05T19:22:39.096Z] Copying: 23/1024 [MB] (11 MBps) [2024-12-05T19:22:40.041Z] Copying: 34476/1048576 [kB] (9992 kBps) [2024-12-05T19:22:40.984Z] Copying: 44664/1048576 [kB] (10188 kBps) [2024-12-05T19:22:41.924Z] Copying: 53/1024 [MB] (10 MBps) [2024-12-05T19:22:42.867Z] Copying: 63/1024 [MB] (10 MBps) [2024-12-05T19:22:43.808Z] Copying: 74816/1048576 [kB] (9580 kBps) [2024-12-05T19:22:44.752Z] Copying: 84/1024 [MB] (11 MBps) [2024-12-05T19:22:46.137Z] Copying: 95092/1048576 [kB] (8880 kBps) [2024-12-05T19:22:47.114Z] Copying: 104516/1048576 [kB] (9424 kBps) [2024-12-05T19:22:47.748Z] Copying: 114280/1048576 [kB] (9764 kBps) [2024-12-05T19:22:49.130Z] Copying: 121/1024 [MB] (10 MBps) [2024-12-05T19:22:50.074Z] Copying: 132/1024 [MB] (10 MBps) [2024-12-05T19:22:51.019Z] Copying: 144780/1048576 [kB] (9464 kBps) [2024-12-05T19:22:51.962Z] Copying: 154232/1048576 [kB] (9452 kBps) [2024-12-05T19:22:52.908Z] Copying: 164128/1048576 [kB] (9896 kBps) [2024-12-05T19:22:53.903Z] Copying: 174256/1048576 [kB] (10128 kBps) [2024-12-05T19:22:54.848Z] Copying: 180/1024 [MB] (10 MBps) [2024-12-05T19:22:55.799Z] Copying: 190/1024 [MB] (10 MBps) [2024-12-05T19:22:56.741Z] Copying: 200/1024 [MB] (10 MBps) [2024-12-05T19:22:58.152Z] Copying: 211/1024 [MB] (10 MBps) [2024-12-05T19:22:59.096Z] Copying: 222/1024 [MB] (11 MBps) [2024-12-05T19:23:00.040Z] Copying: 237936/1048576 [kB] (9884 kBps) [2024-12-05T19:23:00.980Z] Copying: 243/1024 [MB] (11 MBps) [2024-12-05T19:23:01.922Z] Copying: 254/1024 [MB] (10 MBps) [2024-12-05T19:23:02.863Z] Copying: 270/1024 [MB] (15 MBps) [2024-12-05T19:23:03.803Z] Copying: 283/1024 [MB] (13 MBps) [2024-12-05T19:23:04.744Z] Copying: 300/1024 [MB] (17 MBps) [2024-12-05T19:23:06.123Z] Copying: 315/1024 [MB] (14 MBps) [2024-12-05T19:23:07.065Z] Copying: 338/1024 [MB] (23 MBps) [2024-12-05T19:23:08.009Z] Copying: 351/1024 [MB] (13 MBps) [2024-12-05T19:23:08.950Z] Copying: 363/1024 [MB] (12 MBps) [2024-12-05T19:23:09.891Z] Copying: 375/1024 [MB] (11 MBps) [2024-12-05T19:23:10.832Z] Copying: 388/1024 [MB] (13 MBps) [2024-12-05T19:23:11.782Z] Copying: 398/1024 [MB] (10 MBps) [2024-12-05T19:23:12.767Z] Copying: 410/1024 [MB] (11 MBps) [2024-12-05T19:23:13.821Z] Copying: 426/1024 [MB] (16 MBps) [2024-12-05T19:23:14.765Z] Copying: 437/1024 [MB] (10 MBps) [2024-12-05T19:23:16.148Z] Copying: 447/1024 [MB] (10 MBps) [2024-12-05T19:23:17.112Z] Copying: 464/1024 [MB] (16 MBps) [2024-12-05T19:23:18.049Z] Copying: 487/1024 [MB] (23 MBps) [2024-12-05T19:23:18.990Z] Copying: 502/1024 [MB] (14 MBps) [2024-12-05T19:23:19.927Z] Copying: 523/1024 [MB] (21 MBps) [2024-12-05T19:23:20.867Z] Copying: 552/1024 [MB] (29 MBps) [2024-12-05T19:23:21.805Z] Copying: 581/1024 [MB] (28 MBps) [2024-12-05T19:23:22.742Z] Copying: 596/1024 [MB] (15 MBps) [2024-12-05T19:23:24.126Z] Copying: 628/1024 [MB] (31 MBps) [2024-12-05T19:23:25.068Z] Copying: 647/1024 [MB] (18 MBps) [2024-12-05T19:23:26.006Z] Copying: 663/1024 [MB] (16 MBps) [2024-12-05T19:23:26.949Z] Copying: 684/1024 [MB] (20 MBps) [2024-12-05T19:23:27.891Z] Copying: 707/1024 [MB] (23 MBps) [2024-12-05T19:23:28.834Z] Copying: 724/1024 [MB] (16 MBps) 
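The bracketed Copying entries above (the stream continues below to 1024/1024) report per-interval rates; the "(average 15 MBps)" printed at completion is simply total data over total wall time. A minimal sketch of that average, taking the dd start from the 'Starting SPDK ... initialization' entry (19:22:35.072, an approximation of t=0) and the final sample at 19:23:43.247Z:

from datetime import datetime

# Sketch: reproduce the "(average 15 MBps)" the copy loop prints at the end.
# Timestamps are lifted from this log; the start time is approximate.
FMT = "%Y-%m-%dT%H:%M:%S.%fZ"

def avg_mbps(t_start, t_end, total_mb):
    dt = datetime.strptime(t_end, FMT) - datetime.strptime(t_start, FMT)
    return total_mb / dt.total_seconds()

print(round(avg_mbps("2024-12-05T19:22:35.072Z",
                     "2024-12-05T19:23:43.247Z", 1024)))   # -> 15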
[2024-12-05T19:23:29.778Z] Copying: 738/1024 [MB] (14 MBps) [2024-12-05T19:23:31.166Z] Copying: 760/1024 [MB] (22 MBps) [2024-12-05T19:23:31.735Z] Copying: 781/1024 [MB] (20 MBps) [2024-12-05T19:23:33.121Z] Copying: 806/1024 [MB] (25 MBps) [2024-12-05T19:23:34.066Z] Copying: 823/1024 [MB] (16 MBps) [2024-12-05T19:23:35.010Z] Copying: 838/1024 [MB] (14 MBps) [2024-12-05T19:23:35.949Z] Copying: 858/1024 [MB] (19 MBps) [2024-12-05T19:23:36.893Z] Copying: 870/1024 [MB] (12 MBps) [2024-12-05T19:23:37.833Z] Copying: 881/1024 [MB] (10 MBps) [2024-12-05T19:23:38.768Z] Copying: 900/1024 [MB] (19 MBps) [2024-12-05T19:23:40.146Z] Copying: 938/1024 [MB] (37 MBps) [2024-12-05T19:23:41.095Z] Copying: 960/1024 [MB] (22 MBps) [2024-12-05T19:23:42.041Z] Copying: 983/1024 [MB] (22 MBps) [2024-12-05T19:23:42.981Z] Copying: 998/1024 [MB] (15 MBps) [2024-12-05T19:23:43.247Z] Copying: 1018/1024 [MB] (20 MBps) [2024-12-05T19:23:43.247Z] Copying: 1024/1024 [MB] (average 15 MBps)[2024-12-05 19:23:43.164908] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:25.688 [2024-12-05 19:23:43.164959] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:33:25.688 [2024-12-05 19:23:43.164973] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:33:25.688 [2024-12-05 19:23:43.164982] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:25.688 [2024-12-05 19:23:43.165007] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:33:25.688 [2024-12-05 19:23:43.165477] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:25.688 [2024-12-05 19:23:43.165495] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:33:25.688 [2024-12-05 19:23:43.165504] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.456 ms 00:33:25.688 [2024-12-05 19:23:43.165515] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:25.688 [2024-12-05 19:23:43.165722] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:25.688 [2024-12-05 19:23:43.165732] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:33:25.688 [2024-12-05 19:23:43.165776] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.189 ms 00:33:25.688 [2024-12-05 19:23:43.165789] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:25.688 [2024-12-05 19:23:43.165819] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:25.688 [2024-12-05 19:23:43.165828] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Fast persist NV cache metadata 00:33:25.688 [2024-12-05 19:23:43.165837] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:33:25.688 [2024-12-05 19:23:43.165845] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:25.688 [2024-12-05 19:23:43.165895] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:25.688 [2024-12-05 19:23:43.165903] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL SHM clean state 00:33:25.688 [2024-12-05 19:23:43.165912] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:33:25.688 [2024-12-05 19:23:43.165920] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:25.688 [2024-12-05 19:23:43.165934] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:33:25.688 [2024-12-05 19:23:43.165947] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:33:25.688 [2024-12-05 19:23:43.165960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:33:25.688 [2024-12-05 19:23:43.165968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:33:25.688 [2024-12-05 19:23:43.165976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:33:25.688 [2024-12-05 19:23:43.165985] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:33:25.688 [2024-12-05 19:23:43.165993] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:33:25.688 [2024-12-05 19:23:43.166002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:33:25.688 [2024-12-05 19:23:43.166010] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:33:25.688 [2024-12-05 19:23:43.166019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:33:25.688 [2024-12-05 19:23:43.166028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:33:25.688 [2024-12-05 19:23:43.166036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:33:25.688 [2024-12-05 19:23:43.166045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:33:25.688 [2024-12-05 19:23:43.166054] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:33:25.688 [2024-12-05 19:23:43.166062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:33:25.688 [2024-12-05 19:23:43.166071] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:33:25.688 [2024-12-05 19:23:43.166080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:33:25.688 [2024-12-05 19:23:43.166088] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:33:25.688 [2024-12-05 19:23:43.166096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:33:25.688 [2024-12-05 19:23:43.166125] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:33:25.688 [2024-12-05 19:23:43.166134] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:33:25.688 [2024-12-05 19:23:43.166142] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:33:25.688 [2024-12-05 19:23:43.166150] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:33:25.688 [2024-12-05 19:23:43.166159] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:33:25.688 [2024-12-05 19:23:43.166168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:33:25.688 [2024-12-05 19:23:43.166176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:33:25.688 [2024-12-05 19:23:43.166184] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:33:25.688 [2024-12-05 19:23:43.166193] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:33:25.688 [2024-12-05 19:23:43.166202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:33:25.688 [2024-12-05 19:23:43.166210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:33:25.688 [2024-12-05 19:23:43.166219] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:33:25.688 [2024-12-05 19:23:43.166227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:33:25.688 [2024-12-05 19:23:43.166235] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:33:25.688 [2024-12-05 19:23:43.166244] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:33:25.688 [2024-12-05 19:23:43.166269] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:33:25.688 [2024-12-05 19:23:43.166277] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:33:25.688 [2024-12-05 19:23:43.166284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:33:25.688 [2024-12-05 19:23:43.166293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:33:25.688 [2024-12-05 19:23:43.166300] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:33:25.688 [2024-12-05 19:23:43.166308] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:33:25.688 [2024-12-05 19:23:43.166315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:33:25.688 [2024-12-05 19:23:43.166322] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:33:25.688 [2024-12-05 19:23:43.166329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:33:25.688 [2024-12-05 19:23:43.166337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:33:25.688 [2024-12-05 19:23:43.166344] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:33:25.688 [2024-12-05 19:23:43.166351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:33:25.688 [2024-12-05 19:23:43.166358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:33:25.688 [2024-12-05 19:23:43.166366] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:33:25.688 [2024-12-05 19:23:43.166373] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:33:25.689 [2024-12-05 19:23:43.166380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:33:25.689 [2024-12-05 19:23:43.166387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:33:25.689 
[2024-12-05 19:23:43.166394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:33:25.689 [2024-12-05 19:23:43.166401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:33:25.689 [2024-12-05 19:23:43.166408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:33:25.689 [2024-12-05 19:23:43.166415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:33:25.689 [2024-12-05 19:23:43.166423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:33:25.689 [2024-12-05 19:23:43.166430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:33:25.689 [2024-12-05 19:23:43.166437] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:33:25.689 [2024-12-05 19:23:43.166444] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:33:25.689 [2024-12-05 19:23:43.166451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:33:25.689 [2024-12-05 19:23:43.166458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:33:25.689 [2024-12-05 19:23:43.166465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:33:25.689 [2024-12-05 19:23:43.166472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:33:25.689 [2024-12-05 19:23:43.166480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:33:25.689 [2024-12-05 19:23:43.166487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:33:25.689 [2024-12-05 19:23:43.166494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:33:25.689 [2024-12-05 19:23:43.166501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:33:25.689 [2024-12-05 19:23:43.166508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:33:25.689 [2024-12-05 19:23:43.166516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:33:25.689 [2024-12-05 19:23:43.166523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:33:25.689 [2024-12-05 19:23:43.166530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:33:25.689 [2024-12-05 19:23:43.166537] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:33:25.689 [2024-12-05 19:23:43.166544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:33:25.689 [2024-12-05 19:23:43.166551] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:33:25.689 [2024-12-05 19:23:43.166558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:33:25.689 [2024-12-05 19:23:43.166565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 
state: free 00:33:25.689 [2024-12-05 19:23:43.166572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:33:25.689 [2024-12-05 19:23:43.166579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:33:25.689 [2024-12-05 19:23:43.166587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:33:25.689 [2024-12-05 19:23:43.166594] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:33:25.689 [2024-12-05 19:23:43.166601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:33:25.689 [2024-12-05 19:23:43.166608] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:33:25.689 [2024-12-05 19:23:43.166615] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:33:25.689 [2024-12-05 19:23:43.166622] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:33:25.689 [2024-12-05 19:23:43.166628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:33:25.689 [2024-12-05 19:23:43.166636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:33:25.689 [2024-12-05 19:23:43.166643] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:33:25.689 [2024-12-05 19:23:43.166650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:33:25.689 [2024-12-05 19:23:43.166665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:33:25.689 [2024-12-05 19:23:43.166672] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:33:25.689 [2024-12-05 19:23:43.166679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:33:25.689 [2024-12-05 19:23:43.166686] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:33:25.689 [2024-12-05 19:23:43.166693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:33:25.689 [2024-12-05 19:23:43.166700] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:33:25.689 [2024-12-05 19:23:43.166707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:33:25.689 [2024-12-05 19:23:43.166714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:33:25.689 [2024-12-05 19:23:43.166722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:33:25.689 [2024-12-05 19:23:43.166729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:33:25.689 [2024-12-05 19:23:43.166737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:33:25.689 [2024-12-05 19:23:43.166744] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:33:25.689 [2024-12-05 19:23:43.166752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 
0 / 261120 wr_cnt: 0 state: free
00:33:25.689 [2024-12-05 19:23:43.166766] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]
00:33:25.689 [2024-12-05 19:23:43.166774] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 42867315-b16e-4303-b9a5-ab6c21560170
00:33:25.689 [2024-12-05 19:23:43.166783] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0
00:33:25.689 [2024-12-05 19:23:43.166790] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 32
00:33:25.689 [2024-12-05 19:23:43.166797] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0
00:33:25.689 [2024-12-05 19:23:43.166804] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf
00:33:25.689 [2024-12-05 19:23:43.166814] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits:
00:33:25.689 [2024-12-05 19:23:43.166821] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0
00:33:25.689 [2024-12-05 19:23:43.166828] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0
00:33:25.689 [2024-12-05 19:23:43.166835] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0
00:33:25.689 [2024-12-05 19:23:43.166841] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0
00:33:25.689 [2024-12-05 19:23:43.166848] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:33:25.689 [2024-12-05 19:23:43.166855] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics
00:33:25.689 [2024-12-05 19:23:43.166863] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.915 ms
00:33:25.689 [2024-12-05 19:23:43.166872] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:33:25.689 [2024-12-05 19:23:43.168280] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:33:25.689 [2024-12-05 19:23:43.168303] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P
00:33:25.689 [2024-12-05 19:23:43.168311] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.395 ms
00:33:25.689 [2024-12-05 19:23:43.168318] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:33:25.689 [2024-12-05 19:23:43.168392] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:33:25.689 [2024-12-05 19:23:43.168400] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing
00:33:25.689 [2024-12-05 19:23:43.168411] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.057 ms
00:33:25.689 [2024-12-05 19:23:43.168418] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:33:25.689 [2024-12-05 19:23:43.173192] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:33:25.689 [2024-12-05 19:23:43.173216] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc
00:33:25.689 [2024-12-05 19:23:43.173225] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:33:25.689 [2024-12-05 19:23:43.173233] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:33:25.689 [2024-12-05 19:23:43.173295] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:33:25.689 [2024-12-05 19:23:43.173304] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata
00:33:25.689 [2024-12-05 19:23:43.173315] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:33:25.689 [2024-12-05 19:23:43.173326] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:33:25.689 [2024-12-05 19:23:43.173355] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:33:25.689 [2024-12-05 19:23:43.173363] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map
00:33:25.689 [2024-12-05 19:23:43.173370] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:33:25.689 [2024-12-05 19:23:43.173378] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:33:25.689 [2024-12-05 19:23:43.173406] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:33:25.689 [2024-12-05 19:23:43.173415] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map
00:33:25.689 [2024-12-05 19:23:43.173422] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:33:25.689 [2024-12-05 19:23:43.173431] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:33:25.689 [2024-12-05 19:23:43.182464] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:33:25.689 [2024-12-05 19:23:43.182515] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache
00:33:25.689 [2024-12-05 19:23:43.182529] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:33:25.689 [2024-12-05 19:23:43.182539] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:33:25.689 [2024-12-05 19:23:43.190138] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:33:25.689 [2024-12-05 19:23:43.190180] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata
00:33:25.689 [2024-12-05 19:23:43.190196] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:33:25.689 [2024-12-05 19:23:43.190203] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:33:25.689 [2024-12-05 19:23:43.190231] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:33:25.689 [2024-12-05 19:23:43.190238] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel
00:33:25.689 [2024-12-05 19:23:43.190246] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:33:25.689 [2024-12-05 19:23:43.190273] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:33:25.689 [2024-12-05 19:23:43.190314] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:33:25.689 [2024-12-05 19:23:43.190322] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands
00:33:25.689 [2024-12-05 19:23:43.190330] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:33:25.689 [2024-12-05 19:23:43.190337] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:33:25.689 [2024-12-05 19:23:43.190385] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:33:25.689 [2024-12-05 19:23:43.190393] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools
00:33:25.689 [2024-12-05 19:23:43.190401] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:33:25.689 [2024-12-05 19:23:43.190408] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:33:25.689 [2024-12-05 19:23:43.190429] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:33:25.689 [2024-12-05 19:23:43.190437] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock
00:33:25.689 [2024-12-05 19:23:43.190444] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:33:25.689 [2024-12-05 19:23:43.190454] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:33:25.689 [2024-12-05 19:23:43.190491] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:33:25.689 [2024-12-05 19:23:43.190499] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev
00:33:25.689 [2024-12-05 19:23:43.190507] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:33:25.689 [2024-12-05 19:23:43.190514] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:33:25.689 [2024-12-05 19:23:43.190550] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:33:25.689 [2024-12-05 19:23:43.190558] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:33:25.689 [2024-12-05 19:23:43.190569] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:33:25.689 [2024-12-05 19:23:43.190578] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:33:25.689 [2024-12-05 19:23:43.190693] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL fast shutdown', duration = 25.753 ms, result 0
00:33:25.981
00:33:25.981
00:33:25.981 19:23:43 ftl.ftl_restore_fast -- ftl/restore.sh@76 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5
00:33:28.523 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK
00:33:28.523 19:23:45 ftl.ftl_restore_fast -- ftl/restore.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --seek=131072
00:33:28.523 [2024-12-05 19:23:45.573363] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 22.11.4 initialization...
00:33:28.523 [2024-12-05 19:23:45.573484] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid96506 ]
00:33:28.523 [2024-12-05 19:23:45.717182] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:33:28.523 [2024-12-05 19:23:45.736993] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:33:28.523 [2024-12-05 19:23:45.826747] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
00:33:28.523 [2024-12-05 19:23:45.826952] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
00:33:28.523 [2024-12-05 19:23:45.983372] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:33:28.523 [2024-12-05 19:23:45.983423] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration
00:33:28.523 [2024-12-05 19:23:45.983436] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms
00:33:28.523 [2024-12-05 19:23:45.983445] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:33:28.523 [2024-12-05 19:23:45.983492] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:33:28.523 [2024-12-05 19:23:45.983502] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:33:28.523 [2024-12-05 19:23:45.983511] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms
00:33:28.523 [2024-12-05 19:23:45.983523] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:33:28.523 [2024-12-05 19:23:45.983551] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0
as write buffer cache 00:33:28.523 [2024-12-05 19:23:45.983789] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:33:28.523 [2024-12-05 19:23:45.983803] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:28.523 [2024-12-05 19:23:45.983813] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:33:28.523 [2024-12-05 19:23:45.983823] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.260 ms 00:33:28.523 [2024-12-05 19:23:45.983831] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:28.523 [2024-12-05 19:23:45.984086] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 1, shm_clean 1 00:33:28.523 [2024-12-05 19:23:45.984109] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:28.523 [2024-12-05 19:23:45.984117] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:33:28.523 [2024-12-05 19:23:45.984126] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:33:28.523 [2024-12-05 19:23:45.984137] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:28.523 [2024-12-05 19:23:45.984214] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:28.523 [2024-12-05 19:23:45.984230] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:33:28.523 [2024-12-05 19:23:45.984239] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:33:28.523 [2024-12-05 19:23:45.984246] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:28.523 [2024-12-05 19:23:45.984515] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:28.523 [2024-12-05 19:23:45.984530] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:33:28.523 [2024-12-05 19:23:45.984539] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.210 ms 00:33:28.523 [2024-12-05 19:23:45.984546] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:28.523 [2024-12-05 19:23:45.984620] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:28.523 [2024-12-05 19:23:45.984632] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:33:28.523 [2024-12-05 19:23:45.984640] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.057 ms 00:33:28.523 [2024-12-05 19:23:45.984647] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:28.523 [2024-12-05 19:23:45.984670] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:28.523 [2024-12-05 19:23:45.984683] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:33:28.523 [2024-12-05 19:23:45.984691] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:33:28.523 [2024-12-05 19:23:45.984698] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:28.523 [2024-12-05 19:23:45.984717] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:33:28.523 [2024-12-05 19:23:45.986193] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:28.523 [2024-12-05 19:23:45.986222] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:33:28.523 [2024-12-05 19:23:45.986231] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.479 ms 00:33:28.523 [2024-12-05 19:23:45.986238] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:28.523 [2024-12-05 19:23:45.986279] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:28.523 [2024-12-05 19:23:45.986287] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:33:28.523 [2024-12-05 19:23:45.986295] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:33:28.523 [2024-12-05 19:23:45.986302] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:28.523 [2024-12-05 19:23:45.986320] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:33:28.523 [2024-12-05 19:23:45.986341] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:33:28.523 [2024-12-05 19:23:45.986381] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:33:28.523 [2024-12-05 19:23:45.986397] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:33:28.523 [2024-12-05 19:23:45.986498] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:33:28.523 [2024-12-05 19:23:45.986507] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:33:28.523 [2024-12-05 19:23:45.986517] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:33:28.523 [2024-12-05 19:23:45.986527] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:33:28.523 [2024-12-05 19:23:45.986537] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:33:28.523 [2024-12-05 19:23:45.986547] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:33:28.523 [2024-12-05 19:23:45.986554] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:33:28.523 [2024-12-05 19:23:45.986564] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:33:28.523 [2024-12-05 19:23:45.986571] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:33:28.523 [2024-12-05 19:23:45.986578] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:28.523 [2024-12-05 19:23:45.986585] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:33:28.523 [2024-12-05 19:23:45.986596] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.260 ms 00:33:28.523 [2024-12-05 19:23:45.986603] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:28.523 [2024-12-05 19:23:45.986685] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:28.523 [2024-12-05 19:23:45.986697] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:33:28.523 [2024-12-05 19:23:45.986711] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.070 ms 00:33:28.523 [2024-12-05 19:23:45.986718] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:28.523 [2024-12-05 19:23:45.986824] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:33:28.523 [2024-12-05 19:23:45.986834] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:33:28.523 [2024-12-05 19:23:45.986847] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:33:28.523 
[2024-12-05 19:23:45.986860] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:33:28.523 [2024-12-05 19:23:45.986870] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:33:28.523 [2024-12-05 19:23:45.986878] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:33:28.523 [2024-12-05 19:23:45.986886] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:33:28.523 [2024-12-05 19:23:45.986894] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:33:28.523 [2024-12-05 19:23:45.986902] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:33:28.523 [2024-12-05 19:23:45.986909] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:33:28.523 [2024-12-05 19:23:45.986917] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:33:28.523 [2024-12-05 19:23:45.986924] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:33:28.523 [2024-12-05 19:23:45.986932] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:33:28.523 [2024-12-05 19:23:45.986940] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:33:28.523 [2024-12-05 19:23:45.986947] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:33:28.523 [2024-12-05 19:23:45.986955] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:33:28.523 [2024-12-05 19:23:45.986962] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:33:28.523 [2024-12-05 19:23:45.986970] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:33:28.523 [2024-12-05 19:23:45.986977] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:33:28.523 [2024-12-05 19:23:45.986986] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:33:28.523 [2024-12-05 19:23:45.986994] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:33:28.523 [2024-12-05 19:23:45.987001] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:33:28.523 [2024-12-05 19:23:45.987009] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:33:28.523 [2024-12-05 19:23:45.987017] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:33:28.523 [2024-12-05 19:23:45.987024] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:33:28.523 [2024-12-05 19:23:45.987031] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:33:28.523 [2024-12-05 19:23:45.987038] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:33:28.524 [2024-12-05 19:23:45.987045] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:33:28.524 [2024-12-05 19:23:45.987052] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:33:28.524 [2024-12-05 19:23:45.987060] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:33:28.524 [2024-12-05 19:23:45.987067] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:33:28.524 [2024-12-05 19:23:45.987074] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:33:28.524 [2024-12-05 19:23:45.987082] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:33:28.524 [2024-12-05 19:23:45.987090] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:33:28.524 [2024-12-05 19:23:45.987097] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region 
trim_md_mirror 00:33:28.524 [2024-12-05 19:23:45.987108] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:33:28.524 [2024-12-05 19:23:45.987116] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:33:28.524 [2024-12-05 19:23:45.987124] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:33:28.524 [2024-12-05 19:23:45.987131] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:33:28.524 [2024-12-05 19:23:45.987138] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:33:28.524 [2024-12-05 19:23:45.987146] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:33:28.524 [2024-12-05 19:23:45.987153] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:33:28.524 [2024-12-05 19:23:45.987161] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:33:28.524 [2024-12-05 19:23:45.987168] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:33:28.524 [2024-12-05 19:23:45.987176] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:33:28.524 [2024-12-05 19:23:45.987184] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:33:28.524 [2024-12-05 19:23:45.987196] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:33:28.524 [2024-12-05 19:23:45.987205] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:33:28.524 [2024-12-05 19:23:45.987212] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:33:28.524 [2024-12-05 19:23:45.987220] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:33:28.524 [2024-12-05 19:23:45.987227] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:33:28.524 [2024-12-05 19:23:45.987236] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:33:28.524 [2024-12-05 19:23:45.987244] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:33:28.524 [2024-12-05 19:23:45.987264] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:33:28.524 [2024-12-05 19:23:45.987274] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:33:28.524 [2024-12-05 19:23:45.987283] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:33:28.524 [2024-12-05 19:23:45.987291] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:33:28.524 [2024-12-05 19:23:45.987299] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:33:28.524 [2024-12-05 19:23:45.987307] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:33:28.524 [2024-12-05 19:23:45.987315] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:33:28.524 [2024-12-05 19:23:45.987323] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:33:28.524 [2024-12-05 19:23:45.987331] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd 
ver:2 blk_offs:0x6920 blk_sz:0x800 00:33:28.524 [2024-12-05 19:23:45.987339] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:33:28.524 [2024-12-05 19:23:45.987347] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:33:28.524 [2024-12-05 19:23:45.987356] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:33:28.524 [2024-12-05 19:23:45.987364] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:33:28.524 [2024-12-05 19:23:45.987377] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:33:28.524 [2024-12-05 19:23:45.987387] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:33:28.524 [2024-12-05 19:23:45.987397] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:33:28.524 [2024-12-05 19:23:45.987405] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:33:28.524 [2024-12-05 19:23:45.987414] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:33:28.524 [2024-12-05 19:23:45.987423] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:33:28.524 [2024-12-05 19:23:45.987431] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:33:28.524 [2024-12-05 19:23:45.987442] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:33:28.524 [2024-12-05 19:23:45.987450] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:33:28.524 [2024-12-05 19:23:45.987459] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:28.524 [2024-12-05 19:23:45.987467] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:33:28.524 [2024-12-05 19:23:45.987478] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.702 ms 00:33:28.524 [2024-12-05 19:23:45.987486] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:28.524 [2024-12-05 19:23:45.993695] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:28.524 [2024-12-05 19:23:45.993731] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:33:28.524 [2024-12-05 19:23:45.993740] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.154 ms 00:33:28.524 [2024-12-05 19:23:45.993747] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:28.524 [2024-12-05 19:23:45.993825] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:28.524 [2024-12-05 19:23:45.993833] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:33:28.524 [2024-12-05 19:23:45.993845] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 
00:33:28.524 [2024-12-05 19:23:45.993852] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:28.524 [2024-12-05 19:23:46.016840] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:28.524 [2024-12-05 19:23:46.016937] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:33:28.524 [2024-12-05 19:23:46.016969] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.937 ms 00:33:28.524 [2024-12-05 19:23:46.016990] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:28.524 [2024-12-05 19:23:46.017090] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:28.524 [2024-12-05 19:23:46.017128] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:33:28.524 [2024-12-05 19:23:46.017150] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:33:28.524 [2024-12-05 19:23:46.017169] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:28.524 [2024-12-05 19:23:46.017426] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:28.524 [2024-12-05 19:23:46.017487] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:33:28.524 [2024-12-05 19:23:46.017513] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.101 ms 00:33:28.524 [2024-12-05 19:23:46.017536] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:28.524 [2024-12-05 19:23:46.017838] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:28.524 [2024-12-05 19:23:46.017874] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:33:28.524 [2024-12-05 19:23:46.017897] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.253 ms 00:33:28.524 [2024-12-05 19:23:46.017924] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:28.524 [2024-12-05 19:23:46.023273] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:28.524 [2024-12-05 19:23:46.023303] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:33:28.524 [2024-12-05 19:23:46.023316] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.287 ms 00:33:28.524 [2024-12-05 19:23:46.023327] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:28.524 [2024-12-05 19:23:46.023428] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:33:28.524 [2024-12-05 19:23:46.023440] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:33:28.524 [2024-12-05 19:23:46.023449] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:28.524 [2024-12-05 19:23:46.023457] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:33:28.524 [2024-12-05 19:23:46.023466] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:33:28.524 [2024-12-05 19:23:46.023475] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:28.524 [2024-12-05 19:23:46.035737] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:28.524 [2024-12-05 19:23:46.035765] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:33:28.524 [2024-12-05 19:23:46.035776] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.247 ms 00:33:28.524 [2024-12-05 19:23:46.035784] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:33:28.524 [2024-12-05 19:23:46.035897] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:28.524 [2024-12-05 19:23:46.035906] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:33:28.524 [2024-12-05 19:23:46.035914] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.090 ms 00:33:28.524 [2024-12-05 19:23:46.035924] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:28.524 [2024-12-05 19:23:46.035966] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:28.524 [2024-12-05 19:23:46.035977] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:33:28.524 [2024-12-05 19:23:46.035985] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.001 ms 00:33:28.524 [2024-12-05 19:23:46.035992] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:28.524 [2024-12-05 19:23:46.036350] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:28.524 [2024-12-05 19:23:46.036362] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:33:28.524 [2024-12-05 19:23:46.036370] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.322 ms 00:33:28.524 [2024-12-05 19:23:46.036381] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:28.525 [2024-12-05 19:23:46.036403] mngt/ftl_mngt_p2l.c: 169:ftl_mngt_p2l_restore_ckpt: *NOTICE*: [FTL][ftl0] SHM: skipping p2l ckpt restore 00:33:28.525 [2024-12-05 19:23:46.036415] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:28.525 [2024-12-05 19:23:46.036424] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:33:28.525 [2024-12-05 19:23:46.036432] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:33:28.525 [2024-12-05 19:23:46.036442] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:28.525 [2024-12-05 19:23:46.044273] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:33:28.525 [2024-12-05 19:23:46.044402] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:28.525 [2024-12-05 19:23:46.044416] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:33:28.525 [2024-12-05 19:23:46.044431] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.943 ms 00:33:28.525 [2024-12-05 19:23:46.044439] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:28.525 [2024-12-05 19:23:46.046731] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:28.525 [2024-12-05 19:23:46.046859] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:33:28.525 [2024-12-05 19:23:46.046874] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.272 ms 00:33:28.525 [2024-12-05 19:23:46.046881] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:28.525 [2024-12-05 19:23:46.046953] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:28.525 [2024-12-05 19:23:46.046967] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:33:28.525 [2024-12-05 19:23:46.046975] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:33:28.525 [2024-12-05 19:23:46.046986] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:28.525 [2024-12-05 19:23:46.047023] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:28.525 [2024-12-05 19:23:46.047035] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:33:28.525 [2024-12-05 19:23:46.047043] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:33:28.525 [2024-12-05 19:23:46.047050] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:28.525 [2024-12-05 19:23:46.047080] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:33:28.525 [2024-12-05 19:23:46.047090] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:28.525 [2024-12-05 19:23:46.047099] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:33:28.525 [2024-12-05 19:23:46.047107] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:33:28.525 [2024-12-05 19:23:46.047114] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:28.525 [2024-12-05 19:23:46.050410] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:28.525 [2024-12-05 19:23:46.050443] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:33:28.525 [2024-12-05 19:23:46.050457] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.277 ms 00:33:28.525 [2024-12-05 19:23:46.050465] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:28.525 [2024-12-05 19:23:46.050529] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:28.525 [2024-12-05 19:23:46.050539] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:33:28.525 [2024-12-05 19:23:46.050550] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:33:28.525 [2024-12-05 19:23:46.050560] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:28.525 [2024-12-05 19:23:46.051472] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 67.715 ms, result 0 00:33:29.950  [2024-12-05T19:23:48.080Z] Copying: 21/1024 [MB] (21 MBps) [2024-12-05T19:23:49.469Z] Copying: 39/1024 [MB] (18 MBps) [2024-12-05T19:23:50.408Z] Copying: 60/1024 [MB] (20 MBps) [2024-12-05T19:23:51.349Z] Copying: 79/1024 [MB] (19 MBps) [2024-12-05T19:23:52.290Z] Copying: 100/1024 [MB] (20 MBps) [2024-12-05T19:23:53.356Z] Copying: 110/1024 [MB] (10 MBps) [2024-12-05T19:23:54.298Z] Copying: 122/1024 [MB] (11 MBps) [2024-12-05T19:23:55.239Z] Copying: 132/1024 [MB] (10 MBps) [2024-12-05T19:23:56.176Z] Copying: 155/1024 [MB] (23 MBps) [2024-12-05T19:23:57.141Z] Copying: 180/1024 [MB] (25 MBps) [2024-12-05T19:23:58.086Z] Copying: 200/1024 [MB] (19 MBps) [2024-12-05T19:23:59.469Z] Copying: 215/1024 [MB] (15 MBps) [2024-12-05T19:24:00.402Z] Copying: 230/1024 [MB] (15 MBps) [2024-12-05T19:24:01.338Z] Copying: 252/1024 [MB] (21 MBps) [2024-12-05T19:24:02.282Z] Copying: 297/1024 [MB] (44 MBps) [2024-12-05T19:24:03.227Z] Copying: 317/1024 [MB] (20 MBps) [2024-12-05T19:24:04.164Z] Copying: 333/1024 [MB] (16 MBps) [2024-12-05T19:24:05.111Z] Copying: 366/1024 [MB] (32 MBps) [2024-12-05T19:24:06.491Z] Copying: 395/1024 [MB] (29 MBps) [2024-12-05T19:24:07.430Z] Copying: 409/1024 [MB] (13 MBps) [2024-12-05T19:24:08.371Z] Copying: 419/1024 [MB] (10 MBps) [2024-12-05T19:24:09.378Z] Copying: 434/1024 [MB] (14 MBps) [2024-12-05T19:24:10.320Z] Copying: 444/1024 [MB] (10 MBps) [2024-12-05T19:24:11.260Z] Copying: 464804/1048576 [kB] (9308 kBps) [2024-12-05T19:24:12.198Z] Copying: 
464/1024 [MB] (10 MBps) [2024-12-05T19:24:13.138Z] Copying: 485208/1048576 [kB] (9504 kBps) [2024-12-05T19:24:14.080Z] Copying: 495148/1048576 [kB] (9940 kBps) [2024-12-05T19:24:15.489Z] Copying: 504412/1048576 [kB] (9264 kBps) [2024-12-05T19:24:16.433Z] Copying: 513808/1048576 [kB] (9396 kBps) [2024-12-05T19:24:17.377Z] Copying: 523468/1048576 [kB] (9660 kBps) [2024-12-05T19:24:18.320Z] Copying: 532608/1048576 [kB] (9140 kBps) [2024-12-05T19:24:19.266Z] Copying: 541664/1048576 [kB] (9056 kBps) [2024-12-05T19:24:20.208Z] Copying: 551264/1048576 [kB] (9600 kBps) [2024-12-05T19:24:21.153Z] Copying: 560652/1048576 [kB] (9388 kBps) [2024-12-05T19:24:22.094Z] Copying: 569928/1048576 [kB] (9276 kBps) [2024-12-05T19:24:23.096Z] Copying: 568/1024 [MB] (11 MBps) [2024-12-05T19:24:24.485Z] Copying: 591600/1048576 [kB] (9496 kBps) [2024-12-05T19:24:25.427Z] Copying: 601404/1048576 [kB] (9804 kBps) [2024-12-05T19:24:26.371Z] Copying: 603/1024 [MB] (16 MBps) [2024-12-05T19:24:27.310Z] Copying: 628560/1048576 [kB] (10204 kBps) [2024-12-05T19:24:28.249Z] Copying: 638704/1048576 [kB] (10144 kBps) [2024-12-05T19:24:29.191Z] Copying: 634/1024 [MB] (10 MBps) [2024-12-05T19:24:30.132Z] Copying: 645/1024 [MB] (11 MBps) [2024-12-05T19:24:31.075Z] Copying: 656/1024 [MB] (10 MBps) [2024-12-05T19:24:32.462Z] Copying: 681312/1048576 [kB] (9552 kBps) [2024-12-05T19:24:33.406Z] Copying: 675/1024 [MB] (10 MBps) [2024-12-05T19:24:34.368Z] Copying: 686/1024 [MB] (10 MBps) [2024-12-05T19:24:35.311Z] Copying: 696/1024 [MB] (10 MBps) [2024-12-05T19:24:36.253Z] Copying: 723348/1048576 [kB] (9636 kBps) [2024-12-05T19:24:37.197Z] Copying: 716/1024 [MB] (10 MBps) [2024-12-05T19:24:38.210Z] Copying: 743700/1048576 [kB] (9796 kBps) [2024-12-05T19:24:39.155Z] Copying: 753816/1048576 [kB] (10116 kBps) [2024-12-05T19:24:40.097Z] Copying: 763844/1048576 [kB] (10028 kBps) [2024-12-05T19:24:41.548Z] Copying: 773868/1048576 [kB] (10024 kBps) [2024-12-05T19:24:42.122Z] Copying: 766/1024 [MB] (10 MBps) [2024-12-05T19:24:43.064Z] Copying: 794208/1048576 [kB] (9760 kBps) [2024-12-05T19:24:44.445Z] Copying: 804176/1048576 [kB] (9968 kBps) [2024-12-05T19:24:45.383Z] Copying: 814232/1048576 [kB] (10056 kBps) [2024-12-05T19:24:46.319Z] Copying: 824272/1048576 [kB] (10040 kBps) [2024-12-05T19:24:47.264Z] Copying: 815/1024 [MB] (10 MBps) [2024-12-05T19:24:48.210Z] Copying: 844520/1048576 [kB] (9880 kBps) [2024-12-05T19:24:49.156Z] Copying: 834/1024 [MB] (10 MBps) [2024-12-05T19:24:50.124Z] Copying: 844/1024 [MB] (10 MBps) [2024-12-05T19:24:51.066Z] Copying: 874832/1048576 [kB] (9684 kBps) [2024-12-05T19:24:52.453Z] Copying: 884320/1048576 [kB] (9488 kBps) [2024-12-05T19:24:53.394Z] Copying: 893752/1048576 [kB] (9432 kBps) [2024-12-05T19:24:54.339Z] Copying: 903152/1048576 [kB] (9400 kBps) [2024-12-05T19:24:55.279Z] Copying: 892/1024 [MB] (10 MBps) [2024-12-05T19:24:56.220Z] Copying: 903/1024 [MB] (10 MBps) [2024-12-05T19:24:57.185Z] Copying: 913/1024 [MB] (10 MBps) [2024-12-05T19:24:58.128Z] Copying: 923/1024 [MB] (10 MBps) [2024-12-05T19:24:59.074Z] Copying: 955740/1048576 [kB] (10132 kBps) [2024-12-05T19:25:00.511Z] Copying: 965700/1048576 [kB] (9960 kBps) [2024-12-05T19:25:01.145Z] Copying: 953/1024 [MB] (10 MBps) [2024-12-05T19:25:02.085Z] Copying: 969/1024 [MB] (15 MBps) [2024-12-05T19:25:03.469Z] Copying: 979/1024 [MB] (10 MBps) [2024-12-05T19:25:04.409Z] Copying: 992/1024 [MB] (12 MBps) [2024-12-05T19:25:05.349Z] Copying: 1025700/1048576 [kB] (9196 kBps) [2024-12-05T19:25:06.287Z] Copying: 1035656/1048576 [kB] (9956 kBps) 
[2024-12-05T19:25:07.229Z] Copying: 1045328/1048576 [kB] (9672 kBps) [2024-12-05T19:25:07.491Z] Copying: 1048292/1048576 [kB] (2964 kBps) [2024-12-05T19:25:07.491Z] Copying: 1024/1024 [MB] (average 12 MBps)[2024-12-05 19:25:07.397983] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:49.932 [2024-12-05 19:25:07.398061] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:34:49.932 [2024-12-05 19:25:07.398076] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:34:49.932 [2024-12-05 19:25:07.398092] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:49.932 [2024-12-05 19:25:07.398783] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:34:49.932 [2024-12-05 19:25:07.401765] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:49.932 [2024-12-05 19:25:07.401803] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:34:49.932 [2024-12-05 19:25:07.401814] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.960 ms 00:34:49.932 [2024-12-05 19:25:07.401821] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:49.932 [2024-12-05 19:25:07.422264] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:49.932 [2024-12-05 19:25:07.422316] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:34:49.932 [2024-12-05 19:25:07.422330] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.734 ms 00:34:49.932 [2024-12-05 19:25:07.422338] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:49.932 [2024-12-05 19:25:07.422375] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:49.932 [2024-12-05 19:25:07.422385] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Fast persist NV cache metadata 00:34:49.932 [2024-12-05 19:25:07.422393] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:34:49.933 [2024-12-05 19:25:07.422401] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:49.933 [2024-12-05 19:25:07.422447] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:49.933 [2024-12-05 19:25:07.422458] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL SHM clean state 00:34:49.933 [2024-12-05 19:25:07.422466] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:34:49.933 [2024-12-05 19:25:07.422473] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:49.933 [2024-12-05 19:25:07.422485] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:34:49.933 [2024-12-05 19:25:07.422496] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 125696 / 261120 wr_cnt: 1 state: open 00:34:49.933 [2024-12-05 19:25:07.422506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:34:49.933 [2024-12-05 19:25:07.422514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:34:49.933 [2024-12-05 19:25:07.422521] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:34:49.933 [2024-12-05 19:25:07.422529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:34:49.933 [2024-12-05 19:25:07.422536] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 
261120 wr_cnt: 0 state: free 00:34:49.933 [2024-12-05 19:25:07.422543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:34:49.933 [2024-12-05 19:25:07.422551] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:34:49.933 [2024-12-05 19:25:07.422558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:34:49.933 [2024-12-05 19:25:07.422566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:34:49.933 [2024-12-05 19:25:07.422573] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:34:49.933 [2024-12-05 19:25:07.422581] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:34:49.933 [2024-12-05 19:25:07.422588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:34:49.933 [2024-12-05 19:25:07.422595] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:34:49.933 [2024-12-05 19:25:07.422602] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:34:49.933 [2024-12-05 19:25:07.422609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:34:49.933 [2024-12-05 19:25:07.422616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:34:49.933 [2024-12-05 19:25:07.422623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:34:49.933 [2024-12-05 19:25:07.422630] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:34:49.933 [2024-12-05 19:25:07.422637] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:34:49.933 [2024-12-05 19:25:07.422644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:34:49.933 [2024-12-05 19:25:07.422651] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:34:49.933 [2024-12-05 19:25:07.422659] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:34:49.933 [2024-12-05 19:25:07.422666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:34:49.933 [2024-12-05 19:25:07.422673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:34:49.933 [2024-12-05 19:25:07.422680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:34:49.933 [2024-12-05 19:25:07.422687] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:34:49.933 [2024-12-05 19:25:07.422695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:34:49.933 [2024-12-05 19:25:07.422702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:34:49.933 [2024-12-05 19:25:07.422709] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:34:49.933 [2024-12-05 19:25:07.422716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:34:49.933 [2024-12-05 19:25:07.422724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:34:49.933 [2024-12-05 19:25:07.422731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:34:49.933 [2024-12-05 19:25:07.422738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:34:49.933 [2024-12-05 19:25:07.422745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:34:49.933 [2024-12-05 19:25:07.422752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:34:49.933 [2024-12-05 19:25:07.422760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:34:49.933 [2024-12-05 19:25:07.422767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:34:49.933 [2024-12-05 19:25:07.422774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:34:49.933 [2024-12-05 19:25:07.422781] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:34:49.933 [2024-12-05 19:25:07.422788] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:34:49.933 [2024-12-05 19:25:07.422796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:34:49.933 [2024-12-05 19:25:07.422803] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:34:49.933 [2024-12-05 19:25:07.422810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:34:49.933 [2024-12-05 19:25:07.422817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:34:49.933 [2024-12-05 19:25:07.422824] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:34:49.933 [2024-12-05 19:25:07.422831] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:34:49.933 [2024-12-05 19:25:07.422839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:34:49.933 [2024-12-05 19:25:07.422847] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:34:49.933 [2024-12-05 19:25:07.422855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:34:49.933 [2024-12-05 19:25:07.422862] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:34:49.933 [2024-12-05 19:25:07.422869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:34:49.933 [2024-12-05 19:25:07.422877] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:34:49.933 [2024-12-05 19:25:07.422884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:34:49.933 [2024-12-05 19:25:07.422891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:34:49.933 [2024-12-05 19:25:07.422898] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:34:49.933 [2024-12-05 19:25:07.422905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:34:49.933 [2024-12-05 19:25:07.422912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:34:49.933 [2024-12-05 19:25:07.422919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:34:49.933 [2024-12-05 19:25:07.422926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:34:49.933 [2024-12-05 19:25:07.422934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:34:49.933 [2024-12-05 19:25:07.422941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:34:49.933 [2024-12-05 19:25:07.422947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:34:49.933 [2024-12-05 19:25:07.422957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:34:49.933 [2024-12-05 19:25:07.422964] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:34:49.933 [2024-12-05 19:25:07.422978] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:34:49.933 [2024-12-05 19:25:07.422985] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:34:49.933 [2024-12-05 19:25:07.422992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:34:49.933 [2024-12-05 19:25:07.423000] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:34:49.933 [2024-12-05 19:25:07.423007] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:34:49.933 [2024-12-05 19:25:07.423015] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:34:49.933 [2024-12-05 19:25:07.423023] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:34:49.933 [2024-12-05 19:25:07.423030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:34:49.933 [2024-12-05 19:25:07.423037] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:34:49.933 [2024-12-05 19:25:07.423045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:34:49.933 [2024-12-05 19:25:07.423052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:34:49.933 [2024-12-05 19:25:07.423060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:34:49.933 [2024-12-05 19:25:07.423067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:34:49.933 [2024-12-05 19:25:07.423074] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:34:49.933 [2024-12-05 19:25:07.423082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:34:49.933 [2024-12-05 
19:25:07.423089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:34:49.933 [2024-12-05 19:25:07.423097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:34:49.933 [2024-12-05 19:25:07.423104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:34:49.933 [2024-12-05 19:25:07.423111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:34:49.934 [2024-12-05 19:25:07.423118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:34:49.934 [2024-12-05 19:25:07.423125] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:34:49.934 [2024-12-05 19:25:07.423133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:34:49.934 [2024-12-05 19:25:07.423147] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:34:49.934 [2024-12-05 19:25:07.423154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:34:49.934 [2024-12-05 19:25:07.423161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:34:49.934 [2024-12-05 19:25:07.423168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:34:49.934 [2024-12-05 19:25:07.423176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:34:49.934 [2024-12-05 19:25:07.423184] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:34:49.934 [2024-12-05 19:25:07.423191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:34:49.934 [2024-12-05 19:25:07.423198] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:34:49.934 [2024-12-05 19:25:07.423205] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:34:49.934 [2024-12-05 19:25:07.423212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:34:49.934 [2024-12-05 19:25:07.423220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:34:49.934 [2024-12-05 19:25:07.423227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:34:49.934 [2024-12-05 19:25:07.423235] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:34:49.934 [2024-12-05 19:25:07.423271] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:34:49.934 [2024-12-05 19:25:07.423283] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 42867315-b16e-4303-b9a5-ab6c21560170 00:34:49.934 [2024-12-05 19:25:07.423291] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 125696 00:34:49.934 [2024-12-05 19:25:07.423298] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 125728 00:34:49.934 [2024-12-05 19:25:07.423305] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 125696 00:34:49.934 [2024-12-05 19:25:07.423314] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: 
[FTL][ftl0] WAF: 1.0003 00:34:49.934 [2024-12-05 19:25:07.423327] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:34:49.934 [2024-12-05 19:25:07.423334] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:34:49.934 [2024-12-05 19:25:07.423341] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:34:49.934 [2024-12-05 19:25:07.423351] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:34:49.934 [2024-12-05 19:25:07.423357] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:34:49.934 [2024-12-05 19:25:07.423364] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:49.934 [2024-12-05 19:25:07.423372] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:34:49.934 [2024-12-05 19:25:07.423380] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.880 ms 00:34:49.934 [2024-12-05 19:25:07.423388] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:49.934 [2024-12-05 19:25:07.424873] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:49.934 [2024-12-05 19:25:07.424903] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:34:49.934 [2024-12-05 19:25:07.424915] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.471 ms 00:34:49.934 [2024-12-05 19:25:07.424923] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:49.934 [2024-12-05 19:25:07.425008] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:49.934 [2024-12-05 19:25:07.425016] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:34:49.934 [2024-12-05 19:25:07.425027] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:34:49.934 [2024-12-05 19:25:07.425034] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:49.934 [2024-12-05 19:25:07.430033] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:49.934 [2024-12-05 19:25:07.430071] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:34:49.934 [2024-12-05 19:25:07.430082] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:49.934 [2024-12-05 19:25:07.430089] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:49.934 [2024-12-05 19:25:07.430166] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:49.934 [2024-12-05 19:25:07.430175] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:34:49.934 [2024-12-05 19:25:07.430183] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:49.934 [2024-12-05 19:25:07.430190] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:49.934 [2024-12-05 19:25:07.430219] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:49.934 [2024-12-05 19:25:07.430233] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:34:49.934 [2024-12-05 19:25:07.430243] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:49.934 [2024-12-05 19:25:07.430270] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:49.934 [2024-12-05 19:25:07.430286] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:49.934 [2024-12-05 19:25:07.430293] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:34:49.934 [2024-12-05 19:25:07.430301] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:49.934 [2024-12-05 19:25:07.430308] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:49.934 [2024-12-05 19:25:07.439621] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:49.934 [2024-12-05 19:25:07.439670] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:34:49.934 [2024-12-05 19:25:07.439682] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:49.934 [2024-12-05 19:25:07.439690] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:49.934 [2024-12-05 19:25:07.447485] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:49.934 [2024-12-05 19:25:07.447532] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:34:49.934 [2024-12-05 19:25:07.447543] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:49.934 [2024-12-05 19:25:07.447550] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:49.934 [2024-12-05 19:25:07.447595] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:49.934 [2024-12-05 19:25:07.447604] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:34:49.934 [2024-12-05 19:25:07.447612] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:49.934 [2024-12-05 19:25:07.447625] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:49.934 [2024-12-05 19:25:07.447648] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:49.934 [2024-12-05 19:25:07.447657] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:34:49.934 [2024-12-05 19:25:07.447664] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:49.934 [2024-12-05 19:25:07.447672] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:49.934 [2024-12-05 19:25:07.447721] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:49.934 [2024-12-05 19:25:07.447730] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:34:49.934 [2024-12-05 19:25:07.447738] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:49.934 [2024-12-05 19:25:07.447747] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:49.934 [2024-12-05 19:25:07.447770] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:49.934 [2024-12-05 19:25:07.447779] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:34:49.934 [2024-12-05 19:25:07.447786] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:49.934 [2024-12-05 19:25:07.447793] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:49.934 [2024-12-05 19:25:07.447827] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:49.934 [2024-12-05 19:25:07.447836] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:34:49.934 [2024-12-05 19:25:07.447843] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:49.934 [2024-12-05 19:25:07.447850] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:49.934 [2024-12-05 19:25:07.447892] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:49.934 [2024-12-05 19:25:07.447901] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 
name: Open base bdev 00:34:49.934 [2024-12-05 19:25:07.447909] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:49.934 [2024-12-05 19:25:07.447916] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:49.934 [2024-12-05 19:25:07.448027] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL fast shutdown', duration = 52.846 ms, result 0 00:34:50.876 00:34:50.876 00:34:50.876 19:25:08 ftl.ftl_restore_fast -- ftl/restore.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --skip=131072 --count=262144 00:34:50.876 [2024-12-05 19:25:08.345430] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 22.11.4 initialization... 00:34:50.876 [2024-12-05 19:25:08.345551] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid97334 ] 00:34:51.136 [2024-12-05 19:25:08.490084] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:51.136 [2024-12-05 19:25:08.511290] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:51.136 [2024-12-05 19:25:08.605481] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:34:51.136 [2024-12-05 19:25:08.605554] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:34:51.397 [2024-12-05 19:25:08.765074] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:51.397 [2024-12-05 19:25:08.765137] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:34:51.397 [2024-12-05 19:25:08.765152] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:34:51.397 [2024-12-05 19:25:08.765160] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:51.397 [2024-12-05 19:25:08.765212] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:51.397 [2024-12-05 19:25:08.765227] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:34:51.397 [2024-12-05 19:25:08.765236] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:34:51.397 [2024-12-05 19:25:08.765249] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:51.397 [2024-12-05 19:25:08.765301] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:34:51.397 [2024-12-05 19:25:08.765563] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:34:51.397 [2024-12-05 19:25:08.765577] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:51.397 [2024-12-05 19:25:08.765585] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:34:51.397 [2024-12-05 19:25:08.765599] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.284 ms 00:34:51.397 [2024-12-05 19:25:08.765606] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:51.397 [2024-12-05 19:25:08.765876] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 1, shm_clean 1 00:34:51.397 [2024-12-05 19:25:08.765897] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:51.397 [2024-12-05 19:25:08.765907] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:34:51.397 [2024-12-05 19:25:08.765915] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:34:51.397 [2024-12-05 19:25:08.765925] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:51.397 [2024-12-05 19:25:08.765973] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:51.397 [2024-12-05 19:25:08.765982] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:34:51.397 [2024-12-05 19:25:08.765990] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:34:51.397 [2024-12-05 19:25:08.765997] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:51.397 [2024-12-05 19:25:08.766243] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:51.397 [2024-12-05 19:25:08.766275] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:34:51.397 [2024-12-05 19:25:08.766283] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.209 ms 00:34:51.397 [2024-12-05 19:25:08.766291] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:51.397 [2024-12-05 19:25:08.766365] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:51.397 [2024-12-05 19:25:08.766375] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:34:51.397 [2024-12-05 19:25:08.766385] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.059 ms 00:34:51.397 [2024-12-05 19:25:08.766393] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:51.397 [2024-12-05 19:25:08.766418] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:51.397 [2024-12-05 19:25:08.766427] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:34:51.397 [2024-12-05 19:25:08.766435] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:34:51.397 [2024-12-05 19:25:08.766442] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:51.397 [2024-12-05 19:25:08.766461] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:34:51.397 [2024-12-05 19:25:08.768105] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:51.397 [2024-12-05 19:25:08.768215] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:34:51.397 [2024-12-05 19:25:08.768291] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.648 ms 00:34:51.397 [2024-12-05 19:25:08.768316] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:51.397 [2024-12-05 19:25:08.768440] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:51.397 [2024-12-05 19:25:08.768464] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:34:51.397 [2024-12-05 19:25:08.768518] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:34:51.397 [2024-12-05 19:25:08.768541] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:51.397 [2024-12-05 19:25:08.768586] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:34:51.397 [2024-12-05 19:25:08.768667] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:34:51.397 [2024-12-05 19:25:08.768761] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob 
load 0x48 bytes 00:34:51.397 [2024-12-05 19:25:08.768832] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:34:51.397 [2024-12-05 19:25:08.768962] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:34:51.397 [2024-12-05 19:25:08.769002] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:34:51.397 [2024-12-05 19:25:08.769100] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:34:51.397 [2024-12-05 19:25:08.769140] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:34:51.397 [2024-12-05 19:25:08.769196] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:34:51.397 [2024-12-05 19:25:08.769232] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:34:51.397 [2024-12-05 19:25:08.769268] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:34:51.397 [2024-12-05 19:25:08.769327] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:34:51.397 [2024-12-05 19:25:08.769360] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:34:51.397 [2024-12-05 19:25:08.769379] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:51.397 [2024-12-05 19:25:08.769396] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:34:51.397 [2024-12-05 19:25:08.769422] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.795 ms 00:34:51.397 [2024-12-05 19:25:08.769466] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:51.397 [2024-12-05 19:25:08.769576] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:51.397 [2024-12-05 19:25:08.769614] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:34:51.397 [2024-12-05 19:25:08.769656] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:34:51.397 [2024-12-05 19:25:08.769676] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:51.397 [2024-12-05 19:25:08.769818] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:34:51.397 [2024-12-05 19:25:08.769845] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:34:51.397 [2024-12-05 19:25:08.769868] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:34:51.397 [2024-12-05 19:25:08.769914] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:34:51.397 [2024-12-05 19:25:08.769924] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:34:51.397 [2024-12-05 19:25:08.769931] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:34:51.397 [2024-12-05 19:25:08.769938] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:34:51.397 [2024-12-05 19:25:08.769944] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:34:51.397 [2024-12-05 19:25:08.769951] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:34:51.397 [2024-12-05 19:25:08.769958] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:34:51.397 [2024-12-05 19:25:08.769964] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:34:51.397 [2024-12-05 
19:25:08.769973] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:34:51.397 [2024-12-05 19:25:08.769979] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:34:51.397 [2024-12-05 19:25:08.769986] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:34:51.397 [2024-12-05 19:25:08.769993] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:34:51.397 [2024-12-05 19:25:08.769999] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:34:51.397 [2024-12-05 19:25:08.770005] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:34:51.397 [2024-12-05 19:25:08.770011] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:34:51.397 [2024-12-05 19:25:08.770018] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:34:51.397 [2024-12-05 19:25:08.770025] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:34:51.397 [2024-12-05 19:25:08.770032] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:34:51.397 [2024-12-05 19:25:08.770038] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:34:51.398 [2024-12-05 19:25:08.770044] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:34:51.398 [2024-12-05 19:25:08.770050] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:34:51.398 [2024-12-05 19:25:08.770057] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:34:51.398 [2024-12-05 19:25:08.770064] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:34:51.398 [2024-12-05 19:25:08.770070] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:34:51.398 [2024-12-05 19:25:08.770078] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:34:51.398 [2024-12-05 19:25:08.770084] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:34:51.398 [2024-12-05 19:25:08.770091] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:34:51.398 [2024-12-05 19:25:08.770120] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:34:51.398 [2024-12-05 19:25:08.770127] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:34:51.398 [2024-12-05 19:25:08.770134] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:34:51.398 [2024-12-05 19:25:08.770141] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:34:51.398 [2024-12-05 19:25:08.770147] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:34:51.398 [2024-12-05 19:25:08.770154] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:34:51.398 [2024-12-05 19:25:08.770161] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:34:51.398 [2024-12-05 19:25:08.770167] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:34:51.398 [2024-12-05 19:25:08.770176] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:34:51.398 [2024-12-05 19:25:08.770182] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:34:51.398 [2024-12-05 19:25:08.770190] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:34:51.398 [2024-12-05 19:25:08.770196] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:34:51.398 [2024-12-05 19:25:08.770203] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 
blocks: 0.12 MiB 00:34:51.398 [2024-12-05 19:25:08.770211] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:34:51.398 [2024-12-05 19:25:08.770219] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:34:51.398 [2024-12-05 19:25:08.770227] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:34:51.398 [2024-12-05 19:25:08.770235] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:34:51.398 [2024-12-05 19:25:08.770243] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:34:51.398 [2024-12-05 19:25:08.770260] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:34:51.398 [2024-12-05 19:25:08.770268] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:34:51.398 [2024-12-05 19:25:08.770275] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:34:51.398 [2024-12-05 19:25:08.770281] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:34:51.398 [2024-12-05 19:25:08.770288] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:34:51.398 [2024-12-05 19:25:08.770296] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:34:51.398 [2024-12-05 19:25:08.770306] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:34:51.398 [2024-12-05 19:25:08.770314] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:34:51.398 [2024-12-05 19:25:08.770322] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:34:51.398 [2024-12-05 19:25:08.770330] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:34:51.398 [2024-12-05 19:25:08.770337] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:34:51.398 [2024-12-05 19:25:08.770347] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:34:51.398 [2024-12-05 19:25:08.770355] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:34:51.398 [2024-12-05 19:25:08.770362] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:34:51.398 [2024-12-05 19:25:08.770369] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:34:51.398 [2024-12-05 19:25:08.770376] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:34:51.398 [2024-12-05 19:25:08.770383] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:34:51.398 [2024-12-05 19:25:08.770390] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:34:51.398 [2024-12-05 19:25:08.770402] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 
00:34:51.398 [2024-12-05 19:25:08.770409] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:34:51.398 [2024-12-05 19:25:08.770417] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:34:51.398 [2024-12-05 19:25:08.770425] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:34:51.398 [2024-12-05 19:25:08.770433] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:34:51.398 [2024-12-05 19:25:08.770442] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:34:51.398 [2024-12-05 19:25:08.770449] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:34:51.398 [2024-12-05 19:25:08.770456] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:34:51.398 [2024-12-05 19:25:08.770463] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:34:51.398 [2024-12-05 19:25:08.770473] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:51.398 [2024-12-05 19:25:08.770481] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:34:51.398 [2024-12-05 19:25:08.770488] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.721 ms 00:34:51.398 [2024-12-05 19:25:08.770495] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:51.398 [2024-12-05 19:25:08.777015] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:51.398 [2024-12-05 19:25:08.777060] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:34:51.398 [2024-12-05 19:25:08.777070] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.476 ms 00:34:51.398 [2024-12-05 19:25:08.777078] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:51.398 [2024-12-05 19:25:08.777169] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:51.398 [2024-12-05 19:25:08.777177] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:34:51.398 [2024-12-05 19:25:08.777185] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:34:51.398 [2024-12-05 19:25:08.777193] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:51.398 [2024-12-05 19:25:08.793129] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:51.398 [2024-12-05 19:25:08.793188] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:34:51.398 [2024-12-05 19:25:08.793206] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.890 ms 00:34:51.398 [2024-12-05 19:25:08.793214] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:51.398 [2024-12-05 19:25:08.793292] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:51.398 [2024-12-05 19:25:08.793302] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:34:51.398 [2024-12-05 19:25:08.793311] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 
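Note on reading the dumps above: the superblock layout dump and the dump_region lines describe the same regions in two units, raw FTL blocks (the hex blk_offs/blk_sz fields) versus MiB, and the ftl_dev_dump_stats output elsewhere in this log derives its WAF line from total writes divided by user writes. The sketch below is a minimal offline spot-check of those printed values, assuming the 4 KiB block size that SPDK's FTL layer uses by default (FTL_BLOCK_SIZE); the helper names blocks_to_mib and waf are hypothetical, not SPDK APIs.

    # Spot-check FTL layout and stats values printed in this log.
    # Assumption: 4 KiB FTL block size (SPDK FTL's default FTL_BLOCK_SIZE).
    FTL_BLOCK_SIZE = 4096  # bytes

    def blocks_to_mib(nblocks: int) -> float:
        # Convert a blk_offs/blk_sz value (counted in FTL blocks) to MiB.
        return nblocks * FTL_BLOCK_SIZE / (1024 * 1024)

    # "Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000" is the l2p region:
    assert blocks_to_mib(0x20) == 0.125    # dump_region prints "offset: 0.12 MiB"
    assert blocks_to_mib(0x5000) == 80.0   # dump_region prints "blocks: 80.00 MiB"

    def waf(total_writes: int, user_writes: int) -> float:
        # Write amplification factor as reported by ftl_dev_dump_stats.
        return total_writes / user_writes

    assert f"{waf(125728, 125696):.4f}" == "1.0003"  # stats dump after the earlier run
    assert f"{waf(5408, 5376):.4f}" == "1.0060"      # stats dump after the final copy

The base-device data_btm region works out the same way: blk_sz 0x1900000 is 26,214,400 blocks, i.e. the 102400.00 MiB reported in the dump above.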
00:34:51.398 [2024-12-05 19:25:08.793319] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:51.398 [2024-12-05 19:25:08.793428] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:51.398 [2024-12-05 19:25:08.793441] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:34:51.398 [2024-12-05 19:25:08.793449] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.048 ms 00:34:51.398 [2024-12-05 19:25:08.793460] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:51.398 [2024-12-05 19:25:08.793569] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:51.398 [2024-12-05 19:25:08.793578] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:34:51.398 [2024-12-05 19:25:08.793585] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.095 ms 00:34:51.398 [2024-12-05 19:25:08.793592] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:51.398 [2024-12-05 19:25:08.799416] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:51.398 [2024-12-05 19:25:08.799610] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:34:51.398 [2024-12-05 19:25:08.799635] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.804 ms 00:34:51.398 [2024-12-05 19:25:08.799646] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:51.398 [2024-12-05 19:25:08.799790] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:34:51.398 [2024-12-05 19:25:08.799805] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:34:51.398 [2024-12-05 19:25:08.799817] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:51.398 [2024-12-05 19:25:08.799830] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:34:51.398 [2024-12-05 19:25:08.799841] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:34:51.398 [2024-12-05 19:25:08.799852] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:51.398 [2024-12-05 19:25:08.813822] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:51.398 [2024-12-05 19:25:08.813969] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:34:51.398 [2024-12-05 19:25:08.813987] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.951 ms 00:34:51.398 [2024-12-05 19:25:08.813994] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:51.398 [2024-12-05 19:25:08.814131] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:51.398 [2024-12-05 19:25:08.814149] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:34:51.398 [2024-12-05 19:25:08.814157] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.099 ms 00:34:51.398 [2024-12-05 19:25:08.814171] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:51.398 [2024-12-05 19:25:08.814227] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:51.399 [2024-12-05 19:25:08.814240] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:34:51.399 [2024-12-05 19:25:08.814248] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.001 ms 00:34:51.399 [2024-12-05 19:25:08.814281] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:51.399 [2024-12-05 19:25:08.814592] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:51.399 [2024-12-05 19:25:08.814606] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:34:51.399 [2024-12-05 19:25:08.814614] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.272 ms 00:34:51.399 [2024-12-05 19:25:08.814620] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:51.399 [2024-12-05 19:25:08.814640] mngt/ftl_mngt_p2l.c: 169:ftl_mngt_p2l_restore_ckpt: *NOTICE*: [FTL][ftl0] SHM: skipping p2l ckpt restore 00:34:51.399 [2024-12-05 19:25:08.814649] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:51.399 [2024-12-05 19:25:08.814659] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:34:51.399 [2024-12-05 19:25:08.814667] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:34:51.399 [2024-12-05 19:25:08.814674] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:51.399 [2024-12-05 19:25:08.822815] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:34:51.399 [2024-12-05 19:25:08.822973] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:51.399 [2024-12-05 19:25:08.822986] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:34:51.399 [2024-12-05 19:25:08.822997] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.283 ms 00:34:51.399 [2024-12-05 19:25:08.823005] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:51.399 [2024-12-05 19:25:08.825397] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:51.399 [2024-12-05 19:25:08.825439] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:34:51.399 [2024-12-05 19:25:08.825449] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.360 ms 00:34:51.399 [2024-12-05 19:25:08.825458] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:51.399 [2024-12-05 19:25:08.825527] mngt/ftl_mngt_band.c: 414:ftl_mngt_finalize_init_bands: *NOTICE*: [FTL][ftl0] SHM: band open P2L map df_id 0x2400000 00:34:51.399 [2024-12-05 19:25:08.826110] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:51.399 [2024-12-05 19:25:08.826124] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:34:51.399 [2024-12-05 19:25:08.826133] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.603 ms 00:34:51.399 [2024-12-05 19:25:08.826142] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:51.399 [2024-12-05 19:25:08.826187] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:51.399 [2024-12-05 19:25:08.826196] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:34:51.399 [2024-12-05 19:25:08.826203] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:34:51.399 [2024-12-05 19:25:08.826210] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:51.399 [2024-12-05 19:25:08.826246] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:34:51.399 [2024-12-05 19:25:08.826267] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:51.399 [2024-12-05 19:25:08.826274] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on 
startup 00:34:51.399 [2024-12-05 19:25:08.826284] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:34:51.399 [2024-12-05 19:25:08.826298] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:51.399 [2024-12-05 19:25:08.830720] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:51.399 [2024-12-05 19:25:08.830758] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:34:51.399 [2024-12-05 19:25:08.830768] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.402 ms 00:34:51.399 [2024-12-05 19:25:08.830781] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:51.399 [2024-12-05 19:25:08.830848] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:51.399 [2024-12-05 19:25:08.830857] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:34:51.399 [2024-12-05 19:25:08.830865] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:34:51.399 [2024-12-05 19:25:08.830872] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:51.399 [2024-12-05 19:25:08.831925] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 66.428 ms, result 0 00:34:52.901  [2024-12-05T19:25:11.095Z … 2024-12-05T19:26:50.313Z] (per-interval copy progress updates omitted) [2024-12-05T19:26:50.313Z] Copying: 1024/1024 [MB] (average 10 MBps)[2024-12-05 19:26:50.275459] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:32.754 [2024-12-05 19:26:50.275551] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:36:32.754 [2024-12-05 19:26:50.275573] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:36:32.754 [2024-12-05 19:26:50.275586] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:32.754 [2024-12-05 19:26:50.275620] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:36:32.754 [2024-12-05 19:26:50.276442] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:32.754 [2024-12-05 19:26:50.276476] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:36:32.754 [2024-12-05 19:26:50.276493] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.793 ms 00:36:32.754 [2024-12-05 19:26:50.276507] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:32.754 [2024-12-05 19:26:50.276860] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:32.754 [2024-12-05 19:26:50.276955] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:36:32.754 [2024-12-05 19:26:50.276974] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.312 ms 00:36:32.754 [2024-12-05 19:26:50.276988] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:32.754 [2024-12-05 19:26:50.277046] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:32.754 [2024-12-05 19:26:50.277062] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Fast persist NV cache metadata 00:36:32.754 [2024-12-05 19:26:50.277077] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:36:32.754 [2024-12-05 19:26:50.277090] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:32.754 [2024-12-05 19:26:50.277169] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:32.754 [2024-12-05 19:26:50.277188] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL SHM clean state 00:36:32.754 [2024-12-05 19:26:50.277202] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:36:32.754 [2024-12-05 19:26:50.277216] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:32.754 [2024-12-05 19:26:50.277238] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:36:32.754 [2024-12-05 19:26:50.277283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 131072 / 261120 wr_cnt: 1 state: open 00:36:32.754 [2024-12-05 19:26:50.277301] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band
2: 0 / 261120 wr_cnt: 0 state: free 00:36:32.754 [2024-12-05 19:26:50.277315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:36:32.754 [2024-12-05 19:26:50.277330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:36:32.754 [2024-12-05 19:26:50.277345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:36:32.754 [2024-12-05 19:26:50.277359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:36:32.754 [2024-12-05 19:26:50.277373] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:36:32.754 [2024-12-05 19:26:50.277387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:36:32.754 [2024-12-05 19:26:50.277401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:36:32.754 [2024-12-05 19:26:50.277415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:36:32.754 [2024-12-05 19:26:50.277430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:36:32.754 [2024-12-05 19:26:50.277444] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:36:32.754 [2024-12-05 19:26:50.277458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:36:32.754 [2024-12-05 19:26:50.277472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:36:32.754 [2024-12-05 19:26:50.277487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:36:32.754 [2024-12-05 19:26:50.277501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:36:32.754 [2024-12-05 19:26:50.277515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:36:32.754 [2024-12-05 19:26:50.277530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:36:32.754 [2024-12-05 19:26:50.277544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:36:32.754 [2024-12-05 19:26:50.277558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:36:32.754 [2024-12-05 19:26:50.277573] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:36:32.754 [2024-12-05 19:26:50.277588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:36:32.754 [2024-12-05 19:26:50.277602] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:36:32.754 [2024-12-05 19:26:50.277616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:36:32.754 [2024-12-05 19:26:50.277629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:36:32.754 [2024-12-05 19:26:50.277644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:36:32.754 [2024-12-05 19:26:50.277658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:36:32.754 [2024-12-05 19:26:50.277672] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:36:32.754 [2024-12-05 19:26:50.277686] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:36:32.754 [2024-12-05 19:26:50.277701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:36:32.754 [2024-12-05 19:26:50.277715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:36:32.754 [2024-12-05 19:26:50.277729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:36:32.754 [2024-12-05 19:26:50.277744] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:36:32.754 [2024-12-05 19:26:50.277758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:36:32.754 [2024-12-05 19:26:50.277772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:36:32.754 [2024-12-05 19:26:50.277785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:36:32.754 [2024-12-05 19:26:50.277799] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:36:32.754 [2024-12-05 19:26:50.277813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:36:32.754 [2024-12-05 19:26:50.277827] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:36:32.754 [2024-12-05 19:26:50.277840] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:36:32.754 [2024-12-05 19:26:50.277854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:36:32.754 [2024-12-05 19:26:50.277868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:36:32.754 [2024-12-05 19:26:50.277882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:36:32.754 [2024-12-05 19:26:50.277896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:36:32.754 [2024-12-05 19:26:50.277909] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:36:32.754 [2024-12-05 19:26:50.277923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:36:32.754 [2024-12-05 19:26:50.277937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:36:32.754 [2024-12-05 19:26:50.277951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:36:32.754 [2024-12-05 19:26:50.277966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:36:32.754 [2024-12-05 19:26:50.277979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:36:32.754 [2024-12-05 19:26:50.277994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:36:32.754 [2024-12-05 19:26:50.278008] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:36:32.754 [2024-12-05 19:26:50.278021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:36:32.754 [2024-12-05 19:26:50.278036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:36:32.754 [2024-12-05 19:26:50.278050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:36:32.754 [2024-12-05 19:26:50.278063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:36:32.755 [2024-12-05 19:26:50.278077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:36:32.755 [2024-12-05 19:26:50.278109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:36:32.755 [2024-12-05 19:26:50.278124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:36:32.755 [2024-12-05 19:26:50.278138] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:36:32.755 [2024-12-05 19:26:50.278152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:36:32.755 [2024-12-05 19:26:50.278166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:36:32.755 [2024-12-05 19:26:50.278181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:36:32.755 [2024-12-05 19:26:50.278195] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:36:32.755 [2024-12-05 19:26:50.278209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:36:32.755 [2024-12-05 19:26:50.278223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:36:32.755 [2024-12-05 19:26:50.278236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:36:32.755 [2024-12-05 19:26:50.278264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:36:32.755 [2024-12-05 19:26:50.278279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:36:32.755 [2024-12-05 19:26:50.278294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:36:32.755 [2024-12-05 19:26:50.278308] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:36:32.755 [2024-12-05 19:26:50.278322] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:36:32.755 [2024-12-05 19:26:50.278335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:36:32.755 [2024-12-05 19:26:50.278349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:36:32.755 [2024-12-05 19:26:50.278363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:36:32.755 [2024-12-05 19:26:50.278376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:36:32.755 [2024-12-05 
19:26:50.278390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:36:32.755 [2024-12-05 19:26:50.278404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:36:32.755 [2024-12-05 19:26:50.278418] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:36:32.755 [2024-12-05 19:26:50.278432] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:36:32.755 [2024-12-05 19:26:50.278446] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:36:32.755 [2024-12-05 19:26:50.278460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:36:32.755 [2024-12-05 19:26:50.278476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:36:32.755 [2024-12-05 19:26:50.278489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:36:32.755 [2024-12-05 19:26:50.278503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:36:32.755 [2024-12-05 19:26:50.278518] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:36:32.755 [2024-12-05 19:26:50.278532] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:36:32.755 [2024-12-05 19:26:50.278555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:36:32.755 [2024-12-05 19:26:50.278569] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:36:32.755 [2024-12-05 19:26:50.278583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:36:32.755 [2024-12-05 19:26:50.278596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:36:32.755 [2024-12-05 19:26:50.278620] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:36:32.755 [2024-12-05 19:26:50.278634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:36:32.755 [2024-12-05 19:26:50.278648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:36:32.755 [2024-12-05 19:26:50.278663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:36:32.755 [2024-12-05 19:26:50.278677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:36:32.755 [2024-12-05 19:26:50.278691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:36:32.755 [2024-12-05 19:26:50.278705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:36:32.755 [2024-12-05 19:26:50.278720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:36:32.755 [2024-12-05 19:26:50.278734] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:36:32.755 [2024-12-05 19:26:50.278760] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:36:32.755 [2024-12-05 19:26:50.278779] 
ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 42867315-b16e-4303-b9a5-ab6c21560170 00:36:32.755 [2024-12-05 19:26:50.278794] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 131072 00:36:32.755 [2024-12-05 19:26:50.278806] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 5408 00:36:32.755 [2024-12-05 19:26:50.278819] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 5376 00:36:32.755 [2024-12-05 19:26:50.278834] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0060 00:36:32.755 [2024-12-05 19:26:50.278850] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:36:32.755 [2024-12-05 19:26:50.278864] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:36:32.755 [2024-12-05 19:26:50.278878] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:36:32.755 [2024-12-05 19:26:50.278890] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:36:32.755 [2024-12-05 19:26:50.278903] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:36:32.755 [2024-12-05 19:26:50.278915] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:32.755 [2024-12-05 19:26:50.278929] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:36:32.755 [2024-12-05 19:26:50.278943] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.678 ms 00:36:32.755 [2024-12-05 19:26:50.278957] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:32.755 [2024-12-05 19:26:50.281872] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:32.755 [2024-12-05 19:26:50.282043] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:36:32.755 [2024-12-05 19:26:50.282306] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.889 ms 00:36:32.755 [2024-12-05 19:26:50.282402] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:32.755 [2024-12-05 19:26:50.282574] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:32.755 [2024-12-05 19:26:50.282698] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:36:32.755 [2024-12-05 19:26:50.282739] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.110 ms 00:36:32.755 [2024-12-05 19:26:50.282803] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:32.755 [2024-12-05 19:26:50.290701] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:36:32.755 [2024-12-05 19:26:50.290862] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:36:32.755 [2024-12-05 19:26:50.290917] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:36:32.755 [2024-12-05 19:26:50.290939] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:32.755 [2024-12-05 19:26:50.291019] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:36:32.755 [2024-12-05 19:26:50.291040] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:36:32.755 [2024-12-05 19:26:50.291060] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:36:32.756 [2024-12-05 19:26:50.291079] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:32.756 [2024-12-05 19:26:50.291152] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:36:32.756 [2024-12-05 19:26:50.291176] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:36:32.756 [2024-12-05 19:26:50.291202] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:36:32.756 [2024-12-05 19:26:50.291271] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:32.756 [2024-12-05 19:26:50.291307] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:36:32.756 [2024-12-05 19:26:50.291328] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:36:32.756 [2024-12-05 19:26:50.291348] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:36:32.756 [2024-12-05 19:26:50.291367] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:32.756 [2024-12-05 19:26:50.305339] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:36:32.756 [2024-12-05 19:26:50.305521] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:36:32.756 [2024-12-05 19:26:50.305576] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:36:32.756 [2024-12-05 19:26:50.305599] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:33.016 [2024-12-05 19:26:50.317437] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:36:33.016 [2024-12-05 19:26:50.317620] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:36:33.016 [2024-12-05 19:26:50.317676] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:36:33.016 [2024-12-05 19:26:50.317698] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:33.016 [2024-12-05 19:26:50.317777] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:36:33.016 [2024-12-05 19:26:50.317799] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:36:33.016 [2024-12-05 19:26:50.317819] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:36:33.016 [2024-12-05 19:26:50.317843] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:33.016 [2024-12-05 19:26:50.317887] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:36:33.016 [2024-12-05 19:26:50.317908] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:36:33.016 [2024-12-05 19:26:50.317929] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:36:33.016 [2024-12-05 19:26:50.317991] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:33.016 [2024-12-05 19:26:50.318070] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:36:33.016 [2024-12-05 19:26:50.318106] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:36:33.016 [2024-12-05 19:26:50.318127] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:36:33.016 [2024-12-05 19:26:50.318212] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:33.016 [2024-12-05 19:26:50.318299] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:36:33.016 [2024-12-05 19:26:50.318324] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:36:33.016 [2024-12-05 19:26:50.318344] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:36:33.016 [2024-12-05 19:26:50.318362] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:33.016 [2024-12-05 19:26:50.318447] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl0] Rollback 00:36:33.016 [2024-12-05 19:26:50.318470] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:36:33.017 [2024-12-05 19:26:50.318490] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:36:33.017 [2024-12-05 19:26:50.318507] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:33.017 [2024-12-05 19:26:50.318568] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:36:33.017 [2024-12-05 19:26:50.318591] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:36:33.017 [2024-12-05 19:26:50.318617] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:36:33.017 [2024-12-05 19:26:50.318635] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:33.017 [2024-12-05 19:26:50.318773] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL fast shutdown', duration = 43.290 ms, result 0 00:36:33.277 00:36:33.277 00:36:33.277 19:26:50 ftl.ftl_restore_fast -- ftl/restore.sh@82 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:36:35.827 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:36:35.827 19:26:52 ftl.ftl_restore_fast -- ftl/restore.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:36:35.827 19:26:52 ftl.ftl_restore_fast -- ftl/restore.sh@85 -- # restore_kill 00:36:35.827 19:26:52 ftl.ftl_restore_fast -- ftl/restore.sh@28 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:36:35.827 19:26:52 ftl.ftl_restore_fast -- ftl/restore.sh@29 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:36:35.827 19:26:52 ftl.ftl_restore_fast -- ftl/restore.sh@30 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:36:35.827 Process with pid 94660 is not found 00:36:35.827 Remove shared memory files 00:36:35.827 19:26:52 ftl.ftl_restore_fast -- ftl/restore.sh@32 -- # killprocess 94660 00:36:35.827 19:26:52 ftl.ftl_restore_fast -- common/autotest_common.sh@954 -- # '[' -z 94660 ']' 00:36:35.827 19:26:52 ftl.ftl_restore_fast -- common/autotest_common.sh@958 -- # kill -0 94660 00:36:35.827 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (94660) - No such process 00:36:35.827 19:26:52 ftl.ftl_restore_fast -- common/autotest_common.sh@981 -- # echo 'Process with pid 94660 is not found' 00:36:35.827 19:26:52 ftl.ftl_restore_fast -- ftl/restore.sh@33 -- # remove_shm 00:36:35.827 19:26:52 ftl.ftl_restore_fast -- ftl/common.sh@204 -- # echo Remove shared memory files 00:36:35.827 19:26:52 ftl.ftl_restore_fast -- ftl/common.sh@205 -- # rm -f rm -f 00:36:35.827 19:26:53 ftl.ftl_restore_fast -- ftl/common.sh@206 -- # rm -f rm -f /dev/hugepages/ftl_42867315-b16e-4303-b9a5-ab6c21560170_band_md /dev/hugepages/ftl_42867315-b16e-4303-b9a5-ab6c21560170_l2p_l1 /dev/hugepages/ftl_42867315-b16e-4303-b9a5-ab6c21560170_l2p_l2 /dev/hugepages/ftl_42867315-b16e-4303-b9a5-ab6c21560170_l2p_l2_ctx /dev/hugepages/ftl_42867315-b16e-4303-b9a5-ab6c21560170_nvc_md /dev/hugepages/ftl_42867315-b16e-4303-b9a5-ab6c21560170_p2l_pool /dev/hugepages/ftl_42867315-b16e-4303-b9a5-ab6c21560170_sb /dev/hugepages/ftl_42867315-b16e-4303-b9a5-ab6c21560170_sb_shm /dev/hugepages/ftl_42867315-b16e-4303-b9a5-ab6c21560170_trim_bitmap /dev/hugepages/ftl_42867315-b16e-4303-b9a5-ab6c21560170_trim_log /dev/hugepages/ftl_42867315-b16e-4303-b9a5-ab6c21560170_trim_md /dev/hugepages/ftl_42867315-b16e-4303-b9a5-ab6c21560170_vmap 00:36:35.827 19:26:53 ftl.ftl_restore_fast -- 
ftl/common.sh@207 -- # rm -f rm -f 00:36:35.827 19:26:53 ftl.ftl_restore_fast -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:36:35.827 19:26:53 ftl.ftl_restore_fast -- ftl/common.sh@209 -- # rm -f rm -f 00:36:35.827 ************************************ 00:36:35.827 END TEST ftl_restore_fast 00:36:35.827 ************************************ 00:36:35.827 00:36:35.827 real 6m10.546s 00:36:35.827 user 5m57.676s 00:36:35.827 sys 0m12.529s 00:36:35.827 19:26:53 ftl.ftl_restore_fast -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:35.827 19:26:53 ftl.ftl_restore_fast -- common/autotest_common.sh@10 -- # set +x 00:36:35.827 19:26:53 ftl -- ftl/ftl.sh@1 -- # at_ftl_exit 00:36:35.827 19:26:53 ftl -- ftl/ftl.sh@14 -- # killprocess 85947 00:36:35.827 Process with pid 85947 is not found 00:36:35.827 19:26:53 ftl -- common/autotest_common.sh@954 -- # '[' -z 85947 ']' 00:36:35.827 19:26:53 ftl -- common/autotest_common.sh@958 -- # kill -0 85947 00:36:35.827 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (85947) - No such process 00:36:35.827 19:26:53 ftl -- common/autotest_common.sh@981 -- # echo 'Process with pid 85947 is not found' 00:36:35.827 19:26:53 ftl -- ftl/ftl.sh@17 -- # [[ -n 0000:00:11.0 ]] 00:36:35.827 19:26:53 ftl -- ftl/ftl.sh@19 -- # spdk_tgt_pid=98380 00:36:35.827 19:26:53 ftl -- ftl/ftl.sh@20 -- # waitforlisten 98380 00:36:35.827 19:26:53 ftl -- common/autotest_common.sh@835 -- # '[' -z 98380 ']' 00:36:35.827 19:26:53 ftl -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:35.827 19:26:53 ftl -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:35.827 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:35.827 19:26:53 ftl -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:35.827 19:26:53 ftl -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:35.827 19:26:53 ftl -- ftl/ftl.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:36:35.827 19:26:53 ftl -- common/autotest_common.sh@10 -- # set +x 00:36:35.827 [2024-12-05 19:26:53.158743] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 22.11.4 initialization... 
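The WAF figure in the statistics dump above is simply total writes divided by user writes. A quick check of the arithmetic with this run's counters (a one-off illustration, not part of the harness):

    # ftl_dev_dump_stats reported 5408 total writes and 5376 user writes
    awk 'BEGIN { printf "WAF: %.4f\n", 5408 / 5376 }'
    # prints: WAF: 1.0060 -- matching the logged value

A WAF this close to 1.0 means the FTL layer generated almost no write traffic of its own beyond what the restore workload issued.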
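The killprocess calls traced above follow a guard pattern: check that a pid was recorded at all, probe it with kill -0, and fall back to a notice when the process has already exited (pid 94660 was gone by the time restore_kill ran, hence the "No such process" line). A minimal sketch of that pattern, reconstructed from the xtrace; the real helper in autotest_common.sh also inspects the process name and handles sudo, so treat this as illustrative:

    killprocess() {
        local pid=$1
        [ -z "$pid" ] && return 1            # no pid was recorded for this test
        if kill -0 "$pid" 2>/dev/null; then  # probe: is the process still alive?
            kill "$pid" && wait "$pid"       # terminate it and reap its exit status
        else
            echo "Process with pid $pid is not found"
        fi
    }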
00:36:35.827 [2024-12-05 19:26:53.159159] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid98380 ] 00:36:35.827 [2024-12-05 19:26:53.306481] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:35.827 [2024-12-05 19:26:53.343038] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:36:36.771 19:26:54 ftl -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:36.771 19:26:54 ftl -- common/autotest_common.sh@868 -- # return 0 00:36:36.771 19:26:54 ftl -- ftl/ftl.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:36:37.032 nvme0n1 00:36:37.032 19:26:54 ftl -- ftl/ftl.sh@22 -- # clear_lvols 00:36:37.032 19:26:54 ftl -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:36:37.032 19:26:54 ftl -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:36:37.293 19:26:54 ftl -- ftl/common.sh@28 -- # stores=ac906215-a184-4d8b-aac9-ad4edb6337ef 00:36:37.293 19:26:54 ftl -- ftl/common.sh@29 -- # for lvs in $stores 00:36:37.293 19:26:54 ftl -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u ac906215-a184-4d8b-aac9-ad4edb6337ef 00:36:37.576 19:26:54 ftl -- ftl/ftl.sh@23 -- # killprocess 98380 00:36:37.576 19:26:54 ftl -- common/autotest_common.sh@954 -- # '[' -z 98380 ']' 00:36:37.576 19:26:54 ftl -- common/autotest_common.sh@958 -- # kill -0 98380 00:36:37.576 19:26:54 ftl -- common/autotest_common.sh@959 -- # uname 00:36:37.576 19:26:54 ftl -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:36:37.576 19:26:54 ftl -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 98380 00:36:37.576 19:26:54 ftl -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:36:37.576 killing process with pid 98380 00:36:37.576 19:26:54 ftl -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:36:37.576 19:26:54 ftl -- common/autotest_common.sh@972 -- # echo 'killing process with pid 98380' 00:36:37.576 19:26:54 ftl -- common/autotest_common.sh@973 -- # kill 98380 00:36:37.576 19:26:54 ftl -- common/autotest_common.sh@978 -- # wait 98380 00:36:37.834 19:26:55 ftl -- ftl/ftl.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:36:38.091 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:36:38.091 Waiting for block devices as requested 00:36:38.091 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:36:38.091 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:36:38.351 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:36:38.351 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:36:43.640 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:36:43.640 19:27:00 ftl -- ftl/ftl.sh@28 -- # remove_shm 00:36:43.640 Remove shared memory files 00:36:43.640 19:27:00 ftl -- ftl/common.sh@204 -- # echo Remove shared memory files 00:36:43.640 19:27:00 ftl -- ftl/common.sh@205 -- # rm -f rm -f 00:36:43.640 19:27:00 ftl -- ftl/common.sh@206 -- # rm -f rm -f 00:36:43.640 19:27:00 ftl -- ftl/common.sh@207 -- # rm -f rm -f 00:36:43.640 19:27:00 ftl -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:36:43.640 19:27:00 ftl -- ftl/common.sh@209 -- # rm -f rm -f 00:36:43.640 00:36:43.640 real 
19m8.715s 00:36:43.640 user 21m1.954s 00:36:43.640 sys 1m20.599s 00:36:43.640 19:27:00 ftl -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:43.640 ************************************ 00:36:43.640 END TEST ftl 00:36:43.640 ************************************ 00:36:43.640 19:27:00 ftl -- common/autotest_common.sh@10 -- # set +x 00:36:43.640 19:27:00 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:36:43.640 19:27:00 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:36:43.640 19:27:00 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:36:43.640 19:27:00 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:36:43.640 19:27:00 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:36:43.640 19:27:00 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:36:43.640 19:27:00 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:36:43.640 19:27:00 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]] 00:36:43.640 19:27:00 -- spdk/autotest.sh@385 -- # trap - SIGINT SIGTERM EXIT 00:36:43.640 19:27:00 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup 00:36:43.640 19:27:00 -- common/autotest_common.sh@726 -- # xtrace_disable 00:36:43.640 19:27:00 -- common/autotest_common.sh@10 -- # set +x 00:36:43.640 19:27:00 -- spdk/autotest.sh@388 -- # autotest_cleanup 00:36:43.640 19:27:00 -- common/autotest_common.sh@1396 -- # local autotest_es=0 00:36:43.640 19:27:00 -- common/autotest_common.sh@1397 -- # xtrace_disable 00:36:43.640 19:27:00 -- common/autotest_common.sh@10 -- # set +x 00:36:45.017 INFO: APP EXITING 00:36:45.018 INFO: killing all VMs 00:36:45.018 INFO: killing vhost app 00:36:45.018 INFO: EXIT DONE 00:36:45.278 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:36:45.539 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:36:45.539 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:36:45.539 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:36:45.539 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:36:46.110 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:36:46.370 Cleaning 00:36:46.370 Removing: /var/run/dpdk/spdk0/config 00:36:46.370 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:36:46.370 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:36:46.370 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:36:46.370 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:36:46.370 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:36:46.370 Removing: /var/run/dpdk/spdk0/hugepage_info 00:36:46.370 Removing: /var/run/dpdk/spdk0 00:36:46.370 Removing: /var/run/dpdk/spdk_pid68907 00:36:46.370 Removing: /var/run/dpdk/spdk_pid69059 00:36:46.370 Removing: /var/run/dpdk/spdk_pid69261 00:36:46.370 Removing: /var/run/dpdk/spdk_pid69343 00:36:46.370 Removing: /var/run/dpdk/spdk_pid69371 00:36:46.370 Removing: /var/run/dpdk/spdk_pid69477 00:36:46.370 Removing: /var/run/dpdk/spdk_pid69495 00:36:46.370 Removing: /var/run/dpdk/spdk_pid69678 00:36:46.370 Removing: /var/run/dpdk/spdk_pid69751 00:36:46.370 Removing: /var/run/dpdk/spdk_pid69831 00:36:46.370 Removing: /var/run/dpdk/spdk_pid69925 00:36:46.370 Removing: /var/run/dpdk/spdk_pid70006 00:36:46.371 Removing: /var/run/dpdk/spdk_pid70044 00:36:46.371 Removing: /var/run/dpdk/spdk_pid70076 00:36:46.371 Removing: /var/run/dpdk/spdk_pid70147 00:36:46.371 Removing: /var/run/dpdk/spdk_pid70225 00:36:46.371 Removing: /var/run/dpdk/spdk_pid70645 00:36:46.371 Removing: /var/run/dpdk/spdk_pid70692 00:36:46.371 
Removing: /var/run/dpdk/spdk_pid70739 00:36:46.371 Removing: /var/run/dpdk/spdk_pid70755 00:36:46.371 Removing: /var/run/dpdk/spdk_pid70813 00:36:46.371 Removing: /var/run/dpdk/spdk_pid70829 00:36:46.371 Removing: /var/run/dpdk/spdk_pid70886 00:36:46.371 Removing: /var/run/dpdk/spdk_pid70896 00:36:46.371 Removing: /var/run/dpdk/spdk_pid70945 00:36:46.371 Removing: /var/run/dpdk/spdk_pid70963 00:36:46.371 Removing: /var/run/dpdk/spdk_pid71005 00:36:46.632 Removing: /var/run/dpdk/spdk_pid71023 00:36:46.632 Removing: /var/run/dpdk/spdk_pid71150 00:36:46.632 Removing: /var/run/dpdk/spdk_pid71181 00:36:46.632 Removing: /var/run/dpdk/spdk_pid71270 00:36:46.632 Removing: /var/run/dpdk/spdk_pid71431 00:36:46.632 Removing: /var/run/dpdk/spdk_pid71504 00:36:46.632 Removing: /var/run/dpdk/spdk_pid71535 00:36:46.632 Removing: /var/run/dpdk/spdk_pid71963 00:36:46.632 Removing: /var/run/dpdk/spdk_pid72050 00:36:46.633 Removing: /var/run/dpdk/spdk_pid72161 00:36:46.633 Removing: /var/run/dpdk/spdk_pid72203 00:36:46.633 Removing: /var/run/dpdk/spdk_pid72223 00:36:46.633 Removing: /var/run/dpdk/spdk_pid72307 00:36:46.633 Removing: /var/run/dpdk/spdk_pid72913 00:36:46.633 Removing: /var/run/dpdk/spdk_pid72946 00:36:46.633 Removing: /var/run/dpdk/spdk_pid73389 00:36:46.633 Removing: /var/run/dpdk/spdk_pid73481 00:36:46.633 Removing: /var/run/dpdk/spdk_pid73586 00:36:46.633 Removing: /var/run/dpdk/spdk_pid73628 00:36:46.633 Removing: /var/run/dpdk/spdk_pid73648 00:36:46.633 Removing: /var/run/dpdk/spdk_pid73668 00:36:46.633 Removing: /var/run/dpdk/spdk_pid75504 00:36:46.633 Removing: /var/run/dpdk/spdk_pid75630 00:36:46.633 Removing: /var/run/dpdk/spdk_pid75634 00:36:46.633 Removing: /var/run/dpdk/spdk_pid75646 00:36:46.633 Removing: /var/run/dpdk/spdk_pid75685 00:36:46.633 Removing: /var/run/dpdk/spdk_pid75689 00:36:46.633 Removing: /var/run/dpdk/spdk_pid75701 00:36:46.633 Removing: /var/run/dpdk/spdk_pid75740 00:36:46.633 Removing: /var/run/dpdk/spdk_pid75744 00:36:46.633 Removing: /var/run/dpdk/spdk_pid75756 00:36:46.633 Removing: /var/run/dpdk/spdk_pid75797 00:36:46.633 Removing: /var/run/dpdk/spdk_pid75801 00:36:46.633 Removing: /var/run/dpdk/spdk_pid75813 00:36:46.633 Removing: /var/run/dpdk/spdk_pid77201 00:36:46.633 Removing: /var/run/dpdk/spdk_pid77287 00:36:46.633 Removing: /var/run/dpdk/spdk_pid78685 00:36:46.633 Removing: /var/run/dpdk/spdk_pid80415 00:36:46.633 Removing: /var/run/dpdk/spdk_pid80467 00:36:46.633 Removing: /var/run/dpdk/spdk_pid80540 00:36:46.633 Removing: /var/run/dpdk/spdk_pid80634 00:36:46.633 Removing: /var/run/dpdk/spdk_pid80720 00:36:46.633 Removing: /var/run/dpdk/spdk_pid80805 00:36:46.633 Removing: /var/run/dpdk/spdk_pid80862 00:36:46.633 Removing: /var/run/dpdk/spdk_pid80934 00:36:46.633 Removing: /var/run/dpdk/spdk_pid81027 00:36:46.633 Removing: /var/run/dpdk/spdk_pid81113 00:36:46.633 Removing: /var/run/dpdk/spdk_pid81198 00:36:46.633 Removing: /var/run/dpdk/spdk_pid81250 00:36:46.633 Removing: /var/run/dpdk/spdk_pid81324 00:36:46.633 Removing: /var/run/dpdk/spdk_pid81418 00:36:46.633 Removing: /var/run/dpdk/spdk_pid81499 00:36:46.633 Removing: /var/run/dpdk/spdk_pid81589 00:36:46.633 Removing: /var/run/dpdk/spdk_pid81647 00:36:46.633 Removing: /var/run/dpdk/spdk_pid81711 00:36:46.633 Removing: /var/run/dpdk/spdk_pid81822 00:36:46.633 Removing: /var/run/dpdk/spdk_pid81899 00:36:46.633 Removing: /var/run/dpdk/spdk_pid81988 00:36:46.633 Removing: /var/run/dpdk/spdk_pid82046 00:36:46.633 Removing: /var/run/dpdk/spdk_pid82115 00:36:46.633 Removing: 
/var/run/dpdk/spdk_pid82180 00:36:46.633 Removing: /var/run/dpdk/spdk_pid82243 00:36:46.633 Removing: /var/run/dpdk/spdk_pid82341 00:36:46.633 Removing: /var/run/dpdk/spdk_pid82426 00:36:46.633 Removing: /var/run/dpdk/spdk_pid82506 00:36:46.633 Removing: /var/run/dpdk/spdk_pid82567 00:36:46.633 Removing: /var/run/dpdk/spdk_pid82630 00:36:46.633 Removing: /var/run/dpdk/spdk_pid82699 00:36:46.633 Removing: /var/run/dpdk/spdk_pid82763 00:36:46.633 Removing: /var/run/dpdk/spdk_pid82861 00:36:46.633 Removing: /var/run/dpdk/spdk_pid82945 00:36:46.633 Removing: /var/run/dpdk/spdk_pid83079 00:36:46.633 Removing: /var/run/dpdk/spdk_pid83355 00:36:46.633 Removing: /var/run/dpdk/spdk_pid83381 00:36:46.633 Removing: /var/run/dpdk/spdk_pid83824 00:36:46.633 Removing: /var/run/dpdk/spdk_pid83997 00:36:46.633 Removing: /var/run/dpdk/spdk_pid84082 00:36:46.633 Removing: /var/run/dpdk/spdk_pid84186 00:36:46.633 Removing: /var/run/dpdk/spdk_pid84225 00:36:46.633 Removing: /var/run/dpdk/spdk_pid84250 00:36:46.633 Removing: /var/run/dpdk/spdk_pid84551 00:36:46.633 Removing: /var/run/dpdk/spdk_pid84589 00:36:46.633 Removing: /var/run/dpdk/spdk_pid84640 00:36:46.633 Removing: /var/run/dpdk/spdk_pid85005 00:36:46.633 Removing: /var/run/dpdk/spdk_pid85151 00:36:46.633 Removing: /var/run/dpdk/spdk_pid85947 00:36:46.633 Removing: /var/run/dpdk/spdk_pid86063 00:36:46.633 Removing: /var/run/dpdk/spdk_pid86222 00:36:46.633 Removing: /var/run/dpdk/spdk_pid86308 00:36:46.633 Removing: /var/run/dpdk/spdk_pid86605 00:36:46.633 Removing: /var/run/dpdk/spdk_pid86859 00:36:46.633 Removing: /var/run/dpdk/spdk_pid87200 00:36:46.633 Removing: /var/run/dpdk/spdk_pid87355 00:36:46.633 Removing: /var/run/dpdk/spdk_pid87568 00:36:46.633 Removing: /var/run/dpdk/spdk_pid87604 00:36:46.633 Removing: /var/run/dpdk/spdk_pid87791 00:36:46.633 Removing: /var/run/dpdk/spdk_pid87805 00:36:46.633 Removing: /var/run/dpdk/spdk_pid87848 00:36:46.633 Removing: /var/run/dpdk/spdk_pid88109 00:36:46.633 Removing: /var/run/dpdk/spdk_pid88325 00:36:46.633 Removing: /var/run/dpdk/spdk_pid88759 00:36:46.633 Removing: /var/run/dpdk/spdk_pid89418 00:36:46.894 Removing: /var/run/dpdk/spdk_pid89976 00:36:46.894 Removing: /var/run/dpdk/spdk_pid90751 00:36:46.894 Removing: /var/run/dpdk/spdk_pid90893 00:36:46.894 Removing: /var/run/dpdk/spdk_pid90969 00:36:46.894 Removing: /var/run/dpdk/spdk_pid91310 00:36:46.894 Removing: /var/run/dpdk/spdk_pid91357 00:36:46.894 Removing: /var/run/dpdk/spdk_pid92251 00:36:46.894 Removing: /var/run/dpdk/spdk_pid92737 00:36:46.894 Removing: /var/run/dpdk/spdk_pid93705 00:36:46.894 Removing: /var/run/dpdk/spdk_pid93825 00:36:46.894 Removing: /var/run/dpdk/spdk_pid93854 00:36:46.894 Removing: /var/run/dpdk/spdk_pid93919 00:36:46.894 Removing: /var/run/dpdk/spdk_pid93975 00:36:46.894 Removing: /var/run/dpdk/spdk_pid94039 00:36:46.894 Removing: /var/run/dpdk/spdk_pid94222 00:36:46.894 Removing: /var/run/dpdk/spdk_pid94301 00:36:46.894 Removing: /var/run/dpdk/spdk_pid94359 00:36:46.894 Removing: /var/run/dpdk/spdk_pid94430 00:36:46.894 Removing: /var/run/dpdk/spdk_pid94453 00:36:46.894 Removing: /var/run/dpdk/spdk_pid94521 00:36:46.894 Removing: /var/run/dpdk/spdk_pid94660 00:36:46.894 Removing: /var/run/dpdk/spdk_pid94874 00:36:46.894 Removing: /var/run/dpdk/spdk_pid95794 00:36:46.894 Removing: /var/run/dpdk/spdk_pid96506 00:36:46.894 Removing: /var/run/dpdk/spdk_pid97334 00:36:46.894 Removing: /var/run/dpdk/spdk_pid98380 00:36:46.894 Clean 00:36:46.894 19:27:04 -- common/autotest_common.sh@1453 -- # return 0 00:36:46.894 
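Before the run-directory cleanup above, ftl.sh started a throwaway spdk_tgt (pid 98380) purely to scrub leftover state: wait for the RPC socket at /var/tmp/spdk.sock, re-attach the NVMe controller, delete any lvol store found on it, then kill the target again. A condensed sketch of that sequence with repo-relative paths; waitforlisten's polling loop and error handling are elided:

    build/bin/spdk_tgt &                      # throwaway target for cleanup
    spdk_tgt_pid=$!
    # waitforlisten blocks here until /var/tmp/spdk.sock accepts RPCs
    scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0
    for lvs in $(scripts/rpc.py bdev_lvol_get_lvstores | jq -r '.[] | .uuid'); do
        # this run found lvstore ac906215-a184-4d8b-aac9-ad4edb6337ef
        scripts/rpc.py bdev_lvol_delete_lvstore -u "$lvs"
    done
    killprocess "$spdk_tgt_pid"               # guard helper sketched earlier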
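The remove_shm steps above delete FTL's hugepage-backed shared-memory regions, whose filenames are keyed by the device UUID from the statistics dump, plus the iSCSI shm file. The helper in ftl/common.sh lists every region explicitly (band_md, l2p_l1, l2p_l2, nvc_md, p2l_pool, sb, trim maps, vmap); a glob over the UUID prefix is an equivalent shorthand for this run:

    rm -f /dev/hugepages/ftl_42867315-b16e-4303-b9a5-ab6c21560170_*   # all FTL regions
    rm -f /dev/shm/iscsi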
19:27:04 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup 00:36:46.894 19:27:04 -- common/autotest_common.sh@732 -- # xtrace_disable 00:36:46.894 19:27:04 -- common/autotest_common.sh@10 -- # set +x 00:36:46.894 19:27:04 -- spdk/autotest.sh@391 -- # timing_exit autotest 00:36:46.894 19:27:04 -- common/autotest_common.sh@732 -- # xtrace_disable 00:36:46.894 19:27:04 -- common/autotest_common.sh@10 -- # set +x 00:36:46.894 19:27:04 -- spdk/autotest.sh@392 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:36:46.894 19:27:04 -- spdk/autotest.sh@394 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:36:46.894 19:27:04 -- spdk/autotest.sh@394 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:36:46.894 19:27:04 -- spdk/autotest.sh@396 -- # [[ y == y ]] 00:36:46.894 19:27:04 -- spdk/autotest.sh@398 -- # hostname 00:36:46.894 19:27:04 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:36:47.154 geninfo: WARNING: invalid characters removed from testname! 00:37:13.800 19:27:31 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:37:18.029 19:27:34 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:37:20.575 19:27:37 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:37:23.869 19:27:40 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:37:26.413 19:27:43 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:37:28.963 19:27:46 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc 
genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:37:32.269 19:27:49 -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:37:32.269 19:27:49 -- spdk/autorun.sh@1 -- $ timing_finish 00:37:32.269 19:27:49 -- common/autotest_common.sh@738 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/timing.txt ]] 00:37:32.269 19:27:49 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:37:32.269 19:27:49 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:37:32.269 19:27:49 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:37:32.269 + [[ -n 5759 ]] 00:37:32.269 + sudo kill 5759 00:37:32.280 [Pipeline] } 00:37:32.296 [Pipeline] // timeout 00:37:32.301 [Pipeline] } 00:37:32.317 [Pipeline] // stage 00:37:32.323 [Pipeline] } 00:37:32.338 [Pipeline] // catchError 00:37:32.348 [Pipeline] stage 00:37:32.351 [Pipeline] { (Stop VM) 00:37:32.364 [Pipeline] sh 00:37:32.652 + vagrant halt 00:37:35.194 ==> default: Halting domain... 00:37:40.623 [Pipeline] sh 00:37:40.915 + vagrant destroy -f 00:37:43.460 ==> default: Removing domain... 00:37:44.416 [Pipeline] sh 00:37:44.698 + mv output /var/jenkins/workspace/nvme-vg-autotest/output 00:37:44.709 [Pipeline] } 00:37:44.725 [Pipeline] // stage 00:37:44.731 [Pipeline] } 00:37:44.747 [Pipeline] // dir 00:37:44.753 [Pipeline] } 00:37:44.770 [Pipeline] // wrap 00:37:44.777 [Pipeline] } 00:37:44.791 [Pipeline] // catchError 00:37:44.803 [Pipeline] stage 00:37:44.806 [Pipeline] { (Epilogue) 00:37:44.821 [Pipeline] sh 00:37:45.106 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:37:51.705 [Pipeline] catchError 00:37:51.707 [Pipeline] { 00:37:51.721 [Pipeline] sh 00:37:52.011 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:37:52.011 Artifacts sizes are good 00:37:52.023 [Pipeline] } 00:37:52.038 [Pipeline] // catchError 00:37:52.052 [Pipeline] archiveArtifacts 00:37:52.060 Archiving artifacts 00:37:52.167 [Pipeline] cleanWs 00:37:52.181 [WS-CLEANUP] Deleting project workspace... 00:37:52.181 [WS-CLEANUP] Deferred wipeout is used... 00:37:52.188 [WS-CLEANUP] done 00:37:52.191 [Pipeline] } 00:37:52.208 [Pipeline] // stage 00:37:52.214 [Pipeline] } 00:37:52.230 [Pipeline] // node 00:37:52.236 [Pipeline] End of Pipeline 00:37:52.283 Finished: SUCCESS
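The lcov calls near the end reduce to a standard capture/merge/filter flow: capture counters from the instrumented build tree, fold them into the pre-test baseline, then strip sources that should not count against SPDK. With $repo and $testname standing in for the long path and fedora39 test name in the trace, and the repeated --rc branch/function-coverage flags omitted:

    lcov -q -c --no-external -d "$repo" -t "$testname" -o cov_test.info  # capture this run
    lcov -q -a cov_base.info -a cov_test.info -o cov_total.info          # merge with baseline
    for pat in '*/dpdk/*' '/usr/*' '*/examples/vmd/*' \
               '*/app/spdk_lspci/*' '*/app/spdk_top/*'; do
        lcov -q -r cov_total.info "$pat" -o cov_total.info               # prune external code
    done
    rm -f cov_base.info cov_test.info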
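timing_finish then renders the accumulated timing.txt into a flame graph of per-step build time. The standalone equivalent, taking the flags straight from the trace (flamegraph.pl prints SVG on stdout, so the redirect target here is an assumption, not from the log):

    /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' \
        --nametype Step: --countname seconds \
        /home/vagrant/spdk_repo/spdk/../output/timing.txt > timing.svg   # output name assumed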